ICLE Comments on the COFECE Report on Marketplace Competition in Mexico

Executive Summary

We are thankful for the opportunity to submit our comments to the Preliminary Report (hereinafter, the Report)[1] published by the Investigative Authority (IA) of the Federal Economic Competition Commission (COFECE, after its Spanish acronym) following its investigation of competition in the retail electronic-commerce market. The International Center for Law & Economics (“ICLE”) is a nonprofit, nonpartisan global research and policy center founded with the goal of building the intellectual foundations for sensible, economically grounded policy. ICLE promotes the use of law & economics methodologies to inform public-policy debates and has longstanding expertise in the evaluation of competition law and policy. ICLE’s interest is to ensure that competition law remains grounded in clear rules, established precedent, a record of evidence, and sound economic analysis.

The Report stems from a procedure included in the Mexican Competition Act, known as “Investigations to Determine Essential Facilities or Barriers to Competition”. COFECE can initiate such investigations “when there are elements suggesting there are no effective competition conditions in a market.” The IA is responsible for issuing a preliminary investigative report and proposing corrective measures. COFECE’s Board of Commissioners can later adopt or reject the proposal.

Our comments respectfully urge COFECE's commissioners not to follow the IA's recommendations concerning competition in the retail electronic-commerce market. While the Report is a laudable effort to understand the market and to protect competition within it—competition that has benefited Mexican consumers—its conclusions and recommendations do not follow from the evidence, nor from the generally accepted methods, principles, and best practices of antitrust law.

First, under the Mexican Competition Act, investigations should aim to eliminate only "restrictions to the efficient operation of markets," which is the stated purpose of the Act. According to publicly available information, however, Amazon and Mercado Libre (MeLi), the two companies identified as "dominant" in the Report, owe their success to consumer preferences and trust, rather than to "barriers to competition." Indeed, if such barriers were present, they would produce consumer dissatisfaction, which is simply not what we observe here. The Report also ignores the consumer benefits generated by Amazon's and MeLi's business models (e.g., cheaper products and services, fast delivery, and easier access to information with which to compare products).

Second, the Report defines an unreasonably narrow relevant market that includes only "online marketplaces in multiple product categories and operating at the national level." This market definition excludes other online retailers (like Shein or Temu), because they sell a narrower selection of goods; e-commerce aggregators (like Google Shopping), because they are merely intermediaries that connect buyers and sellers; seller-owned websites (like those of Apple or Adidas), because they do not sell as many distinct product categories; and brick-and-mortar stores. By artificially narrowing the market in this way, the Report drastically overstates Amazon's and MeLi's market shares.

Third, this gerrymandered relevant market leads to an artificial finding that Amazon and MeLi are “dominant” marketplaces—a key requirement for subsequent enforcement. This finding is problematic because the Report considers any costs faced by new entrants as “barriers to entry” that insulate the two marketplaces from competition. As we argue below, however, these “barriers” are merely regular business costs that do not prevent new players from entering. To wit, the record shows that new firms regularly enter the market.

Finally, the proposed remedies would harm rather than benefit consumers. The Report suggests forcing Amazon and MeLi to separate their streaming services (like Amazon Prime) from their loyalty programs. This would hurt consumers who currently enjoy bundled benefits at a lower price. Additionally, requiring the platforms to interoperate with other logistics providers would stifle innovation and investment as these platforms wouldn’t reap the benefits of their digital infrastructure. This mandated interoperability could also harm consumers who may attribute delivery-related failings to the marketplaces rather than logistics providers responsible for them, thereby creating a standard free-rider problem.

I. Introduction

The Report has been issued in the context of a procedure contemplated in Article 94 of the Mexican Competition Act, known as “Investigations to Determine Essential Facilities or Barriers to Competition”. According to this provision, COFECE shall initiate an investigation “when there are elements suggesting there are no effective competition conditions in a market”. The investigation should aim to determine the existence of “barriers to competition and free market access” or of “essential facilities”.

The IA is responsible for issuing a preliminary investigative report and proposing corrective measures. The report must identify the market subject to the investigation, so that any person may provide elements during the investigation. Once the investigation is finished, the IA issues its report, including the corrective measures it deems necessary to eliminate restrictions on the efficient operation of the market. Economic agents potentially affected by the proposed corrective measures have the opportunity to comment and provide evidence. COFECE's Board of Commissioners can later adopt or reject the proposals.

We understand and commend COFECE's concern for competition among online marketplaces, but any investigation should aim to eliminate "restrictions to the efficient operation of markets," which Article 2 of the Mexican Competition Act identifies as the law's purpose.[2] The Report's conclusions and recommendations do not appear to consider the efficiency of the leading marketplaces, which may explain why consumers routinely choose them over rivals.

Indeed, according to publicly available information, Amazon and MeLi, the two companies identified as "dominant" in the Report, owe their success to consumer preferences and trust. According to one source,[3] for instance:

The popularity of the Amazon marketplace in Mexico is largely based on customer satisfaction. Amazon is the second most appreciated e-commerce platform in Mexico, according to a Kantar survey, with a satisfaction index of 8.5 out of 10. Consumer feedback is also essential to the success of the Amazon marketplace, as it allows buyers to make successful purchases. Consumer reviews are also essential to the success of the Amazon marketplace, allowing buyers to make informed purchases. Good reviews highlight Amazon’s speed and reliability [emphasis added].

According to a study published by the Federal Institute of Telecommunications (IFT, after its Spanish acronym) about the use of digital platforms during the Covid-19 pandemic, 75.8% of users claim to be satisfied or very satisfied with the applications and webpages they use to buy online. Moreover, MeLi and Amazon were the most mentioned platforms with 67.3% and 30.3% of mentions, respectively.[4]

The Report also appears to ignore the consumer benefits provided by Amazon's and MeLi's business models (e.g., cheaper products and services, fast delivery, and easier access to information with which to compare products).

The Report finds preliminary evidence to support the notion that “there are no conditions of effective competition in the Relevant Market of Sellers and in the Relevant Market of Buyers,” as well as the existence of “three Barriers to Competition” that generate restrictions on the efficient functioning of said markets.

The alleged barriers consist of:

  1. “Artificiality” in some components of the marketplaces’ loyalty programs (services embedded in loyalty programs that—without being directly linked to the marketplace’s ability to carry out or facilitate transactions between buyers and sellers, and coupled with “network effects”—affect buyers’ behavior);
  2. “Buy Box opacity”[5] (sellers on the marketplaces don’t have access to the criteria by which Amazon and MeLi choose the products placed into the Buy Box); and
  3. “Logistics solutions foreclosure,” because Amazon and MeLi don’t allow all logistics providers to access their platforms’ Application Programming Interfaces (APIs), but rather bundle marketplace services with their own fulfillment services.

To eliminate these alleged barriers, the Report proposes three remedies, to be applied to Amazon and MeLi:

  1. An obligation to “disassociate” streaming services from membership and/or loyalty programs (e.g., Amazon Prime), as well as any other service unrelated to use of the marketplace (e.g., games and music, among others);
  2. An obligation to carry out all actions that are “necessary and sufficient” to allow sellers to freely adjust their commercial strategies with full knowledge of the Buy Box selection processes; and
  3. An obligation to allow third-party logistics companies to integrate into the platform through their respective APIs, and to ensure that Buy Box selection doesn’t depend on the choice of logistics provider unless it affects “efficiency and performance criteria.”

We disagree with the findings and recommendations of the Report for the reasons stated below:

II. An Unreasonably Narrow Market Definition

Rather than an “abuse of dominance” procedure, the market investigation that led to the report was a “quasi-regulatory procedure.” But the wording of Article 94 of the Mexican Federal Economic Competition Act (under which the investigation was authorized) strongly suggests that COFECE has to establish (not simply assert) an “absence of effective competition.” This would entail either that there is a “market failure” that impedes competition, or that there is an economic agent with a dominant position. The report unconvincingly tries to show the latter.

To determine if any given company has a “dominant position” (monopoly power), competition agencies must first define a “relevant market” in which the challenged conduct or business model has an effect. Although it is common for antitrust enforcers to define relevant markets narrowly (often, the smaller the market, the easier it is to find that the hypothetical monopolist is, in fact, a monopolist), we think the Report goes too far in the case at hand.

The Report appears to follow the bad example of its American counterpart, the Federal Trade Commission (FTC). As Geoffrey Manne explains in an issue brief about the FTC’s recent monopolization complaint[6] against Amazon:

The FTC’s complaint against Amazon describes two relevant markets in which anticompetitive harm has allegedly occurred: (1) the “online superstore market” and (2) the “online marketplace services market.”

the FTC’s complaint limits the online-superstore market to online stores only, and further limits it to stores that have an “extensive breadth and depth” of products. The latter means online stores that carry virtually all categories of products (“such as sporting goods, kitchen goods, apparel, and consumer electronics”) and that also have an extensive variety of brands within each category (such as Nike, Under Armor, Adidas, etc.). In practice, this definition excludes leading brands’ private channels (such as Nike’s online store), as well as online stores that focus on a particular category of goods (such as Wayfair’s focus on furniture). It also excludes the brick-and-mortar stores that still account for the vast majority of retail transactions. Firms with significant online and brick-and-mortar sales might count, but only their online sales would be considered part of the market. [7]

The Report does something similar. It defines two relevant markets:

  1. Sellers Relevant Market: consists of the marketplace service for sellers, with a national geographical dimension.
  2. Buyers Relevant Market: consists of the service of marketplaces and multi-category online stores for buyers in the national territory, which includes marketplace business models (hybrid and non-hybrid) and online stores with multiple categories of products.

Both markets, however, are defined in an unreasonably narrow way. By alleging that large online marketplaces “have positioned themselves as an important choice,” the agency ignores competition from other online and offline retailers. The Report ignores other e-commerce platforms—like China’s Shein[8] and Temu[9]—that have gained both popularity and advertising-market share. The report also neglects to mention e-commerce aggregators like Google Shopping, which allow consumers to search for almost any product, compare them, and find competitive offers; as well as competition from e-commerce websites owned by sellers, such as Apple or Adidas.

This exclusion seems wrong. To compete with “online superstores,” online stores do not need the scope of products that Amazon or MeLi offer, because “consumers buy products, not store types”[10]:

Indeed, part of the purported advantage of online shopping—when it’s an advantage—is that consumers don’t have to bundle purchases together to minimize the transaction costs of physically visiting a brick-and-mortar retailer. Meanwhile, another part of the advantage of online shopping is the ease of comparison shopping: consumers don’t even have to close an Amazon window on their computers to check alternatives, prices, and availability elsewhere. All of this undermines the claim that one-stop shopping is a defining characteristic of the alleged market.[11]

The Report also appears to ignore the competitive constraints imposed by brick-and-mortar retailers, especially if Amazon or MeLi tried to exploit their market power. Of course, how many consumers might switch, and the extent to which that would affect the marketplaces, are empirical questions. But there is no question that some consumers might switch. In that respect, it is important to remember that competition takes place on the margins. Accordingly, it is not necessary for all consumers to switch to affect a company’s sales and profits.

The Report does mention selling through social media, but it does not include such sales in the relevant market. We think social media, as a sales channel, should be considered a reasonable substitute for Amazon and MeLi, given that 85% of small and medium enterprises turned to Facebook, Instagram, and WhatsApp during the Covid-19 pandemic to advertise and sell their products.[12] The Commercial Guide for Mexico published by the U.S. Department of Commerce’s International Trade Administration reports that “Mexican buyers are highly influenced by social networks when making purchases. Forty-three percent of eCommerce buyers have bought via Conversational Commerce or C-commerce (selling via Facebook or WhatsApp), and 29 percent through ‘lives’ or livestreams.”[13]

There is also empirical evidence that Amazon not only competes, but competes intensively with other distribution channels, and has a net-positive welfare effect on Mexican consumers. A 2022 paper[14] found that:

  1. E-commerce and brick-and-mortar retailers in Mexico operate in a single, highly competitive retail market; and
  2. Amazon’s entry has generated a significant pro-competitive effect by reducing brick-and-mortar retail prices and increasing product selection for Mexican consumers.

The paper finds that the market entry of products sold and delivered by Amazon gave rise to price reductions of up to 28%.[15] In light of this evidence, we think it is wrong to assume that marketplaces like Amazon and MeLi do not compete with other retailers. The latter should thus be included in the relevant market.

As if this narrow definition were not enough, the Report combines Amazon’s and MeLi’s market shares to conclude that, together, they hold more than 85% of sales and transactions in the Sellers Relevant Market during the period analyzed, and that the Herfindahl-Hirschman Index (HHI) exceeds 2,000 points (and that the market is therefore highly concentrated). Likewise, in the Buyers Relevant Market, the HHI was estimated at 1,614 points for 2022, with the three main participants accounting for 61% (sixty-one percent) of the market. In both markets, the other participants have significantly smaller shares.
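
For reference, the HHI is simply the sum of the squared market shares of all firms in the market (expressed in percentage points). The sketch below uses hypothetical shares—our own, chosen only to illustrate how the thresholds the Report invokes arise—not figures taken from the Report:

```python
# Herfindahl-Hirschman Index: sum of squared market shares (percentage points).
# All shares below are hypothetical and purely illustrative.

def hhi(shares_pct):
    """Return the HHI for a list of market shares expressed as percentages."""
    return sum(s ** 2 for s in shares_pct)

# Illustrative "Sellers Relevant Market": two firms jointly above 85%.
sellers = [50, 35, 5, 5, 5]                    # hypothetical shares
print(hhi(sellers))                            # 3800 -- well above 2,000 points

# Illustrative "Buyers Relevant Market": top three firms at ~61%.
buyers = [28, 20, 13, 10, 8, 7, 5, 5, 4]       # hypothetical shares
print(hhi(buyers))                             # 1632 -- near the Report's 1,614
```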

But why combine the market shares of Amazon and MeLi, as if they were acting as a single firm? Given the IA’s own market definition, it must at least be the case that Amazon and MeLi compete with each other. The market’s continuous growth and the evolution of the companies’ respective market shares indicate that they do. A news article from 2020, for instance, reports that:

Supermarkets, department stores and digital-native chains have a common goal: to be the one that captures the most market in electronic commerce in Mexico. In this battle, Amazon and Mercado Libre take the lead, as they are the two firms that concentrate almost a quarter of the total market in this area.

At the end of 2019, Amazon had a market share of 13.4%, which placed it ahead of other competitors. That same year, Mercado Libre followed with 11.4%.[16]

Also inconsistent with the hypothesis of a market with “barriers to competition” is the fact that the e-commerce market is continuously growing (and adding market players) in Mexico, which is now the second-largest e-commerce market in Latin America.[17]

It is only on the basis of this distorted depiction of the market that the Report reaches the conclusion that Amazon and MeLi have “the power to fix prices” (another form of saying “monopoly power”). Given what precedes, that conclusion should be rejected.

III. An Unwarranted Finding of a ‘Dominant Position’

Even if one accepts the Report’s market definition, and Amazon and MeLi thus have a significant market share, both firms could still face competition from new entrants, attracted to the market by the higher prices (or other “exploitative” conditions) charged to consumers. According to the Report, alas, various barriers hinder “the entry and expansion” of firms in both relevant markets. Among them, the Report mentions, for instance:

  1. Barriers to entry related to the high amounts of investment for the development of the marketplace, as well as for the development of technological tools integrated into it…. In addition, high investment amounts are required related to the development of logistics infrastructure and in working capital related to funds necessary to cover operating expenses, inventories, accounts receivable and other current liabilities; and
  2. Barriers to entry related to considerable investments in advertising, marketing and public relations. To attract a significant number of buyers and sellers to the platform that guarantees the success of the business, it is imperative to have a well-positioned, recognized brand with a good reputation.

Contrary to what the report claims, however, these are costs, not “barriers to entry.” As Richard Posner convincingly explained, the term “barrier to entry” is commonly used to describe any obstacle or cost faced by entrants. [18] But by this definition (embraced by the Report, apparently), any cost is a barrier to entry. Relying on George Stigler’s more precise definition, Posner suggested defining a barrier to entry as “a condition that imposes higher long-run costs of production on a new entrant than are borne by the firms already in the market.”[19] In other words, properly understood, a barrier to entry is a cost borne by new entrants that was not borne by incumbents.

The authority’s definition of barriers to entry is also at odds with the definition given in Section IV of Article 3 of the Mexican Competition Act, according to which a barrier to competition is:

Any structural market characteristic, act or deed performed by Economic Agents with the purpose or effect of impeding access to competitors or limit their ability to compete in the markets; which impedes or distorts the process of competition and free market access, as well as any legal provision issued by any level of government that unduly impedes or distorts the process of competition and free market access.

Of course, Amazon and MeLi have some advantages over other firms in terms of their infrastructure, know-how, scale, and goodwill. But those advantages didn’t fall from the sky: Amazon and MeLi built them over time, investing (and continuing to invest) often enormous amounts to do so. Even “network effects,” often considered an inevitable source of monopoly, are not a definitive obstacle to competition. As Evans and Schmalensee have pointed out:

Systematic research on online platforms by several authors, including one of us, shows considerable churn in leadership for online platforms over periods shorter than a decade. Then there is the collection of dead or withered platforms that dot this sector, including Blackberry and Windows in smartphone operating systems, AOL in messaging, Orkut in social networking, and Yahoo in mass online media.[20]

The notion that Amazon and MeLi are shielded by barriers to entry is also contradicted by the entry of new rivals, such as Shein and Temu.

As explained above, the Report also erroneously combines the market shares of Mercado Libre and Amazon to reach a combined share of 85% (eighty-five percent) of sales and transactions in the Sellers Relevant Market, and then combines the shares of the three main participants in the Buyers Relevant Market to reach 61% (sixty-one percent) of that market. This is highly problematic: those firms are not a single economic entity and thus presumably compete against each other.

If anything, the market shares produced by the Report yield only a high HHI, which in turn shows that the market is “highly concentrated” (if one accepts the Report’s narrow market definition). But concentration is a poor proxy for market power. Economists have been studying the relationship between concentration and various potential indicia of anticompetitive effects—price, markups, profits, rates of return, etc.—for decades, and the empirical evidence does not support treating concentration as a reliable indicator of competition problems.[21] Concentration is not per se evidence of a lack of competition, let alone of a dominant position.

As Chad Syverson recently summarized:

Perhaps the deepest conceptual problem with concentration as a measure of market power is that it is an outcome, not an immutable core determinant of how competitive an industry or market is… As a result, concentration is worse than just a noisy barometer of market power. Instead, we cannot even generally know which way the barometer is oriented.[22]

IV. The Proposed Remedies Would Harm, Rather than Benefit, Consumers

Even if one accepts the Report’s suggested market definition and its assessment of market power, the Report’s proposed remedies—which can be summarized as a mandate to unbundle Amazon’s and MeLi’s streaming services from their loyalty programs (like Amazon Prime) and a mandate to make (at least part of) their platforms “interoperable” with other logistics services—would harm consumers rather than benefit them.

Amazon Prime, for instance, provides consumers with many attractive benefits: access to video and music streaming; special deals and discounts; and last, but not least, two-day free shipping. According to the Report, “this is an artificial strategy that attracts and retains buyers and, at the same time, hinders buyers and sellers from using alternative marketplaces.”

It’s not entirely clear what “artificial” means in this context, but it appears to imply something outside the bounds of “normal” competition. Yet what the Report describes is the very definition of competition. Firms competing in a market always choose to combine a “bundle” of features into a single product. To some extent, they “bet” on a bundle of features (functionality, materials, terms and conditions) that entails certain costs, and they then offer that bundle at a given price that willing customers may (or may not) accept. Even with imperfect information, markets—that is, sellers and customers—are the agents best qualified to “decide” the appropriate level of “bundling” in a product, not competition agencies or courts.

A mandate to unbundle streaming services would degrade the online experience of consumers, who would instead have to contract and pay for those services separately.[23] The independent provision of such services would not benefit from Amazon’s or MeLi’s economies of scale and scope and would, therefore, be more expensive. And providing more benefits for consumers at a given price is what we want competitors to do. Treating consumer benefit as a harm turns competition enforcement—and, indeed, the very notion of competition itself—on its head.

The Report also proposes to open the Buy Box and to modify its rules so as to be neutral toward all logistics providers. This effectively amounts to treating Amazon and MeLi as “common carriers,” as regulators did with telephone networks from the 20th century onwards. But that classification, and the rules that follow from it (neutrality and price regulation, among others), were designed for markets with natural monopolies—where competition is either not possible or even undesirable[24]—and there is no evidence that this is the case here. Digital-platform markets are far more competitive. Common-carrier rules would therefore only foster free riding and dampen incentives to invest and innovate (for both incumbents and new entrants). Sellers and logistics providers have many other options to reach consumers. There is no economic or legal justification to mandate their access to Amazon’s or MeLi’s platforms.

In sum, the Report’s flawed findings lead to even worse remedies. Such remedies would neither promote competition in Mexico nor benefit consumers.

[1] The full text of the report (public version), available at https://www.cofece.mx/wp-content/uploads/2024/02/Dictamen_Preliminar_Version_Publica.pdf.

[2] Mexican Competition Act. Article 2. “The purpose of this Law is to promote, protect and guarantee free market access and economic competition, as well as to prevent, investigate, combat, prosecute effectively, severely punish and eliminate monopolies, monopolistic practices, unlawful concentrations, barriers to entry and to economic competition, as well as other restrictions to the efficient operation of markets.”

[3] ¿Qué Tan Popular es el Marketplace de Amazon en México?, La Patria (Apr. 23, 2023), https://www.lapatria.com/publirreportaje/que-tan-popular-es-el-marketplace-de-amazon-en-mexico. Free translation of the following text in Spanish: “La popularidad del mercado de Amazon en México se basa en gran medida en la satisfacción de los clientes. Amazon es la segunda plataforma de comercio electrónico más apreciada en México, según una encuesta de Kantar, con un índice de satisfacción de 8,5 sobre 10. Los comentarios de los consumidores también son esenciales para el éxito del mercado de Amazon, ya que permiten a los compradores realizar compras acertadas. Las opiniones de los consumidores también son esenciales para el éxito del mercado de Amazon, ya que permiten a los compradores realizar compras acertadas. Las buenas opiniones ponen de relieve la rapidez y fiabilidad de Amazon.”

[4] Instituto Federal de Telecomunicaciones, Uso y Satisfacción de las Aplicaciones y Herramientas Digitales para Compras y Banca en Línea, Videollamadas, Redes Sociales, Salud y Trámites Gubernamentales en Tiempos de Covid-19, Adopción (Jan 19, 2022), available at https://www.ift.org.mx/sites/default/files/contenidogeneral/usuarios-y-audiencias/aplicacionesyherramientasdigitalesentiemposdecovid19.pdf.

[5] The “Buy Box” is a box, normally found on the right side of a marketplace product page after the clients search for a product. Being in this box is an advantage for the seller because it not only highlights its product, but also makes the payment process easier. This is, of course, also an advantage for consumers, who can find and buy products faster.

[6] See https://www.ftc.gov/legal-library/browse/cases-proceedings/1910129-1910130-amazoncom-inc-amazon-ecommerce.

[7] Geoffrey A. Manne, Gerrymandered Market Definitions in FTC v. Amazon (Jan. 26, 2024), https://laweconcenter.org/resources/gerrymandered-market-definitions-in-ftc-v-amazon.

[8] See, e.g., Krystal Hu & Arriana McLymore, Exclusive: Fast-Fashion Giant Shein Plans Mexico Factory, Reuters (May 24, 2023), https://www.reuters.com/business/retail-consumer/fast-fashion-giant-shein-plans-mexico-factory-sources-2023-05-24.

[9] See, e.g., Rising E-commerce Star: The Emergence of Temu in Mexico, BNN (Sep. 25, 2023), https://bnnbreaking.com/finance-nav/rising-e-commerce-star-the-emergence-of-temu-in-mexico.

[10] Manne, supra note 7.

[11] Id.

[12] El 85% de las Pymes USA Redes Sociales para Vender en Línea, Expansión (Jul. 28, 2021), https://expansion.mx/tecnologia/2021/07/28/el-85-de-las-pymes-usa-redes-sociales-para-vender-en-linea.

[13] Mexico – Country Commercial Guide, International Trade Administration (Nov. 5, 2023), https://www.trade.gov/country-commercial-guides/mexico-ecommerce.

[14] Raymundo Campos Vázquez et al., Amazon’s Effect on Prices: The Case of Mexico, Centro de Estudios Económicos, Documentos de Trabajo, Nro. II (2022), available at https://cee.colmex.mx/dts/2022/DT-2022-2.pdf.

[15] Id., at 23.

[16] Amazon y Mercado Libre se Disputan la Corona del Comercio Electrónico en México, El CEO (Mar 17, 2020), https://elceo.com/negocios/amazon-y-mercado-libre-se-discuten-la-corona-del-comercio-electronico-en-mexico. Free translation of the following text, in Spanish: “Cadenas de autoservicios, departamentales y nativas digitales tienen un objetivo en común: ser quien acapare más mercado en el comercio electrónico en México. En esta batalla, Amazon y Mercado Libre se ponen a la cabeza, pues son las dos firmas que concentran casi un cuarto del total de mercado de este rubro. Al cierre de 2019, Amazon contaba con un cuota de mercado del 13.4%, que lo colocaba al frente de los demás competidores. Ese mismo año, con 11.4% se encontraba Mercado Libre.”

[17] Stephanie Chevalier, E-commerce Market Share in Latin America and the Caribbean 2023, By Country, Statista (Mar. 25, 2024), https://www.statista.com/statistics/434042/mexico-most-visited-retail-websites (“Over the last few years, online buying and selling have gained considerable ground in Mexico, so much so that the country has positioned itself as the second largest e-commerce market in Latin America. With a rapidly increasing online buying population, it was forecast that nearly 70 million Mexicans would be shopping on the internet in 2023, a figure that would grow by over 26 percent by 2027.”).

[18] Richard Posner, Antitrust Law (2nd. Ed. 2001), at 73-74.

[19] Id., at 74.

[20] David S. Evans & Richard Schmalensee, Debunking the “Network Effects” Bogeyman, Regulation (Winter 2017-2018), at 39, available at https://www.cato.org/sites/cato.org/files/serials/files/regulation/2017/12/regulation-v40n4-1.pdf.

[21] For a few examples from a very large body of literature, see, e.g., Steven Berry, Martin Gaynor, & Fiona Scott Morton, Do Increasing Markups Matter? Lessons from Empirical Industrial Organization, 33 J. Econ. Perspectives 44 (2019); Richard Schmalensee, Inter-Industry Studies of Structure and Performance, in 2 Handbook of Industrial Organization 951-1009 (Richard Schmalensee & Robert Willig, eds., 1989); William N. Evans, Luke M. Froeb, & Gregory J. Werden, Endogeneity in the Concentration-Price Relationship: Causes, Consequences, and Cures, 41 J. Indus. Econ. 431 (1993); Steven Berry, Market Structure and Competition, Redux, FTC Micro Conference (Nov. 2017), available at https://www.ftc.gov/system/files/documents/public_events/1208143/22_-_steven_berry_keynote.pdf; Nathan Miller, et al., On the Misuse of Regressions of Price on the HHI in Merger Review, 10 J. Antitrust Enforcement 248 (2022).

[22] Chad Syverson, Macroeconomics and Market Power: Context, Implications, and Open Questions 33 J. Econ. Persp. 23 (2019), at 26.

[23] See, relatedly, Alden Abbott, FTC’s Amazon Complaint: Perhaps the Greatest Affront to Consumer and Producer Welfare in Antitrust History, Truth on the Market (Sep. 27, 2023), https://truthonthemarket.com/2023/09/27/ftcs-amazon-complaint-perhaps-the-greatest-affront-to-consumer-and-producer-welfare-in-antitrust-history.

[24] See, e.g., Giuseppe Colangelo & Oscar Borgogno, App Stores as Public Utilities?, Truth on the Market (Jan. 19, 2022), https://truthonthemarket.com/2022/01/19/app-stores-as-public-utilities.


ICLE Comments to Federal Reserve Board on Regulation II NPRM

Executive Summary

In this comment, we argue that the Federal Reserve Board’s interpretation of the Durbin amendment has had the opposite effect to that intended by the legislation. Specifically, it has harmed lower-income consumers and benefited the shareholders of large merchants. To understand how and why this has happened, we look at two aspects of the provision’s implementation: the price controls the Board imposed through Regulation II, and the competitive-routing requirement included in the Durbin amendment itself. We then consider the likely effects of the changes proposed in the NPRM and conclude that these will exacerbate the harms already inflicted by Regulation II. We encourage the Board to consider alternative approaches that would mitigate Regulation II’s harms, including raising or, ideally, eliminating the cap on interchange fees.

I. Introduction

The International Center for Law & Economics (“ICLE”) thanks the Board of Governors of the Federal Reserve System (“Board”) for the opportunity to comment on this notice of proposed rulemaking (“NPRM”), which calls for updates to components of the interchange-fee cap established by Regulation II.[1]

Section 1075 of the Dodd-Frank Wall Street Reform and Consumer Protection Act (the “Dodd-Frank Act”)—commonly referred to as the “Durbin amendment”—required the Board to issue regulations that would limit debit-card interchange fees charged by lenders with assets of more than $10 billion (“covered banks”), such that:

The amount of any interchange transaction fee that an issuer may receive or charge with respect to an electronic debit transaction shall be reasonable and proportional to the cost incurred by the issuer with respect to the transaction.[2]

Sen. Richard Durbin (D-Ill.) stated in 2010 that his amendment “would enable small businesses and merchants to lower their costs and provide discounts for their customers.”[3] Yet the evidence to date demonstrates that, in practice, the provision has done little, if anything, to reduce costs for small businesses and merchants; indeed, many have seen their costs rise.[4] Meanwhile, consumers have seen little, if any, savings from merchants, and have been harmed by higher banking fees.[5]

These problems are, at least in part, a consequence of the way the Board chose to interpret the phrase “reasonable and proportional to the cost incurred by the issuer with respect to the transaction.” Specifically, as the Board notes in its summary of the present NPRM:

Under the current rule, for a debit card transaction that does not qualify for a statutory exemption, the interchange fee can be no more than the sum of a base component of 21 cents, an ad valorem component of 5 basis points multiplied by the value of the transaction, and a fraud-prevention adjustment of 1 cent if the issuer meets certain fraud-prevention standards.

The Board now proposes to reduce further the interchange fees that covered banks may charge for debit-card transactions. Specifically:

Initially, under the proposal, the base component would be 14.4 cents, the ad valorem component would be 4.0 basis points (multiplied by the value of the transaction), and the fraud-prevention adjustment would be 1.3 cents for debit card transactions performed from the effective date of the final rule to June 30, 2025.
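
To make the two fee schedules concrete, the sketch below applies each cap formula to a hypothetical $50 debit transaction (the transaction amount is our own illustrative assumption, not a figure from the NPRM):

```python
# Regulation II cap: base + (ad valorem rate x transaction value) + fraud adjustment.
# The $50.00 transaction amount is a hypothetical, illustrative figure.

def fee_cap(amount, base, ad_valorem_bps, fraud_adj):
    """Maximum permissible interchange fee for a single transaction, in dollars."""
    return base + (ad_valorem_bps / 10_000) * amount + fraud_adj

amount = 50.00
current = fee_cap(amount, base=0.21, ad_valorem_bps=5, fraud_adj=0.01)
proposed = fee_cap(amount, base=0.144, ad_valorem_bps=4.0, fraud_adj=0.013)

print(f"current cap:  ${current:.3f}")    # $0.245
print(f"proposed cap: ${proposed:.3f}")   # $0.177
```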

In this comment, we question the Board’s interpretation of the underlying legislation by citing, among other things, research conducted by employees of the Board and published by the Board.

II. Can Price Controls Be Reasonable and Proportional?

The heart of the matter is the meaning of “reasonable and proportional to the cost incurred by the issuer with respect to the transaction.” In most respects, the Board has chosen to interpret this phrase narrowly to refer to the pecuniary costs directly associated with the electronic processing of each transaction ($0.21 plus 0.05% of the value of the transaction). But even in deploying this narrow interpretation, the Board has been inconsistent, as:

  1. These fees represent, at best, an average of the pecuniary cost; and
  2. The Board permits an issuer to add $0.01 if it “meets certain fraud-prevention standards.”[6]

This latter component clearly is not transaction-specific, as it is intended to cover the cost of investments made in security infrastructure.

A. What’s in a Cost?

The Board’s approach to “cost” fails to consider the two-sided nature of payment-card markets. A 2017 staff working paper by Board economists Mark D. Manuszak and Krzysztof Wozniak notes:

Interchange fees play a central role in theoretical models of payment card networks, which emphasize the card market’s two-sided nature (for example, Rochet and Tirole (2002)).[7] On one side of the market, interchange fees alter acquirers’ costs, influencing the transaction fees they charge merchants. On the other side of the market, interchange fees provide a source of revenue that defrays issuers’ costs of card services for accountholders, and, thus, influence fees that banks charge accountholders. As a result, these theoretical models broadly predict that a reduction in interchange fees will induce issuers to increase prices for accountholders.

However, theoretical models of two-sided markets rely on an overly simple characterization of issuers, which diverges from reality in three important ways. First, issuers use nonlinear, account-based pricing rather than per-transaction fees typically assumed by the theory but rarely observed in reality. The theoretical literature on nonlinear pricing emphasizes the sensitivity of consumer demand to different price components. For the debit card industry, it predicts that higher costs will result in increases in prices for which consumers’ demand is less sensitive, and lower or no rises in prices to which the demand is more sensitive.

Second, issuers are multiproduct firms, cross-selling a variety of products in addition to card transactions. The theoretical literature on multiproduct pricing predicts that a firm’s price for one good will internalize its impact on the demand for the firm’s other products. In the debit card industry, this implies that, since a bank is best positioned to offer additional services to consumers who are already its accountholders, the price for such an account is less likely to reflect higher costs than it would otherwise.

Finally, issuers are heterogeneous firms, subject to idiosyncratic cost shocks based on their status under the regulation, and compete for customers in the market for banking services. An issuer’s prices are not determined in isolation by its costs and the market demand, but rather jointly with other issuers’ prices…. [8]

In the decade prior to the Dodd-Frank Act, banks had increased the availability of free checking accounts (Figure I) and reduced the fees on non-interest-bearing checking accounts (Figure II), which had widespread benefits. First, it enabled more people to open and maintain bank accounts, thereby reducing the proportion of unbanked and underbanked Americans. Second, it contributed to a shift toward electronic payments, as many consumers who previously lacked access to payment cards now had a debit card (Figure III). This shift was driven further by banks offering rewards that encouraged the use of debit cards. Since the provision of checking accounts generates associated costs, banks that expanded their offerings of free and/or low-fee accounts had to recoup those costs elsewhere. They did so, in part, through revenue from interchange fees on debit cards.

In a 2014 staff working paper, Board economists Benjamin S. Kay, Mark D. Manuszak, and Cindy M. Vojtech found that Regulation II reduced annual interchange-fee revenue at covered banks by $14 billion.[9] Meanwhile, in their aforementioned 2017 paper, Manuszak and Wozniak showed that, following Regulation II’s implementation, covered banks sought to recoup the revenue lost due to lower interchange fees by increasing fees on checking accounts; reducing the availability of free checking accounts; and increasing the minimum balance required to maintain a free checking account. This resulted in “lower availability of free accounts, higher monthly fees, lower likelihood that the monthly fee could be avoided, and a higher minimum balance to avoid the fee.”[10]

Moreover, Manuszak and Wozniak show that “checking account pricing at covered banks appears primarily driven by the interchange fee restriction rather than other factors related to the financial crisis or subsequent regulatory initiatives.”[11] Finally, in the version of Kay et al.’s paper published in the Journal of Financial Intermediation, the authors “find that retail banks subject to the cap were able to offset nearly all of lost interchange income through higher fees on deposit services.”[12]

In a more recent study, Georgetown University economist Vladimir Mukharlyamov and University of Pennsylvania economist Natasha Sarin estimated that Regulation II caused covered banks to lose $5.5 billion annually, but that they recouped 42% of those losses from account holders. As a result:

the share of free checking accounts fell from 61 percent to 28 percent as a result of Durbin. Average checking account fees rose from $3.07 per month to $5.92 per month. Monthly minimums to avoid these fees rose by 21 percent, and monthly fees on interest-bearing checking accounts also rose by nearly 14 percent. These higher fees are disproportionately borne by low-income consumers whose account balances do not meet the monthly minimum required for fee waiver.[13]

FIGURE I: Proportion of Banks Offering Free Checking Accounts, 2003-2016

SOURCE: Bankrate

FIGURE II: Average Fees for Checking Accounts, 1998-2023

SOURCE: Bankrate

FIGURE III: US Shares of Noncash Payments by Transaction Volume, 2000-2020

SOURCE: Authors’ calculations based on data from Federal Reserve payment studies

B. Effects on ‘Exempt’ Banks and Credit Unions

In a letter to Senate Banking Committee Chairman Chris Dodd (D-Conn.), House Financial Services Committee Chairman Barney Frank (D-Mass.), and the conferees selected to finalize the Dodd-Frank Act, Durbin claimed that:

Under the Durbin amendment, the requirement that debit fees be reasonable does not apply to debit cards issued by institutions with assets under $10 billion. This means that Visa and MasterCard can continue to set the same debit interchange rates that they do today for small banks and credit unions. Those institutions would not lose any interchange revenue that they currently receive.[14]

Yet as can be seen clearly in Figure IV, average per-transaction debit-card interchange fees fell across the board. For covered issuers, average interchange fees per-transaction fell to the regulated maximum for both covered dual-message (signature) transactions and single-message (PIN) transactions immediately following implementation of Regulation II in October 2011. Meanwhile, adjusting for inflation, average fees per-transaction for exempt issuers fell by about 10% for dual-message transactions.

Average fees per-transaction for single-message transactions, however, fell by 30% over the course of eight years. By 2019, they were only marginally higher than the regulated maximum for covered banks, despite the claimed intent to protect smaller issuers from the effects of the debit-interchange cap. The cause of this decline was the addition of the following subsections to the Electronic Fund Transfer Act (EFTA):[15]

  • EFTA Section 920(b)(1) prohibits issuers and payment networks from imposing network-exclusivity arrangements. In particular, all issuers must ensure that debit-card payments can be routed over at least two unaffiliated networks.
  • EFTA Section 920(b)(1)(B) prohibits issuers and payment networks from restricting merchants’ and acquirers’ ability to choose the network over which to route a payment.

These changes, which were dictated by the Durbin amendment, enabled merchants to route transactions over lower-cost networks. That has effectively forced the networks subject to such competition—primarily single-message (PIN) networks—to reduce the fees set for exempt banks so that they are in line with those set for covered banks.

This has inevitably caused many exempt banks and credit unions to experience losses similar to those experienced by covered banks. Indeed, in some cases, the effects have been markedly worse, because smaller banks and credit unions lack the advantage of scale.

FIGURE IV: Fee Per Transaction, Covered v Exempt Issuers, Single v Dual Message Networks (2011 Dollars)

SOURCE: Federal Reserve, St. Louis FRED[16]

C. Asymmetric Pass-Through

In a 2014 paper published by the Federal Reserve Bank of Richmond, Zhu Wang, Scarlett Schwartz, and Neil Mitchell analyzed the results of a then-recent merchant survey conducted by the Federal Reserve Bank of Richmond and Javelin Strategy & Research, which sought to understand the Durbin amendment’s effects on merchants and the response of those merchants. The authors found that, while some merchants enjoyed reductions following Regulation II’s implementation in the merchant-discount rate they paid, others saw their debit-card acceptance costs rise.[17] They also found an asymmetric response: merchants who saw their prices increase typically passed those increased costs onto their customers, while very few of those who saw their debit costs decrease passed those savings onto customers.

Using proprietary data from banks and one of the card networks, economists Vladimir Mukharlyamov and Natasha Sarin estimated that merchants passed through “at most” 28% of their debit-card interchange-fee savings to consumers.[18] The “at most” is worth qualifying: the authors base their analysis on savings at gas stations, but they note that:

It turns out, however, that the standard deviation of per-gallon gas prices ($0.252) is 168 times larger than the average per-gallon debit interchange savings ($0.0015). Relatedly, total Durbin savings for gas merchants amount to less than 0.07% of total sales. These points render the quantification of merchants’ pass-through with statistical significance virtually impossible. The existence of payment instruments exempt from Durbin and the presence of a fixed component in the regulation’s interchange-fee formula further complicate pass-through even for merchants willing to share savings, however small, with consumers.[19]

Meanwhile, as noted, they estimated that banks passed through 42% of their interchange-fee revenue losses to consumers. They estimate that the net result of this was a $4 billion transfer to merchants, of which $3.2 billion came directly from banks and $0.8 billion from consumers, who paid $2.3 billion more in higher checking fees, but received only $1.5 billion in lower retail prices.
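
The rough arithmetic behind these transfer figures can be reconstructed from the rounded numbers quoted above (a sketch only; Mukharlyamov and Sarin's own estimates are more granular):

```python
# Rough reconstruction of the Mukharlyamov & Sarin transfer estimates ($ billions).
annual_losses = 5.5            # interchange revenue lost annually by covered banks

bank_passthrough = 0.42        # share of losses recouped via higher account fees
merchant_passthrough = 0.28    # upper bound on savings passed on to consumers

higher_account_fees = bank_passthrough * annual_losses         # ~2.3
lower_retail_prices = merchant_passthrough * annual_losses     # ~1.5

net_consumer_loss = higher_account_fees - lower_retail_prices  # ~0.8
net_bank_loss = annual_losses - higher_account_fees            # ~3.2
net_merchant_gain = annual_losses - lower_retail_prices        # ~4.0

print(higher_account_fees, lower_retail_prices,
      net_consumer_loss, net_bank_loss, net_merchant_gain)
```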

D. Effects on Lower-Income Consumers

In a 2014 ICLE paper, Todd Zywicki, Geoffrey Manne, and Julian Morris offered a back-of-the-envelope calculation of the best-case scenario for the net effect of Regulation II on the “average” American consumer:

In 2012, the average household spent $30,932 in total on food, apparel, transportation, entertainment, healthcare, and other items that could have been purchased using a payment card (out of a total household expenditure of $51,442). If all of those items were purchased on debit cards and all were purchased from larger retailers and those larger retailers passed on all their savings (averaging 0.7%), then the average household would have saved $216.50. And that is the absolute best case – and most unlikely – scenario. But now assume that average household has two earners, each with a bank account that was previously free but now costs $12 per month. In that case, the household’s costs would have risen by $71.50 as a result of the Durbin Amendment. In other words, even in the best case, lower-middle income and poorer households who have lost access to a free current account—which is likely a majority—will be worse off after the Durbin Amendment.[20]
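
That back-of-the-envelope calculation can be reproduced directly from the figures quoted above (a sketch of the quoted best-case scenario, not new analysis):

```python
# Best-case scenario from Zywicki, Manne & Morris (2014), using the quoted figures.
card_eligible_spending = 30_932   # annual household spending payable by card ($)
merchant_savings_rate = 0.007     # assumed full pass-through of 0.7% merchant savings

best_case_savings = card_eligible_spending * merchant_savings_rate    # ~$216.50

earners = 2                       # two account holders per household
monthly_fee = 12                  # previously free checking account, now $12/month
new_account_fees = earners * monthly_fee * 12                          # $288

net_effect = best_case_savings - new_account_fees                      # ~ -$71.50
print(round(best_case_savings, 2), new_account_fees, round(net_effect, 2))
```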

While the average consumer likely fared poorly, Regulation II was, quite frankly, a disaster for many lower-income consumers. Using data from the Board’s Survey of Consumer Finances, Mukharlyamov and Sarin found that:

over 70 percent of consumers in the lowest income quintile (annual household income of $22,500 or less) bear higher account fees, since they fall below the average post-Durbin account minimum required to avoid a monthly maintenance fee ($1,400). In contrast, only 5 percent of consumers in the highest income quintile (household income of $157,000 or more) fall below this threshold.[21]

Worse, Regulation II almost certainly resulted in an increase in the number of unbanked Americans. Mukharlyamov and Sarin note:

Nearly 8 percent of Americans were unbanked in 2013, with nearly 10 percent of this group becoming unbanked in the last year. Using data from the FDIC National Survey of Unbanked and Underbanked Households, in Table 12 we show that immediately following Durbin there is a significant growth (81 percent increase relative to survey pre-Durbin) in the share of the unbanked population that credits high account fees as the main reason for their not having a bank account. This difference is significant at the 1 percent level.

Respondents in states most impacted by Durbin (those with the highest share of deposits at banks above the $10 billion threshold) are most likely to attribute their unbanked status post-Durbin to high fees (over 15 percent of those surveyed in the highest Durbin tercile). The growth in the recently unbanked (those who had accounts previously but closed them within the last year) is also highest in states with the most Durbin banks, where the increase in account fees is most pronounced. As with the overall sample, these differences are significant at the 1 percent level. This suggests that at least some bank customers respond to Durbin fee increases by severing their banking relationship and potentially turning to more expensive alternative financial services providers such as payday lenders and check-cashing facilities.[22]

III. Conclusion

It is worth noting that the Board was well aware of the two-sided nature of payment-network markets and the implications for setting interchange fees prior to issuing Regulation II. A 2009 staff working paper by Robin A. Prager, Mark D. Manuszak, Elizabeth K. Kiser, and Ron Borzekowski stated:

A few characteristics of an efficient interchange fee are worth noting:

  • In general, an efficient interchange fee is not solely dependent on the cost of producing a card-based transaction nor is it equal to zero.

  • An efficient interchange fee may yield prices for card services to each side of the market that are “unbalanced” in the sense that one side pays a higher price than the other.

  • The efficient interchange fee for a particular card network is difficult to determine empirically.[23]

Based on the foregoing analysis, it appears clear that the optimal debit-card interchange fee is higher than that currently permitted for covered banks under Regulation II—and for exempt banks subject to Durbin’s routing mandates. It is, therefore, rather disconcerting that the Board would contemplate reducing the interchange fee further still in the NPRM to which this comment is addressed. If the Board wished to establish a “reasonable and proportional” fee for debit-card interchange, it would instead raise the cap. Indeed, since it remains “difficult to determine empirically” the efficient interchange fee for any card network, the Board should acknowledge that markets are the best mechanism to establish such fees, and remove the price controls altogether.

[1] Debit Card Interchange Fees and Routing, 88 Fed. Reg. 78100 (Nov. 14, 2023), https://www.federalregister.gov/documents/2023/11/14/2023-24034/debit-card-interchange-fees-and-routing.

[2] Pub. L. 111–203, 124 Stat. 1376 (2010), available at https://www.govinfo.gov/content/pkg/PLAW-111publ203/pdf/PLAW-111publ203.pdf.

[3] Press Release, Durbin Sends Letter to Wall Street Reform Conferees on Interchange Amendment, Office of Sen. Richard Durbin (May 25, 2010), https://www.durbin.senate.gov/newsroom/press-releases/durbin-sends-letter-to-wall-street-reform-conferees-on-interchange-amendment.

[4] Zhu Wang, Scarlett Schwartz, & Neil Mitchell, The Impact of the Durbin Amendment on Merchants: A Survey Study, 100 (3) Econ Quar. (Fed. Rsrv. Bank of Richmond) 183-208 (2014).

[5] Todd J. Zywicki, Geoffrey A. Manne, & Julian Morris, Price Controls on Payment Card Interchange Fees: The U.S. Experience, George Mason Law & Economics Research Paper No. 14-18 (2014); Geoffrey A. Manne, Julian Morris, & Todd J. Zywicki, Unreasonable and Disproportionate: How the Durbin Amendment Harms Poorer Americans and Small Businesses, Int’l. Ctr. Law & Econ. (Apr. 25, 2017), available at https://laweconcenter.org/wp-content/uploads/2017/08/icle-durbin_update_2017_final-1.pdf.

[6] 76 Fed. Reg. 43394 (Jul. 20, 2011), https://www.federalregister.gov/documents/2011/07/20/2011-16861/debit-card-interchange-fees-and-routing and specifically 76 Fed. Reg. 43466 (Jul. 20, 2011), available at https://www.govinfo.gov/content/pkg/FR-2011-07-20/pdf/2011-16861.pdf; Debit Card Interchange Fees and Routing (Regulation II), 12 C.F.R. § 235 (2011), https://www.ecfr.gov/current/title-12/part-235.

[7] Jean-Charles Rochet & Jean Tirole, Cooperation Among Competitors: Some Economics of Payment Card Associations, 33(4) RAND J. Econ. 549-570 (2002).

[8] Mark D. Manuszak & Krzysztof Wozniak, The Impact of Price Controls in Two-sided Markets: Evidence from US Debit Card Interchange Fee Regulation, Finance and Economics Discussion Series 2017-074, Fed. Rsrv. (Jul. 2017), https://www.federalreserve.gov/econres/feds/the-impact-of-price-controls-in-two-sided-markets-evidence-from-us-debit-card-interchange-fee-regulation.htm.

[9] Benjamin S. Kay, Mark D. Manuszak, & Cindy M. Vojtech, Bank Profitability and Debit Card Interchange Regulation: Bank Responses to the Durbin Amendment, Finance and Economics Discussion Series 2014-77, Fed. Rsrv. (Sep. 2014), https://www.federalreserve.gov/econres/feds/bank-profitability-and-debit-card-interchange-regulation-bank-responses-to-the-durbin-amendment.htm.

[10] Manuszak & Wozniak, supra note 8, at 21.

[11] Id.

[12] Benjamin S. Kay, Mark D. Manuszak, & Cindy M. Vojtech, Competition and Complementarities In Retail Banking: Evidence from Debit Card Interchange Regulation, 34 J. Financ. Intermed. 91–108 (2018), at 104.

[13] Vladimir Mukharlyamov & Natasha Sarin, Price Regulation in Two-Sided Markets: Empirical Evidence from Debit Cards, SSRN (Nov. 24, 2022), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3328579.

[14] Supra note 3.

[15] 12 C.F.R. § 235.1

[16] Regulation II (Debit Card Interchange Fees and Routing), Fed. Rsrv., https://www.federalreserve.gov/paymentsystems/regii-data-collections.htm; Consumer Price Index: All Items for the United States, Fed. Rsrv. Bank of St. Louis, https://fred.stlouisfed.org/series/USACPIALLMINMEI (last visited Aug. 10, 2022).

[17]  Wang et al., supra note 4. Some merchants saw their acceptance costs increase because—prior to Dodd-Frank’s price controls—some merchants, especially smaller merchants, had received discounts on acceptance costs. But the imposition of price ceilings also effectively created a price floor, leading some merchants to pay higher fees than previously.

[18] Supra note 13.

[19] Id. at 4.

[20] Manne, Zywicki, & Morris, supra note 5.

[21] Id. at 30.

[22] Id. at 30-31.

[23] Robin A. Prager, Mark D. Manuszak, Elizabeth K. Kiser, & Ron Borzekowski, Interchange Fees and Payment Card Networks: Economics, Industry Developments, and Policy Issues, Finance and Economics Discussion Series 2009-23, Fed. Rsrv. (Jun. 2009), available at https://www.federalreserve.gov/pubs/feds/2009/200923/200923pap.pdf.


ICLE Comments on India’s Draft Digital Competition Act

Regulatory Comments A year after it was created by the Government of India’s Ministry of Corporate Affairs to examine the need for a separate law on competition . . .

A year after it was created by the Government of India’s Ministry of Corporate Affairs to examine the need for a separate law on competition in digital markets, India’s Committee on Digital Competition Law (CDCL) in February both published its report[1] recommending adoption of such rules and submitted the draft Digital Competition Act (DCA), which is virtually identical to the European Union’s Digital Markets Act (DMA).[2]

The EU has touted its new regulation as essential to ensure “fairness and contestability” in digital markets. And since its obligations took full effect early last month,[3] the DMA has imposed strict pre-emptive rules on so-called digital “gatekeepers,”[4] a cohort of mostly American tech giants like Google, Amazon, Apple, Meta, and Microsoft.

But despite the impressive public-relations campaign[5] that the DMA’s proponents have been able to mount internationally, India should be wary of reflexively importing these ready-made and putatively infallible solutions that promise to “fix” the world’s most successful digital platforms at little or no cost.

I. Not So Fast

The first question India should ask itself is: why?[6] Echoing the European Commission, the CDCL argues that strict ex-ante rules are needed because competition-law investigations in digital markets are too time-consuming. But this could be a feature, not a bug, of competition law. Digital markets often involve novel business models and zero- or low-price products, meaning that there is nearly always a plausible procompetitive explanation for the impugned conduct.

When designing rules and presumptions in a world of imperfect information, the general theme is that, as confidence in public harm goes up, the evidentiary burden must go down. This is why antitrust law tilts the field in the enforcer’s favor in cases involving practices that are known to always, or almost always, be harmful. But none of the conduct covered by the DCA falls into this category. Unlike with, say, price-fixing cartels or territorial divisions, there is currently no consensus that the practices the DMA would prohibit are generally harmful or anticompetitive. To the contrary, when assessing a self-preferencing case against Google in 2018, the Competition Commission of India (CCI) found important consumer benefits[7] that outweighed any inconveniences the conduct may have imposed on competitors.

By imposing per se rules with no scope for consumer-welfare or efficiency exemptions, the DCA could capture swaths of procompetitive conduct. This is a steep—and possibly irrational—price to pay for administrative expediency. Rather than adopt a “speed-at-all-costs” approach, India should design its rules to minimize error costs and ensure the system’s overall efficiency.

II. The Costs of Ignoring Cost-Benefit Analysis

But this cannot be done, or it cannot be done rationally, unless India is crystal clear about what the costs and benefits of digital-competition regulation are. As things stand, it is unclear whether this question has been given sufficient thought.

For one, the DCA’s goals do not seem to align well with competition law. While competition law protects competition for the ultimate benefit of consumers, the DCA—like the DMA—is concerned with aiding rivals, rather than benefiting consumers. Unmooring digital-competition regulation from consumer welfare is ill-advised. It exposes the enforcer to aggressive rent seeking by private parties with a vested interest in never being satisfied,[8] who may demand far-reaching product-design changes that don’t jibe with what consumers—i.e., the public at large—actually want.

Indeed, when the system’s lodestar shifts from benefiting consumers to facilitating competitors, there is a risk that the only tangible measure of the law’s success will be the extent to which rivals are satisfied[9] with gatekeepers’ product-design changes, and their relative market-share fluctuations. Sure enough, the European Commission recently cited stakeholders’ dissatisfaction[10] as one of the primary reasons to launch five DMA noncompliance investigations, mere weeks after the law’s entry into force. In the DCA’s case, the Central Government’s ability to control CCI decisions further exacerbates the risk of capture and political decision making.

While digital-competition regulation’s expected benefits remain unclear and difficult to measure, there are at least three concrete types of costs that India can, and should, consider.

First, there is the cost of harming consumers and diminishing innovation. Mounting evidence from the EU demonstrates this to be a very real risk. For example, Meta’s Threads was delayed[11] in the EU bloc due to uncertainties about compliance with the DMA. The same happened with Gemini, Google’s AI program.[12] Some product functionalities have also been degraded. For instance, in order to comply with the DMA’s strict self-preferencing prohibitions, maps that appear in Google’s search results no longer link to Google Maps, much to the chagrin of European users.[13]

Google has also been forced to remove[14] features like hotel bookings and reviews from its search results. Until it can accommodate competitors who offer similar services (assuming that is even possible), these specialized search results will remain buried several clicks away from users’ general searches. Not only is this inconvenient for consumers, but it has important ramifications for business users.

Early estimates suggest that clicks from Google ads to hotel websites decreased by 17.6%[15] as a result of the DMA. Meanwhile, on iOS, rivals like Meta[16] and Epic Games[17] are finding it harder than they expected to offer competing app stores or payment services. At least some of this is due to the reality that offering safe online services is a costly endeavor. Apple reviews millions of apps every year[18] to weed out bad actors, and replicating this business is easier said than done. In other words, the DMA is falling short even on its own terms.

In other cases, consumers are likely to be saddled with a litany of pointless choices, as well as changes in product design that undermine user experience. For example, the European Commission appears to believe that the best way to ensure that Apple doesn’t favor its own browser on iOS is by requiring consumers to sift through 12 browser offerings[19] presented on a choice screen.[20] But consumers haven’t asked for this “choice.” The simple explanation for the policy’s failure is that, despite the DMA’s insistence to the contrary, users were always free to choose their preferred browser.

Supporters of digital-competition regulation will no doubt retort that India should also consider the costs of inaction. This is certainly true. But it should do so against the background of the existing legal framework, not a hypothetical legal and regulatory vacuum. Digital platforms are already subject to general (and fully functional) competition law, as well as to a range of other sector-specific regulations.

For instance, Amazon and Flipkart are precluded by India’s foreign-direct-investment (FDI) policy from offering first-party sales[21] to end-users on their e-commerce platforms. In addition, the CCI has launched several investigations of digital-platform conduct that would presumably be caught by the DCA, including by Google,[22] Amazon,[23] Meta,[24] Apple,[25] and Flipkart.[26]

The facile dichotomy drawn between digital-competition regulation and “the digital wild west”[27] is essentially a red herring. Nobody is saying that digital platforms should be above the law. Rather, the question is whether a special competition law is necessary and justified considering the costs such a law would engender, as well as the availability of other legal and regulatory instruments to tackle the same conduct.

This is particularly the case when these legal and regulatory instruments incorporate time-honed analytical tools, heuristics, and procedural safeguards. In 2019, India’s Competition Law Review Committee[28] concluded that a special law was unnecessary. In a report titled “Competition Policy for the Digital Era,”[29] a panel of experts retained by the European Commission reached the same conclusion.

A second set of costs, which complicates the question further still, stems from the fact that the DCA would mark a paradigm shift for Indian competition policy. In 2000, the Raghavan Committee Report was crucial in aligning Indian competition law with international best practices, including by moving analysis away from blunt structural presumptions and toward the careful observance of economic effects. As such, it paved the way for the 2002 Competition Act—a milestone of Indian law.

The DCA, by contrast, would overturn these advancements to target companies based on size, obviating any effects analysis. This would amount to taking Indian competition law back to the era of the Monopolies and Restrictive Trade Practices Act of 1969 (MRTP). Again, is the hodgepodge of products and services known collectively as “digital markets” sufficiently unique to warrant such a drastic deviation from well-established antitrust doctrine?

The third group of costs that the government must consider is the DCA’s enforcement costs. The five DMA noncompliance investigations launched recently by the European Commission have served to dispel the once-common belief that the law would be “self-executing”[30] and that its enforcement would be collaborative, rather than adversarial. The Commission has just 80 dedicated staff,[31] and many believe it is understaffed[32] to enforce the DMA (initially, the most optimistic officials asked for 220 full-time employees).[33] If the EU—a sprawling regulatory superstate[34]—struggles to find the capacity to deploy digital-competition rules, can India expect to fare any better?

Enforcing the DCA would require expertise in a range of fields, including competition law, data privacy and security, telecommunications, and consumer protection, among others. Either India can produce these new experts, or it will have to siphon them from somewhere else. This raises the question of opportunity costs. Assuming that India even can build a team to enforce the DCA, the government would also need to be reasonably certain that, given the significant overlaps in expertise, these resources wouldn’t yield better returns if allocated elsewhere—such as, for example, in the fight against cartels or other more obviously nefarious conduct.

In short, if the government cannot answer the question of how much the Indian public stands to gain for every Rupee of public money invested into enforcing the DCA, it should go back to the drawing board and either redesign or drop the DCA altogether.

III. India Is Not Europe

When deciding whether to adopt digital-competition rules, India should consider its own interests and play to its strengths. These need not be the same as Europe’s and, indeed, it would be surprising if they were. Despite the European Commission’s insistence to the contrary, the DMA is not a law that enshrines general or universal economic truths. It is, and always has been, an industrial policy tool,[35] designed to align with the EU’s strengths, weaknesses, and strategic priorities. One cannot just assume that these idiosyncrasies translate into the Indian context.

As International Center for Law & Economics President Geoffrey Manne has written,[36] promotion of investment in the infrastructure required to facilitate economic growth and provision of a secure environment for ongoing innovation are both crucial to the success of developing markets like India’s. Securing these conditions demands dynamic and flexible competition policymaking.

For young, rapidly growing industries like e-commerce and other digital markets, it is essential to attract consistent investment and industry know-how in order to ensure that such markets are able to innovate and evolve to meet consumer demand. India has already witnessed a few leading platforms help build the necessary infrastructure during the nascent stages of sectoral development; continued investment along these lines will be essential to ensure continued consumer benefits.

In the above context, emulating the EU’s DMA approach could be a catastrophic mistake. Indian digital platforms are still not as mature as the EU’s, and a copy-and-paste of the DMA may prove unfit for the particular attributes of India’s market. The DCA could potentially capture many Indian companies. Paytm, Zomato, Ola Cabs, Nykaa, AllTheRooms, Squeaky, Flipkart, MakeMyTrip, and Meesho (among others) are some of the companies that could be stifled by this new regulatory straitjacket.

This would not only harm India’s competitiveness, but would also deny consumers important benefits. Despite India’s remarkable economic growth over the last decade, it remains underserved by the most powerful consumer and business technologies, relative to its peers in Europe and North America. The priority should be to continue to attract and nurture investment, not to impose regulations that may further slow the deployment of critical infrastructure.

Indeed, this also raises the question of whether the EU’s objectives with the DMA are even ones that India would want to emulate. While the DMA’s effects are likely to be varied, it is clear that one major impetus for the law is distributional: to ensure that platform users earn a “fair share” of the benefits they generate. Such an approach could backfire, however, as using competition policy to reduce profits may simply lead to less innovation and significantly reduced benefits for the very consumers it is supposed to help. This risk is significantly magnified in India, where the primary need is to ensure the introduction and maintenance of innovative technology, rather than fine tuning the precise distribution of its rewards.

A DMA-like approach could imperil the domestic innovation that has been the backbone of initiatives like Digital India[37] and Startup India.[38] Implementation of a DMA-like regime would discourage growing companies that may not be able to cope with the increased compliance burden. It would also impose enormous regulatory burdens on the government and great uncertainty for businesses, as a DMA-like regime would require the government to define and quantify competitive benchmarks for industries that have not yet even grown out of their nascent stages. At a crucial juncture when India is seen as an investment-friendly nation,[39] implementation of a DMA-like regime could create significant roadblocks to investment—all without any obligation on the part of the government to ensure that consumers benefit.

This is because ex-ante regimes impose preemptive constraints on digital platforms, with no consideration of possible efficiencies that benefit consumers. While competition enforcement in general may tend to promote innovation, jurisdictions that do not allow for efficiency defenses tend to produce relatively less innovation, as careful, case-by-case competition enforcement is replaced with preemptive prohibitions that impede experimentation.

Regulation of digital markets that have yet to reach full maturity is bound to create a more restrictive environment that will harm economic growth, technological advancement, and investment. For India, it is crucial that a nuanced approach is taken to ensure that digital markets can sustain their momentum, without being bogged down by various and unnecessary compliance requirements that are likely to do more harm than good.

IV. Conclusion

In a multi-polar world, developing countries can no longer be expected to mechanically adopt the laws and regulations demanded of them by senior partners to trade agreements and international organizations. Nor should they blindly defer to foreign legislatures, who may (and likely do) have vastly different interests and priorities than their own.

Nobody is denying that the EU has provided many useful legal and regulatory blueprints in the past, many of which work just as well abroad as they do at home. But based on what we know so far, the DMA is not poised to become one of them. It is overly stringent, ignores efficiencies, is indifferent about effects on consumers, incorporates few procedural safeguards, is lukewarm on cost-benefit analysis, and risks subverting well-established competition-law principles. These notably include that the law should ultimately protect competition, not competitors.

Rather than instinctively playing catch-up, India could ask the hard questions that the EU eschewed for the sake of a quick political victory against popular bogeymen. What is this law trying to achieve? What are the DCA’s supposed benefits? What are its potential costs? Do those benefits outweigh those costs? If the answer to these questions is ambivalent or negative, India’s digital future may well lie elsewhere.

[1] Report of the Committee on Digital Competition Law, Government of India Ministry of Corporate Affairs (Feb. 27, 2024), https://www.mca.gov.in/bin/dms/getdocument?mds=gzGtvSkE3zIVhAuBe2pbow%253D%253D&type=open.

[2] Regulation (EU) 2022/1925 of the European Parliament and of the Council, on contestable and fair markets in the digital sector and amending Directives (EU) 2019/1937 and (EU) 2020/1828 (Digital Markets Act) (Text with EEA relevance), Official Journal of the European Union, available at https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32022R1925.

[3] Press Release, Designated Gatekeepers Must Now Comply With All Obligations Under the Digital Markets Act, European Commission (Mar. 7, 2024), https://digital-markets-act.ec.europa.eu/designated-gatekeepers-must-now-comply-all-obligations-under-digital-markets-act-2024-03-07_en.

[4] Press Release, Digital Markets Act: Commission Designates Six Gatekeepers, European Commission (Sep. 6, 2023), https://ec.europa.eu/commission/presscorner/detail/en/ip_23_4328.

[5] Press Release, Cade and European Commission Discuss Collaboration on Digital Market Agenda, Ministério da Justiça e Segurança Pública (Mar. 29, 2023), https://www.gov.br/cade/en/matters/news/cade-and-european-commission-discuss-collaboration-on-digital-market-agenda.

[6] Summary of Remarks by Jean Tirole, Analysis Group (Sep. 27, 2018), available at https://www.analysisgroup.com/globalassets/uploadedimages/content/insights/ag_features/summary-of-remarks-by-jean-tirole_english.pdf.

[7] Geoffrey A. Manne, Google’s India Case and a Return to Consumer-Focused Antitrust, Truth on the Market (Feb. 8, 2018), https://truthonthemarket.com/2018/02/08/return-to-consumer-focused-antitrust-in-india.

[8] Adam Kovacevich, The Digital Markets Act’s “Statler & Waldorf” Problem, Chamber of Progress, Medium (Mar. 7, 2024), https://medium.com/chamber-of-progress/the-digital-markets-acts-statler-waldorf-problem-2c9b6786bb55.

[9] Id.

[10] Remarks by Executive-Vice President Vestager and Commissioner Breton on the Opening of Non-Compliance Investigations Under the Digital Markets Act, European Commission (Mar. 25, 2024), https://ec.europa.eu/commission/presscorner/detail/en/speech_24_1702.

[11] Makena Kelly, Here’s Why Threads Is Delayed in Europe, The Verge (Jul. 10, 2023), https://www.theverge.com/23789754/threads-meta-twitter-eu-dma-digital-markets.

[12] Andrew Grush, Did You Know Google Gemini Isn’t Available in Europe Yet?, Android Authority (Dec. 7, 2023), https://www.androidauthority.com/did-you-know-google-gemini-isnt-available-in-europe-yet-3392451.

[13] Edith Hancock, ‘Severe Pain in the Butt’: EU’s Digital Competition Rules Make New Enemies on the Internet, Politico (Mar. 25, 2024), https://www.politico.eu/article/european-union-digital-markets-act-google-search-malicious-compliance.

[14] Oliver Bethell, An Update on Our Preparations for the DMA, Google Blog (Jan. 17, 2024), https://blog.google/around-the-globe/google-europe/an-update-on-our-preparations-for-the-dma.

[15] Mirai, Linkedin (Apr. 17, 2024), https://www.linkedin.com/feed/update/urn:li:activity:7161330551709138945.

[16] Alex Heath, Meta Says Apple Has Made It ‘Very Difficult’ To Build Rival App Stores in the EU, The Verge (Feb. 2, 2024), https://www.theverge.com/2024/2/1/24058572/zuckerberg-meta-apple-app-store-iphone-eu-sideloading.

[17] Id.

[18] 2022 App Store Transparency Report, Apple Inc. (2023), available at https://www.apple.com/legal/more-resources/docs/2022-App-Store-Transparency-Report.pdf.

[19] About the Browser Choice Screen in iOS 17, Apple Developer, (Feb. 2024), https://developer.apple.com/support/browser-choice-screen.

[20] Remarks by Executive-Vice President Vestager and Commissioner Breton on the Opening of Non-Compliance Investigations Under the Digital Markets Act, European Commission, https://ec.europa.eu/commission/presscorner/detail/en/speech_24_1702.

[21] Saheli Roy Choudhury, If You Hold Amazon Shares, Here’s What You Need to Know About India’s E-Commerce Law, CNBC (Feb. 4, 2019), https://www.cnbc.com/2019/02/05/amazon-how-india-ecommerce-law-will-affect-the-retailer.html.

[22] Press Release, CCI Imposes a Monetary Penalty of Rs.1337.76 Crore on Google for Anti-Competitive Practices in Relation to Android Mobile Devices, Competition Commission of India (Oct. 20, 2022), https://www.cci.gov.in/antitrust/press-release/details/261/0; CCI Orders Probe Into Google’s Play Store Billing Policies, The Economic Times (Sep. 7, 2023), https://economictimes.indiatimes.com/tech/startups/competition-watchdog-orders-probe-into-googles-play-store-billing-policies/articleshow/108528079.cms.

[23] Why Competition Commission of India Is Investigating Amazon, Outlook (May 1, 2022), https://business.outlookindia.com/news/explained-why-is-competition-commission-of-india-probing-amazon-news-194362.

[24] HC Dismisses Facebook India’s Plea Challenging CCI Probe Into Whatsapp’s 2021 Privacy Policy, The Economic Times (Sep. 7, 2023), https://economictimes.indiatimes.com/tech/technology/women-participation-in-tech-roles-in-non-tech-sectors-to-grow-by-24-3-by-2027-report/articleshow/109374509.cms.

[25] Case No. 24 of 2021, Competition Commission of India, (Dec. 31, 2021), https://www.cci.gov.in/antitrust/orders/details/32/0.

[26] Supra note 23.

[27] Anne C. Witt, The Digital Markets Act: Regulating the Wild West, 60(3) Common Market Law Review 625 (2023).

[28] Report of Competition Law Review Committee, Indian Economic Service (Jul. 2019), available at https://www.ies.gov.in/pdfs/Report-Competition-CLRC.pdf.

[29] Jacques Crémer, Yves-Alexandre de Montjoye, & Heike Schweitzer, Competition Policy for the Digital Era, European Commission Directorate-General for Competition (2019), https://data.europa.eu/doi/10.2763/407537.

[30] Strengthening the Digital Markets Act and Its Enforcement, Bundesministerium für Wirtschaft und Klimaschutz (Sep. 7, 2021), available at https://www.bmwk.de/Redaktion/DE/Downloads/XYZ/zweites-gemeinsames-positionspapier-der-friends-of-an-effective-digital-markets-act.pdf.

[31] Meghan McCarty Carino, A New EU Law Aims to Tame Tech Giants. But Enforcing It Could Turn out to Be Tricky, Marketplace (Mar. 7, 2024), https://www.marketplace.org/2024/03/07/a-new-eu-law-aims-to-tame-tech-giants-but-enforcing-it-could-turn-out-to-be-tricky.

[32] Id.

[33] Luca Bertuzzi & Molly Killeen, Digital Brief: DSA Fourth Trilogue, DMA Diverging Views, France’s Fine for Google, EurActiv (Apr. 1, 2022), https://www.euractiv.com/section/digital/news/digital-brief-dsa-fourth-trilogue-dma-diverging-views-frances-fine-for-google.

[34] Anu Bradford, The Brussels Effect: The Rise of a Regulatory Superstate in Europe, Columbia Law School (Jan. 8, 2013), https://www.law.columbia.edu/news/archive/brussels-effect-rise-regulatory-superstate-europe.

[35] Lazar Radic, Gatekeeping, the DMA, and the Future of Competition Regulation, Truth on the Market (Nov. 8, 2023), https://truthonthemarket.com/2023/11/08/gatekeeping-the-dma-and-the-future-of-competition-regulation.

[36] Geoffrey A. Manne, European Union’s Digital Markets Act Not Suitable for Developing Economies, Including India, The Times of India (Feb. 14, 2023), https://timesofindia.indiatimes.com/blogs/voices/european-unions-digital-markets-act-not-suitable-for-developing-economies-including-india.

[37] Digital India, Common Services Centre (Apr. 18, 2024), https://csc.gov.in/digitalIndia.

[38] Startup India, Government of India (Apr. 16, 2024), https://www.startupindia.gov.in.

[39] Invest India, Government of India (Mar. 20, 2024), https://www.investindia.gov.in/why-india.

 


Kristian Stout on Minnesota’s Right-to-Repair Law

ICLE Director of Innovation Policy Kristian Stout was quoted by Racket about Minnesota’s new right-to-repair law. You can read the full piece here. Not everyone . . .

ICLE Director of Innovation Policy Kristian Stout was quoted by Racket about Minnesota’s new right-to-repair law. You can read the full piece here.

Not everyone is as enthusiastic about right-to-repair legislation. Kristian Stout is a programmer and lawyer who is the director of innovation policy at the International Center for Law & Economics. Stout loves to tinker with computers, but he’s not convinced that the right to repair makes sense for people who aren’t as technically inclined.

According to Stout, restricting repair to shops that have a special deal with a manufacturer can mean more peace of mind for consumers, because not every shop will have the resources to protect consumer data. “There’s more incentive for smaller firms to actually not invest as much in cybersecurity and data privacy,” he says.

Stout uses the example of Apple, which keeps a tight lid on its repair network in order to protect their business. “They want to make sure that consumer devices and data, specifically, are protected.” He adds that this isn’t just out of the goodness of their hearts, either—big tech companies want to protect their own brand by preventing things from going wrong with their devices.


Children’s Online Safety and Privacy Legislation

TL;DR TL;DR Background: There has been recent legislative movement on a pair of major bills related to children’s online safety and privacy. H.R. 7891, the Kids . . .

TL;DR

Background: There has been recent legislative movement on a pair of major bills related to children’s online safety and privacy. H.R. 7891, the Kids Online Safety Act (KOSA), has a Senate companion with 62 cosponsors. Meanwhile, H.R. 7890, the Children and Teens’ Online Privacy Protection Act (COPPA 2.0), also has bipartisan support within the U.S. Senate Commerce Committee. At the time of publication, these and a slate of other bills related to children’s online safety and privacy were scheduled to be marked up April 17 by the U.S. House Energy and Commerce Committee.

But… If enacted, the primary effect of these bills is likely to be less free online content for minors. Raising the regulatory burdens on online platforms that host minors, as well as restricting creators’ ability to monetize their content, are both likely to yield greater investment in identifying and excluding minors from online spaces, rather than creating safe and vibrant online ecosystems and content that cater to them. In other words, these bills could lead to minors losing the many benefits of internet usage. A more cost-effective way to address potential online harms to teens and children would be to encourage parents and minors to make use of available tools to avoid those harms and to dedicate more resources to prosecuting those who use online platforms to harm minors.

KEY TAKEAWAYS

RAISING THE COST TO SERVE MINORS COULD LEAD TO THEIR EXCLUSION

If the costs of serving minors surpass the revenues that online platforms can generate from serving them, those platforms will invest in excluding underage users, rather than creating safe and vibrant content and platforms for them. 

KOSA will substantially increase the costs that online platforms bear for serving minors. The bill would require a “high impact online company” to exercise “reasonable care” in its design features to “prevent and mitigate” certain harms. These harms include certain mental-health disorders and patterns indicating or encouraging compulsive use by minors, as well as physical violence, cyberbullying, and discriminatory harassment. Moreover, KOSA requires all covered platforms to implement default safeguards to limit design features that encourage minors’ use of the platforms and to control the use of personalized recommendation systems.

RESTRICTING TARGETED ADVERTISING LEADS TO LESS FREE CONTENT

A significant portion of internet content is delivered by what economists call multisided platforms. On one side of the platform, users enjoy free access to content, while on the other side, advertisers are granted a medium to reach users. In effect, advertisers subsidize users’ access to online content. Platforms also collect data from users in order to serve them targeted ads, the most lucrative form of advertising. Without those ads, there would be less revenue to fund access to, and creation of, content. This is no less true when it comes to content of interest to minors.

COPPA 2.0 would expand the protections granted by the Children’s Online Privacy Protection Act of 1998 to users under age 13 to also cover those between 13 and 17 years of age. Where the current law requires parental consent to collect and use persistent identifiers for “individual-specific advertising” directed to children under age 13, COPPA 2.0 would require the verifiable consent of the teen or a parent to serve such ads to teens. 

Obtaining verifiable consent has proven sufficiently costly under the current COPPA rule that almost no covered entities make efforts to obtain it. COPPA has instead largely prevented platforms from monetizing children’s content, which has meant that less of it is created. Extending the law to cover teens would generate similar results. Without the ability to serve them targeted ads, platforms will have less incentive to encourage the creation of teen-focused content.

DE-FACTO AGE VERIFICATION REQUIREMENTS

To comply with laws designed to protect minors, online platforms will need to verify whether their users are minors. While both KOSA and COPPA 2.0 disclaim establishing any age-verification requirements or the collection of any data not already collected “in the normal course of business,” they both establish constructive-knowledge standards for violators (i.e., “should have known” or “knowledge fairly implied on the basis of objective circumstances”). Online platforms will need to be able to identify which of their users are minors in order to comply with the restrictions on serving them personalized recommendations (KOSA) or targeted advertising (COPPA 2.0).

Age-verification requirements have been found to violate the First Amendment, in part because they aren’t the least-restrictive means to protect children online. As one federal district court put it: “parents may rightly decide to regulate their children’s use of social media—including restricting the amount of time they spend on it, the content they may access, or even those they chat with. And many tools exist to help parents with this.”

A BETTER WAY FORWARD

Educating parents and minors about those widely available practical and technological tools to mitigate the harms of internet use is a better way to protect minors online, and would pass First Amendment scrutiny. Another way to address the problem would be to increase the resources available to law enforcement to go after predators. The Invest in Child Safety Act of 2024 is one such proposal to give overwhelmed investigators the necessary resources to combat child sexual exploitation.

For more on how to best protect minors online, see “A Law & Economics Approach to Social Media Regulation” and “A Coasean Analysis of Online Age-Verification and Parental-Consent Regimes.” 


Clearing the Telecom Logjam: A Modest Proposal

TOTM In this “Age of the Administrative State,” federal agencies have incredible latitude to impose policies without much direction or input from Congress. President Barack Obama . . .

In this “Age of the Administrative State,” federal agencies have incredible latitude to impose policies without much direction or input from Congress. President Barack Obama fully pulled off the mask in 2014, when he announced “[w]e are not just going to be waiting for legislation,” declaring “I’ve got a pen, and I’ve got a phone.” Subsequent presidents have similarly discovered that they had pens and phones, too.

Read the full piece here.


Comments to UK Information Commissioner’s Office on ‘Pay or Consent’

Regulatory Comments I thank the ICO for the opportunity to submit comments on “pay or consent.” My focus will be on the question of how to deal with . . .

I thank the ICO for the opportunity to submit comments on “pay or consent.” My focus will be on the question of how to deal with consent to personal data processing needed to fund the provision of a service that does not fit the legal basis of contractual necessity.[1]

Personalised Advertising: Contractual Necessity or Consent?

Under the GDPR, personal data may only be processed if one of the lawful bases from Article 6 applies. They include, in particular, consent, contractual necessity, and legitimate interests. When processing is necessary for the performance of a contract (Article 6(1)(b)), then that is the basis on which the controller should rely. One may think that if data processing (e.g., for targeting ads) is necessary to fund a free-of-charge service, that should count as contractual necessity. I am unaware of data protection authorities disputing this in principle, but there is a tendency to interpret contractual necessity narrowly.[2] Notably, the EDPB decided in December 2022 that Facebook and Instagram shouldn’t have relied on that ground for personalisation of advertising.[3] Subsequently, the EDPB decided that Meta should also not rely on the legitimate interests basis.[4]

The adoption of a narrow interpretation of contractual necessity created an interpretative puzzle. If we set aside the legitimate interests basis under Article 6(1)(f), in many commercial contexts we are left with consent as the only option (Article 6(1)(a)). This is especially true where consent is required not under the GDPR but under national laws implementing the ePrivacy Directive (Directive 2002/58/EC), including the UK Privacy and Electronic Communications Regulations (PECR); that is, for solutions like cookies or browser storage. Importantly, though, these are not always needed for personalised advertising. Perhaps the biggest puzzle is how to deal with consent to processing needed to fund the provision of a service that does not fit the narrow interpretation of contractual necessity.

Consent, as we know from Articles 4(11) and 7(4) GDPR, must be “freely given.” In addition, Recital 42 states that: “Consent should not be regarded as freely given if the data subject has no genuine or free choice or is unable to refuse or withdraw consent without detriment.” The EDPB provided self-contradictory guidance by first saying that withdrawing consent should “not lead to any costs for the data subjects,” but soon after adding that the GDPR “does not preclude all incentives” for consenting.[5]

Despite some differences, at least the Austrian, Danish, French, German (DSK), and Spanish data protection authorities generally acknowledge that paid alternatives to consent may be lawful.[6] Notably, the Norwegian Privacy Board—in a Grindr appeal—also explicitly allowed that possibility.[7] I discuss below the conditions those authorities focus on in their assessment of “pay or consent” implementations.

The CJEU and ‘Necessity’ to Charge ‘An Appropriate Fee’

In its Meta decision from July 2023, the EU Court of Justice weighed in, though in the context of third-party-collected data, by saying that if that kind of data processing by Meta does not fall under contractual necessity, then:

(…) those users must be free to refuse individually, in the context of the contractual process, to give their consent to particular data processing operations not necessary for the performance of the contract, without being obliged to refrain entirely from using the service offered by the online social network operator, which means that those users are to be offered, if necessary for an appropriate fee, an equivalent alternative not accompanied by such data processing operations.[8]

Intentionally or not, the Court highlighted the interpretative problem stemming from a narrow interpretation of contractual necessity. The Court said that even if processing does not fall under contractual necessity, it may still be “necessary” to charge data subjects “an appropriate fee” if they refuse to consent. Disappointing some activists, the Court did not endorse the EDPB’s first comment I cited (that refusal to consent should not come with “any costs”).

Even though the Court did not explain this further, we can speculate that the Court was not willing to accept the view that all business models simply have to be adjusted to a maximally prohibitive interpretation of the GDPR. The Court may have attempted to save the GDPR from a likely political backlash to an attempt to use the GDPR to deny Europeans a choice of free-of-charge services funded by personalised advertising. Perhaps, the Court also noted that other EU laws rely on the GDPR’s definition of consent (e.g., the Digital Markets Act) and that this gives an additional reason to be very cautious in interpreting this concept in ways that are not in line with current expectations.

Remaining Questions

Several questions will likely be particularly important for future assessments of “pay or consent” implementations under the GDPR and ePrivacy/PECRs. The following list may not be exhaustive but aims to identify the main issues.

How Specific Should the Choice Be?

The extent to which service providers bundle consent to processing for different purposes, especially if users cannot (in a “second step”) adjust consent more granularly, is likely to be questioned. The difficulty is that giving users complete freedom to adjust their consent could also defeat the purpose of having a paid alternative.

In a different kind of bundling, service providers may make the paid alternative to consent more attractive by adding incentives like access to additional content or the absence of ads (including non-personalised ads). On the one hand, this means that service providers incentivise users not to consent, making consent less attractive. This could be seen as reducing the pressure to consent and making the choice more likely to be freely given. On the other hand, a more attractive paid option could be more costly for the service provider and thus require a higher price.

What Is an ‘Appropriate’ Price?

The pricing question is a potential landmine for data protection authorities, who are decidedly ill-suited to deal with it. Just to show one aspect of the complexity: setting as a benchmark the service’s historical average revenue per user (ARPU) from (personalised) advertising may be misleading. Users are not identical. Wealthier, less price-sensitive users, who may be more likely to pay for a no-ads option, are also worth more to advertisers. Hence, the loss of income from advertising may be higher than just “old ARPU multiplied by the number of users on a no-ads tier,” suggesting a need to charge the paying users more than historical ARPU merely to retain the same level of revenue. Crucially, the situation will likely be dynamic due to subscription “churn” (users canceling their subscriptions) and other market factors. The economic results of the “pay or consent” scheme may continue to change, and setting the price level will always involve business judgment based on predictions and intuition.
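To make the arithmetic concrete, the minimal sketch below works through a purely hypothetical example; the user counts, per-user ad values, and subscription price are illustrative assumptions rather than figures drawn from any actual service. It shows why pricing a no-ads tier at historical blended ARPU need not leave revenue unchanged when the likeliest subscribers are the users who were worth the most to advertisers.

```python
# Hypothetical illustration (all figures invented): why a no-ads tier priced at
# historical blended ARPU may not be revenue-neutral.

total_users = 1_000_000
high_value_users = 200_000        # wealthier, less price-sensitive users
high_value_ad_revenue = 15.0      # monthly ad revenue per high-value user
other_ad_revenue = 2.5            # monthly ad revenue per remaining user

ad_revenue_before = (high_value_users * high_value_ad_revenue
                     + (total_users - high_value_users) * other_ad_revenue)
blended_arpu = ad_revenue_before / total_users  # historical average revenue per user

# Assume the users most likely to pay for the no-ads tier are exactly the
# high-value users, and the subscription is priced at blended ARPU.
subscription_price = blended_arpu
revenue_after = (high_value_users * subscription_price
                 + (total_users - high_value_users) * other_ad_revenue)

print(f"Blended ARPU:          {blended_arpu:.2f}")
print(f"Revenue before:        {ad_revenue_before:,.0f}")
print(f"Revenue with pay tier: {revenue_after:,.0f}")
# The subscribers were worth more than the blended average to advertisers, so
# revenue falls; a revenue-neutral price would have to exceed historical ARPU.
```

On these assumed numbers, blended ARPU is 5.00, but the 200,000 subscribers had each been generating 15.00 in ad revenue, so total monthly revenue falls from 5 million to 3 million; holding revenue constant would require a subscription price of 15.00, three times the historical average.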

Some authorities may be tempted to approach the issue from the perspective of users’ willingness to pay, but this also raises many issues. First, the idea of price regulation by privacy authorities, capping prices at a level defined by the authorities’ view of what is acceptable to a user, may face jurisdictional scrutiny. Second, taking users’ willingness to pay as a benchmark implicitly assumes a legally protected entitlement to access the service at a price they like; in other words, it assumes that users are entitled to specific private services, such as social-media services.[9] This is not something that can simply be assumed; it would require a robust argument—and arguably constitute a legal change that is appropriate only for the political, legislative process.

Imbalance

Recital 43 of the GDPR explains that consent may not be free when there is “a clear imbalance between the data subject and the controller.” In the Meta decision, the EU Court of Justice admitted the possibility of such an imbalance between a business with a dominant position, as understood in competition law, and its customers.[10] This, too, may be a difficult issue for data protection authorities to deal with, both for expertise and competence reasons.

The Scale of Processing and Impact on Users

Distinct from market power (dominance), though sometimes conflated with it, are the issues of the scale of processing and its impact on users. An online service provider, e.g., a newspaper publisher, may have relatively little market power but may be using a personalised advertising framework (e.g., an RTB scheme facilitated by third parties[11]) that is very large in scale and has more potential for a negative impact on users than an advertising system internal to a large online platform. A large online platform can offer personalised advertising to its business customers (advertisers) while sharing little or no information about who the ads are being shown to. Large platforms have economic incentives to keep user data securely within the platform’s “walled garden,” not sharing it with outsiders. Smaller publishers participate in open advertising schemes (RTB), where user data is shared more widely with advertisers and other participants.

Given the integration of smaller publishers in such open advertising schemes, an attempt by data protection authorities to set a different standard for consent just for large platforms may fail as based on an arbitrary distinction. In other words, however attractive it may seem for the authorities to target Meta without targeting the more politically powerful legacy media, this may not be an option.

[1] The comments below build on my ‘“Pay or consent:” Personalized ads, the rules and what’s next’ (IAPP, 20 November 2023) < https://iapp.org/news/a/pay-or-consent-personalized-ads-the-rules-and-whats-next/ >.

[2] On this issue, I highly recommend the article by Professor Martin Nettesheim on ‘Data Protection in Contractual Relationships (Art. 6 (1) (b) GDPR)’ (May 2023) < https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4427134 >.

[3] https://www.edpb.europa.eu/news/news/2023/facebook-and-instagram-decisions-important-impact-use-personal-data-behavioural_en

[4] https://www.edpb.europa.eu/news/news/2023/edpb-urgent-binding-decision-processing-personal-data-behavioural-advertising-meta_en

[5] https://www.edpb.europa.eu/sites/default/files/files/file1/edpb_guidelines_202005_consent_en.pdf

[6] David Pfau, ‘PUR models: Status quo on the European market’ (BVDW, October 2023) < https://iabeurope.eu/knowledge_hub/bvdws-comprehensive-market-overview-pur-models-in-europe-legal-framework-and-future-prospects-in-english/ >; for the view of the Spanish authority, see https://www.aepd.es/prensa-y-comunicacion/notas-de-prensa/aepd-actualiza-guia-cookies-para-adaptarla-a-nuevas-directrices-cepd

[7] https://www.personvernnemnda.no/pvn-2022-22

[8] https://curia.europa.eu/juris/document/document.jsf?mode=lst&pageIndex=1&docid=276478&part=1&doclang=EN&text=&dir=&occ=first&cid=163129

[9] See also Peter Craddock, ‘Op-ed: “Pay or data” has its reasons – even if you disagree’, https://www.linkedin.com/pulse/op-ed-pay-data-has-its-reasons-even-you-disagree-peter-craddock

[10] See para [149]. This is also referenced in the Joint EDPB-EDPS contribution to the public consultation on the draft template relating to the description of consumer profiling techniques (Art.15 DMA) (September 2023), page 14.

[11] https://en.wikipedia.org/wiki/Real-time_bidding


Knowledge and Decisions in the Information Age: The Law & Economics of Regulating Misinformation on Social-Media Platforms

ICLE White Paper “If there is any fixed star in our constitutional constellation, it is that no official, high or petty, can prescribe what shall be orthodox in . . .

“If there is any fixed star in our constitutional constellation, it is that no official, high or petty, can prescribe what shall be orthodox in politics, nationalism, religion, or other matters of opinion or force citizens to confess by word or act their faith therein. If there are any circumstances which permit an exception, they do not now occur to us.” – West Virginia Board of Education v. Barnette (1943)[1]

“Our constitutional tradition stands against the idea that we need Oceania’s Ministry of Truth.” – United States v. Alvarez (2012)[2]

Introduction

In April 2022, the U.S. Department of Homeland Security (DHS) announced the creation of the Disinformation Governance Board, which would be designed to coordinate the agency’s response to the potential effects of disinformation threats.[3] Almost immediately upon its announcement, the agency was met with criticism. Congressional Republicans denounced the board as “Orwellian,”[4] and it was eventually disbanded.[5]

The DHS incident followed years of congressional hearings in which Republicans had castigated leaders of the so-called “Big Tech” firms for allegedly censoring conservatives, while Democrats had criticized those same leaders for failing to combat and remove misinformation.[6] Moreover, media outlets have reported on systematic attempts by government officials to encourage social-media companies to remove posts and users based on alleged misinformation. For example, The Intercept in 2022 reported on DHS efforts to set up backchannels with Facebook for flagging posts and misinformation.[7]

The “Twitter Files” released earlier this year by the company’s CEO Elon Musk—and subsequently reported on by journalists Bari Weiss, Matt Taibbi, and Michael Shellenberger—suggest considerable efforts by government agents to encourage Twitter to remove posts as misinformation and to bar specific users for being purveyors of misinformation.[8] What’s more, communications unveiled as part of discovery in the Missouri v. Biden case have offered further evidence of a variety of government actors cajoling social-media companies to remove alleged misinformation, along with the development of a considerable infrastructure to facilitate what appears to be a joint project to identify and remove the same.[9]

With all of these details coming into public view, the question that naturally arises is what role, if any, does the government have in regulating misinformation disseminated through online platforms? The thesis of this paper is that the First Amendment forecloses government agents’ ability to regulate misinformation online, but it protects the ability of private actors—i.e., the social-media companies themselves—to regulate misinformation on their platforms as they see fit.

The primary reason for this conclusion is the state-action doctrine, which distinguishes public and private action. Public action is subject to constitutional constraints (such as the First Amendment), while private action is free from such constraints.[10] A further thesis of this paper is that application of the state-action doctrine to the question of misinformation on online platforms promotes the bedrock constitutional value of “protect[ing] a robust sphere of individual liberty,”[11] while also creating outlets for more speech to counteract false speech.[12]

Part I of this paper outlines a law & economics theory of state-action requirements under the First Amendment and explains its importance for the online social-media space. The right to editorial discretion and Section 230 will also be considered as part of this background law, which places the responsibility for regulating misinformation on private actors like social-media platforms. Such platforms must balance the interests of each side of their platforms to maximize value. This means, in part, setting moderation rules on misinformation that keep users engaged in order to provide increased opportunities to generate revenue from advertisers.

Part II considers various theories of state action and whether they apply to social-media platforms. It appears clear that some state-action theories—like the idea that social-media companies exercise a “traditional, exclusive public function”—are foreclosed in light of Manhattan Community Access Corp. v. Halleck. But it remains an open question whether a social-media company could be found to be a state actor under a coercion or collusion theory, given the facts that have been revealed in the Twitter Files and litigation over this question.

Part III completes the First Amendment analysis of what government agents can do to regulate misinformation on social media. The answer: not much. The U.S. Constitution forbids direct regulation of false speech simply because it is false. A more difficult question concerns how to define truth and falsity in contested areas of fact, where legal rules may run into vagueness concerns. We conclude that a better way forward is for government agents to invest in telling their own version of the facts, while recognizing that they have no authority to mandate or pressure social-media companies into regulating misinformation.

I.        A Theory of State Action and Speech Rights on Online Social-Media Platforms

Among the primary rationales for the First Amendment’s speech protections is to shield the “marketplace of ideas”:[13] in most circumstances, the best remedy for false or harmful speech is “more speech, not enforced silence.”[14] But this raises the question of why private abridgments of speech—such as those enforced by powerful online social-media platforms—should not be subject to the same First Amendment restrictions as government action.[15] After all, if the government can’t intervene in the marketplace of ideas by deciding what is true or false, then why should that privilege be held by Facebook or Google?

Here enters the state-action doctrine, which is the legal principle (discussed further below) that, in some cases, private entities may function as extensions of the state. Under this doctrine, the actions of such private actors would give rise to similar First Amendment concerns as if the state had acted on its own. It has been said that there is insufficient theorizing about the “why” of the state-action doctrine.[16] What follows is a theory of why the state-action doctrine is fundamental to protecting those private intermediaries who are best positioned to make marginal decisions about the benefits and harms of speech, including social-media companies through their moderation policies on misinformation.

Governance structures are put in place by online platforms as a response to market pressures to limit misinformation and other harmful speech. At the same time, there are also market pressures to not go too far in limiting speech.[17] The balance that must be struck by online intermediaries is delicate, and there is no reason to expect government regulators to do a better job than the marketplace in determining the optimal rules. The state-action doctrine protects a marketplace for speech governance by limiting the government’s reach into these spaces.

In order to discuss the state-action doctrine meaningfully, we must first outline its basic contours and the why identified by the Supreme Court. In Part I.A, we will provide a description of the Supreme Court’s most recent First Amendment state-action decision, Manhattan Community Access Corp. v. Halleck, in which the Court both defines the doctrine and defends its importance. We will also briefly consider how the state-action doctrine’s protection of private ordering is bolstered by the right to editorial discretion and by Section 230 of the Communications Decency Act of 1996.

We will then consider whether there are good theoretical reasons to support the First Amendment’s state-action doctrine. In Part I.B, we will apply insights from the law & economics tradition associated with the interaction of institutions and dispersed knowledge.[18] We argue that the First Amendment’s dichotomy between public and private action allows for the best use of dispersed knowledge in society by creating a marketplace for speech governance. We also argue that, by protecting this marketplace for speech governance from state action, the First Amendment creates the best institutional framework for reducing harms from misinformation.[19]

A.      The State-Action Doctrine, the Right to Editorial Discretion, and Section 230

At its most basic, the First Amendment’s state-action doctrine says that government agents may not restrict speech, whether through legislation, rules, or enforcement actions, or by putting undue burdens on speech exercised on government-owned property.[20] Such restrictions will receive varying levels of scrutiny from the courts, depending on the degree of incursion. On the other hand, the state-action doctrine means that, as a general matter, private actors may set rules for what speech they are willing to abide or promote, including rules for speech on their own property. With a few exceptions where private actors may be considered state actors,[21] these restrictions will receive no scrutiny from courts, and the government may actually help remove those who break privately set speech rules.[22]

In Halleck, the Court set out a strong defense of the state-action doctrine under the First Amendment. Justice Brett Kavanaugh, writing for the majority, defended the doctrine based on the text and purpose of the First Amendment:

Ratified in 1791, the First Amendment provides in relevant part that “Congress shall make no law … abridging the freedom of speech.” Ratified in 1868, the Fourteenth Amendment makes the First Amendment’s Free Speech Clause applicable against the States: “No State shall make or enforce any law which shall abridge the privileges or immunities of citizens of the United States; nor shall any State deprive any person of life, liberty, or property, without due process of law ….” § 1. The text and original meaning of those Amendments, as well as this Court’s longstanding precedents, establish that the Free Speech Clause prohibits only governmental abridgment of speech. The Free Speech Clause does not prohibit private abridgment of speech…

In accord with the text and structure of the Constitution, this Court’s state-action doctrine distinguishes the government from individuals and private entities. By enforcing that constitutional boundary between the governmental and the private, the state-action doctrine protects a robust sphere of individual liberty…

It is sometimes said that the bigger the government, the smaller the individual. Consistent with the text of the Constitution, the state-action doctrine enforces a critical boundary between the government and the individual, and thereby protects a robust sphere of individual liberty. Expanding the state-action doctrine beyond its traditional boundaries would expand governmental control while restricting individual liberty and private enterprise.[23]

Applying the state-action doctrine, the Court held that even the heavily regulated operation of cable companies’ public-access channels constituted private action. The Court opined that “merely hosting speech by others is not a traditional, exclusive public function and does not alone transform private entities into state actors subject to First Amendment constraints.”[24] The Court went on to explain:

If the rule were otherwise, all private property owners and private lessees who open their property for speech would be subject to First Amendment constraints and would lose the ability to exercise what they deem to be appropriate editorial discretion within that open forum. Private property owners and private lessees would face the unappetizing choice of allowing all comers or closing the platform altogether.[25]

Similarly, the Court has found that private actors have the right to editorial discretion that can’t generally be overcome by a government compelling the carriage of speech.[26] In Miami Herald v. Tornillo, the Supreme Court ruled that a right-to-reply statute for political candidates was unconstitutional because it “compel[s] editors or publishers to publish that which ‘reason tells them should not be published.’”[27] The Court found that the marketplace of ideas was still worth protecting from government-compelled speech, even in a media environment where most localities only had one (monopoly) newspaper.[28] The effect of Tornillo was to establish a general rule whereby the limits on media companies’ editorial discretion were defined not by government edict but by “the acceptance of a sufficient number of readers—and hence advertisers—to assure financial success; and, second, the journalistic integrity of its editors and publishers.”[29]

Section 230 of the Communications Decency Act supplements the First Amendment’s protections by granting “providers and users of an interactive computer service” immunity from (most) lawsuits for speech generated by other “information content providers” on their platforms.[30] The effect of this statute is far-ranging in its implications for online speech. It protects online social-media platforms from lawsuits for the third-party speech they host, as well as for the platforms’ decisions to take certain third-party speech down.[31]

As with the underlying First Amendment protections, Section 230 augments social-media companies’ ability to manage misinformation on their services. Specifically, it shields them from an unwarranted flood of litigation for failing to remove the defamatory speech of third parties when they make efforts to remove some undesirable speech from their platforms.

B.      Regulating Speech in Light of Dispersed Knowledge[32]

One of the key insights of the late Nobel laureate economist F.A. Hayek was that knowledge is dispersed.[33] In other words, no one person or centralized authority has access to all the tidbits of knowledge possessed by countless individuals spread out through society. Even the most intelligent among us have but a little bit more knowledge than the least intelligent. Thus, the economic problem facing society is not how to allocate “given” resources, but how to “secure the best use of resources known to any of the members of society, for ends whose relative importance only these individuals know.”[34]

This is particularly important when considering the issue of regulating alleged misinformation. As noted above, the First Amendment is premised on the idea that a marketplace of ideas will lead to the best information eventually winning out, with false ideas pushed aside by true ones.[35] Much like the economic problem, there are few, if any, given answers that are true for all time when it comes to opinions or theories in science, the arts, or any other area of knowledge. Thus, the question is: how do we establish a system that promotes the generation and adoption of knowledge, recognizing there will be “market failures” (and possibly, corresponding “government failures”) along the way?

Like virtually any other human activity, speech has benefits and costs. It is ultimately subjective individual preference that determines how to manage those tradeoffs. Although the First Amendment protects speech from governmental regulation, that does not mean that all speech is acceptable or must be tolerated. As noted above, U.S. law places the power to decide what speech to allow in the public square firmly in the hands of the people. The people’s preferences are expressed individually and collectively through their participation in online platforms, news media, local organizations, and other fora, and it is via that process that society arrives at workable solutions to such questions.

Very few people believe that all speech protected by the First Amendment should be without consequence. Likewise, very few people, if pressed, would really believe it a wise idea, generally speaking, to vest the power to determine what is true or false in a vast governmental bureaucracy. Instead, proposals for government regulation of misinformation generally are offered as an expedient to effect short-term political goals that are perceived to be desirable. But given the dispersed nature of knowledge, and given that very few “facts” are set in stone for all time,[36] such proposals threaten to undermine the very process through which new knowledge is discovered and disseminated.

Moreover, such proposals completely fail to account for how “bad” speech has, in fact, long been regulated via informal means, or what one might call “private ordering.” In this sense, property rights have long played a crucial role in determining the speech rules of any given space. If a guest came into a homeowner’s house and started calling the homeowner’s wife racial epithets, the homeowner would not only have the right to ask that person to leave, but could exercise his rights as a property owner to eject the trespasser—if necessary, calling the police to assist him. One similarly could not expect to go to a restaurant, yell at the top of her lungs about political issues, and expect the venue—even one designated as a “common carrier” or place of public accommodation—to allow her to continue.[37] A Christian congregation may in most circumstances be extremely solicitous of outsiders with whom it wants to share its message, but it would likewise be well within its rights to prevent individuals from preaching about Buddhism or Islam within its walls.

In each of these examples, the individual or organization is entitled to eject individuals on the basis of their offensive (or misinformed) speech with no cognizable constitutional complaint about the violation of rights to free speech. The nature of what is deemed offensive is obviously context- and listener-dependent, but in each example, the proprietors of the relevant space are able to set and enforce appropriate speech rules. By contrast, a centralized authority would, by its nature, be forced to rely on far more generalized rules. As the economist Thomas Sowell once put it:

The fact that different costs and benefits must be balanced does not in itself imply who must balance them—or even that there must be a single balance for all, or a unitary viewpoint (one “we”) from which the issue is categorically resolved.[38]

When it comes to speech, the balance that must be struck is between one individual’s desire for an audience and that prospective audience’s willingness to listen. Asking government to make categorical decisions for all of society is substituting centralized evaluation of the costs and benefits of access to communications for the individual decisions of many actors. Rather than incremental decisions regarding how and under what terms individuals may relate to one another—which can evolve over time in response to changes in what individuals find acceptable—governments can only hand down categorical guidelines: “you must allow a, b, and c speech” or “you must not allow x, y, and z speech.”

It is therefore a fraught proposition to suggest that government could have both a better understanding of what is true and false, and superior incentives to disseminate the truth, than the millions of individuals who make up society.[39] Indeed, it is a fundamental aspect of both the First Amendment’s Establishment Clause[40] and of free-speech jurisprudence[41] that the government is in no position to act as an arbiter of what is true or false.

Thus, as much as the First Amendment protects a marketplace of ideas, by excluding the government as a truth arbiter, it also protects a marketplace for speech governance. Private actors can set the rules for speech on their own property, including what is considered true or false, with minimal interference from the government. And as the Court put it in Halleck, opening one’s property to the speech of third parties does not oblige the owner to accept all comers.[42]

This is particularly relevant in the social-media sphere. Social-media companies must resolve social-cost problems among their users.[43] In his famous work “The Problem of Social Cost,” the economist Ronald Coase argued that the traditional approach to regulating externalities was wrong, because it failed to apprehend the reciprocal nature of harms.[44] For example, the noise from a factory is a potential cost to the doctor next door who consequently can’t use his office to conduct certain testing, and simultaneously the doctor moving his office next door is a potential cost to the factory’s ability to use its equipment. In a world of well-defined property rights and low transaction costs, the initial allocation of a right would not matter, because the parties could bargain to overcome the harm in a beneficial manner—i.e., the factory could pay the doctor for lost income or to set up sound-proof walls, or the doctor could pay the factory to reduce the sound of its machines.[45] Similarly, on social media, misinformation and other speech that some users find offensive may be inoffensive or even patently true to other users. There is a reciprocal nature to the harms of offensive speech, much as with other forms of nuisance. But unlike the situation of the factory owner and the doctor, social-media users use the property of social-media companies, which must balance these varied interests to maximize the platform’s value.

Social-media companies are what economists call “multi-sided” platforms.[46] They are profit seeking, to be sure, but the way they generate profits is by acting as intermediaries between users and advertisers. If they fail to serve their users well, those users will abandon the platform. Without users, advertisers would have no interest in buying ads. And without advertisers, there is no profit to be made. Social-media companies thus need to maximize the value of their platform by setting rules that keep users sufficiently engaged that there are advertisers who will pay to reach them.

In the cases of Facebook, Twitter, and YouTube, the platforms have set content-moderation standards that restrict many kinds of speech, including misinformation.[47] In some cases, these policies are viewed negatively by some users, particularly given that the First Amendment would foreclose the government from regulating those same types of content. But social-media companies’ ability to set and enforce moderation policies could actually be speech-enhancing. Because social-media companies are motivated to maximize the value of their platforms, for any given policy that gives rise to enforcement actions that leave some users disgruntled, there are likely to be an even greater number of users who agree with the policy. Moderation policies end up being speech-enhancing when they promote more speech overall, as the proliferation of harmful speech may push potential users away from the platforms.

Currently, all social-media companies rely on an advertising-driven revenue model. As a result, their primary goal is to maximize user engagement. As we have recently seen, this can lead to situations where advertisers threaten to pull ads if they don’t like the platform’s speech-governance decisions. After Elon Musk began restoring the accounts of Twitter users who had been banned for what the company’s prior leadership believed was promoting hate speech and misinformation, major advertisers left the platform.[48] A different business model (about which Musk has been hinting for some time[49]) might generate different incentives for what speech to allow and disallow. There would, however, still be a need for any platform to allow some speech and not other speech, in line with the expectations of its user base and advertisers. The bottom line is that the motive to maximize profits and the tendency of markets to aggregate information leaves the platforms themselves best positioned to make these incremental decisions about their users’ preferences, in response to the feedback mechanism of consumer demand.

Moreover, there is a fundamental difference between private action and state action, as alluded to by the Court in Halleck: one is voluntary, and the other based on coercion. If Facebook or Twitter suspends a user for violating community rules, that decision terminates a voluntary association. When the government removes someone from a public forum for expressing legal speech, its censorship and use of coercion are inextricably intertwined. The state-action doctrine empowers courts to police this distinction because the threats to liberty are much greater when one party in a dispute over the content of a particular expression is also empowered to impose its will with the use of force.

Imagine instead that courts were to decide that they, in fact, were best situated to balance private interests in speech against other interests, or even among speech interests. There are obvious limitations on courts’ access to knowledge that couldn’t be easily overcome through the processes of adjudication, which depend on the slow development of articulable facts and categorical reasoning over a lengthy period of time and an iterative series of cases. Private actors, on the other hand, can act relatively quickly and incrementally in response to ever-changing consumer demand in the marketplace. As Sowell put it:

The courts’ role as watchdogs patrolling the boundaries of governmental power is essential in order that others may be secure and free on the other side of those boundaries. But what makes watchdogs valuable is precisely their ability to distinguish those people who are to be kept at bay and those who are to be left alone. A watchdog who could not make that distinction would not be a watchdog at all, but simply a general menace.

The voluntariness of many actions—i.e., personal freedom—is valued by many simply for its own sake. In addition, however, voluntary decision-making processes have many advantages which are lost when courts attempt to prescribe results rather than define decision-making boundaries.[50]

The First Amendment’s complementary right of editorial discretion also protects the right of publishers, platforms, and other speakers to be free from an obligation to carry or transmit government-compelled speech.[51] In other words, not only is private regulation of speech not state action, but as a general matter, private regulation of speech is protected by the First Amendment from government action. The limits on editorial discretion are marketplace pressures, such as user demand and advertiser support, and social mores about what is acceptable to be published.[52]

There is no reason to think that social-media companies today are in a different position than was the newspaper in Tornillo.[53] These companies must determine what, how, and where content is presented within their platforms. While this right of editorial discretion protects social-media companies’ moderation decisions, its benefits accrue to society at large, which gets to use those platforms to interact with people from around the world and to thereby grow the “marketplace of ideas.”

Moreover, Section 230 amplifies online platforms’ ability to make editorial decisions by immunizing most of their choices about third-party content. In fact, it is interesting to note that the heading for Section 230 is “Protection for private blocking and screening of offensive material.”[54] In other words, Section 230 is meant, along with the First Amendment, to establish a market for speech governance free from governmental interference.

Social-media companies’ abilities to differentiate themselves based on functionality and moderation policies are important aspects of competition among them.[55] How each platform is used may differ depending on those factors. In fact, many consumers use multiple social-media platforms throughout the day for different purposes.[56] Market competition, not government power, has enabled internet users to have more avenues than ever to get their message out.[57]

If social-media users and advertisers demand less of the kinds of content commonly considered to be misinformation, platforms will do their best to weed those things out. Platforms won’t always get these determinations right, but it is by no means clear that centralizing decisions about misinformation by putting them in the hands of government officials would promote the societal interest in determining the truth.

It is true that content-moderation policies make it more difficult for speakers to communicate some messages, but that is precisely why they exist. There is a subset of protected speech to which many users do not wish to be subject, including at least some perceived misinformation. Moreover, speakers have no inherent right to an audience on a social-media platform. There are always alternative means to debate the contested issues of the day, even if it may be more costly to access the desired audience.

In sum, the First Amendment’s state-action doctrine assures us that government may not decide what is true or false, nor restrict a citizen’s ability to reach an audience with ideas. Governments do, however, protect social-media companies’ rights to exercise editorial discretion on their own property, including their right to make decisions about regulating potential misinformation. This puts the decisions in the hands of the entities best placed to balance the societal demands for online speech and limits on misinformation. In other words, the state-action doctrine protects the marketplace of ideas.

II.      Are Online Platforms State Actors?

As the law currently stands, the First Amendment grants online platforms the right to exercise their own editorial discretion, free from government intervention. By contrast, if government agents pressure or coerce platforms into declaring certain speech misinformation, or to remove certain users, a key driver of the marketplace of ideas—the action of differentiated actors experimenting with differing speech policies—will be lost.[58]

Today’s public debate is not actually centered on a binary choice between purely private moderation and legislatively enacted statutes to literally define what is true and what is false. Instead, the prevailing concerns relate to the circumstances under which some government activity—such as chastising private actors for behaving badly, or informing those actors about known threats—might transform online platforms’ moderation policies into de facto state actions. That is, at what point do private moderation decisions constitute state action? To this end, we will now consider sets of facts under which online platforms could be considered state actors for the purposes of the First Amendment.

In Halleck, the Supreme Court laid out three exceptions to the general rule that private actors are not state actors:

Under this Court’s cases, a private entity can qualify as a state actor in a few limited circumstances—including, for example, (i) when the private entity performs a traditional, exclusive public function; (ii) when the government compels the private entity to take a particular action; or (iii) when the government acts jointly with the private entity.[59]

Below, we will consider each of these exceptions, as applied to online social-media platforms. Part II.A will make the case that Halleck decisively forecloses the theory that social-media platforms perform a “traditional, exclusive public function,” a conclusion many federal courts have likewise reached. Part II.B will consider whether government agents have coerced or encouraged platforms to make specific enforcement decisions on misinformation in ways that would transform their moderation actions into state action. Part II.C will look at whether the social-media companies have essentially colluded with government actors, either through joint action or through a relationship sufficiently intertwined as to be symbiotic.

A.      ‘Traditional, Exclusive Public Function’

The classic case that illustrates the traditional, exclusive public function test is Marsh v. Alabama.[60] There, the Supreme Court found that a company town, while private, was a state actor for the purposes of the First Amendment. At issue was whether the company town could prevent a Jehovah’s Witness from passing out literature on the town’s sidewalks. The Court noted that “[o]wnership does not always mean absolute dominion. The more an owner, for his advantage, opens up his property for use by the public in general, the more do his rights become circumscribed by the statutory and constitutional rights of those who use it.”[61] The Court then situated the question as one where it was being asked to balance property rights with First Amendment rights. Within that framing, it found that the First Amendment’s protections should be in the “preferred position.”[62]

Although nothing in Marsh suggested that its holding was limited to company towns or to what became the traditional, exclusive public function test, later courts eventually cabined it. Indeed, there was a time when it looked like the Court would extend Marsh’s reasoning to other private actors that were certainly not engaged in a traditional, exclusive public function. A trio of cases involving shopping malls eventually settled the question.

First, in Food Employees v. Logan Valley Plaza,[63] the Court—noting the “functional equivalence” of the business block in Marsh and the shopping center[64]—found that the mall could not restrict the peaceful picketing of a grocery store by a local food-workers union.[65]

But then, the Court seemingly cabined both Logan Valley and Marsh just a few years later in Lloyd Corp. v. Tanner.[66] Noting that company towns were an “economic anomaly,” the Court said Marsh “simply held that where private interests were substituting for and performing the customary functions of government, First Amendment freedoms could not be denied where exercised in the customary manner on the town’s sidewalks and streets.”[67] Moreover, the Court found that Logan Valley applied “only in a context where the First Amendment activity was related to the shopping center’s operations.”[68] The general rule, according to the Court, was that private property owners retained the right to restrict access to their property by those seeking to use it to exercise free-speech rights.[69] Importantly, “property does not lose its private character merely because the public is generally invited to use it for designated purposes.”[70] Since the mall did not dedicate any part of its shopping center to public use in a way that would entitle the protestors to use it, the Court allowed it to restrict handbilling by Vietnam War protestors within the mall.[71]

Then, in Hudgens v. NLRB,[72] the Court went a step further, overruling Logan Valley and severely cabining Marsh. Now, the general rule was that “the constitutional guarantee of free speech is a guarantee only against abridgment by government, federal or state.”[73] Marsh was left as a narrow exception, limited to situations where private property has taken on all the attributes of a town.[74] The Court also found that the reasoning—if not the holding—of Tanner had already effectively overruled Logan Valley.[75] The Court concluded bluntly that “under the present state of the law the constitutional guarantee of free expression has no part to play in a case such as this.”[76] In other words, private actors, even those that open themselves up to the public, are not subject to the First Amendment. Following Hudgens, the Court would further limit the public-function test to “the exercise by a private entity of powers traditionally exclusively reserved to the State.”[77] Thus, the “traditional, exclusive public function” test.

Despite this history, recent litigants against online social-media platforms have argued, often citing Marsh, that these platforms are the equivalent of public parks or other public forums for speech.[78] On top of that, the Supreme Court itself has described social-media platforms as the “modern public square.”[79] The Court emphasized the importance of online platforms because they:

allow[] users to gain access to information and communicate with one another about it on any subject that might come to mind… [give] access to what for many are the principal sources for knowing current events, checking ads for employment, speaking and listening in the modern public square, and otherwise exploring the vast realms of human thought and knowledge. These websites can provide perhaps the most powerful mechanisms available to a private citizen to make his or her voice heard. They allow a person with an Internet connection to “become a town crier with a voice that resonates farther than it could from any soapbox.”[80]

Seizing upon this language, many litigants have argued that online social-media platforms are public forums for First Amendment purposes. To date, all have failed in federal court under this theory,[81] and the Supreme Court effectively foreclosed it in Halleck.

In Halleck, the Court considered whether a public-access channel operated by a cable provider was a government actor for purposes of the First Amendment under the traditional, exclusive public function test. Summarizing the caselaw, the Court said the test required more than just a finding that the government at some point exercised that function, or that the function serves the public good. Instead, the government must have “traditionally and exclusively performed the function.”[82]

The Court then found that operating as a public forum for speech is not a function traditionally and exclusively performed by the government. On the contrary, a private actor that provides a forum for speech normally retains “editorial discretion over the speech and speakers in the forum”[83] because “[it] is not an activity that only governmental entities have traditionally performed.”[84] The Court reasoned that:

If the rule were otherwise, all private property owners and private lessees who open their property for speech would be subject to First Amendment constraints and would lose the ability to exercise what they deem to be appropriate editorial discretion within that open forum. Private property owners and private lessees would face the unappetizing choice of allowing all comers or closing the platform altogether.[85]

If the applicability of Halleck to the question of whether online social-media platforms are state actors under the “traditional, exclusive public function” test isn’t already clear, appellate courts have squarely addressed the question. In Prager University v. Google, LLC,[86] the 9th U.S. Circuit Court of Appeals took on the question of whether social-media platforms are state actors subject to the First Amendment. Prager relied primarily upon Marsh and Google’s representations that YouTube is a “public forum” to argue that YouTube is a state actor under the traditional, exclusive public function test.[87] Citing primarily Halleck, along with a healthy dose of both Hudgens and Tanner, the 9th Circuit rejected this argument, for the reasons noted above.[88] YouTube was not a state actor just because it opened itself up to the public as a forum for free speech.

In sum, there is no basis for arguing that online social-media platforms fit into the narrow Marsh exception to the general rule that private actors can use their own editorial discretion over their own digital property to set their own rules for speech, including misinformation policies.

That this exception to the general private/state action dichotomy has been limited as applied to social-media platforms is consistent with the reasoning laid out above on the law & economics of the doctrine. Applying the Marsh theory to social-media companies would make all of their moderation decisions subject to First Amendment analysis. As will be discussed more below in Part III.A, this would severely limit the platforms’ ability to do anything at all with regard to online misinformation, since government actors can do very little to regulate such speech consistent with the First Amendment.

The inapplicability of the Marsh theory of state action means that a robust sphere of individual liberty will be protected. Social-media companies will be able to engage in a vibrant “market for speech governance” with respect to misinformation, responding to the perceived demands of users and advertisers and balancing those interests in a way that maximizes the value of their platforms in the presence of market competition.

B.      Government Compulsion or Encouragement

In light of the revelations from The Intercept, the “Twitter Files,” and the subsequent litigation in Missouri v. Biden,[89] all highlighted in the introduction of this paper, the more salient theory of state action is that online social-media companies were either compelled by the federal government, or colluded with it in joint action, to censor speech under their misinformation policies. This section will consider the government compulsion or encouragement theory, and Part II.C below will consider the joint action/entwinement theory.

At a high level, the government may not coerce or encourage private actors to do what it may itself not do constitutionally.[90] State action can be found for a private decision under this theory, however, “only when it has exercised coercive power or has provided such significant encouragement, either overt or covert, that the choice must in law be deemed to be that of the State.”[91] But “[m]ere approval of or acquiescence in the initiatives of a private party is not sufficient to justify holding the State responsible” for private actions.[92] While each case is very fact-specific,[93] courts have developed several tests to determine when government compulsion or encouragement would transform a private actor into a state actor for constitutional purposes.

For instance, in Bantam Books v. Sullivan,[94] the Court considered whether notices sent by a legislatively created commission to book distributors, declaring certain books and magazines objectionable for sale or distribution, were sufficient to transform into state action the distributors’ subsequent decisions to stop circulating the listed publications. The commission had no legal power to apply formal legal sanctions and there were no bans or seizures of books.[95] In fact, the book distributors were technically “free” to ignore the commission’s notices.[96] Nonetheless, the Court found “the Commission deliberately set about to achieve the suppression of publications deemed ‘objectionable’ and succeeded in its aim.”[97] Particularly important to the Court was that the notices could be seen as a threat to refer the distributors for prosecution, regardless of how the commission styled them. As the Court stated:

People do not lightly disregard public officers’ thinly veiled threats to institute criminal proceedings against them if they do not come around, and [the distributor’s] reaction, according to uncontroverted testimony, was no exception to this general rule. The Commission’s notices, phrased virtually as orders, reasonably understood to be such by the distributor, invariably followed up by police visitations, in fact stopped the circulation of the listed publications ex proprio vigore. It would be naive to credit the State’s assertion that these blacklists are in the nature of mere legal advice, when they plainly serve as instruments of regulation…[98]

Similarly, in Carlin Communications v. Mountain States Telephone Co.,[99] the 9th U.S. Circuit Court of Appeals found it was state action when a deputy county attorney threatened prosecution of a regional telephone company for carrying an adult-entertainment messaging service.[100] “With this threat, Arizona ‘exercised coercive power’ over Mountain Bell and thereby converted its otherwise private conduct into state action…”[101] The court did not find it relevant whether the company’s motivating reason for the removal was the threat of prosecution or its own independent judgment.[102]

In a more recent case dealing with Backpage.com, the 7th U.S. Circuit Court of Appeals found that a sheriff’s campaign to shut down the site by pressuring Visa and Mastercard to cut off payment processing for its ads was impermissible under the First Amendment.[103] There, the sheriff sent a letter to the credit-card companies asking them to “cease and desist” from processing payments for advertisements on Backpage.com, and asking for “contact information” for someone within the companies with whom he could work.[104] The court spent considerable time distinguishing between “attempts to convince and attempts to coerce,”[105] coming to the conclusion that “Sheriff Dart is not permitted to issue and publicize dire threats against credit card companies that process payments made through Backpage’s website, including threats of prosecution (albeit not by him, but by other enforcement agencies that he urges to proceed against them), in an effort to throttle Backpage.”[106] The court also noted “a threat is actionable and thus can be enjoined even if it turns out to be empty—the victim ignores it, and the threatener folds his tent.”[107]

In sum, the focus under the coercion or encouragement theory is on what the state objectively did and not on the subjective understanding of the private actor. In other words, the question is whether the state action is reasonably understood as coercing or encouraging private action, not whether the private actor was actually responding to it.

To date, several federal courts have dismissed claims that social-media companies are state actors under the compulsion/encouragement theory, often distinguishing the above cases on the grounds that the facts did not establish a true threat, or were not sufficiently connected to the enforcement action against the plaintiff.

For instance, in O’Handley v. Weber,[108] the 9th U.S. Circuit Court of Appeals dealt directly with the question of the coercion theory in the context of social-media companies moderating misinformation, allegedly at the behest of California’s Office of Elections Cybersecurity (OEC). The OEC flagged allegedly misleading posts on Facebook and Twitter and the social-media companies removed most of those flagged posts.[109] First, the court found there were no threats from the OEC like those in Carlin, nor any incentive offered to take the posts down.[110] The court then distinguished between “attempts to convince and attempts to coerce,”[111] noting that “[a] private party can find the government’s stated reasons for making a request persuasive, just as it can be moved by any other speaker’s message. The First Amendment does not interfere with this communication so long as the intermediary is free to disagree with the government and to make its own independent judgment about whether to comply with the government’s request.”[112] The court concluded that the OEC did not pressure Twitter to take any particular action against the plaintiff, but went even further by emphasizing that, even if its actions could be seen as a specific request to remove his post, Twitter’s compliance was “purely optional.”[113] In other words, if there is no threat in a government actor’s request to take down content, then it is not impermissible coercion or encouragement.

In Hart v. Facebook,[114] the plaintiff argued that the federal government defendants had—through threats of removing Section 230 immunity and antitrust investigations, as well as comments by President Joe Biden stating that social-media companies were “killing people” by not policing misinformation about COVID-19—coerced Facebook and Twitter into removing his posts.[115] The plaintiff also pointed to recommendations from Biden and an advisory from Surgeon General Vivek Murthy as further evidence of coercion or encouragement. The court rejected this evidence, stating that “the government’s vague recommendations and advisory opinions are not coercion. Nor can coercion be inferred from President Biden’s comment that social media companies are ‘killing people’… A President’s one-time statement about an industry does not convert into state action all later decisions by actors in that industry that are vaguely in line with the President’s preferences.”[116] But even more importantly, the court found that there was no connection between the allegations of coercion and the removal of his particular posts: “Hart has not alleged any connection between any (threat of) agency investigation and Facebook and Twitter’s decisions… even if Hart had plausibly pleaded that the Federal Defendants exercised coercive power over the companies’ misinformation policies, he still fails to specifically allege that they coerced action as to him.”[117]

Other First Amendment cases against social-media companies alleging coercion or encouragement from state actors have been dismissed for reasons similar to those in Hart.[118] In Missouri et al. v. Biden, et al.,[119] however, the U.S. District Court for the Western District of Louisiana became the first court to find social-media companies could be state actors for purposes of the First Amendment due to a coercion or encouragement theory. After surveying (most of) the same cases discussed above, the court found that:

Here, Plaintiffs have clearly alleged that Defendants attempted to convince social-media companies to censor certain viewpoints. For example, Plaintiffs allege that Psaki demanded the censorship of the “Disinformation Dozen” and publicly demanded faster censorship of “harmful posts” on Facebook. Further, the Complaint alleges threats, some thinly veiled and some blatant, made by Defendants in an attempt to effectuate its censorship program. One such alleged threat is that the Surgeon General issued a formal “Request for Information” to social-media platforms as an implied threat of future regulation to pressure them to increase censorship. Another alleged threat is the DHS’s publishing of repeated terrorism advisory bulletins indicating that “misinformation” and “disinformation” on social-media platforms are “domestic terror threats.” While not a direct threat, equating failure to comply with censorship demands as enabling acts of domestic terrorism through repeated official advisory bulletins is certainly an action social-media companies would not lightly disregard. Moreover, the Complaint contains over 100 paragraphs of allegations detailing “significant encouragement” in private (i.e., “covert”) communications between Defendants and social-media platforms.

The Complaint further alleges threats that far exceed, in both number and coercive power, the threats at issue in the above-mentioned cases. Specifically, Plaintiffs allege and link threats of official government action in the form of threats of antitrust legislation and/or enforcement and calls to amend or repeal Section 230 of the CDA with calls for more aggressive censorship and suppression of speakers and viewpoints that government officials disfavor. The Complaint even alleges, almost directly on point with the threats in Carlin and Backpage, that President Biden threatened civil liability and criminal prosecution against Mark Zuckerberg if Facebook did not increase censorship of political speech. The Court finds that the Complaint alleges significant encouragement and coercion that converts the otherwise private conduct of censorship on social-media platforms into state action, and is unpersuaded by Defendants’ arguments to the contrary.[120]

There is obvious tension between Missouri v. Biden and the O’Handley and Hart opinions. The Missouri v. Biden court did attempt to incorporate O’Handley into its opinion, seeking to distinguish it on the grounds that the OEC’s conduct at issue was a mere advisory, whereas the federal defendants in Missouri v. Biden were alleged to have made actual threats.[121]

It is perhaps plausible that Hart can also be read as consistent with Missouri v. Biden, in the sense that while Hart failed to allege sufficient facts of coercion/encouragement or a connection with his specific removal, the plaintiffs in Missouri v. Biden did. Nonetheless, the Missouri v. Biden court accepted many factual arguments that were rejected in Hart, such as those about the relevance of certain statements made by President Biden and his press secretary; threats to revoke Section 230 liability protections; and threats to start antitrust proceedings. Perhaps the difference is that the factual allegations in Missouri v. Biden were substantially longer and more detailed than those in Hart. And while the Missouri v. Biden court did not address it in its First Amendment section, it did note that the social-media companies’ censorship actions generated sufficient injury-in-fact to the plaintiffs to establish standing.[122] In other words, it could just be that what makes the difference is the better factual pleading in Missouri v. Biden, due to more available revelations of government coercion and encouragement.[123]

On the other hand, there may be value in cabining Missouri v. Biden with some of the criteria in O’Handley and Hart. For instance, the government arguably should retain the ability to share information with social-media companies and to request that they review certain posts and accounts that may purvey misinformation. O’Handley emphasizes that there is a difference between convincing and coercing. This is important not only for dealing with online misinformation, but also for dealing with things like terrorist activity on the platforms. Insofar as Missouri v. Biden is too lenient in allowing cases to go forward, this may be a fruitful distinction for courts to clarify.[124]

Similarly, the requirement in Hart that a specific moderation decision be connected to a particular government action is very important to limit the universe of activity subject to First Amendment analysis. The Missouri v. Biden court didn’t deal sufficiently with whether the allegations of coercion and encouragement were connected to the plaintiffs’ content and accounts being censored. As Missouri v. Biden reaches the merits stage of the litigation, the court will also need to clarify the evidence needed to infer state action, assuming there is no explicit admission of direction by state actors.[125]

Under the law & economics theory laid out in Part I, the coercion or encouragement exception to the strong private/state action distinction is particularly important. The benefits of private social-media companies using their editorial judgment to remove misinformation in response to user and advertiser demand is significantly reduced when the government coerces, encourages, or otherwise induces moderation decisions. In such cases, the government is essentially engaged in covert regulation by deciding for private actors what is true and what is false. This is inconsistent with a “marketplace of ideas” or the “marketplace for speech governance” that the First Amendment’s state-action doctrine protects.

There is value, however, in limiting the Missouri v. Biden holding to ensure that not all requests by government agents automatically transform moderation decisions into state action, and in requiring that coercion or encouragement be connected to particular allegations of censorship. Government actors, as much as private actors, should be able to alert social-media companies to the presence of misinformation and even persuade social-media companies to act in certain cases, so long as that communication doesn’t amount to a threat. This is consistent with a “marketplace for speech governance.” Moreover, social-media companies shouldn’t be considered state actors for all moderation decisions, or even all moderation decisions regarding misinformation, due to government coercion or encouragement in general. Without a nexus between the coercion or encouragement and a particular moderation decision, social-media companies would lose the ability to use their editorial judgment on a wide variety of issues in response to market demand, to the detriment of their users and advertisers.

C.      Joint Action or Symbiotic Relationship

There is also state action for the purposes of the First Amendment when the government acts jointly with a private actor,[126] when there is a “symbiotic relationship” between the government and a private actor,[127] or when there is “inextricable entwinement” between a private actor and the government.[128] None of these theories is necessarily distinct,[129] and it is probably easier to define them through examples.[130]

In Lugar v. Edmondson Oil Co., the plaintiff, an operator of a truck stop, was indebted to his supplier.[131] The defendant was a creditor who used a state law in Virginia to get a prejudgment attachment to the truck-stop operator’s property, which was then executed by the county sheriff.[132] A hearing was held 34 days later, pursuant to the relevant statute.[133] The levy at issue was dismissed because the creditor failed to satisfy the statute. The plaintiff then brought a Section 1983 claim against the defendant on grounds that it had violated the plaintiff’s Due Process rights by taking his property without first providing him with a hearing. The Supreme Court took the case to clarify how the state-action doctrine applied in such matters. The Court, citing previous cases, stated that:

Private persons, jointly engaged with state officials in the prohibited action, are acting “under color” of law for purposes of the statute. To act “under color” of law does not require that the accused be an officer of the State. It is enough that he is a willful participant in joint activity with the State or its agents.[134]

The Court also noted that “we have consistently held that a private party’s joint participation with state officials in the seizure of disputed property is sufficient to characterize that party as a ‘state actor.’”[135] Accordingly, the Court found that the defendant’s joint participation with the sheriff in invoking the prejudgment-attachment statute constituted state action for purposes of the plaintiff’s Due Process claim.[136]

In Burton v. Wilmington Parking Authority,[137] the Court heard a racial-discrimination case in which the question was whether state action was involved when a restaurant refused to serve black customers in space it leased within a publicly owned building that housed a public parking garage.[138] The Court determined that it was state action, noting that “[i]t cannot be doubted that the peculiar relationship of the restaurant to the parking facility in which it is located confers on each an incidental variety of mutual benefits… Addition of all these activities, obligations and responsibilities of the Authority, the benefits mutually conferred, together with the obvious fact that the restaurant is operated as an integral part of a public building devoted to a public parking service, indicates that degree of state participation and involvement in discriminatory action which it was the design of the Fourteenth Amendment to condemn.”[139] While the Court didn’t itself call this theory the “symbiotic relationship” test in Burton, later Court opinions did exactly that.[140]

Brentwood Academy v. Tennessee Secondary School Athletic Association arose from a dispute between a private Christian school and the statewide athletics association governing interscholastic sports over a series of punishments for alleged “undue influence” in recruiting athletes.[141] The central issue was whether the athletic association was a state actor. The Court analyzed whether state actors were so “entwined” with the private actors in the association as to make the resulting action state action.[142] After reviewing the record, the Court noted that 84% of the members of the athletic association were public schools and the association’s rules were made by representatives from those schools.[143] The Court concluded that the “entwinement down from the State Board is therefore unmistakable, just as the entwinement up from the member public schools is overwhelming. Entwinement will support a conclusion that an ostensibly private organization ought to be charged with a public character and judged by constitutional standards; entwinement to the degree shown here requires it.”[144]

Other cases have also considered circumstances in which government regulation, combined with other government actions, can create a situation where private action is considered that of the government. In Skinner v. Railway Labor Executives Association,[145] the Court considered a situation where private railroads engaged in drug testing of employees, pursuant to a federal regulation that authorized them to adopt a policy of drug testing and preempted state laws restricting testing.[146] The Court stated that “[t]he fact that the Government has not compelled a private party to perform a search does not, by itself, establish that the search is a private one. Here, specific features of the regulations combine to convince us that the Government did more than adopt a passive position toward the underlying private conduct.”[147] The Court found the preemption of state law particularly important, finding “[t]he Government has removed all legal barriers to the testing authorized by Subpart D and indeed has made plain not only its strong preference for testing, but also its desire to share the fruits of such intrusions.”[148]

Each of these theories has been pursued by litigants who have had social-media posts or accounts removed by online platforms due to alleged misinformation, including in the O’Handley and Hart cases discussed earlier.

For instance, in O’Handley, the 9th Circuit rejected the argument that Twitter was a state actor under the joint-action test. The court stated there were two ways to prove joint action: either by a conspiracy theory that requires a “meeting of the minds” to violate constitutional rights, or by a “willful participant” theory that requires “a high degree of cooperation between private parties and state officials.”[149] The court rejected the conspiracy theory, stating there was no meeting of the minds to violate constitutional rights because Twitter had its own independent interest in “not allowing users to leverage its platform to mislead voters.”[150] The court also rejected the willful-participant theory because Twitter was free to consider and reject flags made by the OEC in the Partner Support Portal under its own understanding of its policy on misinformation.[151] The court analogized the case to Mathis v. Pac. Gas & Elec. Co.,[152] finding this “closely resembles the ‘consultation and information sharing’ that we held did not rise to the level of joint action.”[153] The court concluded that “this was an arm’s-length relationship, and Twitter never took its hands off the wheel.”[154]

Similarly, in Hart, the U.S. District Court for the Northern District of California rejected the joint action theory as applied to Twitter and Facebook. The court found that much of the complained-of conduct by Facebook predated the communications with the federal defendants about misinformation, making it unlikely that there was a “meeting of the minds” to deprive the plaintiff of his constitutional rights.[155] The court also found “the Federal Defendants’ statements… far too vague and precatory to suggest joint action,” adding that recommendations and advisories are both vague and unenforceable.[156] Other courts followed similar reasoning in rejecting First Amendment claims against social-media companies.[157]

Finally, in Children’s Health Defense v. Facebook,[158] the court considered whether Section 230, much like the regulation at issue in Skinner, could make Facebook a joint actor with the state when it removes misinformation. The U.S. District Court for the Northern District of California distinguished Skinner, citing a previous case finding that “[u]nlike the regulations in Skinner, Section 230 does not require private entities to do anything, nor does it give the government a right to supervise or obtain information about private activity.”[159]

For the first time, a federal district court found state action under the joint action or entwinement theory in Missouri v. Biden. The court found that:

Here, Plaintiffs have plausibly alleged joint action, entwinement, and/or that specific features of Defendants’ actions combined to create state action. For example, the Complaint alleges that “[o]nce in control of the Executive Branch, Defendants promptly capitalized on these threats by pressuring, cajoling, and openly colluding with social-media companies to actively suppress particular disfavored speakers and viewpoints on social media.” Specifically, Plaintiffs allege that Dr. Fauci, other CDC officials, officials of the Census Bureau, CISA, officials at HHS, the state department, and members of the FBI actively and directly coordinated with social-media companies to push, flag, and encourage censorship of posts the Government deemed “Mis, Dis, or Malinformation.”[160]

The court also distinguished O’Handley, finding there was more than an “arm’s-length relationship” between the federal defendants and the social-media companies:

Plaintiffs allege a formal government-created system for federal officials to influence social-media censorship decisions. For example, the Complaint alleges that federal officials set up a long series of formal meetings to discuss censorship, setting up privileged reporting channels to demand censorship, and funding and establishing federal-private partnership to procure censorship of disfavored viewpoints. The Complaint clearly alleges that Defendants specifically authorized and approved the actions of the social-media companies and gives dozens of examples where Defendants dictated specific censorship decisions to social-media platforms. These allegations are a far cry from the complained-of action in O’Handley: a single message from an unidentified member of a state agency to Twitter.[161]

Finally, the court also found similarities between Skinner and Missouri v. Biden that would support a finding of state action:

Section 230 of the CDA purports to preempt state laws to the contrary, thus removing all legal barriers to the censorship immunized by Section 230. Federal officials have also made plain a strong preference and desire to “share the fruits of such intrusions,” showing “clear indices of the Government’s encouragement, endorsement, and participation” in censorship, which “suffice to implicate the [First] Amendment.”

The Complaint further explicitly alleges subsidization, authorization, and preemption through Section 230, stating: “[T]hrough Section 230 of the Communications Decency Act (CDA) and other actions, the federal government subsidized, fostered, encouraged, and empowered the creation of a small number of massive social-media companies with disproportionate ability to censor and suppress speech on the basis of speaker, content, and viewpoint.” Section 230 immunity constitutes the type of “tangible financial aid,” here worth billions of dollars per year, that the Supreme Court identified in Norwood, 413 U.S. at 466, 93 S.Ct. 2804. This immunity also “has a significant tendency to facilitate, reinforce, and support private” censorship. Id. Combined with other factors such as the coercive statements and significant entwinement of federal officials and censorship decisions on social-media platforms, as in Skinner, this serves as another basis for finding government action.[162]

Again, there is tension in the opinions of these cases on the intersection of social media and the First Amendment under the joint-action or symbiotic-relationship test. But there are ways to read the cases consistently. First, there were far more factual allegations in Missouri v. Biden than in the O’Handley, Hart, or Children’s Health Defense cases, particularly regarding how involved the federal defendants were in prodding social-media companies to moderate misinformation. There is even a way to read the different legal conclusions on Section 230 and Skinner consistently. The court in Missouri v. Biden made clear that it wasn’t Section 230 alone that made it like Skinner, but the combination of Section 230 immunity with other factors present:

The Defendants’ alleged use of Section 230’s immunity—and its obvious financial incentives for social-media companies—as a metaphorical carrot-and-stick combined with the alleged back-room meetings, hands-on approach to online censorship, and other factors discussed above transforms Defendants’ actions into state action. As Defendants note, Section 230 was designed to “reflect a deliberate absence of government involvement in regulating online speech,” but has instead, according to Plaintiffs’ allegations, become a tool for coercion used to encourage significant joint action between federal agencies and social-media companies.[163]

While there are dangers inherent in treating Section 230 alone as grounds for deeming social-media companies state actors, the court appears inclined to say that it is not Section 230 itself, but rather the threat of removing it, combined with the federal government’s other dealings and communications, that converts this conduct into state action.

Under the law & economics theory outlined in Part I, the joint-action or symbiotic-relationship test is also an important exception to the general dichotomy between private and state action. In particular, it is important to deter state officials from engaging in surreptitious speech regulation by covertly interjecting themselves into social-media companies’ moderation decisions. The allegations in Missouri v. Biden, if proven true, do appear to outline a vast and largely hidden infrastructure through which federal officials use backchannels to routinely discuss social-media companies’ moderation decisions and often pressure them into removing disfavored content in the name of misinformation. This kind of government intervention into the “marketplace of ideas” and the “market for private speech governance” takes away companies’ ability to respond freely to market incentives in moderating misinformation, and replaces their own editorial discretion with the opinions of government officials.

III.    Applying the First Amendment to Government Regulation of Online Misinformation

A number of potential consequences might stem from a plausible claim of state action levied against online platforms under one of the theories described above. Part III.A will explore the likely result: a true censorship-by-deputization scheme enacted through social-media companies would be found to violate the First Amendment. Part III.B will consider the question of remedies: even where there is a First Amendment violation, removed content or accounts will not necessarily be restored. Part III.C will then offer alternative ways for the government to address the problem of online misinformation without offending the First Amendment.

A.      If State Action Is Found, Removal of Content Under Misinformation Policies Would Violate the First Amendment

At a high level, First Amendment jurisprudence does allow for government regulation of speech in limited circumstances. In those cases, the threshold questions are whether the speech at issue is protected and whether the regulation is content-based.[164] If the speech is protected and the regulation is content-based, the government must show the state action is narrowly tailored to a compelling governmental interest: the so-called “strict scrutiny” standard.[165] A compelling governmental interest is the highest interest the state can assert, something considered necessary or crucial, beyond simply legitimate or important.[166] “Narrow tailoring” means the regulation uses the least-restrictive means “among available, effective alternatives.”[167] While not an impossible standard for the government to meet, “[s]trict scrutiny leave[s] few survivors.”[168] Moreover, prior restraints of speech, which restrict speech before publication, are presumptively unconstitutional.[169]

Only for content- and viewpoint-neutral “time, place, and manner restrictions” will regulation of protected speech receive less than strict scrutiny.[170] In those cases, as long as the regulation serves a “significant” government interest, and there are alternative channels available for the expression, the regulation is permissible.[171]

There are also situations where speech regulation—whether because the regulation aims at conduct but has speech elements or because the speech is not fully protected for some other reason—receives “intermediate scrutiny.”[172] In those cases, the government must show the state action is narrowly tailored to an important or substantial governmental interest, and burdens no more speech than necessary.[173] Beyond the levels of scrutiny to which speech regulation is subject, state actions involving speech also may be struck down for overbreadth[174] or vagueness.[175] Together, these doctrines work to protect a very large sphere of speech, beyond what is protected in most jurisdictions around the world.

The initial question that arises with alleged misinformation is how to even define it. Neither social-media companies nor the government actors on whose behalf they may be acting are necessarily experts in misinformation. This can result in “void-for-vagueness” problems.

In Høeg v. Newsom,[176] the U.S. District Court for the Eastern District of California considered California’s state law AB 2098, which would charge medical doctors with “unprofessional conduct” and subject them to discipline if they shared with patients “false information that is contradicted by contemporary scientific consensus contrary to the standard of care” as part of treatment or advice.[177] The court stated that “[a] statute is unconstitutionally vague when it either ‘fails to provide a person of ordinary intelligence fair notice of what is prohibited, or is so standardless that it authorizes or encourages seriously discriminatory enforcement’”[178] and that “[v]ague statutes are particularly objectionable when they ‘involve sensitive areas of First Amendment freedoms’ because ‘they operate to inhibit the exercise of those freedoms.’”[179] The court rejected the invitation to apply a lower vagueness standard typically used for technical language because “contemporary scientific consensus” has no established technical meaning in the scientific community.[180] The court also asked a series of questions that would be particularly relevant to social-media companies acting on behalf of government actors in efforts to combat misinformation:

[W]ho determines whether a consensus exists to begin with? If a consensus does exist, among whom must the consensus exist (for example practicing physicians, or professional organizations, or medical researchers, or public health officials, or perhaps a combination)? In which geographic area must the consensus exist (California, or the United States, or the world)? What level of agreement constitutes a consensus (perhaps a plurality, or a majority, or a supermajority)? How recently in time must the consensus have been established to be considered “contemporary”? And what source or sources should physicians consult to determine what the consensus is at any given time (perhaps peer-reviewed scientific articles, or clinical guidelines from professional organizations, or public health recommendations)?[181]

The court noted that defining the consensus with reference to pronouncements from the U.S. Centers for Disease Control and Prevention or the World Health Organization would be unhelpful, as those entities changed their recommendations on several important health issues over the course of the COVID-19 pandemic:

Physician plaintiffs explain how, throughout the course of the COVID-19 pandemic, scientific understanding of the virus has rapidly and repeatedly changed. (Høeg Decl. ¶¶ 15-29; Duriseti Decl. ¶¶ 7-15; Kheriaty Decl. ¶¶ 7-10; Mazolewski Decl. ¶¶ 12-13.) Physician plaintiffs further explain that because of the novel nature of the virus and ongoing disagreement among the scientific community, no true “consensus” has or can exist at this stage. (See id.) Expert declarant Dr. Verma similarly explains that a “scientific consensus” concerning COVID-19 is an illusory concept, given how rapidly the scientific understanding and accepted conclusions about the virus have changed. Dr. Verma explains in detail how the so-called “consensus” has developed and shifted, often within mere months, throughout the COVID-19 pandemic. (Verma Decl. ¶¶ 13-42.) He also explains how certain conclusions once considered to be within the scientific consensus were later proved to be false. (Id. ¶¶ 8-10.) Because of this unique context, the concept of “scientific consensus” as applied to COVID-19 is inherently flawed.[182]

As a result, the court determined that “[b]ecause the term ‘scientific consensus’ is so ill-defined, physician plaintiffs are unable to determine if their intended conduct contradicts the scientific consensus, and accordingly ‘what is prohibited by the law.’”[183] The court upheld a preliminary injunction against the law because of a high likelihood of success on the merits.[184]

Assuming the government could define misinformation in a way that was not vague, the next question is what level of First Amendment scrutiny such edicts would receive. For several reasons, it is clear that regulation of online misinformation would receive, and fail, the highest form of constitutional scrutiny.

First, the threat of government censorship of speech through social-media misinformation policies could be considered a prior restraint. Prior restraints occur when the government (or actors working on its behalf) restricts speech before publication. As the Supreme Court has put it many times, “any system of prior restraints of expression comes to this Court bearing a heavy presumption against its constitutional validity.”[185]

In Missouri v. Biden, the court found the plaintiffs had plausibly alleged prior restraints against their speech, and noted that “[t]hreatening penalties for future speech goes by the name of ‘prior restraint,’ and a prior restraint is the quintessential first-amendment violation.”[186] The court found it relevant that social-media companies could “silence” speakers’ voices at a “mere flick of the switch,”[187] and noted this could amount to “a prior restraint by preventing a user of the social-media platform from voicing their opinion at all.”[188] The court further stated that “bans, shadow-bans, and other forms of restrictions on Plaintiffs’ social-media accounts, are… de facto prior restraints, [a] clear violation of the First Amendment.”[189]

Second, it is clear that any restriction on speech based upon its truth or falsity would be a content-based regulation, and likely a viewpoint-based regulation, as it would require the state actor to take a side on a matter of dispute.[190] Content-based regulation requires strict scrutiny, and a reasonable case can be made that viewpoint-based regulation of speech is per se inconsistent with the First Amendment.[191]

In Missouri v. Biden, the court noted that “[g]overnment action, aimed at the suppression of particular views on a subject which discriminates on the basis of viewpoint, is presumptively unconstitutional.”[192] The court found that “[p]laintiffs allege a regime of censorship that targets specific viewpoints deemed mis-, dis-, or malinformation by federal officials. Because Plaintiffs allege that Defendants are targeting particular views taken by speakers on a specific subject, they have alleged a clear violation of the First Amendment, i.e., viewpoint discrimination.”[193]

Third, even assuming there is clearly false speech that government agents (and social-media companies acting on their behalf) could identify, false speech presumptively receives full First Amendment protection. In United States v. Alvarez,[194] the Supreme Court stated that, while older cases may have suggested that false speech does not receive full protection, those statements were “confined to the few ‘historic and traditional categories [of expression] long familiar to the bar.’”[195] In other words, there is no “general exception to the First Amendment for false statements.”[196] Thus, as protected speech, any regulation of false speech, as such, would run into strict scrutiny.

In order to survive First Amendment scrutiny, government agents acting through social-media companies would have to demonstrate a parallel or alternative justification to regulate the sort of low-value speech the Supreme Court has recognized as outside the protection of the First Amendment.[197] These exceptions include defamation, fraud, the tort of false light, false statements to government officials, perjury, falsely representing oneself as speaking for the government (and impersonation), and other similar examples of fraud or false speech integral to criminal conduct.[198]

But the Alvarez Court noted that, even in areas where false speech does not receive protection, such as fraud and defamation, the Supreme Court has found that the First Amendment requires such claims to be based on more than falsity alone.[199]

When it comes to fraud,[200] for instance, the Supreme Court has repeatedly noted that the First Amendment offers no protection.[201] But “[s]imply labeling an action one for ‘fraud’… will not carry the day.”[202] Prophylactic rules aimed at protecting the public from the (sometimes fraudulent) solicitation of charitable donations, for instance, have been found to be unconstitutional prior restraints on several occasions by the Court.[203] The Court has found that “in a properly tailored fraud action the State bears the full burden of proof. False statement alone does not subject a fundraiser to fraud liability… Exacting proof requirements… have been held to provide sufficient breathing room for protected speech.”[204]

As for defamation,[205] the Supreme Court found in New York Times v. Sullivan[206] that “[a]uthoritative interpretations of the First Amendment guarantees have consistently refused to recognize an exception for any test of truth—whether administered by judges, juries, or administrative officials—and especially one that puts the burden of proving truth on the speaker.”[207] In Sullivan, the Court struck down an Alabama defamation statute, finding that in situations dealing with public officials, the mens rea must be actual malice: knowledge that the statement was false or reckless disregard for whether it was false.[208]

Since none of these exceptions would apply to online misinformation dealing with medicine or election law, social-media companies’ actions on behalf of the government against such misinformation would likely fail strict scrutiny. While it is possible that a court would find protecting public health or election security to be a compelling interest, the government would still face great difficulty showing that a ban on false information is narrowly tailored. It is highly unlikely that a ban on false information, as such, will ever be the least-restrictive means of controlling a harm. As the Court put it in Alvarez:

The remedy for speech that is false is speech that is true… Freedom of speech and thought flows not from the beneficence of the state but from the inalienable rights of the person. And suppression of speech by the government can make exposure of falsity more difficult, not less so. Society has the right and civic duty to engage in open, dynamic, rational discourse. These ends are not well served when the government seeks to orchestrate public discussion through content-based mandates.[209]

As argued above in Part I, a vibrant marketplace of ideas requires that individuals have the ability to express their ideas so that the best ideas can win out. This means counter-speech is a better tool than government censorship for helping society determine what is true. The First Amendment’s protection against government intervention in the marketplace of ideas thus promotes a better answer to online misinformation. A finding that government actors cannot use social-media companies to censor protected speech, based on vague definitions of misinformation and through prior restraints and viewpoint discrimination, is consistent with an understanding of a world in which information is dispersed.

B.      The Problem of Remedies for Social-Media ‘Censorship’: The First Amendment Still Only Applies to Government Action

There is a problem, however, for plaintiffs who win cases against social-media companies that are found to be state actors when they remove posts and accounts due to alleged misinformation: the remedies are limited.

First, once the state action is removed through injunction, social-media companies would be free to continue to moderate misinformation as they see fit, free from any plausible First Amendment claim. For instance, in Carlin Communications, the 9th Circuit found that, once the state action was enjoined, the telecommunications company was again free to determine whether or not to extend its service to the plaintiff. As the court put it:

Mountain Bell insists that its new policy reflected its independent business judgment. Carlin argues that Mountain Bell was continuing to yield to state threats of prosecution. However, the factual question of Mountain Bell’s true motivations is immaterial.

This is true because, inasmuch as the state under the facts before us may not coerce or otherwise induce Mountain Bell to deprive Carlin of its communication channel, Mountain Bell is now free to once again extend its 976 service to Carlin. Our decision substantially immunizes Mountain Bell from state pressure to do otherwise. Should Mountain Bell not wish to extend its 976 service to Carlin, it is also free to do that. Our decision modifies its public utility status to permit this action. Mountain Bell and Carlin may contract, or not contract, as they wish.[210]

This is consistent with the district court’s actions in Missouri v. Biden. There, the court granted the motion for a preliminary injunction, but it only applied against government action and not against the social-media companies at all.[211] For instance, the injunction prohibits a number of named federal officials and agencies from:

(1) meeting with social-media companies for the purpose of urging, encouraging, pressuring, or inducing in any manner the removal, deletion, suppression, or reduction of content containing protected free speech posted on social-media platforms;

(2) specifically flagging content or posts on social-media platforms and/or forwarding such to social-media companies urging, encouraging, pressuring, or inducing in any manner for removal, deletion, suppression, or reduction of content containing protected free speech;

(3) urging, encouraging, pressuring, or inducing in any manner social-media companies to change their guidelines for removing, deleting, suppressing, or reducing content containing protected free speech;

(4) emailing, calling, sending letters, texting, or engaging in any communication of any kind with social-media companies urging, encouraging, pressuring, or inducing in any manner for removal, deletion, suppression, or reduction of content containing protected free speech;

(5) collaborating, coordinating, partnering, switchboarding, and/or jointly working with the Election Integrity Partnership, the Virality Project, the Stanford Internet Observatory, or any like project or group for the purpose of urging, encouraging, pressuring, or inducing in any manner removal, deletion, suppression, or reduction of content posted with social-media companies containing protected free speech;

(6) threatening, pressuring, or coercing social-media companies in any manner to remove, delete, suppress, or reduce posted content of postings containing protected free speech;

(7) taking any action such as urging, encouraging, pressuring, or inducing in any manner social-media companies to remove, delete, suppress, or reduce posted content protected by the Free Speech Clause of the First Amendment to the United States Constitution;

(8) following up with social-media companies to determine whether the social-media companies removed, deleted, suppressed, or reduced previous social-media postings containing protected free speech;

(9) requesting content reports from social-media companies detailing actions taken to remove, delete, suppress, or reduce content containing protected free speech; and

(10) notifying social-media companies to Be on The Lookout (BOLO) for postings containing protected free speech.[212]

In other words, a social-media company would not necessarily even be required to reinstate accounts or posts of those who have been excluded under their misinformation policies. It would become a question of whether, responding to marketplace incentives sans government involvement, the social-media companies continue to find it in their interest to enforce such policies against those affected persons and associated content.

Another avenue for private plaintiffs may be a civil rights claim under Section 1983.[213] If it can be proved that social-media companies participated in joint action with government officials to restrict First Amendment rights, it may be possible to collect damages from them, as well as from government officials.[214] Plaintiffs may struggle, however, to prove compensatory damages, which require proof of harm. Categories of harm like physical injury are not relevant to social-media moderation policies, leaving things like diminished earnings or impairment of reputation. In most cases, the damages to plaintiffs are likely de minimis and hardly worth the expense of filing suit. To receive punitive damages, plaintiffs would have to prove “the defendant’s conduct is… motivated by evil motive or intent, or when it involves reckless or callous indifference to the federally protected rights of others.”[215] This would be difficult to establish against the social-media companies absent an admission in the record that those companies’ goal was to suppress rights, rather than to restrict misinformation in good faith or simply to accede to government inducements.

The remedies available for constitutional violations in claims aimed at government officials are consistent with a theory of the First Amendment that prioritizes protecting the marketplace of ideas from intervention. While it leaves many plaintiffs with limited remedies against the social-media companies once the government actions are enjoined or deterred, it does return the situation to one where the social-media companies can once again compete freely in the market for speech governance, including governance of misinformation.

C.      What Can the Government Do Under the First Amendment in Response to Misinformation on Social-Media Platforms?

If direct government regulation or implicit intervention through coercion or collusion with social-media companies is impermissible, the question may then arise as to what, exactly, the government can do to combat online misinformation.

The first option was already discussed in Part III.A in relation to Alvarez and narrow tailoring: counter-speech. Government agencies concerned about health or election misinformation could use social-media platforms to get their own message out. Those agencies could even amplify and target such counter-speech through advertising campaigns tailored to those most likely to share or receive misinformation.

Similarly, government agencies could create their own apps or social-media platforms to publicize information that counters alleged misinformation. While this may at first appear to be an unusual step, the federal government does, through the Corporation for Public Broadcasting, subsidize public television and public radio. If there is a fear of online misinformation, creating a platform where the government can promote its own point of view could combat online misinformation in a way that doesn’t offend the First Amendment.

Additionally, as discussed above in Part II.B in relation to O’Handley and the distinction between convincing and coercing, the government may flag alleged misinformation and even attempt to persuade social-media companies to act, so long as such communications involve no implicit or explicit threats of regulation or prosecution if nothing is done. The U.S. District Court for the Western District of Louisiana distinguished between constitutional government speech and unconstitutional coercion or encouragement in the memorandum accompanying its preliminary injunction in Missouri v. Biden:

Defendants also argue that a preliminary injunction would restrict the Defendants’ right to government speech and would transform government speech into government action whenever the Government comments on public policy matters. The Court finds, however, that a preliminary injunction here would not prohibit government speech… The Defendants argue that by making public statements, this is nothing but government speech. However, it was not the public statements that were the problem. It was the alleged use of government agencies and employees to coerce and/or significantly encourage social-media platforms to suppress free speech on those platforms. Plaintiffs point specifically to the various meetings, emails, follow-up contacts, and the threat of amending Section 230 of the Communication Decency Act. Plaintiffs have produced evidence that Defendants did not just use public statements to coerce and/or encourage social-media platforms to suppress free speech, but rather used meetings, emails, phone calls, follow-up meetings, and the power of the government to pressure social-media platforms to change their policies and to suppress free speech. Content was seemingly suppressed even if it did not violate social-media policies. It is the alleged coercion and/or significant encouragement that likely violates the Free Speech Clause, not government speech, and thus, the Court is not persuaded by Defendants’ arguments here.[216]

As the court highlights, there is a special danger in government communications that remain opaque to the public. Requests for action from social-media companies on misinformation should all be public information and not conducted behind closed doors or in covert communications. Such transparency would make it much easier for the public and the courts to determine whether state actors are engaged in government speech or crossing the line into coercion or substantial encouragement to suppress speech.

On the other hand, laws like Florida’s recent SB 262[217] go beyond the delicate First Amendment balance that courts have tried to achieve. That law would bar government officials from sharing any information with social-media companies regarding misinformation, limiting such contacts to the removal of criminal content or accounts, or to an investigation or inquiry to prevent imminent bodily harm, loss of life, or property damage.[218] While going beyond the First Amendment standard may be constitutional, these restrictions could be especially harmful when the government has information that is not otherwise available to the public. As important as it is to restrain government intervention, it would harm the marketplace of ideas to prevent government participation altogether.

Finally, Section 230 reform efforts aimed at limiting immunity in instances where social-media companies have “red flag” knowledge of defamatory material would be another constitutional way to address misinformation.[219] For instance, if a social-media company were presented with evidence that a court or arbitrator has found certain statements to be untrue, it could be required to make reasonable efforts to take down such misinformation, and to keep it down.

Such a proposal would have real-world benefits. For instance, in the recent litigation brought by Dominion Voting Systems against Fox News, the court found that the various factual claims about Dominion rigging the election for Joseph Biden were false.[220] While there was no final finding of liability, because Fox and Dominion reached a settlement,[221] if Dominion were to present the court’s findings to a social-media company, the company would, under this proposal, have an obligation to remove content that repeats the claims the court found to be false. Similarly, an arbitrator’s finding that MyPillow CEO Mike Lindell’s claims of evidence of Chinese interference in the election were demonstrably false[222] could be enough to have those claims removed. The false claims underlying Rudy Giuliani’s recent defamation liability to two Georgia election workers could likewise be taken down.[223]

However, these benefits may be limited by the fact that not every defamation claim resolves with a court finding a statement false. Some cases settle before they get that far, and the underlying claims remain unproven allegations. And, as discussed above, defamation itself is not easy to prove, especially for public figures, who must also show “actual malice.”[224] As a result, many cases will never be brought. This means there could be quite a bit of defamatory information put out into the world that courts or arbitrators are unlikely to have occasion to consider.

On the other hand, making a social-media company responsible for removing allegedly defamatory information in the absence of a competent legal authority finding the underlying claim false would be ripe for abuse and could have drastic chilling effects on speech. Thus, any Section 230 reform must be limited to those occasions where a court or arbitrator of competent authority (and with some finality of judgment) has spoken on the falsity of a statement.

Conclusion

There is an important distinction in First Amendment jurisprudence between private and state action. To promote a free market in ideas, we must also protect private speech governance, like that of social-media companies. Private actors are best placed to balance the desires of people for speech platforms and the regulation of misinformation.

But when the government puts its thumb on the scale by pressuring those companies to remove content or users in the name of misinformation, there is no longer a free marketplace of ideas. The First Amendment has exceptions in its state-action doctrine that would allow courts to enjoin government actors from initiating coercion of or collusion with private actors to do that which would be illegal for the government to do itself. Government censorship by deputization is no more allowed than direct regulation of alleged misinformation.

There are, however, things the government can do to combat misinformation, including counter-speech and nonthreatening communications with social-media platforms. Section 230 could also be modified to require the takedown of adjudicated misinformation in certain cases.

At the end of the day, the government’s role in defining or policing misinformation is necessarily limited in our constitutional system. The production of true knowledge in the marketplace of ideas may not be perfect, but it is the least bad system we have yet created.

[1] West Virginia Bd. of Ed. v. Barnette, 319 U.S. 624, 642 (1943).

[2] United States v. Alvarez, 567 U.S. 709, 728 (2012).

[3] See Amanda Seitz, Disinformation Board to Tackle Russia, Migrant Smugglers, Associated Press (Apr. 28, 2022), https://apnews.com/article/russia-ukraine-immigration-media-europe-misinformation-4e873389889bb1d9e2ad8659d9975e9d.

[4] See, e.g., Rep. Doug LaMalfa, Brave New World? Orwellian ‘Disinformation Governance Board’ Goes Against Nation’s Principles, The Hill (May 4, 2022), https://thehill.com/opinion/congress-blog/3476632-brave-new-world-orwellian-disinformation-governance-board-goes-against-nations-principles; Letter to Secretary Mayorkas from Ranking Members of the House Committee on Oversight and Reform (Apr. 29, 2022), available at https://oversight.house.gov/wp-content/uploads/2022/04/Letter-to-DHS-re-Disinformation-Governance-Board-04292022.pdf (stating “DHS is creating the Orwellian-named ‘Disinformation Governance Board’”); Jon Jackson, Joe Biden’s Disinformation Board Likened to Orwell’s ‘Ministry of Truth’, Newsweek (Apr. 29, 2022), https://www.newsweek.com/joe-bidens-disinformation-board-likened-orwells-ministry-truth-1702190.

[5] See Geneva Sands, DHS Shuts Down Disinformation Board Months After Its Efforts Were Paused, CNN (Aug. 24, 2022), https://www.cnn.com/2022/08/24/politics/dhs-disinformation-board-shut-down/index.html.

[6] For an example of this type of hearing, see Preserving Free Speech and Reining in Big Tech Censorship, Hearing before the U.S. House Energy and Commerce Subcommittee on Communications and Technology (Mar. 28, 2023), https://www.congress.gov/event/118th-congress/house-event/115561.

[7] See Ken Klippenstein & Lee Fang, Truth Cops: Leaked Documents Outline DHS’s Plans to Police Disinformation, The Intercept (Oct. 31, 2022), https://theintercept.com/2022/10/31/social-media-disinformation-dhs.

[8] See Matt Taibbi, Capsule Summaries of all Twitter Files Threads to Date, With Links and a Glossary, Racket News (last updated Mar. 17, 2023), https://www.racket.news/p/capsule-summaries-of-all-twitter. For evidence that Facebook received similar pressure from and/or colluded with government officials, see Robby Soave, Inside the Facebook Files: Emails Reveal the CDC’s Role in Silencing COVID-19 Dissent, reason (Jan. 19, 2023), https://reason.com/2023/01/19/facebook-files-emails-cdc-covid-vaccines-censorship; Ryan Tracy, Facebook Bowed to White House Pressure, Removed Covid Posts, Wall St. J. (Jul. 28, 2023), https://www.wsj.com/articles/facebook-bowed-to-white-house-pressure-removed-covid-posts-2df436b7.

[9] See Missouri, et al. v. Biden, et al., No. 23-30445 (5th Cir. Sept. 8, 2023), slip op. at 2-14, available at https://www.ca5.uscourts.gov/opinions/pub/23/23-30445-CV0.pdf. See also Hearing on the Weaponization of the Federal Government, Hearing Before the Select Subcomm. on the Weaponization of the Fed. Gov’t (Mar. 30, 2023) (written testimony of D. John Sauer), available at https://judiciary.house.gov/sites/evo-subsites/republicans-judiciary.house.gov/files/2023-03/Sauer-Testimony.pdf.

[10] See infra Part I.

[11] Manhattan Community Access Corp. v. Halleck, 139 S. Ct. 1921, 1928 (2019).

[12] Cf. Whitney v. California, 274 U.S. 357, 377 (1927) (Brandeis, J., concurring) (“If there be time to expose through discussion the falsehood and fallacies, to avert the evil by the processes of education, the remedy to be applied is more speech, not enforced silence”).

[13] See, e.g., Abrams v. United States, 250 U.S. 616, 630 (1919) (Holmes, J., dissenting) (“Persecution for the expression of opinions seems to me perfectly logical. If you have no doubt of your premises or your power and want a certain result with all your heart you naturally express your wishes in law and sweep away all opposition. To allow opposition by speech seems to indicate that you think the speech impotent, as when a man says that he has squared the circle, or that you do not care whole-heartedly for the result, or that you doubt either your power or your premises. But when men have realized that time has upset many fighting faiths, they may come to believe even more than they believe the very foundations of their own conduct that the ultimate good desired is better reached by free trade in ideas — that the best test of truth is the power of the thought to get itself accepted in the competition of the market, and that truth is the only ground upon which their wishes safely can be carried out. That at any rate is the theory of our Constitution. It is an experiment, as all life is an experiment. Every year if not every day we have to wager our salvation upon some prophecy based upon imperfect knowledge. While that experiment is part of our system I think that we should be eternally vigilant against attempts to check the expression of opinions that we loathe and believe to be fraught with death, unless they so imminently threaten immediate interference with the lawful and pressing purposes of the law that an immediate check is required to save the country.”).

[14] Whitney v. California, 274 U.S. 357, 377 (1927). See also, Alvarez, 567 U.S. at 727-28 (“The remedy for speech that is false is speech that is true. This is the ordinary course in a free society. The response to the unreasoned is the rational; to the uninformed, the enlightened; to the straight-out lie, the simple truth. The theory of our Constitution is ‘that the best test of truth is the power of the thought to get itself accepted in the competition of the market.’ The First Amendment itself ensures the right to respond to speech we do not like, and for good reason. Freedom of speech and thought flows not from the beneficence of the state but from the inalienable rights of the person. And suppression of speech by the government can make exposure of falsity more difficult, not less so. Society has the right and civic duty to engage in open, dynamic, rational discourse. These ends are not well served when the government seeks to orchestrate public discussion through content-based mandates.”) (citations omitted).

[15] See, e.g., Jonathan Peters, The “Sovereigns of Cyberspace” and State Action: The First Amendment’s Applications—or Lack Thereof—to Third-Party Platforms, 32 Berk. Tech. L. J. 989 (2017).

[16] See id. at 990, 992 (2017) (emphasizing the need to “talk about the [state action doctrine] until we settle on a view both conceptually and functionally right.”) (citing Charles L. Black, Jr., The Supreme Court, 1966 Term—Foreword: “State Action,” Equal Protection, and California’s Proposition 14, 81 Harv. L. Rev. 69, 70 (1967)).

[17] Or, in the framing of some: to allow too much harmful speech, including misinformation, if it drives attention to the platforms for more ads to be served. See Karen Hao, How Facebook and Google Fund Global Misinformation, MIT Tech. Rev. (Nov. 20, 2021), https://www.technologyreview.com/2021/11/20/1039076/facebook-google-disinformation-clickbait.

[18] See, e.g., Thomas Sowell, Knowledge and Decisions (1980).

[19] That is to say, the marketplace will not perfectly remove misinformation, but will navigate the tradeoffs inherent in limiting misinformation without empowering any one individual or central authority to determine what is true.

[20] See, e.g., Halleck, 139 S. Ct. at 1928; Denver Area Ed. Telecommunications Consortium, Inc. v. FCC, 518 U.S. 727, 737 (1996) (plurality opinion); Hurley v. Irish-American Gay, Lesbian and Bisexual Group of Boston, Inc., 515 U.S. 557, 566 (1995); Hudgens v. NLRB, 424 U.S. 507, 513 (1976).

[21] See Part II below.

[22] For instance, a person could order a visitor to leave their home for saying something offensive and the police would, if called upon, help to eject them as trespassers. In general, courts will enforce private speech restrictions that governments could never constitutionally enact. See Mark D. Rosen, Was Shelley v. Kraemer Incorrectly Decided? Some New Answers, 95 Cal. L. Rev. 451, 458-61 (2007) (listing a number of cases where the holding of Shelley v. Kraemer that court enforcement of private agreements was state action did not extend to the First Amendment, meaning that private agreements to limit speech are enforced).

[23] Halleck, 139 S. Ct. at 1928, 1934 (citations omitted) (emphasis added).

[24] Id. at 1930.

[25] Id. at 1930-31.

[26] It is worth noting that application of the right to editorial discretion to social-media companies is a question that will soon be before the Supreme Court in response to common-carriage laws passed in Florida and Texas that would require carriage of certain speech. The 5th and 11th U.S. Circuit Courts of Appeals have come to opposite conclusions on this point. Compare NetChoice, LLC v. Moody, 34 F.4th 1196 (11th Cir. 2022) (finding the right to editorial discretion was violated by Florida’s common-carriage law), with NetChoice, LLC v. Paxton, 49 F.4th 439 (5th Cir. 2022) (finding the right to editorial discretion was not violated by Texas’ common-carriage law).

[27] Miami Herald Publishing Co. v. Tornillo, 418 U.S. 241, 256 (1974).

[28] See id. at 247-54.

[29] Id. at 255 (citing Columbia Broadcasting System, Inc. v. Democratic National Committee, 412 U.S. 94, 117 (1973)).

[30] 47 U.S.C. §230(c).

[31] For a further discussion, see generally Geoffrey A. Manne, Ben Sperry, & Kristian Stout, Who Moderates the Moderators?: A Law & Economics Approach to Holding Online Platforms Accountable Without Destroying the Internet, 49 Rutgers Computer & Tech. L. J. 26 (2022).

[32] Much of this section is adapted from Ben Sperry, An L&E Defense of the First Amendment’s Protection of Private Ordering, Truth on the Market (Apr. 23, 2021), https://truthonthemarket.com/2021/04/23/an-le-defense-of-the-first-amendments-protection-of-private-ordering.

[33] See F.A. Hayek, The Use of Knowledge in Society, 35 Am. Econ. Rev. 519 (1945).

[34] Id. at 520.

[35] See supra notes 13-14 and associated text. See also David Schultz, Marketplace of Ideas, First Amendment Encyclopedia, https://www.mtsu.edu/first-amendment/article/999/marketplace-of-ideas (last updated Jun. 2017 by David L. Hudson) (noting the history of the “marketplace of ideas” justification by the Supreme Court for the First Amendment’s protection of free speech from government intervention); J.S. Mill, On Liberty, Ch. 2 (1859); John Milton, Areopagitica (1644).

[36] Without delving too far into epistemology, some argue that this is even the case in the scientific realm. See, e.g., Thomas Kuhn, The Structure of Scientific Revolutions (1962). Even according to the perspective that some things are universally true across time and space, they still amount to a tiny fraction of what we call human knowledge. “Information” may be a better term for what economists are actually talking about.

[37] The Supreme Court has recently affirmed that the government may not compel speech by businesses subject to public-accommodation laws. See 303 Creative LLC v. Elenis, No. 21-476, slip op. (Jun. 30, 2023), available at https://www.supremecourt.gov/opinions/22pdf/21-476_c185.pdf. The Court will soon also have to determine whether common-carriage laws can be applied to social-media companies consistent with the First Amendment in the NetChoice cases noted above. See supra note 26.

[38] Sowell, supra note 18, at 240.

[39] Even those whom we most trust to have considered opinions and an understanding of the facts may themselves experience “expert failure”—a type of market failure—that is made likelier still when government rules serve to insulate such experts from market competition. See generally Roger Koppl, Expert Failure (2018).

[40] See, e.g., West Virginia Bd. of Ed. v. Barnette, 319 U.S. 624, 642 (1943) (“If there is any fixed star in our constitutional constellation, it is that no official, high or petty, can prescribe what shall be orthodox in politics, nationalism, religion, or other matters of opinion or force citizens to confess by word or act their faith therein. If there are any circumstances which permit an exception, they do not now occur to us.”).

[41] See, e.g., Alvarez, 567 U.S. at 728 (“Permitting the government to decree this speech to be a criminal offense, whether shouted from the rooftops or made in a barely audible whisper, would endorse government authority to compile a list of subjects about which false statements are punishable. That governmental power has no clear limiting principle. Our constitutional tradition stands against the idea that we need Oceania’s Ministry of Truth.”).

[42] Cf. Halleck, 139 S. Ct. at 1930-31.

[43] For a good explanation, see Jamie Whyte, Polluting Words: Is There a Coasean Case to Regulate Offensive Speech?, ICLE White Paper (Sep. 2021), available at https://laweconcenter.org/wp-content/uploads/2021/09/Whyte-Polluting-Words-2021.pdf.

[44] R.H. Coase, The Problem of Social Cost, 3 J. L. & Econ. 1, 2 (1960) (“The traditional approach has tended to obscure the nature of the choice that has to be made. The question is commonly thought of as one in which A inflicts harm on B and what has to be decided is: how should we restrain A? But this is wrong. We are dealing with a problem of a reciprocal nature. To avoid the harm to B would inflict harm on A. The real question that has to be decided is: should A be allowed to harm B or should B be allowed to harm A? The problem is to avoid the more serious harm.”).

[45] See id. at 8-10.

[46] See generally David S. Evans & Richard Schmalensee, Matchmakers: The New Economics of Multisided Platforms (2016).

[47] For more on how and why social-media companies govern online speech, see Kate Klonick, The New Governors: The People, Rules, and Processes Governing Online Speech, 131 Harv. L. Rev. 1598 (2018).

[48] See Kate Conger, Tiffany Hsu, & Ryan Mac, Elon Musk’s Twitter Faces Exodus of Advertisers and Executives, The New York Times (Nov. 1, 2022), https://www.nytimes.com/2022/11/01/technology/elon-musk-twitter-advertisers.html (“[A]dvertisers — which provide about 90 percent of Twitter’s revenue — are increasingly grappling with Mr. Musk’s ownership of the platform. The billionaire, who is meeting advertising executives in New York this week, has spooked some advertisers because he has said he would loosen Twitter’s content rules, which could lead to a surge in misinformation and other toxic content.”); Ryan Mac & Tiffany Hsu, Twitter’s US Ad Sales Plunge 59% as Woes Continue, The New York Times (Jun. 5, 2023), https://www.nytimes.com/2023/06/05/technology/twitter-ad-sales-musk.html (“Six ad agency executives who have worked with Twitter said their clients continued to limit spending on the platform. They cited confusion over Mr. Musk’s changes to the service, inconsistent support from Twitter and concerns about the persistent presence of misleading and toxic content on the platform.”).

[49] See, e.g., Brian Fung, Twitter Prepares to Roll Out New Paid Subscription Service That Includes Blue Checkmark, CNN (Nov. 5, 2022), https://www.cnn.com/2022/11/05/business/twitter-blue-checkmark-paid-subscription/index.html.

[50] Sowell, supra note 18, at 244.

[51] See Halleck, 139 S. Ct. at 1931 (“The Constitution does not disable private property owners and private lessees from exercising editorial discretion over speech and speakers on their property.”).

[52] Cf. Tornillo, 418 U.S. at 255 (“The power of a privately owned newspaper to advance its own political, social, and economic views is bounded by only two factors: first, the acceptance of a sufficient number of readers—and hence advertisers —to assure financial success; and, second, the journalistic integrity of its editors and publishers.”).

[53] See Ben Sperry & R.J. Lehmann, Gov. DeSantis’ Unconstitutional Attack on Social Media, Tampa Bay Times (Mar. 3, 2021), https://www.tampabay.com/opinion/2021/03/03/gov-desantis-unconstitutional-attack-on-social-media-column (“Social-media companies and other tech platforms find themselves in a very similar position [as the newspaper in Tornillo] today. Just as newspapers do, Facebook, Google and Twitter have the right to determine what kind of content they want on their platforms. This means they can choose whether and how to moderate users’ news feeds, search results and timelines consistent with their own views on, for example, what they consider to be hate speech or misinformation. There is no obligation for them to carry speech they don’t wish to carry, which is why DeSantis’ proposal is certain to be struck down.”).

[54] See 47 U.S.C. §230.

[55] See, e.g., Jennifer Huddleston, Competition and Content Moderation: How Section 230 Enables Increased Tech Marketplace Entry, at 4, Cato Policy Analysis No. 922 (Jan. 31, 2022), available at https://www.cato.org/sites/cato.org/files/2022-01/policy-analysis-922.pdf (“The freedom to adopt content moderation policies tailored to their specific business model, their advertisers, and their target customer base allows new platforms to please internet users who are not being served by traditional media. In some cases, the audience that a new platform seeks to serve is fairly narrowly tailored. This flexibility to tailor content moderation policies to the specific platform’s community of users, which Section 230 provides, has made it possible for websites to establish online communities for a highly diverse range of people and interests, ranging from victims of sexual assault, political conservatives, the LGBTQ+ community, and women of color to religious communities, passionate stamp collectors, researchers of orphan diseases, and a thousand other affinity groups. Changing Section 230 to require websites to accept all comers, or to limit the ability to moderate content in a way that serves specific needs, would seriously curtail platforms’ ability to serve users who might otherwise be ignored by incumbent services or traditional editors.”). 

[56] See, e.g., Rui Gu, Lih-Bin Oh, & Kanliang Wang, Multi-Homing On SNSS: The Role of Optimum Stimulation Level and Perceived Complementarity in Need Gratification, 53 Information & Management 752 (2016), available at https://kd.nsfc.gov.cn/paperDownload/ZD19894097.pdf (“Given the increasingly intense competition for social networking sites (SNSs), ensuring sustainable growth in user base has emerged as a critical issue for SNS operators. Contrary to the common belief that SNS users are committed to using one SNS, anecdotal evidence suggests that most users use multiple SNSs simultaneously. This study attempts to understand this phenomenon of users’ multi-homing on SNSs. Building upon optimum stimulation level (OSL) theory, uses and gratifications theory, and literature on choice complementarity, a theoretical model for investigating SNS users’ multi-homing intention is proposed. An analysis of survey data collected from 383 SNS users shows that OSL positively affects users’ perceived complementarity between different SNSs in gratifying their four facets of needs, namely, interpersonal communication, self-presentation, information, and entertainment. Among the four dimensions of perceived complementarity, only interpersonal communication and information aspects significantly affect users’ intention to multi-home on SNSs. The results from this study offer theoretical and practical implications for understanding and managing users’ multi-homing use of SNSs.”).

[57] See, e.g., How Has Social Media Emerged as a Powerful Communication Medium, University Canada West Blog (Sep. 25, 2022), https://www.ucanwest.ca/blog/media-communication/how-has-social-media-emerged-as-a-powerful-communication-medium:

Social media has taken over the business sphere, the advertising sphere and additionally, the education sector. It has had a long-lasting impact on the way people communicate and has now become an integral part of their lives. For instance, WhatsApp has redefined the culture of IMs (instant messaging) and taken it to a whole new level. Today, you can text anyone across the globe as long as you have an internet connection. This transformation has not only been brought about by WhatsApp but also Facebook, Twitter, LinkedIn and Instagram. The importance of social media in communication is a constant topic of discussion.

Online communication has brought information to people and audiences that previously could not be reached. It has increased awareness among people about what is happening in other parts of the world. A perfect example of the social media’s reach can be seen in the way the story about the Amazon Rainforest fire spread. It started with a single post and was soon present on everyone’s newsfeed across different social media platforms.

Movements, advertisements and products are all being broadcasted on social media platforms, thanks to the increase in the social media users. Today, businesses rely on social media to create brand awareness as well as to promote and sell their products. It allows organizations to reach customers, irrespective of geographical boundaries. The internet has facilitated a resource to humankind that has unfathomable reach and benefits.

[58] Governmental intervention here could be particularly destructive if it leads to the imposition of “expert” opinions from insulated government actors from the “intelligence community.” Koppl, in his study on expert failure, described the situation as “the entangled deep state,” stating in relevant part:

The entangled deep state is an only partially hidden informal network linking the intelligence community, military, political parties, large corporations including defense contractors, and others. While the interests of participants in the entangled deep state often conflict, members of the deep state share a common interest in maintaining the status quo of the political system independently of democratic processes. Therefore, denizens of the entangled deep state may sometimes have an incentive to act, potentially in secret, to tamp down resistant voices and to weaken forces challenging the political status quo… The entangled deep state produces the rule of experts. Experts must often choose for the people because the knowledge on the basis of which choices are made is secret, and the very choice being made may also be a secret involving, supposedly, “national security.”… The “intelligence community” has incentives that are not aligned with the general welfare or with democratic process. Koppl, supra note 39, at 228, 230-31.

[59] Halleck, 139 S. Ct. at 1928 (internal citations omitted).

[60] 326 U.S. 501 (1946).

[61] Id. at 506.

[62] Id. at 509 (“When we balance the Constitutional rights of owners of property against those of the people to enjoy freedom of press and religion, as we must here, we remain mindful of the fact that the latter occupy a preferred position.”).

[63] 391 U.S. 308 (1968).

[64] See id. at 316-19. In particular, see id. at 318 (“The shopping center here is clearly the functional equivalent of the business district of Chickasaw involved in Marsh.”).

[65] See id. at 325.

[66] 407 U.S. 551 (1972).

[67] Id. at 562.

[68] Id.

[69] See id. at 568 (“[T]he courts properly have shown a special solicitude for the guarantees of the First Amendment, this Court has never held that a trespasser or an uninvited guest may exercise general rights of free speech on property privately owned and used nondiscriminatorily for private purposes only.”).

[70] Id. at 569.

[71] See id. at 570.

[72] 424 U.S. 507 (1976).

[73] Id. at 513.

[74] See id. at 516 (“Under what circumstances can private property be treated as though it were public? The answer that Marsh gives is when that property has taken on all the attributes of a town, i.e., ‘residential buildings, streets, a system of sewers, a sewage disposal plant and a “business block” on which business places are situated.’”) (quoting Logan Valley, 391 U.S. at 332 (Black, J., dissenting) (quoting Marsh, 326 U.S. at 502)).

[75] See id. at 518 (“It matters not that some Members of the Court may continue to believe that the Logan Valley case was rightly decided. Our institutional duty is to follow until changed the law as it now is, not as some Members of the Court might wish it to be. And in the performance of that duty we make clear now, if it was not clear before, that the rationale of Logan Valley did not survive the Court’s decision in the Lloyd case.”).

[76] Id. at 521.

[77] Jackson v. Metropolitan Edison Co., 419 U.S. 345, 352 (1974).

[78] See, e.g., the discussion about Prager University v. Google below.

[79] Packingham v. North Carolina, 137 S. Ct. 1730, 1737 (2017).

[80] Id. (internal citation omitted).

[81] See, e.g., Brock v. Zuckerberg, 2021 WL 2650070, at *3 (S.D.N.Y. Jun. 25, 2021); Freedom Watch, Inc. v. Google Inc., 816 F. App’x 497, 499 (D.C. Cir. 2020); Zimmerman v. Facebook, Inc., 2020 WL 5877863 at *2 (N.D. Cal. Oct. 2, 2020); Ebeid v. Facebook, Inc., 2019 WL 2059662 at *6 (N.D. Cal. May 9, 2019); Green v. YouTube, LLC, 2019 WL 1428890, at *4 (D.N.H. Mar. 13, 2019); Nyabwa v. FaceBook, 2018 WL 585467, at *1 (S.D. Tex. Jan. 26, 2018); Shulman v. Facebook.com, 2017 WL 5129885, at *4 (D.N.J. Nov. 6, 2017).

[82] Halleck, 139 S. Ct. at 1929 (emphasis in original).

[83] Id. at 1930.

[84] Id.

[85] Id. at 1930-31.

[86] 951 F.3d 991 (9th Cir. 2020).

[87] See id. at 997-98. See also, Prager University v. Google, LLC, 2018 WL 1471939, at *6 (N.D. Cal. Mar. 26, 2018) (“Plaintiff primarily relies on the United States Supreme Court’s decision in Marsh v. Alabama to support its argument, but Marsh plainly did not go so far as to hold that any private property owner “who operates its property as a public forum for speech” automatically becomes a state actor who must comply with the First Amendment.”).

[88] See PragerU, 951 F.3d at 996-99 (citing Halleck 12 times, Hudgens 3 times, and Tanner 3 times).

[89] See supra notes 7-9 and associated text.

[90] Cf. Norwood v. Harrison, 413 U.S. 455, 465 (1973) (“It is axiomatic that a state may not induce, encourage or promote private persons to accomplish what it is constitutionally forbidden to accomplish.”).

[91] Blum v. Yaretsky, 457 U.S. 991, 1004 (1982).

[92] Id. at 1004-05.

[93] Id. (noting that “the factual setting of each case will be significant”).

[94] 372 U.S. 58 (1963).

[95] See id. at 66-67.

[96] See id. at 68.

[97] Id. at 67.

[98] Id. at 68-69.

[99] 827 F.2d 1291 (9th Cir. 1987).

[100] See id. at 1295.

[101] Id.

[102] See id. (“Simply by ‘command[ing] a particular result,’ the state had so involved itself that it could not claim the conduct had actually occurred as a result of private choice.”) (quoting Peterson v. City of Greenville, 373 U.S. 244, 248 (1963)).

[103] See Backpage.com, LLC v. Dart, 807 F.3d 229 (7th Cir. 2015).

[104] See id. at 231, 232.

[105] Id. at 230.

[106] Id. at 235.

[107] Id. at 231.

[108] 2023 WL 2443073 (9th Cir. Mar. 10, 2023).

[109] See id. at *2-3.

[110] See id. at *5-6.

[111] Id. at *6.

[112] Id.

[113] Id.

[114] 2022 WL 1427507 (N.D. Cal. May 5, 2022).

[115] See id. at *8.

[116] Id.

[117] Id. (emphasis in original).

[118] See, e.g., Trump v. Twitter, Inc., 602 F. Supp. 3d 1213, 1218-26 (N.D. Cal. 2022); Children’s Health Def. v. Facebook, Inc., 546 F. Supp. 3d 909, 932-33 (N.D. Cal. 2021).

[119] 2023 WL 2578260 (W.D. La. Mar. 20, 2023). See also Missouri, et al. v. Biden, et al., 2023 WL 4335270 (W.D. La. Jul. 4, 2023) (memorandum opinion granting the plaintiffs’ motion for preliminary injunction).

[120] 2023 WL 2578260 at *30-31.

[121] See id.

[122] See id. at *17-19.

[123] It is worth noting that all of these cases were decided at the motion-to-dismiss stage, during which all of the plaintiffs’ allegations are assumed to be true. The plaintiffs in Missouri v. Biden will have to prove their factual case of state action. Now that the Western District of Louisiana has ruled on the motion for preliminary injunction, it is likely that there will be an appeal before the case gets to the merits.

[124] The district court in Missouri v. Biden discussed this distinction further in the memorandum ruling on request for preliminary injunction:

The Defendants argue that by making public statements, this is nothing but government speech. However, it was not the public statements that were the problem. It was the alleged use of government agencies and employees to coerce and/or significantly encourage social-media platforms to suppress free speech on those platforms. Plaintiffs point specifically to the various meetings, emails, follow-up contacts, and the threat of amending Section 230 of the Communication Decency Act. Plaintiffs have produced evidence that Defendants did not just use public statements to coerce and/or encourage social-media platforms to suppress free speech, but rather used meetings, emails, phone calls, follow-up meetings, and the power of the government to pressure social-media platforms to change their policies and to suppress free speech. Content was seemingly suppressed even if it did not violate social-media policies. It is the alleged coercion and/or significant encouragement that likely violates the Free Speech Clause, not government speech, and thus, the Court is not persuaded by Defendants’ arguments here.

Missouri v. Biden, 2023 WL 4335270, at *56 (W.D. La. July 4, 2023).

[125] While the district court did discuss in significantly greater detail the specific allegations as to each federal defendant’s actions in coercing or encouraging changes in moderation policies or enforcement actions, there is still a lack of specificity as to how those actions affected the plaintiffs. See id. at *45-53 (applying the coercion/encouragement standard to each federal defendant). As in its earlier decision at the motion-to-dismiss stage, the court’s opinion accompanying the preliminary injunction does address this issue to a much greater degree in its discussion of standing, and specifically of traceability. See id. at *61-62:

Here, Defendants heavily rely upon the premise that social-media companies would have censored Plaintiffs and/or modified their content moderation policies even without any alleged encouragement and coercion from Defendants or other Government officials. This argument is wholly unpersuasive. Unlike previous cases that left ample room to question whether public officials’ calls for censorship were fairly traceable to the Government; the instant case paints a full picture. A drastic increase in censorship, deboosting, shadow-banning, and account suspensions directly coincided with Defendants’ public calls for censorship and private demands for censorship. Specific instances of censorship substantially likely to be the direct result of Government involvement are too numerous to fully detail, but a birds-eye view shows a clear connection between Defendants’ actions and Plaintiffs’ injuries.

The Plaintiffs’ theory of but-for causation is easy to follow and demonstrates a high likelihood of success as to establishing Article III traceability. Government officials began publicly threatening social-media companies with adverse legislation as early as 2018. In the wake of COVID-19 and the 2020 election, the threats intensified and became more direct. Around this same time, Defendants began having extensive contact with social-media companies via emails, phone calls, and in-person meetings. This contact, paired with the public threats and tense relations between the Biden administration and social-media companies, seemingly resulted in an efficient report-and-censor relationship between Defendants and social-media companies. Against this backdrop, it is insincere to describe the likelihood of proving a causal connection between Defendants’ actions and Plaintiffs’ injuries as too attenuated or purely hypothetical.

The evidence presented thus goes far beyond mere generalizations or conjecture: Plaintiffs have demonstrated that they are likely to prevail and establish a causal and temporal link between Defendants’ actions and the social-media companies’ censorship decisions. Accordingly, this Court finds that there is a substantial likelihood that Plaintiffs would not have been the victims of viewpoint discrimination but for the coercion and significant encouragement of Defendants towards social-media companies to increase their online censorship efforts.

[126] See Lugar v. Edmonson Oil Co., 457 U.S. 922, 941-42 (1982).

[127] See Brentwood Acad. v. Tennessee Secondary Sch. Athletic Ass’n, 531 U.S. 288, 294 (2001).

[128] See id. at 296.

[129] For instance, in Mathis v. Pacific Gas & Elec. Co., 75 F.3d 498 (9th Cir. 1996), the 9th Circuit described the plaintiff’s “joint action” theory as one where a private person could only be liable if the particular actions challenged are “inextricably intertwined” with the actions of the government. See id. at 503.

[130] See Brentwood, 531 U.S. at 296 (noting that “examples may be the best teachers”).

[131] See Lugar, 457 U.S. at 925.

[132] See id.

[133] See id.

[134] Id. at 941 (internal citations omitted).

[135] Id.

[136] See id. at 942.

[137] 365 U.S. 715 (1961).

[138] See id. at 717-20.

[139] Id. at 724.

[140] See Rendell-Baker v. Kohn, 457 U.S. 830, 842-43 (1982).

[141] See Brentwood, 531 U.S. at 292-93.

[142] See id. at 296 (“[A] challenged activity may be state action… when it is ‘entwined with governmental policies,’ or when government is ‘entwined in [its] management or control.’”) (internal citations omitted).

[143] See id. at 298-301.

[144] Id. at 302.

[145] 489 U.S. 602 (1989).

[146] See id. at 606-12, 615.

[147] Id. at 615.

[148] Id.

[149] O’Handley, 2023 WL 2443073, at *7.

[150] Id.

[151] See id. at *7-8.

[152] 75 F.3d 498 (9th Cir. 1996).

[153] O’Handley, 2023 WL 2443073, at *8.

[154] Id.

[155] Hart, 2022 WL 1427507, at *6.

[156] Id. at *7.

[157] See, e.g., Fed. Agency of News LLC v. Facebook, Inc., 432 F. Supp. 3d 1107, 1124-27 (N.D. Cal. 2020); Children’s Health Def. v. Facebook Inc., 546 F. Supp. 3d 909, 927-31 (N.D. Cal. 2021); Berenson v. Twitter, 2022 WL 1289049, at *3 (N.D. Cal. Apr. 29, 2022).

[158] 546 F. Supp. 3d 909 (N.D. Cal. 2021).

[159] Id. at 932 (citing Divino Grp. LLC v. Google LLC, 2021 WL 51715, at *6 (N.D. Cal. Jan. 6, 2021)).

[160] Missouri v. Biden, 2023 WL 2578260, at *33.

[161] Id.

[162] Id. at *33-34.

[163] Id. at *34.

[164] A government action is content based if it cannot be applied without considering the content of the speech it regulates. See, e.g., Reed v. Town of Gilbert, Ariz., 576 U.S. 155, 163 (2015) (“Government regulation of speech is content based if a law applies to particular speech because of the topic discussed or the idea or message expressed.”).

[165] See, e.g., Citizens United v. Fed. Election Comm’n, 558 U.S. 310, 340 (2010) (“Laws that burden political speech are ‘subject to strict scrutiny,’ which requires the Government to prove that the restriction ‘furthers a compelling interest and is narrowly tailored to achieve that interest.’”) (internal citations omitted).

[166] See Fulton v. City of Philadelphia, Pennsylvania, 141 S. Ct. 1868, 1881 (2021) (“A government policy can survive strict scrutiny only if it advances ‘interests of the highest order’…”).

[167] Ashcroft v. ACLU, 542 U.S. 656, 666 (2004). In that case, the Court compared the Child Online Protection Act’s age-gating to protect children from online pornography to blocking and filtering software available in the marketplace, and found those alternatives to be less restrictive. The Court thus struck down the regulation. See id. at 666-70.

[168] Alameda Books v. City of Los Angeles, 535 U.S. 425, 455 (2002).

[169] See, e.g., New York Times Co. v. United States, 403 U.S. 713, 714 (1971).

[170] The classic example is a noise ordinance, which the government can enforce without considering the content or viewpoint of the speech being regulated. See Ward v. Rock Against Racism, 491 U.S. 781 (1989).

[171] See id. at 791 (“Our cases make clear, however, that even in a public forum the government may impose reasonable restrictions on the time, place, or manner of protected speech, provided the restrictions ‘are justified without reference to the content of the regulated speech, that they are narrowly tailored to serve a significant governmental interest, and that they leave open ample alternative channels for communication of the information.’”) (internal citations omitted).

[172] See Turner Broad. Sys., Inc. v. FCC, 512 U.S. 622, 662 (1994) (finding “the appropriate standard by which to evaluate the constitutionality of must-carry is the intermediate level of scrutiny applicable to content-neutral restrictions that impose an incidental burden on speech.”).

[173] See id. (“[A] content-neutral regulation will be sustained if ‘it furthers an important or substantial governmental interest; if the governmental interest is unrelated to the suppression of free expression; and if the incidental restriction on alleged First Amendment freedoms is no greater than is essential to the furtherance of that interest.’”) (quoting United States v. O’Brien, 391 U.S. 367, 377 (1968)).

[174] See Broadrick v. Oklahoma, 413 U.S. 601, 615 (1973) (holding that “the overbreadth of a statute must not only be real, but substantial as well, judged in relation to the statute’s plainly legitimate sweep”).

[175] See Kolender v. Lawson, 461 U.S. 352, 357 (1983) (holding that a law must have “sufficient definiteness that ordinary people can understand what conduct is prohibited and in a manner that does not encourage arbitrary and discriminatory enforcement”).

[176] 2023 WL 414258 (E.D. Cal. Jan. 25, 2023).

[177] Cal. Bus. & Prof. Code § 2270.

[178] Høeg, 2023 WL 414258, at *6 (internal citations omitted).

[179] Id. at *7.

[180] See id.

[181] Id. at *8.

[182] Id. at *9.

[183] Id. at *9.

[184] See id. at *12.

[185] New York Times Co. v. United States, 403 U.S. 713, 714 (1971) (quoting Bantam Books, 372 U.S. at 70).

[186] Missouri v. Biden, 2023 WL 2578260, at *35 (quoting Backpage.com, 807 F.3d at 230).

[187] See id. (comparing the situation to cable operators in the Turner Broadcasting cases).

[188] Id.

[189] Id.

[190] See discussion of United States v. Alvarez, 567 U.S. 709 (2012) below.

[191] See Minnesota Voters Alliance v. Mansky, 138 S. Ct. 1876, 1885 (2018) (“In a traditional public forum — parks, streets, sidewalks, and the like — the government may impose reasonable time, place, and manner restrictions on private speech, but restrictions based on content must satisfy strict scrutiny, and those based on viewpoint are prohibited.”).

[192] Missouri v. Biden, 2023 WL 2578260, at *35.

[193] Id.

[194] 567 U.S. 709 (2012).

[195] Id. at 717 (quoting United States v. Stevens, 559 U.S. 460, 468 (2010)).

[196] Id. at 718.

[197] See Chaplinsky v. New Hampshire, 315 U.S. 568, 571-72 (1942) (“There are certain well-defined and narrowly limited classes of speech, the prevention and punishment of which has never been thought to raise any Constitutional problem.”).

[198] See Alvarez, 567 U.S. at 718-22.

[199] See id. at 719 (“Even when considering some instances of defamation and fraud, moreover, the Court has been careful to instruct that falsity alone may not suffice to bring the speech outside the First Amendment. The statement must be a knowing or reckless falsehood.”). In other words, the Court found that the First Amendment limits even common-law actions against false speech, a category that traditionally received no constitutional protection.

[200] Under the common law, the elements of fraud include (1) a misrepresentation of a material fact or failure to disclose a material fact the defendant was obligated to disclose, (2) intended to induce the victim to rely on the misrepresentation or omission, (3) made with knowledge that the statement or omission was false or misleading, (4) the plaintiff’s reliance upon the representation or omission, and (5) damages or injury suffered as a result of the reliance. See, e.g., Mandarin Trading Ltd v. Wildenstein, 919 N.Y.S.2d 465, 469 (2011); Kostryckyj v. Pentron Lab. Techs., LLC, 52 A.3d 333, 338-39 (Pa. Super. 2012); Masingill v. EMC Corp., 870 N.E.2d 81, 88 (Mass. 2007). Similarly, commercial-speech regulations targeting deceptive or misleading advertising or health claims have also been found to be consistent with the First Amendment. See Virginia State Bd. of Pharmacy v. Virginia Citizens Consumer Council, 425 U.S. 748, 771-72 (1976) (“Obviously, much commercial speech is not provably false, or even wholly false, but only deceptive or misleading. We foresee no obstacle to a State’s dealing effectively with this problem. The First Amendment, as we construe it today, does not prohibit the State from insuring that the stream of commercial information flow cleanly as well as freely.”).

[201] See, e.g., Donaldson v. Read Magazine, Inc., 333 U.S. 178, 190 (1948) (the government’s power “to protect people against fraud” has “always been recognized in this country and is firmly established”).

[202] Illinois, ex rel. Madigan v. Telemarketing Associates, Inc., 538 U.S. 600, 617 (2003).

[203] See, e.g., Schaumburg v. Citizens for a Better Environment, 444 U.S. 620 (1980); Secretary of State of Md. v. Joseph H. Munson Co., 467 U.S. 947 (1984); Riley v. National Federation of Blind of N. C., Inc., 487 U.S. 781 (1988).

[204] Madigan, 538 U.S. at 620.

[205] Under the old common-law rule, proving defamation required a plaintiff to present a derogatory statement and demonstrate that it could hurt their reputation. The falsity of the statement was presumed, and the defendant had the burden to prove the statement was true in all of its particulars. Re-publishing something from someone else could also open the new publisher to liability. See generally Samantha Barbas, The Press and Libel Before New York Times v. Sullivan, 44 Colum. J.L. & Arts 511 (2021).

[206] 376 U.S. 254 (1964).

[207] Id. at 271. See also id. at 271-72 (“Erroneous statement is inevitable in free debate, and [] it must be protected if the freedoms of expression are to have the ‘breathing space that they need to survive.’”) (quoting N.A.A.C.P. v. Button, 371 U.S. 415, 433 (1963)).

[208] Id. at 279-80.

[209] Id. at 727-28.

[210] Carlin Commc’ns, 827 F.2d at 1297.

[211] See Missouri, et al. v. Biden, et al., Case No. 3:22-CV-01213 (W.D. La. Jul. 4, 2023), available at https://int.nyt.com/data/documenttools/injunction-in-missouri-et-al-v/7ba314723d052bc4/full.pdf.

[212] Id. See also Missouri, et al. v. Biden, et al., 2023 WL 4335270, at *45-56 (W.D. La. Jul. 4, 2023) (memorandum ruling on request for preliminary injunction). But see Missouri, et al. v. Biden, et al., No. 23-30445 (5th Cir. Sept. 8, 2023), slip op., available at https://www.ca5.uscourts.gov/opinions/pub/23/23-30445-CV0.pdf (upholding the injunction but limiting the parties to which it applies); Murthy, et al. v. Missouri, et al., No. 3:22-cv-01213 (Sept. 14, 2023) (order by Justice Alito issuing an administrative stay of the preliminary injunction until Sept. 22, 2023, at 11:59 p.m. EDT).

[213] 42 U.S.C. §1983.

[214] See, e.g., Adickes v. SH Kress & Co., 398 U.S. 144, 152 (1970) (“Although this is a lawsuit against a private party, not the State or one of its officials, our cases make clear that petitioner will have made out a violation of her Fourteenth Amendment rights and will be entitled to relief under § 1983 if she can prove that a Kress employee, in the course of employment, and a Hattiesburg policeman somehow reached an understanding to deny Miss Adickes service in the Kress store, or to cause her subsequent arrest because she was a white person in the company of Negroes. The involvement of a state official in such a conspiracy plainly provides the state action essential to show a direct violation of petitioner’s Fourteenth Amendment equal protection rights, whether or not the actions of the police were officially authorized, or lawful… Moreover, a private party involved in such a conspiracy, even though not an official of the State, can be liable under § 1983.”) (internal citations omitted).

[215] Smith v. Wade, 461 U.S. 30, 56 (1983).

[216] See Missouri, et al. v. Biden, et al., 2023 WL 4335270, at *55, 56 (W.D. La. Jul. 4, 2023).

[217] Codified at Fla. Stat. § 112.23, available at https://casetext.com/statute/florida-statutes/title-x-public-officers-employees-and-records/chapter-112-public-officers-and-employees-general-provisions/part-i-conditions-of-employment-retirement-travel-expenses/section-11223-government-directed-content-moderation-of-social-media-platforms-prohibited.

[218] Id.

[219] For more on this proposal, see Manne, Stout, & Sperry, supra note 31, at 106-112.

[220] See Dominion Voting Sys. v. Fox News Network, LLC, C.A. No. N21C-03-257 EMD (Del. Super. Ct. Mar. 31, 2023), available at https://www.documentcloud.org/documents/23736885-dominion-v-fox-summary-judgment.

[221] See, e.g., Jeremy W. Peters & Katie Robertson, Fox Will Pay $787.5 Million to Settle Defamation Suit, New York Times (Apr. 18, 2023), https://www.nytimes.com/live/2023/04/18/business/fox-news-dominion-trial-settlement#fox-dominion-defamation-settle.

[222] See, e.g., Neil Vigdor, ‘Prove Mike Wrong’ for $5 Million, Lindell Pitched. Now, He’s Told to Pay Up., New York Times (Apr. 20, 2023), https://www.nytimes.com/2023/04/20/us/politics/mike-lindell-arbitration-case-5-million.html.

[223] See Stephen Fowler, Judge Finds Rudy Giuliani Liable for Defamation of Two Georgia Election Workers, National Public Radio (Aug. 30, 2023), https://www.npr.org/2023/08/30/1196875212/judge-finds-rudy-giuliani-liable-for-defamation-of-two-georgia-election-workers.

[224] See supra notes 206-09 and associated text.

Continue reading
Innovation & the New Economy

It’s Risk, Jerry, The Game of Broadband Conquest

TOTM The big news in telecommunications policy last week wasn’t really news at all—the Federal Communications Commission (FCC) released its proposed rules to classify broadband internet under Title . . .

The big news in telecommunications policy last week wasn’t really news at all—the Federal Communications Commission (FCC) released its proposed rules to classify broadband internet under Title II of the Communications Act. Supporters frame the proposed rules as “net neutrality,” but those provisions—a ban on blocking, throttling, or engaging in paid or affiliated-prioritization arrangements—actually comprise just a small part of the 435-page document.

Read the full piece here.

Continue reading
Telecommunications & Regulated Utilities