Regulatory Comments

ICLE Comments to FTC Regarding Technology Platform Censorship

Introduction

We thank the Federal Trade Commission (FTC) for the opportunity to comment on this important topic. This request for information (RFI) allows participants to contribute to the understanding of the nuanced ways that online content moderation operates.[1]

The International Center for Law & Economics (“ICLE”) is a nonprofit, non-partisan global research and policy center dedicated to building the intellectual foundations for sensible, economically grounded policy. ICLE promotes the use of law & economics methodologies and economic learning to inform policy debates, and has longstanding expertise evaluating law and policy.

ICLE has an interest in ensuring that both First Amendment law and antitrust law promote the public interest by remaining grounded in sensible rules informed by sound economic analysis. ICLE scholars have written extensively on issues related to the regulation of technology platforms and free speech, as well as competition and consumer-protection issues more broadly. As such, we are well-positioned to speak to the legal and economic foundations underlying the factual record sought in this RFI.

Below, we will offer reasons why both the law and underlying economics support limiting how the FTC acts in response to the information gained in this RFI. We do so while acknowledging that a call for diverse public comments is likely the best means at the agency’s disposal to gather preliminary information, and that no subsequent formal economic study or law-enforcement action is implied by such an inquiry.

In Part I, we introduce the First Amendment background to the questions presented in this proceeding. The First Amendment protects the rights of technology platforms and advertisers, as well as individual citizens, to participate in the marketplace of ideas. This necessarily limits FTC action to protect consumers from alleged private “censorship.”

In Part II, we offer an alternative explanation for why technology platforms engage in content moderation, by applying the economics of multisided platforms and considering the market for speech governance. The major platforms’ moderation choices are best explained by the economic incentives to balance users’ speech interests. Part II considers whether platforms’ restrictions on certain users or content are justified or offset by “countervailing benefits to consumers.”[2]

In Part III, we analyze government pressure campaigns as a departure from the free marketplace of ideas. In particular, we consider the dangers of backdoor censorship, like that alleged against the Biden administration in Murthy v. Missouri, as well as the possibility that government may facilitate collusive behavior in specific cases.

In Part IV, we consider the difficulty of applying non-price-effects analysis to content-moderation decisions under antitrust law. Whether under Section 1 or Section 2 of the Sherman Antitrust Act, balancing various dimensions of quality, as well as the varied interests of users and advertisers, cannot be easily accomplished under antitrust law.

In Parts V and VI, we outline the limitations of categorizing alleged private “censorship” as either unfair methods of competition (UMC) or unfair or deceptive acts or practices (UDAP). Using the FTC’s policy statements for each, we consider how these competition and consumer-protection statutes could apply, while making the case that neither tech platforms’ enforcement of moderation policies nor firms’ refusals to advertise on specific tech platforms violates these laws.

I. Platforms’ and Advertisers’ Rights to Participate in the Marketplace of Ideas[3]

The defense of free speech against government censorship has a long pedigree, with advocates that have included such landmark historical figures as John Stuart Mill and, long before the framing of the Bill of Rights, John Milton.[4] While neither author used the precise phrase, the “marketplace of ideas” metaphor grew out of their defenses of free speech.

Mill, for instance, rested his defense of free speech on four grounds: 1) that censored opinions may be true, 2) that, even if an opinion is in error, it may contain a portion of the truth that requires collision with “adverse opinions” so that further truth may be discovered, 3) that, even if an opinion is true, it needs to be contested in order for those who hold it to understand why it is true, and 4) that, even if it is true, it must be contested for those who hold it to have a true conviction, rather than merely inheriting it.[5] In other words, it is only the interaction of ideas that allows for the truth to win out.

Similarly, in the American context, President Thomas Jefferson, in his First Inaugural Address delivered in the wake of a divisive election, argued for toleration of divergent political opinions, adding that even those who favor dissolving the union or changing its republican form of government should be left “undisturbed as monuments of the safety with which error of opinion may be tolerated where reason is left free to combat it.”[6]

This metaphor was later picked up by the U.S. Supreme Court—at first, in a dissent by Justice Oliver Wendell Holmes.[7] It eventually became a staple of First Amendment law, and some variation of the phrase has been cited in thousands of federal First Amendment opinions.[8]

To facilitate competition in the marketplace of ideas, the U.S. Constitution staunchly protects the liberty of private actors to determine what speech is acceptable, largely free from government intervention. As the Court put it in Manhattan Cmty. Access Corp. v. Halleck,[9] “[t]he Free Speech Clause of the First Amendment constrains governmental actors and protects private actors . . . .”[10]

Importantly, one way that private actors participate in the marketplace of ideas is through private ordering—by setting speech policies for their own private property. As the Halleck Court stated, “[a] private entity may thus exercise editorial discretion over the speech and speakers in the forum.”[11]

Notably, in Moody v. NetChoice,[12] the Supreme Court expressly considered “whether two state laws regulating social-media platforms and other websites facially violate the First Amendment.”[13] The Court made clear in that opinion that “the First Amendment offers protection when an entity engaging in expressive activity, including compiling and curating others’ speech, is directed to accommodate messages it would prefer to exclude.”[14] Surveying First Amendment case law on compelled speech, the Court concluded that, at least in the case of social-media platforms’ primary feeds, government efforts that “alter[] the platforms’ choices about the views they will, and will not, convey… interfere with protected speech.”[15] Moreover, “[h]owever imperfect the private marketplace of ideas… a worse proposal [is] the government itself deciding when speech [is] imbalanced, and then coercing speakers to provide more of some views or less of others.”[16]

In other words, technology platforms like social-media companies have a right to engage in editorial discretion as private entities. The moderation decisions these platforms make are protected by the First Amendment from government efforts to rebalance them.

The questions presented by this RFI may help the FTC to get a sense of the scale and diversity of such decisions, and perhaps to learn more about how they are made. But it is crucial that the Commission recognize that alleged political bias in private moderation decisions would not be a permissible government interest for action against the tech platforms.

It is, of course, possible for antitrust to apply to speech platforms. In Associated Press v. United States,[17] for instance, the Supreme Court found that antitrust law clearly applies to the Associated Press—a membership organization of newspaper publishers. But the Court also found that the First Amendment limited antitrust remedies. In Miami Herald Publishing Co. v. Tornillo,[18] the Court noted that:

The Court foresaw the problems relating to government-enforced access as early as its decision in Associated Press v. United States, supra. There it carefully contrasted the private “compulsion to print” called for by the Association’s bylaws with the provisions of the District Court decree against appellants which “does not compel AP or its members to permit publication of anything which their ‘reason’ tells them should not be published.”[19]

Accordingly, the courts have consistently rejected claims that would require tech platforms to carry speech. In Jian Zhang v. Baidu.com,[20] the U.S. District Court for the Southern District of New York found that the application of a New York public-accommodations law to a Chinese search engine that “censored” pro-democracy speech was inconsistent with the right to editorial discretion, stating that “there is a strong argument to be made that the First Amendment fully immunizes search-engine results from most, if not all, kinds of civil liability and government regulation.”[21] The court further noted that “the central purpose of a search engine is to retrieve relevant information from the vast universe of data on the Internet and to organize it in a way that would be most helpful to the searcher. In doing so, search engines inevitably make editorial judgments about what information (or kinds of information) to include in the results and how and where to display that information (for example, on the first page of the search results or later).”[22] Other courts have similarly found tech platforms have a right to editorial discretion that limits antitrust claims.[23]

The Commission’s inquiry also appears to contemplate characterizing advertisers’ conduct in the wake of the X.com platform’s decision to modify its content-moderation policies as a concerted group boycott or refusal to deal, possibly facilitated by third-party intermediaries. Such a characterization necessitates careful consideration of First Amendment protections, which may circumscribe regulatory responses to advertiser decisions that are motivated by associational preferences—in this case, preferences regarding the content the platform is disseminating.

In FTC v. Superior Ct. Trial Lawyers Ass’n,[24] the Supreme Court considered the interaction of antitrust law and the First Amendment in a group-boycott case, in which members of a lawyers’ association agreed to refuse to represent indigent defendants until the District of Columbia government increased their fees.[25] The Court distinguished the case from NAACP v. Claiborne Hardware Co.,[26] arguing that in this case “the undenied objective of [the lawyers’ association’s] boycott was an economic advantage for those who agreed to participate,”[27] while the black citizens who boycotted white merchants in Claiborne County, Mississippi “sought no special advantage for themselves.”[28] The government may regulate economic activity, but it may not infringe the First Amendment in doing so.

Here, advertisers’ decisions to withdraw from various platforms may be driven by considerations beyond the merely economic, such that it would be challenging to attribute to them a motive of consolidating market power. In particular, a reviewing court would likely see an advertiser’s decision not to be associated with content that it finds damaging to its brand as a legitimate expression of its First Amendment rights. The decision to disassociate from particular content ecosystems could reflect an exercise of expressive choice, or a prudent measure to safeguard intangible reputational assets—activities that may themselves possess First Amendment dimensions.

While preserving brand equity is undeniably intertwined with a firm’s overall economic well-being, it is analytically distinct from a concerted effort to manipulate market conditions in ways traditionally scrutinized under antitrust doctrines. Indeed, decisions to curtail advertising, while potentially serving long-term reputational interests or expressive aims, might concurrently impose immediate opportunity costs or reductions in revenue, to the extent that advertising correlates with product or service sales. Ascertaining the primary impetus behind such conduct—whether predominantly expressive, reputational, or anticompetitively economic in the Trial Lawyers sense—presents a nuanced evidentiary challenge.

In either case, the FTC would confront a formidable challenge if it sought to design remedies in such a case that would comport with existing First Amendment jurisprudence. As surveyed above, governmental compulsion of speech by private entities is generally disfavored. Social-media companies’ creation of expressive products through curation and other moderation decisions, and businesses’ decisions on whether or not to advertise on a particular platform, are both First Amendment-protected activities. Consequently, any remedial measure—whether adjudicated in an antitrust proceeding or pursued via an enforcement action under Section 5 of the FTC Act—that would compel a social-media entity to host specific speech, or mandate that businesses engage in specific advertising, would likely raise substantial First Amendment questions.

II. Content Moderation and the Economics of Multisided Platforms[29]

The RFI’s premise that technology platforms engage in “censorship” invites a critical examination of the term’s conventional application. Censorship most precisely describes governmental restrictions on expression—actions circumscribed by First Amendment doctrines that do not comparably apply to the editorial prerogatives of private entities.[30] Or, to put it another way, the term censorship suggests government suppression of speech, rather than decisions by private actors to selectively favor and disfavor certain speech or speakers, even if those private decisions deny speakers access to their preferred platforms.

Private platforms’ decisions to curate, prioritize, or deprioritize content or speakers—even when such decisions restrict access for some—are more accurately characterized as exercises of editorial discretion, rather than “censorship” in the sense that is relevant to constitutional law. Such discretion is integral to how these platforms define their services, as well as how they cultivate specific user and advertiser environments. Below, we will consider how the economics of multisided platforms—the business model employed by social-media companies—explains the incentives they face.

A. The Economics of Multisided Platforms

As the Supreme Court has recognized, the economics of multisided platforms are now an important component of antitrust law.[31] A multisided platform serves two or more distinct sets of customers who are, in some way, mutually reliant.[32] Economically, this is defined as an interdependency of demand among the two (or more) customer groups. The platform brings these groups together by setting prices—and, as we will see, non-price attributes—that encourage each side to participate in a way that maximizes platform-wide output. Jean-Charles Rochet and Jean Tirole, who helped to develop the economics of two-sided platforms, offer this definition:

We define a two-sided market as one in which the volume of transactions between end-users depends on the structure and not only on the overall level of the fees charged by the platform. A platform’s usage or variable charges impact the two sides’ willingness to trade once on the platform and, thereby, their net surpluses from potential interactions; the platforms’ membership or fixed charges in turn condition the end-users’ presence on the platform.[33]

A multisided platform’s value is maximized through the interplay of the multiple sides.[34] Platforms have to set prices on each side that balance the interrelated demands.[35] This may mean that one side of the platform ends up cross-subsidizing the other side. In other words, one side may pay a higher price than the other; one side may even pay a monetary price of zero.
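
To make the interdependency Rochet and Tirole describe concrete, the platform’s problem can be sketched formally. The notation below is our own stylized illustration, not a formulation drawn from the RFI or from the cited literature:

```latex
% Stylized two-sided pricing problem (illustrative notation only)
\max_{p_A,\; p_B} \;\; \Pi \;=\; (p_A - c_A)\, D_A(p_A, N_B) \;+\; (p_B - c_B)\, D_B(p_B, N_A),
\qquad \frac{\partial D_A}{\partial N_B} > 0, \quad \frac{\partial D_B}{\partial N_A} > 0
```

Here, N_A and N_B denote participation levels on each side, and the positive cross-partial terms capture the cross-side network effects. Because raising the price on side A shrinks participation there, which in turn depresses demand on side B, the profit-maximizing price structure may set one side’s price below cost, or at zero, producing exactly the cross-subsidy described above.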

A classic example is a nightclub. If there is a queue outside, and the club’s clientele is primarily heterosexual, young attractive women are much more likely than men to be picked out of the line for early entry. Some clubs even host “ladies’ nights,” where female patrons receive discounted or free drinks. In either case, the nightclub chooses to show favoritism toward women, even though this would appear to run against its interests under a traditional economic model. But it makes sense in multisided-platform analysis, because the club needs to attract two groups of customers: men and women. With too few women, there will not be enough men willing to come and spend money. To maximize profits, it actually makes sense to discriminate in favor of women in order to attract enough men to the club; the net effect is higher profits from having enough patrons of both sexes.
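
A short numerical sketch, with made-up parameters, illustrates the logic of the nightclub example; nothing below is drawn from the record, and the figures are purely hypothetical:

```python
# Toy model of a two-sided nightclub: men's attendance depends on women's
# attendance, so subsidizing women can raise total profit. All numbers are
# hypothetical and chosen only for illustration.

def profit(cover_men: float, cover_women: float) -> float:
    # Women's attendance falls with the cover charge they face...
    women = max(0.0, 100 - 10 * cover_women)
    # ...and men's attendance falls with their own cover charge, but is also
    # capped by how many women attend (the cross-side network effect).
    men = max(0.0, min(100 - 5 * cover_men, 1.5 * women))
    drink_margin = 5.0  # per-patron margin on drinks
    return cover_men * men + cover_women * women + drink_margin * (men + women)

print(f"uniform pricing: {profit(cover_men=8, cover_women=8):.0f}")  # 650
print(f"ladies' night:   {profit(cover_men=8, cover_women=0):.0f}")  # 1280

# Waiving women's cover charge boosts women's attendance, which pulls in more
# men; total profit rises despite the forgone cover revenue.
```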

In summary, multisided platforms differ from traditional firms in that they need to balance the demands of each side, and this may not mean simply charging the highest price to each side. This has important implications for understanding consumer welfare. Charging one side below marginal cost, for instance, would not be an example of predatory pricing if it meant maximizing the value of the platform.

Hence, for example, social-media platforms might independently (unilaterally) offer valuable (and costly) content and services at a monetary price of zero because millions of consumers prefer that price, even if it is ad-supported, and because advertisers prefer to advertise to those millions of people and are glad to pay to reach them. More generally, charging a high price to one side of a platform and offering a lower price to the other side may not be due to the platform’s market power, but could instead be an example of maximizing output across the platform. As will be discussed below, this applies as well to the non-price attributes that platforms must balance.

B. Balancing Speech Interests to Benefit Both Users and Advertisers

For technology platforms that host users’ speech, moderation policies are an especially important non-price aspect of maximizing platform value. Social-media companies generate revenue by acting as intermediaries between users and advertisers. If they fail to serve their users well, those users could abandon the platform.

Users include both producers of speech (i.e., speakers) and consumers of it (i.e., listeners); often, these are the same people. Some tech platforms reward speakers who generate many views with a cut of the platform’s advertising revenue. Needless to say, however, without users, advertisers would have no interest in buying ads. And without advertisers, there is no revenue to be had. Social-media companies thus need to maximize the value of their multisided platform by setting rules that keep users engaged. Failure to do so could even trigger reverse network effects, in which losses on one side of the platform lead to large losses on the other.

As in any other community, “[i]nteractions on multi-sided platforms can involve behavior that some users find offensive.”[36] As a result, “[p]eople may incur costs [from] unwanted exposure to hate speech, pornography, violent images, and other offensive content.”[37] And “[e]ven if they are not exposed to this content, they may dislike being part of a community in which such behavior takes place.”[38]

When it comes to illegal speech and conduct, technology platforms already face a difficult job in moderation, whether that moderation is required by law or encouraged by consumer demand.[39] But even speech that is at the core of First Amendment protection—such as political and ideological speech, as well as religious, scientific, and artistic speech—might be offensive to some listeners and deemed inappropriate for some private fora.

In other words, speech has both benefits and costs for both speakers and listeners.[40] Ultimately, the subjective preferences of consumers—or certain tranches of consumers—determine how those who host speech manage those tradeoffs, which includes questions of how the preferences known to predominate among their installed base of users are to be weighed against those of others who might be attracted to the platform. The nature of what is deemed offensive is obviously context- and listener-dependent, but the parties who are best-suited to set and enforce appropriate speech rules are the property owners themselves, subject to the constraints of the marketplace.

When it comes to speech, an individual’s desire for an audience must be balanced with a prospective audience’s willingness to listen. Marketplace actors who operate speech platforms must strike the proper balance between these desires, lest they lose business. Asking government agents to make categorical decisions for all of society would substitute centralized assessments of the costs and benefits of access to communications for the individual decisions of many actors, including those who open their digital property to third-party speech. As the economist Thomas Sowell has put it, “that different costs and benefits must be balanced does not in itself imply who must balance them—or even that there must be a single balance for all, or a unitary viewpoint (one ‘we’) from which the issue is categorically resolved.”[41]

Rather than incremental decisions on how and under what terms individuals may relate to one another on a particular speech platform, which can evolve over time in response to changes in what individuals find acceptable, government actors can only hand down categorical guidelines through precedential decisions: “you must allow a, b, and c speech” or “you must not allow x, y, and z speech.”

The freedom to experiment and evolve is vital in the social-media sphere, where norms about speech are in constant flux, and vary across both individuals and groups of individuals. Because social-media users often impose negative externalities on other users through their speech, social-media companies must resolve social-cost problems among their users by balancing these speech interests.

In his famous work “The Problem of Social Cost,” the economist Ronald Coase argued that the traditional approach to regulating externalities was misguided, because it overlooked the reciprocal nature of harms.[42] For example, the noise from a factory is a potential cost to the doctor next door who cannot use his office to conduct certain testing, and simultaneously the doctor’s choice to move his office next door is a potential cost to the factory’s ability to use its equipment.

In a world of well-defined property rights and low transaction costs, the initial allocation of a right would not matter, because the parties could bargain to overcome the harm in a beneficial manner—i.e., the factory could pay the doctor for lost income or to set up sound-proof walls, or the doctor could pay the factory to reduce the sound of its machines. But in the real world, where there are often significant transaction costs, who has the initial right matters, because it is unlikely that the right will be deployed to its highest-valued use.
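
A simple numeric sketch may help fix ideas; the dollar figures are hypothetical, chosen only to illustrate the Coasean logic:

```latex
% Hypothetical Coasean bargaining figures (illustrative only)
v_F = \$100/\text{week} \;\; \text{(factory's value of operating noisily)}, \qquad
v_D = \$60/\text{week} \;\; \text{(doctor's cost from the noise)}
```

Because the factory values noisy operation more than quiet is worth to the doctor, the efficient outcome is for the factory to operate. If the doctor holds the right to quiet, the factory can pay him some amount between $60 and $100 for permission to operate; if the factory holds the right to make noise, the doctor will not pay the more-than-$100 the factory would demand to go silent. With zero transaction costs, the factory operates either way. But if the costs of bargaining exceed the $40 in gains from trade, no deal is struck, and the initial allocation of the right determines the outcome.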

Similarly, on social media, speech that some users find offensive or false may be inoffensive or even patently true to other users. Protecting one group from offensive speech necessarily imposes costs on the group that favors the same speech. There is a reciprocal nature to the harms of speech, much as with other forms of nuisance. Due to transaction costs, it is unlikely that users will be able to effectively bargain to a solution on speech harms.

There is, however, a significant difference between the examples. Unlike the situation of the factory owner and the doctor, social-media users are all using the property of social-media companies. And those companies are well-positioned to balance these varied interests in real time in order to optimize their platform’s value in response to consumer demand.

Social-media companies set rules that keep users sufficiently engaged such that advertisers will pay to reach them. Moreover, social-media platforms must encourage engagement by the right users. To attract advertisers, platforms must ensure that individuals likely to engage with advertisements remain active on the platform. Platforms ensure this optimization by setting and enforcing community rules.

In addition, as with users, advertisers themselves have preferences and means of expression for which platforms must account. Advertisers may threaten to pull ads if they do not like the platform’s speech-governance decisions. For instance, after Elon Musk restored the accounts of X users who had been banned by the company’s prior leadership, major advertisers left the platform.[43]

Assuming tech platforms that host speech desire to reach the broadest possible audience,[44] there are limits on how far they can go in moderation before they ultimately lose users. If moderation policies are truly one-sided and harm one side of the political aisle, those platforms would likely lose users, and therefore ultimately lose advertisers. The platforms’ multisided nature could lead to their implosion if they get content moderation wrong, just as much as it contributes to their growth if they get content moderation right.

For instance, the very sale of what was then known as Twitter to Elon Musk could be seen as a correction in the market for speech governance, as he believed there was a market for a social-media company with less politically biased moderation policies.[45] The fact that Meta has seemingly followed X’s model by moving to community notes to deal with the problem of misinformation—rather than relying primarily on third-party fact checkers—suggests that this may even be the new equilibrium.[46] Striking a proper balance among various speech interests is difficult, which is why it is best to let private actors subject to competition in the marketplace decide.

In sum, the economics of multisided platforms and the problem of speech externalities largely explain why technology platforms that host speech tend to set and enforce moderation policies. In the vast majority of cases, these policies and their enforcement are an attempt to maximize value by benefitting their users, so that they can make money from advertisers. Moreover, marketplace competition appears to be quite effective in combating perceived political bias in content moderation, as the cases of X and now Meta suggest.

III. Government-Led Pressure Campaigns to Censor

Recent examples where governmental entities appear to have compelled or exerted coercive influence over tech platforms to suppress specific speech or speakers align much more directly with established conceptions of censorship. The Commission’s exploration of how governmental persuasion or pressure may have shaped content-moderation frameworks and specific editorial outcomes on those platforms is, therefore, a pertinent line of inquiry. Such indirect state influence on content availability, sometimes characterized as “jawboning” or “backdoor censorship,” presents distinct challenges to First Amendment values and the integrity of public discourse, particularly to the extent that it circumvents established due-process and transparency safeguards. Enhanced visibility into these dynamics would offer a crucial predicate for robust legal analysis and informed policy development.

As we have noted previously, President Donald Trump’s “Executive Order on Restoring Freedom of Speech and Ending Federal Censorship”[47] is “an important step toward restoring the marketplace of ideas online, given that we now have greater knowledge of the extent of federal pressure that was placed on social-media companies to restrict First Amendment-protected speech in the name of combating online misinformation… If courts are unable to reach the problem practically, it is incumbent on the executive branch to limit itself.”[48]

Courts have not, to date, been able to reach the merits of covert backdoor-censorship efforts aimed at social-media companies. Government-facilitated collusion to censor is a related claim that could be investigated. Below, we consider both problems in turn.

A. Murthy, Vullo, and the Dangers of Backdoor Censorship[49]

After much anticipation, Murthy v. Missouri[50] turned out to be a bit of a disappointment for those who hoped the Court would tackle the issue of backdoor online censorship. In a 6-3 decision by Justice Amy Coney Barrett, the Court ruled that none of the plaintiffs had standing, due to a lack of traceability and redressability of the alleged injuries.[51] The result was that challenging backdoor censorship got a lot harder. Ultimately, government efforts to suppress online speech may not be practically challengeable in court unless social-media companies themselves push back with lawsuits against coercion. This is bad for the marketplace of ideas that the First Amendment is supposed to protect.[52]

When it comes to coercion, the Court addressed the issue twice in the same term. NRA v. Vullo[53] made clear that the standard for considering such issues comes from Bantam Books.[54] But the coercion efforts in Vullo were very much out in the open—through the front door, if you will. Moreover, the directly harmed entity, the National Rifle Association, was also the plaintiff challenging the action.

Because Murthy was dismissed on standing, we don’t know how the Bantam Books standard applies in the social-media context. The Murthy plaintiffs were individuals who complained that they were censored by the social-media companies as a result of government efforts, as well as states that sued on behalf of themselves and their citizens. The high standing hurdles the Court has put in place suggest that social-media companies would need to be the plaintiffs in cases alleging coercion. But if the allegations of broad-based coercion are true, then it is unlikely that social-media companies would want to bring such cases, when they necessarily interact with the government on so many fronts. As we noted after analyzing the oral arguments in this case:

If the Court decides to avoid the merits of this case under such a lack-of-standing reasoning, it would allow government agents to engage in egregious censorship activity so long as they did a good job of not creating a record of asking for particular individuals’ speech to be suppressed. The government could do this by calling for entire types of content or viewpoints to be censored without targeting specific people.[55]

Now, it is important to note what the Court is not saying. The Murthy case arose in the preliminary-injunction context, after discovery. This means that more was required to maintain standing than at the motion-to-dismiss stage. As the Court said, “[a]t the preliminary injunction stage, then, the plaintiff must make a ‘clear showing’ that she is ‘likely’ to establish each element of standing… Where, as here, the parties have taken discovery, the plaintiff cannot rest on ‘mere allegations,’ but must instead point to factual evidence.”[56] Thus, as long as a plaintiff can allege facts that would establish traceability under the standards the Court outlined, they should at least be able to reach discovery to prove those facts.

But despite this caveat, Murthy nonetheless does significantly restrict the pool of possible plaintiffs in backdoor-censorship cases. Those who have their speech suppressed online will likely not know if it is the independent work of social-media companies or due to government efforts unless some rare event like the “Twitter Files” happens, and the social-media companies let them know. Thus, backdoor censorship could continue without any court review.

This is important, because there are strong reasons to believe the complained-of conduct does violate the Bantam Books/Vullo standard, as found by the district court, the 5th U.S. Circuit Court of Appeals, and the Supreme Court dissent in this case. In fact, the dissent’s account of the case against Facebook with respect to Jill Hines merits attention.[57] There, the dissent argues persuasively from the record that the conduct of high-ranking White House officials and the U.S. Surgeon General’s Office violated the First Amendment.

First, because the White House is in charge of the executive branch, including those who could bring antitrust cases and data-privacy enforcement actions, negotiate international data-transfer rules, and propose changes to Section 230, it is “beyond any serious dispute [they] possessed the authority to exert enormous coercive pressure.”[58]

Second, the communications coupled demands with “thinly veiled threats”[59] that became more explicit over time. “The natural interpretation” of these communications was “that the White House might retaliate if the platforms allowed free speech, not if they suppressed it.”[60] The dissent persuasively argues that this goes far beyond the public bully pulpit:

If these communications represented the exercise of the bully pulpit, then everything that top federal officials say behind closed doors to any private citizen must also represent the exercise of the President’s bully pulpit. That stretches the concept beyond the breaking point.[61]

Third, Facebook clearly acceded to these demands when it changed its policies and enforcement in ways that harmed Jill Hines. Indeed, its internal communications made clear that it did not agree with the White House critiques, but went along with the requests nonetheless:

Facebook again took stock of its relationship with the White House after the President’s accusation that it was “killing people.” Internally, Facebook saw little merit in many of the White House’s critiques. One employee labeled the White House’s understanding of misinformation “completely unclear” and speculated that “it’s convenient for them to blame us” “when the vaccination campaign isn’t going as hoped.” Committee Report 473. Nonetheless, Facebook figured that its “current course” of “in effect explaining ourselves more fully, but not shifting on where we draw the lines,” is “a recipe for protracted and increasing acrimony with the [White House].” Id., at 573. “Given the bigger fish we have to fry with the Administration,” such as the EU-U.S. dispute over “data flows,” that did not “seem like a great place” for Facebook-White House relations “to be.” Ibid. So the platform was motivated to “explore some moves that we can make to show that we are trying to be responsive.” Ibid. That brainstorming resulted in the August 2021 rule changes. See supra, at 13, 19–20.[62]

In sum, the First Amendment case against the federal government officials in Murthy appears strong on the merits. But, as noted above, standing complications may doom these types of cases going forward. As the dissent eloquently put it:

The Court, however, shirks [its] duty and thus permits the successful campaign of coercion in this case to stand as an attractive model for future officials who want to control what the people say, hear, and think. That is regrettable. What the officials did in this case was more subtle than the ham-handed censorship found to be unconstitutional in Vullo, but it was no less coercive. And because of the perpetrators’ high positions, it was even more dangerous. It was blatantly unconstitutional, and the country may come to regret the Court’s failure to say so. Officials who read today’s decision together with Vullo will get the message. If a coercive campaign is carried out with enough sophistication, it may get by. That is not a message this Court should send.[63]

B. Government-Facilitated Collusion Is Particularly Dangerous

In Murthy, the original plaintiffs alleged both government coercion and collusion among the social-media companies and government actors to censor. The 5th Circuit and Supreme Court only considered the coercion element to establish state action. But many have recognized that collusion facilitated by government efforts is particularly dangerous, as it diminishes individual participants’ ability to “cheat” on the collusive agreement and thereby break down the collusive efforts of the whole.

For instance, in a classic Prisoner’s Dilemma, oligopolists might want to collectively raise their market price and all earn higher profits through collusion. But since explicit agreements to raise prices in this manner are illegal under antitrust law (and could even expose the firms to criminal penalties), businesses are unlikely to contract for such efforts. Instead, tacit collusion is the best way to try to accomplish this scheme. But while all parties would benefit from collusion, any individual party would benefit from cheating—lowering its own price and thereby capturing greater market share. This sets up a situation where individual incentives are incompatible with those of the group. One antitrust scholar went so far as to say that “[i]f explicit collusion is difficult to arrange and hard to enforce if arranged, tacit collusion must be next to impossible.”[64]
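
The incentive problem can be made concrete with a stylized payoff matrix; the profit figures below are hypothetical and purely illustrative:

```python
# Stylized duopoly pricing game illustrating why tacit collusion is fragile.
# Payoff tuples are hypothetical profits for (row player, column player).
payoffs = {
    ("collude", "collude"): (10, 10),  # both hold the high price
    ("collude", "cheat"):   (2, 14),   # rival undercuts and takes market share
    ("cheat",   "collude"): (14, 2),
    ("cheat",   "cheat"):   (6, 6),    # competitive outcome
}

def best_response(rival_strategy: str) -> str:
    """Return the row player's profit-maximizing move given the rival's move."""
    return max(["collude", "cheat"],
               key=lambda s: payoffs[(s, rival_strategy)][0])

# Cheating is a dominant strategy: it is best no matter what the rival does...
assert best_response("collude") == "cheat"
assert best_response("cheat") == "cheat"
# ...so (cheat, cheat) prevails, even though (collude, collude) pays both more.
# A government actor that rewards colluders and punishes defectors changes
# these payoffs, which is why state-facilitated collusion is more stable.
```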

On the other hand, when collusion is facilitated by the government, the incentives change. Government agents can assure all participants that they will not be punished for colluding. Moreover, they can even direct the collusion and punish possible defectors. Thus, collusion backed by government power is much more likely to be effective, as it can at least partially solve the Prisoner’s Dilemma.

In Murthy, it was alleged at the district court level that the social-media companies worked together—including through government-funded third-party civil-society organizations—as well as with government agencies to coordinate censorship efforts.[65] The district court noted that “[w]hen a plaintiff establishes ‘the existence of a conspiracy involving state action,’ the government becomes responsible for all constitutional violations committed in furtherance of the conspiracy by a party to the conspiracy.”[66]

Under such an alleged scenario, collusion by social-media companies to censor speech is much more likely to be effective. Under the First Amendment, the government should not be in the business of facilitating censorship.

IV. Antitrust and Political Bias in Moderation Decisions

One way of interpreting allegations of political bias in content-moderation decisions is that tech platforms possess substantial market power that enables them to implement such policies, potentially diminishing product quality, without significant risk of users migrating to alternative platforms. To analyze this concern, it is instructive to examine how antitrust law addresses issues related to product quality. Thus, Section IV.A will explore how antitrust law conducts non-price-effects analysis.

The application of antitrust law to complaints centered predominantly on free-speech principles may present notable limitations. The evaluation of tradeoffs inherent in product-quality claims would pose formidable challenges for judicial adjudication where the product quality in question is alleged political bias. Moreover, antitrust law does not prohibit the exploitation of even monopoly power, so long as it has been lawfully obtained and maintained.[67] Accordingly, Section IV.B will examine the potential reasons why antitrust actions concerning political bias might face significant hurdles to success.

A. How Antitrust Law Deals with Non-Price Effects Generally

Antitrust law has long recognized that reduced product quality can affect consumers adversely. While most of this analysis has been done in the context of mergers, there are also lessons that can be drawn for monopolization claims.

In the 2010 Horizontal Merger Guidelines, the FTC and U.S. Justice Department (DOJ) stated:

Enhanced market power can also be manifested in non-price terms and conditions that adversely affect customers, including reduced product quality, reduced product variety, reduced service, or diminished innovation. Such non-price effects may coexist with price effects, or can arise in their absence. When the Agencies investigate whether a merger may lead to a substantial lessening of non-price competition, they employ an approach analogous to that used to evaluate price competition.[68]

The FTC considers non-price effects in its merger reviews. For instance, the Bureau of Economics in 2017 released its analysis of the non-price effects of the proposed merger between DraftKings, Inc. and FanDuel Ltd.[69] There, the bureau considered the daily fantasy-sports games that the platforms offered, and whether the quality and innovation of those games would be negatively affected if one of the two competitors was removed from the market. The analysis noted that:

Important sources of competition-driven non-price factors include the quality and variety of existing products, the resources that are expended to improve products, and the provision of complementary services to the product.

Product variety provides clear benefits to consumers by allowing them to find a product-price combination that best satisfies their demand. Variety may be costly for firms to develop, so a reduction in variety that reduces consumer welfare may be profitable for a merged entity. Likewise, the costly provision of complementary services may be reduced due to a merger. In consumer-facing industries, these services frequently take the form of high-quality customer service and/or flexible contract terms. While the specific form and importance of product variety and complementary services vary widely across industries, these factors must be considered in evaluating possible changes to consumer welfare due to a proposed merger.[70]

Antitrust courts have similarly recognized that non-price competition is often just as important as price competition. In United States v. Continental Can Co.,[71] which dealt with a metal-container company’s acquisition of a glass-container company, the Court noted:

[P]rice is only one factor in a user’s choice between one container or the other. That there are price differentials between the two products or that the demand for one is not particularly or immediately responsive to changes in the price of the other are relevant matters but not determinative of the product market issue. Whether a packager will use glass or cans may depend not only on the price of the package but also upon other equally important considerations. The consumer, for example, may begin to prefer one type of container over the other and the manufacturer of baby food cans may therefore find that his problem is the housewife rather than the packer or the price of his cans. This may not be price competition but it is nevertheless meaningful competition between interchangeable containers.[72]

Product quality is one of the primary areas of non-price competition that antitrust law seeks to protect.[73] In the case of single-firm conduct, this would mean considering the anticompetitive nature of product degradation. For instance, a monopoly car manufacturer could, in order to reduce production costs, decide to no longer manufacture cars with Bluetooth capabilities. In the absence of competition, it might have the market power to offer this degraded product while continuing to charge consumers the same price.

Continuing this hypothetical illustrates the interrelated nature of price and non-price competition. If a non-monopolist car manufacturer stopped building vehicles with Bluetooth capabilities without a price decrease, it would lose customers to manufacturers that offered such a product at the same price. On the other hand, the same car manufacturer might decide that there is unmet demand for cheaper vehicles without Bluetooth capability, and be able to offer a product that does well in a competitive marketplace among relatively cheaper vehicles. Similarly, a monopolist could reduce product quality, but also decrease the price in a profit-maximizing way, if cost savings and consumer preferences allow for it. An antitrust court would have to weigh the tradeoff between cost savings for consumers and reduced product quality. In other words, the question is whether a monopoly raises the quality-adjusted price.[74]
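
The notion of quality-adjusted price can be given a simple formal gloss; the notation here is ours, offered only for illustration:

```latex
% Quality-adjusted (effective) price: illustrative notation
\tilde{p} \;=\; p - v(q), \qquad v'(q) > 0
```

Here, p is the money price, q is product quality, and v(q) is the dollar value consumers place on that quality. Holding p fixed, any degradation of q raises the effective price. This is also why, in the zero-price markets discussed below, a quality reduction can be economically equivalent to a price increase, even though no money changes hands.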

This tradeoff becomes even more complicated when non-price effects must be weighed against one another. In Roland Mach. Co. v. Dresser Indus. Inc.,[75] the 7th U.S. Circuit Court of Appeals reviewed an antitrust case involving a manufacturer’s termination of a dealership agreement with a dealer that no longer intended to sell the manufacturer’s products exclusively. The court noted, on the one hand, that to the extent “exclusive dealing leads dealers to promote each manufacturer’s brand more vigorously than would be the case under nonexclusive dealing, the quality-adjusted price to the consumers (where quality includes the information and other services that dealers render to their customers) may be lower with exclusive dealing than without.”[76] But on the other hand, “a collateral effect of exclusive dealing is to slow the pace at which new brands… are introduced.”[77] The benefit of more information for consumers must be balanced against the cost of fewer brands available at the store.

Even the process of considering product quality alone can be fraught with difficulty when there is more than one dimension. Product quality could include both function and aesthetics (e.g., a watch’s quality lies both in its ability to tell time and in how nice it looks on one’s wrist).[78] An analysis involving product quality across multiple dimensions involves tradeoffs in consumer welfare among those dimensions. Thus, for example, a smaller watch battery may improve the watch’s aesthetics, but also reduce its reliability. Any such analysis would necessarily involve a complex and imprecise comparison of the relative magnitudes of harm and benefit to consumers who prioritize one aspect of quality over another.

B. Applying Non-Price-Effects Analysis to Content Moderation

The apparent tension between established antitrust principles and complaints centered on tech platforms’ alleged political bias may contribute to the notable absence, to date, of successful antitrust cases founded on this theory. While moderation policies and their enforcement by online platforms can be conceptualized as a significant dimension of product quality, substantiating claims that political bias adversely affects the quality of platform moderation from a consumer perspective is likely to present considerable evidentiary and legal challenges.

Thinking through the issue in terms of quality-adjusted price isn’t straightforward. Social-media companies are multisided platforms where users are cross-subsidized by advertisers on the other side of the platform.[79] This results in users having free access to the platform, while advertisers pay to reach them.

Given that social-media platforms are offered to consumers at zero price, antitrust plaintiffs would have to prove that a tech platform offers a lower-quality product when it engages in political bias, and that it does so without offsetting benefits to consumers. But it isn’t clear how such conduct would benefit the monopolist financially. The allegation is essentially that tech platforms are so comfortable in their monopoly status that they can engage in politically motivated behavior that makes their product of lesser quality to their users. Even if the platform is a monopolist, it would have to believe that users who are discriminated against will not leave or use the platform less, either of which would make it less valuable to the advertisers on the other side. It is extremely unlikely that Meta, X, or Google would purposely try to make less money, or that they are indifferent to profit.

Moreover, the tradeoffs noted above are even more difficult to assess when different consumer groups have sharply contrasting views of what constitutes indicia of quality along each of these dimensions. With political media, most consumers would prefer to have more of what they want to read available, and less of what they don’t want, even if that comes at the expense of other consumers’ preferences. There is no easy way to quantify and weigh general consumer welfare where one group’s moderation preferences necessarily come at the expense of another’s.

In this sense, a non-price-effects analysis of political bias would be even more complex than a complaint based on user privacy. All but the most exhibitionistic would prefer more to less privacy, all other things being equal. But when it comes to the algorithm that determines what is seen and in what order, or what is fact checked, or even what is monetized, there is no clear preference that all consumers share.

Platform moderation is not easy. As discussed above, social-media platforms’ goal is to keep users engaged to maximize the platform’s value to advertisers.[80] In order to do that, platforms need to reduce the incidence of low-quality speech that leads to less engagement. Whether this is spam, “hate speech,” misinformation, or any other speech that turns off some contingent of users, platforms are best-positioned to understand how to balance these consumer preferences.

A Section 2 monopolization claim made on the basis of political bias would place an antitrust court in the position of needing to decide whether consumer welfare is harmed when certain content (or content creators) is removed, fact checked, demonetized, or otherwise downgraded by a social-media algorithm. But this is a nearly impossible task; while some consumers may want to see such content, many others may not. Moreover, advertisers may not want to be associated with certain types of content. Thus, there are many pro-competitive reasons a platform may choose to moderate certain types of content. Indeed, where consumers and advertisers have highly heterogeneous preferences, conducting a competition analysis is likely impossible. In other words, allowing all First Amendment-protected speech probably does not maximize consumer welfare, even if doing so were possible.

A Section 2 claim would face a further difficulty: raising prices, by itself, cannot be the basis for Section 2 liability. As the Court stated in Trinko:

The mere possession of monopoly power, and the concomitant charging of monopoly prices, is not only not unlawful; it is an important element of the free-market system. The opportunity to charge monopoly prices—at least for a short period— is what attracts “business acumen” in the first place; it induces risk taking that produces innovation and economic growth. To safeguard the incentive to innovate, the possession of monopoly power will not be found unlawful unless it is accompanied by an element of anticompetitive conduct.[81]

If quality-adjusted price is how antitrust enforcers and courts must consider prices, then reducing a product’s quality in a zero-price market effectively amounts to a price increase. But without a separate element of anticompetitive conduct, this would be insufficient for an antitrust claim under Section 2 of the Sherman Act.

This RFI also implicates collusion among tech platforms, which could be the basis of a Section 1 theory under the Sherman Act. Allegations have been made that several tech platforms engaged in coordinated actions to the detriment of Parler, a competing social-media company with content-moderation policies often described as more accommodating to conservative viewpoints. These alleged actions reportedly transpired after the platforms suspended the accounts of then-President Trump and a number of other conservative individuals subsequent to the events at the U.S. Capitol on Jan. 6, 2021.[82]

After the events of Jan. 6, Amazon Web Services (AWS) ceased hosting Parler, which had advertised itself as the conservative “free speech” alternative to Twitter (as well as Facebook). At the same time, Twitter and Facebook removed the accounts of President Trump and a number of others for allegedly inciting violence. Parler alleged that there was an agreement between Twitter and AWS to exclude Parler, and that, had AWS not terminated its hosting agreement, Parler would have been poised to continue the growth it had experienced during the run-up to the 2020 election. Parler contended that such growth would be expected to be magnified following the removal of President Trump from Twitter and Facebook, which it said would generate an exodus of conservative users and voices to its platform.[83] In particular, Parler alleged that AWS’s termination of its hosting was “apparently designed to reduce competition in the microblogging services market to the benefit of Twitter,” in violation of Section 1 of the Sherman Act.[84]

The district court denied Parler’s motion for a preliminary injunction on this count, stating that “Parler has failed to demonstrate that it is likely to succeed on the merits of a Sherman Act claim… the evidence it has submitted in support of the claim is both dwindlingly slight, and disputed by AWS. Importantly, Parler has submitted no evidence that AWS and Twitter acted together intentionally—or even at all—in restraint of trade.”[85]

Other courts have reached the same conclusion when presented with bare allegations of a Sherman Act conspiracy. One such case involved conservative activist Laura Loomer. In Freedom Watch Inc. v. Google Inc.,[86] the U.S. Court of Appeals for the D.C. Circuit affirmed the district court’s determination that Freedom Watch failed to state a viable claim under the Sherman Act, explaining:

To state a § 1 claim, a complainant must plead “enough factual matter (taken as true) to suggest that an agreement was made.” Bell Atlantic Corp. v. Twombly, 550 U.S. 544, 556, 127 S.Ct. 1955, 167 L.Ed.2d 929 (2007). Freedom Watch argues that we should infer an agreement primarily from the Platforms’ parallel behavior, as each company purportedly refused to provide certain services to Freedom Watch. But, as the district court explained, parallel conduct alone cannot support a claim under the Sherman Act. See Freedom Watch, 368 F.Supp.3d at 37 (citing Twombly, 550 U.S. at 556, 127 S.Ct. 1955 (“Without more, parallel conduct does not suggest conspiracy”)). Freedom Watch puts forth two additional factors that it claims suggest conspiracy: that the Platforms are pursuing a revenue-losing strategy and that they are motivated by political goals. But Freedom Watch does not explain why either factor tends to show an unlawful conspiracy, rather than lawful independent action by the different Platforms. See Freedom Watch, 368 F.Supp.3d at 37–38.

It is worth considering what might have happened in a similar case were there sufficient facts to survive a motion to dismiss. For instance, during a November 2020 hearing of the U.S. Senate Judiciary Committee, Sen. Josh Hawley (R-Mo.) alleged there was coordination among X, Meta, and Google over certain content-moderation decisions that amounted to anticompetitive collusion. He claimed that:

Facebook censorship teams communicate with their counterparts at Twitter and Google, and then enter those companies’ suggestions for censorship onto the Tasks platform so that Facebook can then follow-up with them and effectively coordinate their censorship efforts.[88]

If Hawley’s allegations were true, this could conceivably constitute a per se violation of Section 1 of the Sherman Act. The antitrust complaint would be that this was an agreement to reduce product quality by restricting conservative voices.

The first question is whether these coordinated moderation practices would be judged per se illegal or analyzed under the rule of reason. The question is largely one of novelty—i.e., is this conduct so familiar to the courts as harmful to consumer welfare that it can skip further analysis? As the Supreme Court has put it:

…the per se rule is appropriate only after courts have had considerable experience with the type of restraint at issue, and only if courts can predict with confidence that it would be invalidated in all or almost all instances under the rule of reason.[89]

If a court believes the circumstances and allegations are sufficiently unusual that it must consider the particulars of the case and the effects of the tech companies’ behavior on competition before it can rule on that behavior’s legality, then the rule of reason would apply. Only where a court concluded the agreement was a self-evident restriction of output would it consider the behavior to be per se illegal.

The determination would be fact-dependent. If X, Meta, and Google had agreed to share information, but not to coordinate takedowns, that would be far easier to defend; it would be similar to the platforms sharing information about spam or harmful content. Questions may remain as to why it was in the platforms’ interest to share this information with their competitors, assuming each would stand to gain a commercial advantage from having a better moderation system than the others.

A harder case to judge would be one where the platforms did coordinate takedowns, particularly if a service only removed content on the condition that the others did as well. This would be much closer to cartelistic output restriction. If a moderation decision made a given platform more appealing to users, why would that platform not remove the content unilaterally?

One possible defense is that the coordination is what makes the takedown useful. If users find it difficult to discern between content removed for being spam and content removed because it is “hate speech” or “misinformation,” the best way to signal that it is the latter may be to remove it in coordination with similar services. This could help create the “common knowledge” that a given piece of content has been removed specifically because it is low value, thus signaling to users and advertisers that the platforms are the right places for them. Alternatively, the decision to coordinate may have more to do with political risk, with the platforms coordinating on the theory that there is safety in numbers. In that case, a defense of the agreement would be more difficult to mount.

On the other hand, common standards are not unusual, even for speech products. For instance, publishers agree on advertising standards (beyond what is required by law) through the Better Business Bureau’s National Advertising Division.[90] Another prominent example of coordination in speech products was the long-standing Comics Code Authority, through which the Comic Magazine Association of America created a set of rules and a Seal of Approval certifying compliance with principles like “criminals shall not be presented so as to be glamorous or to occupy a position which creates a desire for emulation,” “nudity in any form is prohibited, as is indecent exposure,” and “in every instance good shall triumph over evil and the criminal punished for his misdeeds.”[91] Despite the clear agreement among major participants in these sectors of the economy, and a possible reduction in product quality in the eyes of many potential consumers, these coordinated activities do not appear to have received much antitrust attention.

In sum, while it does seem plausible that an agreement among Google, X, and Meta to remove all conservative speech would be seen as a naked quality-fixing agreement and per se illegal, these companies working together to create and enforce standards that reduce low-quality content would probably require a court to apply rule-of-reason analysis that weighs the agreement’s costs and benefits before condemning it. In such a case, the court would again have to consider the complex tradeoffs among different quality features. Plaintiffs would likely have a difficult time establishing a clear harm to consumer welfare.

Moreover, even in the case of the alleged government-facilitated collusion among tech platforms,[92] antitrust immunities stemming from the state-action or sovereign-immunity doctrines may apply. This would mean that private plaintiffs or public enforcers could be limited in what they can successfully allege against the platforms, if the platforms were indeed acting in accordance with the wishes of government actors.

V. UMC Section 5 Limitations in Policing Alleged Private ‘Censorship’

Section 5 of the FTC Act states that “[u]nfair methods of competition in or affecting commerce, and unfair or deceptive acts or practices in or affecting commerce, are hereby declared unlawful.”[93] The jurisprudential framework defining “unfair methods of competition” has evolved primarily through case-law interpretation and administrative guidance. In 2022, the Commission promulgated its “Policy Statement Regarding the Scope of Unfair Methods of Competition Under Section 5 of the Federal Trade Commission Act,”[94] wherein it articulated an expansive interpretation of its statutory mandate. The policy statement asserts Commission authority to proscribe anticompetitive business practices that extend beyond the traditional boundaries of Sherman and Clayton Act jurisprudence, establishing a more capacious regulatory framework to address market behaviors deemed contrary to principles of fair competition.[95]

But even under an expansive reading of its Section 5 UMC authority, the FTC may nonetheless encounter significant challenges in demonstrating that advertiser or tech-platform conduct constitutes a violation of the law. The test for identifying whether conduct is a UMC includes, first, a determination that the challenged conduct is a method of competition and, second, that it is unfair.

To constitute a method of competition under the Commission’s analytical framework, the challenged conduct must satisfy two threshold criteria. First, it must represent an affirmative action “undertaken by an actor in the marketplace,”[96] rather than merely reflecting structural market conditions, such as entry barriers or industry-concentration levels. Second, the conduct must necessarily implicate competitive dynamics. The Commission has indicated that certain behaviors outside traditional antitrust boundaries—including misuse of regulatory processes or violations of generally applicable laws—may satisfy this requirement when they affect competition.[97]

The concept of “unfairness” within the Commission’s Section 5 jurisprudence encompasses conduct that “goes beyond competition on the merits.”[98] The Commission employs two principal criteria to evaluate whether conduct constitutes impermissible non-meritorious competition: first, whether the conduct is “coercive, exploitative, collusive, abusive, predatory, or involve[s] the use of economic power of a similar nature,” and second, whether it “tend[s] to negatively affect competition conditions.”[99]

These evaluative criteria operate on a sliding scale within the Commission’s analytical framework, such that compelling evidence of one criterion may diminish the quantum of evidence required for the other.[100] Significantly, the Commission has clarified that actual anticompetitive harm need not be demonstrated; rather, a tendency to produce negative competitive effects suffices to establish a violation.[101]

When presented with a prima facie case of an unfair method of competition, the Commission will consider potential justifications, albeit within narrowly circumscribed parameters. The Commission categorically rejects mere pecuniary benefits accruing to the respondent as sufficient justification.[102] Any proffered justification must be legally cognizable, non-pretextual, and narrowly tailored to minimize competitive harm.[103] Furthermore, the Commission requires that the asserted benefits manifest in the same market where the competitive harm occurred. Even when these stringent requirements are satisfied, the claimed benefits must outweigh the competitive harm to constitute a valid defense.[104]

Applied to a claim contemplated by the RFI, an advertiser’s decision not to purchase ads on a particular tech platform may not be a “method of competition.” While the FTC could argue that the decision to advertise or to abstain from advertising is conduct “undertaken by an actor in the marketplace,” there is little to suggest that such conduct is undertaken to “implicate competition.” For instance, advertisers that no longer wish to do business with X.com after Elon Musk purchased it and changed its moderation policies may be motivated by protecting brand reputation, rather than by a desire to harm the market for social media—a market, of course, in which they are not actually engaged. The FTC might be able to argue that the conduct implicates competition indirectly, but UMC claims usually target acts aimed at harming competitors to the detriment of competition.

The other primary claim seemingly contemplated by the RFI is that tech platforms themselves may be committing a UMC when they enforce their moderation policies. In such a case, the FTC would likely argue that those policies are coercive or abusive to users, or possibly deceptive or the result of economic power. But, as argued above, it is difficult to see how such an action could be sustained if enforcing moderation policies generally benefits their users as a whole. Not only is participating in the marketplace of ideas by creating and enforcing moderation policies protected First Amendment activity, but the tech platforms are best-positioned to balance the interests of their users (and advertisers). The FTC would also need to show that enforcement of these policies harms competition as a whole. This would be difficult to establish, given the previously mentioned decisions at Meta and X.com to move toward more speech-protective stances.[105]

VI. UDAP Section 5 Limitations in Policing Alleged Private ‘Censorship’

The RFI suggests that tech platforms may be engaged in unfair or deceptive acts or practices with respect to their moderation policies. But this, again, may be difficult to prove under the relevant case law and policy statements.

A. Moderation Policies that Benefit Consumers Are Not Unfair Acts or Practices

The Commission may not declare an act or practice unlawful “on the grounds that such act or practice is unfair unless the act or practice causes or is likely to cause substantial injury to consumers which is not reasonably avoidable by consumers themselves and not outweighed by countervailing benefits to consumers or competition.”[106]

The Commission’s authority to designate consumer practices as “unfair” under Section 5 underwent significant legislative curtailment in the late 1970s, following congressional concerns regarding perceived administrative overreach.[107] In response to congressional scrutiny, the Commission promulgated its “Policy Statement on Unfairness,” which articulated interpretive principles aligned with the statutory parameters subsequently codified at 15 U.S.C. § 45(n).[108]

Under this framework, unjustified consumer injury constitutes the cardinal consideration in consumer-unfairness analysis. To meet the statutory threshold, such injury must satisfy three conjunctive elements: it must be (1) substantial, (2) not outweighed by countervailing consumer or competitive benefits, and (3) not reasonably avoidable through consumer action.[109]

The substantiality requirement mandates that actionable harm transcend mere triviality or speculative injury. While economic detriment typically satisfies this criterion, substantial health and safety risks may likewise establish unfairness, whereas subjective or emotional harms generally fall outside the Commission’s enforcement purview.[110]

The Commission’s balancing inquiry acknowledges that “[m]ost business practices entail a mixture of economic and other costs and benefits,”[111] necessitating careful evaluation of offsetting market advantages. Under this calculus, practices are deemed unfair only when “injurious in [their] net effects,”[112] thus requiring the Commission to weigh consumer injury against potential market efficiencies.

Finally, the reasonable-avoidability criterion reflects the Commission’s deference to market self-correction through informed consumer choice. As articulated in the policy statement: “Normally we expect the marketplace to be self-correcting, and we rely on consumer choice—the ability of consumers to make their own private purchasing decision without regulatory intervention—to govern the market.”[113] The Commission’s enforcement authority is thus directed not toward “second-guess[ing] the wisdom of particular consumer decisions, but rather to halt some form of seller behavior that unreasonably creates or takes advantage of an obstacle to the free exercise of consumer decisionmaking.”[114] Paradigmatic examples include information asymmetries that prevent meaningful comparison shopping, coercive service-contract tactics, and fraudulent health claims—all practices that undermine the consumer’s capacity for autonomous market participation.[115]

Here, the Commission may face considerable challenges in demonstrating that the enforcement of content-moderation policies constitutes an “unfair” practice as defined by the applicable statute and the Commission’s policy statement.

For an injury to be substantial, it must be more than “emotional impact or other more subjective types of harm.” Complaints regarding content moderation often emanate from content creators alleging practices such as demonetization or diminished content visibility—sometimes referred to by the colloquial term “shadow banning.” Similarly, users (or “consumers of speech”) may assert that their access to desired content has been unduly restricted.

But even assuming that there is substantial injury, it would likely be difficult to demonstrate that the injury is not outweighed by offsetting consumer or competitive benefits. As explained above, tech platforms engage in speech moderation to benefit their users. If they fail to do so, they risk losing users to other tech platforms, as well as advertisers who seek that audience. Moreover, competition appears to be working in this market, as several tech platforms have changed their moderation policies in response to perceived changes in consumer demand. FTC action here would “second-guess the wisdom of particular consumer decisions,” contrary to the unfairness policy statement.[116]

Finally, the availability of other, potentially even more speech-protective tech platforms like Parler, Gab, and Truth Social suggests that consumers can, in fact, reasonably avoid moderation policies they don’t like. FTC action in this space is not likely to succeed, even as it would consume scarce enforcement resources.

B. Enforcing or Changing Moderation Policies Is Not a Deceptive Act or Practice

The Commission has also issued a “Policy Statement on Deception,”[117] which outlines three elements to prove deception. First, “there must be a representation, omission or practice that is likely to mislead the consumer.”[118] Second, the practice is examined from “the perspective of a consumer [or specific group] acting reasonably in the circumstances.”[119] Third, “the representation, omission, or practice must be a ‘material’ one.”[120]

The FTC must first show that a representation, omission, or practice occurred. Express claims about moderation policies would probably suffice. It is less certain whether the absence of a stated moderation policy could constitute an “implied claim.” Nonetheless, the FTC should look closely at exactly what the tech platforms have stated in their moderation policies.

It seems unlikely that published moderation policies would be found to mislead a reasonable consumer. Such policies generally make clear that tech platforms retain the right to remove or otherwise sanction speech that they believe violates them. Even where such policies contain provisions that, read in isolation, appear to promise access to speech as a general matter, a court would need to determine whether such statements are material. While the FTC has often relied on presumptions of materiality from express statements in enforcement actions, it is unclear whether a court would agree that every statement on a service’s website is material.

In summary, tech platforms’ enforcement of their published moderation policies would not constitute deception. The FTC should, of course, verify that this is what the tech platforms are doing. But a platform’s retention of discretion to decide what violates its moderation policies does not appear capable of being deceptive. As for promises to provide a public forum (or similar language), the question would be whether a reasonable consumer is deceived, or whether such statements are actually material.

A trickier question would be whether changes to moderation policies could be deceptive. A change made without notice, or one applied retroactively, could be problematic, but the FTC would still need to prove reliance (i.e., materiality). A change made with notice that applies only prospectively, however, would not be deceptive, as users would know the rules going forward.

Conclusion

This RFI can be a constructive endeavor, undertaken in light of existing precedent governing First Amendment rights, antitrust law, UMC Section 5, and UDAP Section 5 enforcement against private actors engaging in the marketplace of ideas. The FTC would do well to look into the extent to which government actors have influenced moderation decisions, as such interventions are problematic. The underlying economics suggests that tech platforms do, in fact, have good reasons to engage in content moderation, and the FTC should recognize that enforcement actions to counter such moderation could harm consumer welfare.

[1] Request for Public Comment Regarding Technology Platform Censorship, Fed. Trade Comm’n (Feb. 20, 2025), available at https://www.ftc.gov/system/files/ftc_gov/pdf/P251203CensorshipRFI.pdf [hereinafter “RFI”].

[2] Id. at 1. In that regard, the Commission might have inquired further about instances in which members of the public have welcomed content-moderation policies that limit unpleasant, obscene, harassing, or otherwise unwanted messages or images.

[3] For more on this subject, see Amicus Brief of International Center for Law & Economics, Moody v. NetChoice, NetChoice v. Paxton, No. 22-277, 22-555, In the Supreme Court of the United States (Dec. 4, 2023), available at https://laweconcenter.org/wp-content/uploads/2023/12/Intl-Ctr-for-Law-and-Econ-Amicus-12.4.231148722.12.pdf; Amicus Brief of International Center for Law & Economics, Murthy v. Missouri, No. 23-411, In the Supreme Court of the United States (Feb. 9, 2024), available at https://laweconcenter.org/wp-content/uploads/2024/02/Murthy-v.-Missouri-Intl-Center-for-Law-Econ.-Am.-Br.-2-9-24-pm-FINAL.pdf.

[4] See, e.g., J.S. Mill, On Liberty, Ch. 2 (1859); John Milton, Areopagitica (1644).

[5] See Mill, supra note 4.

[6] See Thomas Jefferson, First Inaugural Address (Mar. 4, 1801), https://avalon.law.yale.edu/19th_century/jefinau1.asp.

[7] See Abrams v. United States, 250 U.S. 616, 630 (1919) (Justice Oliver Wendell Holmes’ dissent noted that “time has upset many fighting faiths,” and that the “ultimate good desired is better reached by free trade in ideas – that the best test of truth is the power of the thought to get itself accepted in the competition of the market, and that truth is the only ground upon which their wishes safely can be carried out. That at any rate is the theory of our Constitution.”) (emphasis added).

[8] See David Schultz, Marketplace of Ideas, Free Speech Center, https://firstamendment.mtsu.edu/article/marketplace-of-ideas (last updated Jul. 9, 2024).

[9] 139 S. Ct. 1921 (2019).

[10] Id. at 1926.

[11] See id. at 1930.

[12] 144 S. Ct. 2383 (2024).

[13] Id. at 2393.

[14] Id. at 2401.

[15] Id. at 2405.

[16] Id. at 2403.

[17] 326 U.S. 1 (1945).

[18] 418 U.S. 241 (1974).

[19] Id. at 245.

[20] 10 F. Supp. 3d 433 (S.D.N.Y. Mar. 28, 2014).

[21] Id. at 438.

[22] Id.

[23] See, e.g., E-Ventures Worldwide LLC v. Google Inc., 2017 WL 2210029, at *4 (M.D. Fla. Feb. 8, 2017); Langdon v. Google, Inc., 474 F. Supp. 2d 622, 629-30 (D. Del. 2007).

[24] 493 U.S. 411 (1990).

[25] See id. at 414.

[26] 458 U.S. 886 (1982).

[27] Id. at 426.

[28] Id.

[29] For more, see Ben Sperry, Knowledge and Decisions in the Information Age: The Law & Economics of Misinformation on Social Media, 59 Gonzaga L. Rev. 319, 330-41 (2024); Ben Sperry, An L&E Defense of the First Amendment’s Protection of Private Ordering, Truth Mark. (Apr. 23, 2021), https://laweconcenter.wpengine.com/2021/04/23/an-le-defense-of-the-first-amendments-protection-of-private-ordering.

[30] See Halleck, 139 S. Ct. at 1933 (“[A] private actor is not subject to First Amendment constraints on how it exercises editorial discretion”).

[31] See Ohio v. Am. Express Co., 138 S. Ct. 2274, 2280-81 (2018) (discussing the economics of “two-sided platforms” in relation to credit-card markets); see also Geoffrey A. Manne, In Defence of the Supreme Court’s ‘Single Market’ Definition in Ohio v. American Express, 7 J. Antitrust Enforcement 104 (2019).

[32] See David S. Evans & Richard Schmalensee, Markets with Two-Sided Platforms, 1 Issues in Comp. L. & Pol’y 667, 669 (2008).

[33] Jean-Charles Rochet & Jean Tirole, Two-Sided Markets: A Progress Report, 37 Rand J. Econ. 645, 646 (2006).

[34] See Benjamin Klein et al., Competition in Two-Sided Markets: The Antitrust Economics of Payment Card Interchange Fees, 73 Antitrust L.J. 571, 598 (2006) (“The economic theory of two-sided markets indicates that relative prices on the two sides of the market are independent of the degree of competition faced by a supplier in such a market. While total prices will be influenced by competition, relative prices are determined by optimal balancing of demand on the two sides of the market.”).

[35] See, e.g., David S. Evans, Multisided Platforms, Dynamic Competition, and the Assessment of Market Power for Internet-Based Firms, at 8-9 (Working Paper, Coase-Sandor Institute for Law and Economics at The University of Chicago Law School, Mar. 2016), https://chicagounbound.uchicago.edu/cgi/viewcontent.cgi?article=2468&context=law_and_economics.

[36] David S. Evans, Governing Bad Behavior by Users of Multi-Sided Platforms, 27 Berkeley Tech. L.J. 1201, 1215 (2012).

[37] Id.

[38] Id.

[39] For more on how the tradeoffs work under Section 230 and how the law could be reformed to better balance accountability and speech, see Geoffrey A. Manne, Ben Sperry, & Kristian Stout, Who Moderates the Moderators?: A Law & Economics Approach to Holding Online Platforms Accountable Without Destroying the Internet, 49 Rutgers Computer & Tech. L.J. 26 (2022).

[40] See id. at 47-53 (noting Section 230 immunity has protected technology platforms when they have allowed unprotected speech and illegal or tortious conduct, to the detriment of some technology platform users).

[41] Thomas Sowell, Knowledge and Decisions 240 (2d ed. 1996).

[42] See Ronald H. Coase, The Problem of Social Cost, 3 J. L. & Econ. 1 (1960).

[43] See Kate Conger, Tiffany Hsu, & Ryan Mac, Elon Musk’s Twitter Faces Exodus of Advertisers and Executives, N.Y. Times (Nov. 1, 2022); Ryan Mac & Tiffany Hsu, Twitter’s US Ad Sales Plunge 59% as Woes Continue, N.Y. Times (Jun. 5, 2023).

[44] On the other hand, there is a long history of newspapers, magazines, and even cable-news networks with strong editorial biases surviving in the marketplace. The First Amendment also protects this.

[45] See Ben Sperry, The Market for Speech Governance: Free Speech Strikes Back?, Truth Mark. (May 4, 2022), https://truthonthemarket.com/2022/05/04/the-market-for-speech-governance-free-speech-strikes-back.

[46] See Ben Sperry, Meta’s Announcement: The Return of Online Free Speech?, Truth Mark. (Jan. 9, 2025), https://truthonthemarket.com/2025/01/09/metas-announcement-the-return-of-online-free-speech.

[47] See Restoring Freedom of Speech and Ending Federal Censorship, The White House (Jan. 20, 2025), https://www.whitehouse.gov/presidential-actions/2025/01/restoring-freedom-of-speech-and-ending-federal-censorship.

[48] Ben Sperry, Restoring the Marketplace of Ideas: Examining the Executive Order on Ending Federal Censorship, Truth Mark. (Jan. 28, 2025), https://truthonthemarket.com/2025/01/28/restoring-the-marketplace-of-ideas-examining-the-executive-order-on-ending-federal-censorship.

[49] Much of this section is adapted from Ben Sperry, What Does Murthy v. Missouri Mean for Online Speech?, Truth Mark. (Jun. 26, 2024), https://truthonthemarket.com/2024/06/26/what-does-murthy-v-missouri-mean-for-online-speech.

[50] 144 S. Ct. 1972 (2024).

[51] Id. at 1985 (“We begin—and end—with standing. At this stage, neither the individual nor the state plaintiffs have established standing to seek an injunction against any defendant.”). See also id. at 1989-93.

[52] Amicus Brief of the International Center for Law & Economics, Murthy v. Missouri, Feb. 9, 2024, available at https://laweconcenter.org/resources/icle-amicus-to-us-supreme-court-in-murthy-v-missouri/.

[53] 144 S. Ct. 1316 (2024).

[54] Id. at 1322 (quoting Bantam Books, Inc. v. Sullivan, 372 U.S. 58, 67 (1963)) (“Six decades ago, this Court held that a government entity’s ‘threat of invoking legal sanctions and other means of coercion’ against a third party ‘to achieve the suppression’ of disfavored speech violates the First Amendment… Today, the Court reaffirms what it said then: Government officials cannot attempt to coerce private parties in order to punish or suppress views that the government disfavors.”).

[55] Ben Sperry, Murthy Oral Arguments: Standing, Coercion, and the Difficulty of Stopping Backdoor Government Censorship, Truth Mark. (Mar. 20, 2024), https://truthonthemarket.com/2024/03/20/murthy-oral-arguments-standing-coercion-and-the-difficulty-of-stopping-backdoor-government-censorship.

[56] Murthy, 144 S. Ct. at 1986 (internal citations omitted).

[57] See id. at 2006-15.

[58] Id. at 2011.

[59] Id. at 2012.

[60] Id.

[61] Id. at 2013.

[62] Id. at 2015.

[63] Id. at 1999.

[64] Yale Brozen, Concentration, Mergers, and Public Policy 135 (1982).

[65] See Missouri v. Biden, 680 F. Supp. 3d 630, 705-06 (W.D. La. Jul. 4, 2023).

[66] Id. at 706 (quoting Armstrong v. Ashley, 60 F.4th 262 (5th Cir. 2023)).

[67] See infra discussion of Verizon Commc’ns Inc. v. Law Offices of Curtis V. Trinko, LLP, 540 U.S. 398 (2004).

[68] 2010 Merger Guidelines, sec. 1.

[69] See Matthew Jones, Bruce Kobayashi, & Jason O’Connor, Economics at the FTC: Non-Price Merger Effects and Deceptive Automobile Ads (FTC Working Paper, Dec. 2018), available at https://www.ftc.gov/system/files/documents/reports/economics-ftc-non-price-merger-effects-deceptive-automobile-ads/1812-be-rio.pdf.

[70] Id. at 6.

[71] United States v. Continental Can Co., 378 U.S. 441 (1964).

[72] Id. at 455-56.

[73] See National Soc’y of Prof. Engineers v. United States, 435 U.S. 679, 695 (1978) (“The Sherman Act reflects a legislative judgment that, ultimately, competition will produce not only lower prices but also better goods and services. ‘The heart of our national economic policy long has been faith in the value of competition.’ Standard Oil Co. v. FTC, 340 U.S. 231, 248. The assumption that competition is the best method of allocating resources in a free market recognizes that all elements of a bargain – quality, service, safety, and durability – and not just the immediate cost, are favorably affected by the free opportunity to select among alternative offers.”) (emphasis added).

[74] Quality-adjusted price does have a long history in economics, and has been applied in antitrust analysis. See Joshua D. Wright & Douglas H. Ginsburg, The Goals of Antitrust: Welfare Trumps Choice, 81 Fordham L. Rev. 2405, 2410 (2013) (“Quality-adjusted prices have been part of the industrial organization toolkit since the early 1900s. The Bureau of Labor Statistics has used this tool for nearly a century. Furthermore, quality-adjusted prices are frequently used in industrial organization economics and in antitrust analysis.”). See also id. at nn.31-32.

[75] Roland Mach. Co. v. Dresser Indus. Inc., 749 F.2d 380 (7th Cir. 1984).

[76] Id. at 395.

[77] Id.

[78] This example is adapted from Geoffrey A. Manne & R. Ben Sperry, The Problems and Perils of Bootstrapping Privacy and Data into an Antitrust Framework, CPI Antitrust Chronicle (May 2015), at 3, available at https://laweconcenter.org/wp-content/uploads/2017/09/bootstrapping-privacy.pdf.

[79] See supra Part II.

[80] See supra Part II.

[81] Verizon Commc’ns Inc. v. Law Offices of Curtis V. Trinko, LLP, 540 U.S. 398, 407 (2004).

[82] See Parler LLC v. Amazon Web Services Inc., 514 F. Supp. 3d 1261, 1264 (W.D. Wash. Jan. 21, 2021) (citing Complaint, ¶ 17).

[83] See id. at 1265 (“Parler claims that in response to speculation that the President would move to Parler, there was a mass exodus of users from Twitter to Parler and a 355% increase in installations of Parler’s app.”).

[84] Id. at 1266.

[85] Id.

[86] Freedom Watch Inc. v. Google Inc., 816 Fed. Appx. 497, 500 (D.C. Cir. 2020).

[88] Cristiano Lima, Steven Overly, Nick Niedzwiadek, & Leah Nylen, ‘Censorship Teams’ vs ‘Working the Refs’: Key Moments from Today’s Hearing with Tech CEOs, Politico (Nov. 17, 2020), https://www.politico.com/news/2020/11/17/facebook-twitter-senate-tech-hearing-436975 (quoting Sen. Josh Hawley).

[89] Leegin Creative Leather Prods. Inc. v. PSKS Inc., 551 U.S. 877, 886-87 (2007) (citations and internal quotation marks omitted).

[90] See National Advertising Division (NAD), Better Bus. Bur., https://bbbprograms.org/programs/all-programs/national-advertising-division (last visited May 20, 2025).

[91] See Amy Kiste Nyberg, Comics Code History: The Seal of Approval, Comic Book Leg. Def. Fund, http://cbldf.org/comics-code-history-the-seal-of-approval (last visited May 20, 2025); Code of the Comics Magazine Association of America Inc. (Oct. 24, 1954), https://en.wikisource.org/wiki/Comic_book_code_of_1954.

[92] See supra Part III.B.

[93] 15 U.S.C. § 45(a)(1).

[94] FTC Policy Statement Regarding the Scope of Unfair Methods of Competition Under Section 5 of the Federal Trade Commission Act, File No. P221202 (Nov. 10, 2022), available at https://www.ftc.gov/system/files/ftc_gov/pdf/P221202Section5PolicyStatement.pdf.

[95] Id. at 1.

[96] Id. at 8.

[97] Id.

[98] Id.

[99] Id. at 9.

[100] Id.

[101] Id. at 9-10.

[102] Id. at 10-12.

[103] Id. at 11.

[104] Id. at 11-12.

[105] See supra notes 45-46 and accompanying text.

[106] 15 U.S.C. § 45(n).

[107] See J. Howard Beales, The FTC’s Use of Unfairness Authority: Its Rise, Fall, and Resurrection, Fed. Trade Comm’n (May 30, 2003), https://www.ftc.gov/news-events/news/speeches/ftcs-use-unfairness-authority-its-rise-fall-resurrection.

[108] FTC Policy Statement on Unfairness, appended to International Harvester Co., 104 F.T.C. 949, 1070 (1984) (Dec. 17, 1980), https://www.ftc.gov/legal-library/browse/ftc-policy-statement-unfairness.

[109] Id.

[110] Id.

[111] Id.

[112] Id.

[113] Id.

[114] Id.

[115] Id.

[116] Id.

[117] FTC Policy Statement on Deception, appended to Cliffdale Associates Inc., 103 F.T.C. 110, 174 (1984) (Oct. 14, 1983), available at https://www.ftc.gov/system/files/documents/public_statements/410531/831014deceptionstmt.pdf.

[118] Id. at 1.

[119] Id.

[120] Id.