
Murthy Oral Arguments: Standing, Coercion, and the Difficulty of Stopping Backdoor Government Censorship


With Monday’s oral arguments in Murthy v. Missouri, we now have more of a feel for how the U.S. Supreme Court appears to be considering the issues of social-media censorship—in this case, done allegedly at the behest of federal officials.

In the International Center for Law & Economics’ (ICLE) amicus brief in the case, we argued that the First Amendment protects a marketplace of ideas, and government agents can’t intervene in that marketplace by coercing social-media companies into removing disfavored speech. But if the oral arguments are any indication, there are reasons to be skeptical that the Court will uphold the preliminary injunction the district court issued against the government officials (later upheld in a more limited form by the 5th U.S. Circuit Court of Appeals).

Read the full piece here.


ICLE Comments to FTC on Children’s Online Privacy Protection Rule NPRM


Introduction

We thank the Federal Trade Commission (FTC) for this opportunity to comment on the notice of proposed rulemaking (NPRM) to update the Children’s Online Privacy Protection Rule (“COPPA Rule”).

The International Center for Law and Economics (ICLE) is a nonprofit, nonpartisan research center whose work promotes the use of law & economics methodologies to inform public-policy debates. We believe that intellectually rigorous, data-driven analysis will lead to efficient policy solutions that promote consumer welfare and global economic growth.[1]

ICLE’s scholars have written extensively on privacy and data-security issues, including those related to children’s online safety and privacy. We also previously filed comments as part of the COPPA Rule Review and will make some of the same points below.[2]

The Children’s Online Privacy Protection Act (COPPA) sought to strike a balance in protecting children without harming the utility of the internet for children. As Sen. Richard Bryan (D-Nev.) put it when he laid out the purpose of COPPA:

The goals of this legislation are: (1) to enhance parental involvement in a child’s online activities in order to protect the privacy of children in the online environment; (2) to enhance parental involvement to help protect the safety of children in online fora such as chatrooms, home pages, and pen-pal services in which children may make public postings of identifying information; (3) to maintain the security of personally identifiable information of children collected online; and (4) to protect children’s privacy by limiting the collection of personal information from children without parental consent. The legislation accomplishes these goals in a manner that preserves the interactivity of children’s experience on the Internet and preserves children’s access to information in this rich and valuable medium.[3]

In other words, COPPA was designed to protect children from online threats by promoting parental involvement in a way that also preserves a rich and vibrant marketplace for children’s content online. Consequently, the pre-2013 COPPA Rule did not define personal information to include persistent identifiers standing alone. It is these persistent identifiers that are critical for the targeted advertising that funds the interactive online platforms and the creation of children’s content the legislation was designed to preserve.

COPPA applies to the “operator of any website or online service” that is either “directed to children that collects personal information from children” or that has “actual knowledge that it is collecting personal information from a child.”[4] These operators must “obtain verifiable parental consent for the collection, use, or disclosure of personal information.” The NPRM, following the mistaken 2013 amendments to the COPPA Rule, continues to define “personal information” to include persistent identifiers that are necessary for the targeted advertising undergirding the internet ecosystem.

Below, we argue that, before the FTC moves further toward restricting platform operators’ and content creators’ ability to monetize their work through targeted advertising, it must consider the economics of multisided platforms. The current path will lead to less free content available for children and more restrictions on their access to online platforms that depend on targeted advertising. Moreover, the proposed rules are inconsistent with the statutory text of COPPA, as persistent identifiers do not by themselves enable contacting specific individuals. Including them in the definition of “personal information” is also contrary to the statute’s purpose, as it will lead to a less vibrant internet ecosystem for children.

Finally, there are better ways to protect children online, including by promoting the use of available technological and practical solutions to avoid privacy harms. To comply with existing First Amendment jurisprudence regarding online speech, it is necessary to rely on these less-restrictive means to serve the goal of protecting children without unduly impinging their speech interests online.

I. The Economics of Online Multisided Platforms

Most of the “operators of websites and online services” subject to the COPPA Rule are what economists call multisided markets, or platforms.[5] Such platforms derive their name from the fact that they serve at least two different types of customers and facilitate their interaction. Multisided platforms generate “indirect network effects,” described by one economist as a situation where “participants on one side value being able to interact with participants on the other side… lead[ing] to interdependent demand.”[6]

Online platforms provide content to one side and access to potential consumers on the other side. In order to keep demand high, online platforms often offer free access to users, whose participation is subsidized by those participants on the other side of the platform (such as advertisers) that wish to reach them.[7] This creates a positive feedback loop in which more participants on one side of the platform leads to more participants on the other.
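To make this cross-subsidy logic concrete, the sketch below works through a stylized numerical model. It is our own illustration rather than anything drawn from the cited literature, and every parameter (the demand intercepts, slopes, and the strength of the network effects) is an assumption chosen only to display the mechanism.

```python
# A stylized two-sided-platform model. All parameters are illustrative
# assumptions: each side's participation rises with participation on the
# other side (indirect network effects), and the platform searches for
# the profit-maximizing pair of prices.

def participation(p_user, p_adv, iters=200):
    """Solve the interdependent-demand fixed point by simple iteration."""
    users, advertisers = 0.0, 0.0
    for _ in range(iters):
        users = max(0.0, 100 - 10 * p_user + 0.5 * advertisers)
        advertisers = max(0.0, 20 - 1 * p_adv + 0.5 * users)
    return users, advertisers

best = None
for p_user in [0, 1, 2, 5, 10]:            # candidate prices charged to users
    for p_adv in range(0, 61):             # candidate prices charged to advertisers
        u, a = participation(p_user, p_adv)
        profit = p_user * u + p_adv * a    # zero marginal cost assumed
        if best is None or profit > best[0]:
            best = (profit, p_user, p_adv, u, a)

profit, pu, pa, u, a = best
print(f"profit-maximizing prices: users pay {pu}, advertisers pay {pa}")
print(f"participation: {u:.0f} users, {a:.0f} advertisers; platform profit {profit:.0f}")
```

Under these assumed parameters, the profit-maximizing platform charges users a price of zero and collects all of its revenue from advertisers, which is the same pattern that ad-supported children’s content follows.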

This dynamic is also true of platforms with content “directed to children.” Revenue is collected not from those users, but primarily from the other side of the platform—i.e., advertisers who pay for access to the platform’s users. To be successful, online platforms must keep enough—and the right type of—users engaged to maintain demand for advertising.

Moreover, many “operators” under COPPA are platforms that rely on user-generated content. Thus, they must also consider how to attract and maintain high-demand content creators, often accomplished by sharing advertising revenue. If platforms fail to serve the interests of high-demand content creators, those creators may leave the platform, thus reducing its value.

Online platforms acting within the market process are usually going to be the parties best-positioned to make decisions on behalf of platform users. Operators with content directed to children may even compete on privacy policies and protections for children by providing tools to help users avoid what they (or, in this context, their parents and guardians) perceive to be harms, while keeping users on the platform and maintaining value for advertisers.[8]

There may, however, be examples where negative externalities[9] stemming from internet use are harmful to society more broadly. A market failure could result, for instance, if platforms’ incentives lead them to collect too much (or the wrong types of) information for targeted advertising, or to offer up content that is harmful for children or keeps them hooked to using the platform.

In situations where there are negative externalities from internet use, there may be a case to regulate online platforms in various ways. Any case for regulation must, however, acknowledge potential transaction costs, as well as how platforms and users may respond to changes in those costs. To get regulation right, the burden of avoiding a negative externality should fall on the least-cost avoider.

The Coase Theorem, derived from the work of Nobel-winning economist Ronald Coase[10] and elaborated on in the subsequent literature,[11] helps to explain the issue at hand:

  1. The problem of externalities is bilateral;
  2. In the absence of transaction costs, resources will be allocated efficiently, as the parties bargain to solve the externality problem;
  3. In the presence of transaction costs, the initial allocation of rights does matter; and
  4. In such cases, the burden of avoiding the externality’s harm should be placed on the least-cost avoider, while taking into consideration the total social costs of the institutional framework.

In one of Coase’s examples, the noise from a confectioner using his candy-making machine is a potential cost to the doctor next door, who consequently cannot use his office to conduct certain testing. Simultaneously, the doctor moving his office next door to the confectioner is a potential cost to the confectioner’s ability to use his equipment.

In a world of well-defined property rights and low transaction costs, the initial allocation of rights would not matter, because the parties could bargain to overcome the harm in a mutually beneficial manner—i.e., the confectioner could pay the doctor for lost income or to set up sound-proof walls, or conversely, the doctor could pay the confectioner to reduce the sound of his machines.[12] But since there are transaction costs that prevent this sort of bargain, it is important whether the initial right is allocated to the doctor or the confectioner. To maximize societal welfare, the cost should be placed on the entity that can avoid the harm at the lowest cost.[13]
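A hypothetical set of numbers (ours, not Coase’s) makes the least-cost-avoider point concrete. The sketch below simply compares the social cost of the noise problem under different allocations of the burden, assuming transaction costs are high enough to prevent bargaining.

```python
# Hypothetical numbers (ours, not Coase's) for the confectioner/doctor example.
harm_to_doctor   = 100   # weekly income the doctor loses if the noise continues
soundproofing    = 40    # confectioner's cost to abate the noise
relocate_testing = 70    # doctor's cost to move the affected testing elsewhere

# With costless bargaining, the parties pick the cheapest fix regardless of
# who initially holds the right, so the social cost is the same either way.
costless_bargaining = min(harm_to_doctor, soundproofing, relocate_testing)

# With transaction costs high enough to block bargaining, whoever bears the
# legal burden simply chooses their own cheapest way out.
burden_on_confectioner = min(soundproofing, harm_to_doctor)    # abate or compensate
burden_on_doctor       = min(relocate_testing, harm_to_doctor) # relocate or endure

print("social cost, costless bargaining:   ", costless_bargaining)     # 40
print("social cost, burden on confectioner:", burden_on_confectioner)  # 40
print("social cost, burden on doctor:      ", burden_on_doctor)        # 70
```

The same comparison drives the COPPA analysis below: whichever party can avoid a given harm more cheaply, operators or parents, is the one on whom the burden should fall; placing it on the other party wastes the difference.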

In the context of the COPPA Rule, website operators and online services create incredible value for their users, but they also can, at times, impose negative externalities relevant to children who use their services. In the absence of transaction costs, it would not matter whether operators must obtain verifiable parental consent before collecting, using, or disclosing personal information, or whether the initial burden is placed on parents and children to avoid the harms associated with such collection, use, or disclosure.

But given that there are transaction costs involved in obtaining (and giving) verifiable parental consent,[14] it matters how the law defines personal information (which serves as a proxy for a property right, in Coase’s framing). If personal information is defined too broadly and the transaction costs for providers to gain verifiable parental consent are too high, the result may be that the societal benefits of children’s internet use will be lost, as platform operators restrict access beyond the optimum level.

The threat of liability for platform operators under COPPA also risks excessive collateral censorship.[15] This arguably has already occurred, as operators like YouTube have restricted content creators’ ability to monetize their work through targeted advertising, leading on balance to less children’s content. By wrongly placing the burden on operators to avoid harms associated with targeted advertising, societal welfare is reduced, including the welfare of children who no longer get the benefits of that content.

On the other hand, there are situations where website operators and online services are the least-cost avoiders. For example, they may be the parties best-placed to monitor and control harms associated with internet use in cases where it is difficult or impossible to hold those using their platforms accountable for the harms they cause.[16] In other words, operators should still be held liable under COPPA when they facilitate adults’ ability to message children, or to identify a child’s location without parental consent, in ways that could endanger children.[17] Placing the burden on children or their parents to avoid such harms could allow operators to impose uncompensated or undercompensated harms on society.

Thus, in order to get the COPPA Rule’s balance right, it is important to determine whether it is the operators or their users who are the least-cost avoiders. Placing the burden on the wrong parties would harm societal welfare, either by reducing the value that online platforms confer to their users, or in placing more uncompensated negative externalities on society.

II. Persistent Identifiers and ‘Personal Information’

As mentioned above, under COPPA, a website operator or online service that is either directed to children or that has actual knowledge that it collects personal information from a child must obtain “verifiable parental consent” for the “collection, use or disclosure” of that information.[18] But the NPRM continues to apply the 2013 amendments’ expanded definition of “personal information,” which includes persistent identifiers.

COPPA’s definition for personal information is “individually identifiable information” collected online.[19] The legislation included examples such as first and last name; home or other physical address; as well as email address, telephone number, or Social Security number.[20] These are all identifiers obviously connected to people’s real identities. COPPA does empower the FTC to determine whether other identifiers should be included, but only those that permit “the physical or online contacting of a specific individual”[21] qualify, along with “information concerning the child or the parents of that child that the website collects online from the child and combines with an identifier described in this paragraph.”[22]

In 2013, the FTC amended the definition of personal information to include:

A persistent identifier that can be used to recognize a user over time and across different Web sites or online services. Such persistent identifier includes, but is not limited to, a customer number held in a cookie, an Internet Protocol (IP) address, a processor or device serial number, or unique device identifier.[23]

The NPRM here continues this error.

Neither IP addresses nor device identifiers alone “permit the physical or online contacting of a specific individual,” as required by 15 U.S.C. § 6501(8)(F). A website or app could not determine a person’s identity, or whether that person is an adult or a child, from these pieces of information alone. In order for persistent identifiers, like those relied upon for targeted advertising, to be counted as personal information under 15 U.S.C. § 6501(8)(G), they need to be combined with other identifiers listed in the definitions. In other words, it is only when a persistent identifier is combined with a first and last name, an address, an email, a phone number, or a Social Security number that it should be considered personal information protected by the statute.

While administrative agencies receive Chevron deference in court challenges when definitions are ambiguous, this text, when illuminated by canons of statutory construction,[24] is clear. The canon of ejusdem generis applies when general words follow an enumeration of two or more things.[25] The general words are taken to apply only to persons or things of the same general kind or class as those mentioned specifically. Persistent identifiers, such as cookies, bear little resemblance to the other examples of “personally identifiable information” listed in the statute, such as first and last name, address, phone, email, or Social Security number. Only when combined with such information could a persistent identifier become personal information.

The NPRM states that the Commission is “not persuaded” by this line of argumentation, pointing back to the same reasoning offered in the 2013 amendments. The NPRM states that it is “the reality that at any given moment a specific individual is using that device,” which “underlies the very premise behind behavioral advertising.”[26] Moreover, the NPRM reasons that “while multiple people in a single home often use the same phone number, home address, and email address, Congress nevertheless defined these identifiers as ‘individually identifiable information’ in the COPPA statute.”[27] But this reasoning is flawed.

While multiple people regularly share an address, and sometimes even a phone number or email, each of these identifiers allows for contacting an individual person in a way that a persistent identifier simply does not. In each of those cases, bad actors can use such information to send direct messages to people (phone numbers and emails), to find their physical location (address), and potentially to cause them harm.

A persistent identifier, on its own, is not the same. Without subpoenaing an internet service provider (ISP) or virtual private network (VPN) provider, a bad actor intent on harm could neither determine where the person to whom the persistent identifier is assigned is located nor message that person directly. Persistent identifiers are useful primarily to online platforms in supporting their internal operations (which the NPRM continues to allow) and serving users targeted advertising.
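For illustration only, the sketch below shows the kind of identifier at issue. It is our own simplified mock-up, not drawn from any operator’s actual systems: a random token that recognizes a browser or device over time and keys an interest profile, but that carries none of the contact information the statute enumerates.

```python
import uuid

def new_persistent_identifier() -> str:
    """A random token that recognizes a browser or device over time."""
    return str(uuid.uuid4())  # not derived from any personal attribute

# Hypothetical ad-interest profile keyed by that identifier.
profile_store: dict[str, list[str]] = {}

def record_visit(identifier: str, interest_category: str) -> None:
    """Attach a browsing-interest category to the identifier."""
    profile_store.setdefault(identifier, []).append(interest_category)

cookie_value = new_persistent_identifier()
record_visit(cookie_value, "animated shows")
record_visit(cookie_value, "building-block toys")

# The profile holds browsing interests only: no name, address, phone number,
# or email, so the identifier by itself gives no way to contact whoever is
# actually using the device.
print(cookie_value, "->", profile_store[cookie_value])
```

Knowing the token, an ad server can show an ad to whatever device presents it, but it cannot look up a name, address, or phone number from the token alone.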

Moreover, the fact that bills seeking to update COPPA—introduced but never passed by Congress—have proposed expanding the definition of personal information to include persistent identifiers suggests that the FTC has asserted authority that it does not have under the current statute.[28] Under Supreme Court precedent,[29] when considering whether an agency has the authority it claims to issue rules, courts must consider whether Congress has rejected proposals to expand the agency’s jurisdiction in similar ways.

The NPRM also ignores the practical realities of the relationship between parents and children when it comes to devices and internet use. Parental oversight is already built into any type of advertisement (including targeted ads) that children see. Few children can view those advertisements without their parents providing them a device and the internet access to do so. Even fewer children can realistically make their own purchases. Consequently, the NPRM misunderstands targeted advertising in the context of children’s content, which is not based on any knowledge about the users as individuals, but on the browsing and search history of the device they happen to be using.

Children under age 13, in particular, are extremely unlikely to have purchased the devices they use; to have paid for the internet access to use those devices; or to have any disposable income or means to pay for goods and services online. Thus, contrary to the NPRM’s assumptions, the actual “targets” of this advertising—even on websites or online services that host children’s content—are the children’s parents.

This NPRM continues the 2013 amendments’ mistake and will continue to greatly reduce the ability of children’s content creators to generate revenue through the use of relatively anonymous persistent identifiers. As we describe in the next section, the damage done by the 2013 amendments is readily apparent, and the Commission should take this opportunity to rectify the problem.

III. More Parental Consent, Less Children’s Content

As outlined above, in a world without transaction costs—or, at least, one in which such costs are sufficiently low—verifiable parental consent would not matter, because it would be extremely easy for a bargain to be struck between operators and parents. In the real world, however, transaction costs exist. In fact, despite the FTC’s best efforts under the COPPA Rule, the transaction costs associated with obtaining verifiable parental consent continue to be sufficiently high as to prevent most operators from seeking that consent for persistent identifiers. As we stated in our previous comments, the economics are simple: if content creators lose access to revenue from targeted advertising, there will be less content created from which children can benefit.

FIGURE 1: Supply Curve for Children’s Online Content

The supply curve for children’s online content shifts left as the marginal cost of monetizing it increases. The marginal cost of monetizing such content is driven upward by the higher compliance costs of obtaining verifiable parental consent before serving targeted advertising. This supply shift means that less online content will be created for children.
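The figure’s logic can also be expressed with a simple linear supply-and-demand model. The numbers below are our own illustrative assumptions, not estimates of the actual market, and the per-unit “compliance cost” stands in for the expense of obtaining verifiable parental consent.

```python
def equilibrium_quantity(compliance_cost: float) -> float:
    """Quantity where demand (P = 100 - Q) meets supply (P = 10 + Q + c)."""
    return max(0.0, (100 - 10 - compliance_cost) / 2)

for c in (0, 10, 20, 40):   # per-unit cost of obtaining verifiable parental consent
    print(f"compliance cost {c:>2} -> equilibrium quantity of content {equilibrium_quantity(c):.0f}")

# compliance cost  0 -> 45
# compliance cost 10 -> 40
# compliance cost 20 -> 35
# compliance cost 40 -> 25
```

As the assumed compliance cost rises, the supply curve shifts left and the equilibrium quantity of children’s content falls, which is the qualitative claim the figure illustrates.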

These results are not speculative at this point. Scholars who have studied the issue have found that the YouTube settlement, reached pursuant to the 2013 amendments, has resulted in less child-directed online content, due to creators’ inability to monetize that content through targeted advertising. In their working paper “COPPAcalypse? The YouTube Settlement’s Impact on Kids Content,”[30] Garrett Johnson, Tesary Lin, James C. Cooper, & Liang Zhong summarized the issue as follows:

The Children’s Online Privacy Protection Act (COPPA), and its implementing regulations, broadly prohibit operators of online services directed at children under 13 from collecting personal information without providing notice of its data collection and use practices and obtaining verifiable parental consent. Because obtaining verifiable parental consent for free online services is difficult and rarely cost justified, COPPA essentially acts as a de facto ban on the collection of personal information by providers of free child-directed content. In 2013, the FTC amended the COPPA rules to include in the definition of personal information “persistent identifier that can be used to recognize a user over time and across different Web sites or online services,” such as a “customer number held in a cookie . . . or unique device identifier.” This regulatory change meant that, as a practical matter, online operators who provide child-directed content could no longer engage in personalized advertising.

On September 4, 2019, the FTC entered into a consent agreement with YouTube to settle charges that it had violated COPPA. The FTC’s allegations focused on YouTube’s practice of serving personalized advertising on child-directed content at children without obtaining verifiable parental consent. Although YouTube maintains it is a general audience website and users must be at least 13 years old to obtain a Google ID (which makes personalized advertising possible), the FTC complaint alleges that YouTube knew that many of its channels were popular with children under 13, citing YouTube’s own claims to advertisers. The settlement required YouTube to identify child-directed channels and videos and to stop collecting personal information from visitors to these channels. In response, YouTube required channel owners producing [“made-for-kids”] MFK content to designate either their entire channels or specific videos as MFK, beginning on January 1, 2020. YouTube supplemented these self-designations with an automated classifier designed to identify content that was likely directed at children younger than 13. In so doing, YouTube effectively shifted liability under COPPA to the channel owners, who could face up to $42,530 in fines per video if they fail to self-designate and are not detected by YouTube’s classifier.[31]

By requiring verifiable parental consent, the rule change and settlement increased the transaction costs imposed on online platforms that host content created by others. YouTube’s economically rational response was to restrict content creators’ ability to benefit from (considerably more lucrative) personalized advertising. The result was less content created for children, including by driving out less-profitable content creators:

Consistent with a loss in personalized ad revenue, we find that child-directed content creators produce 13% less content and pivot towards producing non-child-directed content. On the demand side, views of child-directed channels fall by 22%. Consistent with the platform’s degraded capacity to match viewers to content, we find that content creation and content views become more concentrated among top child-directed YouTube channels.[32]

This is not the only finding regarding COPPA’s role in reducing the production of content for children. Morgan Reed—president of the App Association, a global trade association for small and medium-sized technology companies—presented extensively at the FTC’s 2019 COPPA Workshop.[33] Reed’s testimony detailed that the transaction costs associated with obtaining verifiable parental consent did little to enhance parental control, but much to reduce the quality and quantity of content directed to children.

It is worth highlighting, in particular, Reed’s repeated use of the words “friction,” “restriction,” and “cost” to describe how COPPA’s institutional features affect the behavior of social-media platforms, parents, and children. While noting that general audience content is “unfettered, meaning that you do not feel restricted by what you can get to, how you do it. It’s easy, it’s low friction. Widely available. I can get it on any platform, in any case, in any context and I can get to it rapidly,” Reed said that COPPA-regulated apps and content are, by contrast, all about:

Friction, restriction, and cost. Every layer of friction you add alters parent behavior significantly. We jokingly refer to it as the over the shoulder factor. If a parent wants access to something and they have to pass it from the back seat to the front seat of the car more than one time, the parent moves on to the next thing. So the more friction you add to an application directed at children the less likely it is that the parent is going to take the steps necessary to get through it because the competition, of course, is as I said, free, unfettered, widely available. Restriction. Kids balk against some of the restrictions. I can’t get to this, I can’t do that. And they say that to the parent. And from the parent’s perspective, fine, I’ll just put in a different age date. They’re participating, they’re parenting but they’re not using the regulatory construction that we all understand.

The COPPA side, expensive, onerous or friction full. We have to find some way around that. Restrictive, fewer features, fewer capabilities, less known or available, and it’s entertaining-ish. …

Is COPPA the barrier? I thought this quote really summed it up. “Seamlessness is expected. But with COPPA, seamlessness is impossible.” And that has been one of the single largest areas of concern. Our folks are looking to provide a COPPA compliant environment. And they’re finding doing VPC is really hard. We want to make it this way, we just walked away. And why do they want to do it? We wanted to create a hub for kids to promote creativity. So these are not folks who are looking to take data and provide interest based advertising. They’re trying to figure out how to do it so they can build an engaging product. Parental consent makes the whole process very complicated. And this is the depressing part. …

We say that VPC is intentional friction. It’s clear from everything we’ve heard in the last two panels that the authors of COPPA, we don’t really want information collected on kids. So friction is intentional. And this is leading to the destruction of general audience applications basically wiping out COPPA apps off the face of the map.[34]

Reed’s use of the word “friction” is particularly enlightening. The economist Mike Munger of Duke University has often described transaction costs as frictions—explaining that, to consumers, all costs are transaction costs.[35] When higher transaction costs are imposed on social-media platforms, end users feel the impact. In this case, the result is that children and parents receive fewer, and lower-quality, children’s apps and content.

Thus, when the NPRM states that “the Commission [doesn’t] find compelling the argument that the 2013 persistent identifier modification has caused harm by hindering the ability of operators to monetize online content through targeted advertising,”[36] in part because “the 2013 Amendments permit monetization… through providing notice and seeking parental consent for the use of personal information for targeted advertising,”[37] it misses how transaction costs prevent this outcome. The FTC should not ignore the data provided by scholars who have researched the question, nor the direct testimony of app developers.

IV. Lower-Cost Ways to Avoid Harms to Children

Widely available practical and technological means are a lower-cost way to avoid the negative externalities associated with internet use, relative to verifiable-parental-consent laws. As NetChoice put it in the complaint the group filed against Arkansas’ social-media age-verification law, “[p]arents have myriad ways to restrict their children’s access to online services and to keep their children safe on such services.”[38]

NetChoice’s complaint recognized the subjective nature of negative externalities, stating:

Just as people inevitably have different opinions about what books, television shows, and video games are appropriate for minors, people inevitably have different views about whether and to what degree online services are appropriate for minors. While many minors use online services in wholesome and productive ways, online services, like many other technologies, can be abused in ways that may harm minors.[39]

They proceeded to list all the ways that parents can take control and help their children avoid online harms, including with respect to the decisions to buy devices for their children and to set terms for how and when they are permitted to use them.[40] Parents can also choose to use tools offered by cell-phone carriers and broadband providers to block certain apps and sites from their children’s devices, or to control with whom their children can communicate and for how long they can use the devices.[41]

NetChoice also pointed to wireless routers that allow parents to filter and monitor online content;[42] parental controls at the device level;[43] third-party filtering applications;[44] and numerous tools offered by NetChoice members that offer relatively low-cost monitoring and control by parents, or even by teen users acting on their own behalf.[45] Finally, they noted that, in response to market demand,[46] NetChoice members expend significant resources curating content to ensure that it is appropriate.[47]

Similarly, parents can protect their children’s privacy simply by taking control of the devices they allow their children to use. Tech-savvy parents can, if they so choose, install software or use ad-blockers to prevent collection of persistent identifiers.[48] Even less tech-savvy parents can make sure that their children are not subject to ads and tracking simply by monitoring their device usage and ensuring they only use YouTube Kids or other platforms created explicitly for children. In fact, most devices and operating systems now have built-in, easy-to-use controls that enable both monitoring and blocking of children’s access to specific apps and websites.[49]
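As one concrete illustration of how low-cost such technical measures can be, the sketch below shows a hosts-style blocklist that keeps a device from reaching ad- or tracker-serving domains. The domain names are placeholders rather than a real blocklist, and the point is the approach: parental-control apps, filtering DNS resolvers, and router settings implement the same idea without any scripting.

```python
# Placeholder domains (RFC 2606 "example" names), not a real blocklist.
BLOCKED_HOSTS = [
    "tracker.example.com",
    "ads.example.net",
]

def hosts_file_entries(hosts: list[str]) -> str:
    """Render hosts-file lines that route the listed domains to a dead address."""
    return "\n".join(f"0.0.0.0 {host}" for host in hosts)

# A parent (or a device-management tool acting for them) can append these
# lines to a device's hosts file, or load the same list into a filtering DNS
# resolver or a router's parental-control feature.
print(hosts_file_entries(BLOCKED_HOSTS))
```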

This litany of less-restrictive means to accomplish the goal of protecting children online bears repeating, because even children have some First Amendment interests in receiving online speech.[50] If a court were to examine the COPPA Rule as a speech regulation that forecloses children’s access to online content, it would be subject to strict scrutiny. This means the rules would need to be the least-restrictive possible in order to fulfill the statute’s purpose. Educating parents and children on the available practical and technological means to avoid harms associated with internet use, including the collection of data for targeted advertising, would clearly be a less-restrictive alternative to a de facto ban of targeted advertising.

A less-restrictive COPPA rule could still enhance parental involvement and protect children from predators without significantly impairing the marketplace for children’s online content. Parents already have the ability to review their children’s content-viewing habits on devices they buy for them. A COPPA rule that enhances parental control by requiring verifiable parental consent when children are subject to sharing personal information—like first and last name, address, phone number, email address, or Social Security number—obviously makes sense, along with additions like geolocation data. But it is equally obvious that the relatively anonymized collection of persistent identifiers used to support targeted ads can be avoided at lower cost through practical and technological means, without requiring costly verifiable parental consent.

V. Perils of Bringing More Entities Under the COPPA Rule

The costs of the COPPA Rule would be further exacerbated by the NPRM’s proposal to modify the criteria for determining whether a site or service is directed toward children.[51] These proposed changes, particularly the reliance on third-party services and comparisons with “similar websites or online services,” raise significant concerns about both their practical implementation and potential unintended consequences. The latter could include further losses of online content for both children and adults, as content creators drawn into COPPA’s orbit lose access to revenue from targeted advertising.

The FTC’s current practice employs a multi-factor test to ascertain whether a site or service is directed at children under 13. This comprehensive approach considers various elements, including subject matter, visual and audio content, and empirical evidence regarding audience composition.[52] The proposed amendments aim to expand this test by introducing such factors as marketing materials, representations to third parties and, notably, reviews by users or third parties and comparisons with similar websites or services.[53]

The inclusion of third-party reviews and comparisons with similar services as factors in determining a site’s target audience introduces a level of ambiguity and unreliability that would be counterproductive to COPPA’s goals. Without clear standards to evaluate their competence or authority, relying on third-party reviews would leave operators without a solid foundation upon which to assess compliance. This ambiguity could lead to overcompliance. In particular, online platforms that carry third-party content may err on the side of caution in order to align with the spirit of the rule. This threatens to stifle innovation and free expression by restricting creators’ ability to monetize content that has any chance of being considered “directed to children.” Moreover, to avoid this loss of revenue, content creators could shift their focus exclusively to content clearly aimed only at adults, rather than that which could be interesting to adults and children alike.

Similarly, the proposal to compare operators with “similar websites or online services” is fraught with challenges. The lack of guidance on how to evaluate similarity or to determine which service sets the standard for compliance would increase burdens on operators, with little evidence of tangible realized benefits. It’s also unclear who would make these determinations and how disputes would be resolved, leading to further compliance costs and potential litigation. Moreover, operators may be left in a position where it is impractical to accurately assess the audience of similar services, thereby further complicating compliance efforts.

Given these considerations, the FTC should not include reliance on third-party services or comparisons with similar websites or online services in its criteria for determining whether content is directed at children under 13. These approaches introduce a level of uncertainty and unreliability that could lead to overcompliance, increased costs, and unintended negative impacts on online content and services, including further restrictions on content creators who create content interesting to both adults and children. Instead, the FTC should focus on providing clear, direct guidelines that allow operators to assess their compliance with COPPA confidently, without the need to rely on potentially biased or manipulable third-party assessments. This approach will better serve the FTC’s goal of protecting children’s online privacy, while ensuring a healthy, innovative online ecosystem.

Conclusion

The FTC should reconsider the inclusion of standalone persistent identifiers in the definition of “personal information.” The NPRM continues to enshrine the primary mistake of the 2013 amendments. This change was inconsistent with the purposes and text of the COPPA statute. It already has reduced, and will continue to reduce, the availability of children’s online content.

[1] ICLE has received financial support from numerous companies, organizations, and individuals, including firms with interests both supportive of and in opposition to the ideas expressed in this and other ICLE-supported works. Unless otherwise noted, all ICLE support is in the form of unrestricted, general support. The ideas expressed here are the authors’ own and do not necessarily reflect the views of ICLE’s advisors, affiliates, or supporters.

[2] Portions of these comments are adapted from ICLE’s 2019 COPPA Rule Review Comments, available at https://laweconcenter.org/wp-content/uploads/2019/12/COPPA-Comments-2019.pdf; Ben Sperry, A Law & Economics Approach to Social-Media Regulation, CPI TechREG Chronicle (Feb. 29, 2022), https://laweconcenter.org/resources/a-law-economics-approach-to-social-media-regulation; Ben Sperry, A Coasean Analysis of Online Age-Verification and Parental-Consent Regimes (ICLE Issue Brief, Nov. 9, 2023), available at https://laweconcenter.org/wp-content/uploads/2023/11/Issue-Brief-Transaction-Costs-of-Protecting-Children-Under-the-First-Amendment-.pdf.

[3] 144 Cong. Rec. 11657 (1998) (Statement of Sen. Richard Bryan), available at https://www.congress.gov/crec/1998/10/07/CREC-1998-10-07.pdf#page=303.

[4] 15 U.S.C. § 6502(b)(1)(A).

[5] See, e.g., Jean-Charles Rochet & Jean Tirole, Platform Competition in Two-Sided Markets, 1 J. Euro. Econ. Ass’n 990 (2003).

[6] David S. Evans, Multisided Platforms in Antitrust Practice, at 3 (Oct. 17, 2023), forthcoming, Michael Noel, Ed., Elgar Encyclopedia on the Economics of Competition and Regulation, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4606511.

[7] For instance, many nightclubs hold “ladies’ night” events in which female patrons receive free admission or discounted drinks in order to attract more men, who pay full fare for both.

[8] See, e.g., Ben Sperry, Congress Should Focus on Protecting Teens from Real Harms, Not Targeted Ads, The Hill (Feb. 16, 2023), https://thehill.com/opinion/congress-blog/3862238-congress-should-focus-on-protecting-teens-from-real-harms-not-targeted-ads.

[9] An externality is a side effect of an activity that is not reflected in the cost of that activity—basically, what occurs when we do something whose consequences affect other people. A negative externality occurs when a third party does not like the effects of an action.

[10] See Ronald H. Coase, The Problem of Social Cost, 3 J. L. & Econ. 1 (1960).

[11] See Steven G. Medema, The Coase Theorem at Sixty, 58 J. Econ. Lit. 1045 (2020).

[12] See Coase, supra note 10, at 8-10.

[13] See id. at 34 (“When an economist is comparing alternative social arrangements, the proper procedure is to compare the total social product yielded by these different arrangements.”).

[14] See Part III below.

[15] See Felix T. Wu, Collateral Censorship and the Limits of Intermediary Liability, 87 Notre Dame L. Rev. 293, 295-96 (2011); Geoffrey A. Manne, Ben Sperry, & Kristian Stout, Who Moderates the Moderators: A Law & Economics Approach to Holding Online Platforms Accountable Without Destroying the Internet, 49 Rutgers Computer & Tech. L.J. 26, 39 (2022); Ben Sperry, The Law & Economics of Children’s Online Safety: The First Amendment and Online Intermediary Liability, Truth on the Market (May 12, 2023), https://truthonthemarket.com/2023/05/12/the-law-economics-of-childrens-online-safety-the-firstamendment-and-online-intermediary-liability.

[16] See Geoffrey A. Manne, Kristian Stout, & Ben Sperry, Twitter v. Taamneh and the Law & Economics of Intermediary Liability, Truth on the Market (Mar. 8, 2023), https://truthonthemarket.com/2023/03/08/twitter-v-taamneh-and-the-law-economics-of-intermediary-liability; Ben Sperry, Right to Anonymous Speech, Part 2: A Law & Economics Approach, Truth on the Market (Sep. 6, 2023), https://truthonthemarket.com/2023/09/06/right-to-anonymous-speech-part-2-a-law-economics-approach.

[17] See Statement of Commissioner Alvaro M. Bedoya On the Issuance of the Notice of Proposed Rulemaking to Update the Children’s Online Privacy Protection Rule (COPPA Rule), at 3-4 (Dec. 20, 2023), available at https://www.ftc.gov/system/files/ftc_gov/pdf/BedoyaStatementonCOPPARuleNPRMFINAL12.20.23.pdf (listing examples of these types of enforcement actions).

[18] 15 U.S.C. § 6502(b)(1)(A)(ii).

[19] 15 U.S.C. § 6501(8).

[20] 15 U.S.C. § 6501(8)(A)-(E).

[21] 15 U.S.C. § 6501(8)(F).

[22] 15 U.S.C. § 6501(8)(G).

[23] 16 CFR § 312.2 (Personal information)(7).

[24] See Chevron U.S.A. Inc. v. Natural Resources Defense Council, Inc., 467 U. S. 837, 843 n.9 (1984) (“If a court, employing traditional tools of statutory construction, ascertains that Congress had an intention on the precise question at issue, that intention is the law and must be given effect.”).

[25] What is EJUSDEM GENERIS?, The Law Dictionary: Featuring Black’s Law Dictionary Free Online Legal Dictionary 2nd Ed. (last accessed Dec. 9, 2019), https://thelawdictionary.org/ejusdem-generis.

[26] NPRM at 2043.

[27] Id.

[28] See, e.g., Children and Teens’ Online Privacy Protection Act, S. 1418, § 2(a)(3), 118th Cong. (2024).

[29] See FDA v. Brown & Williamson, 529 U.S. 120, 148-50 (2000).

[30] Garrett A. Johnson, Tesary Lin, James C. Cooper, & Liang Zhong, COPPAcalypse? The YouTube Settlement’s Impact on Kids Content, SSRN (Apr. 26, 2023), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4430334.

[31] Id. at 6-7 (emphasis added).

[32] Id. at 1.

[33] The Future of the COPPA Rule: An FTC Workshop Part 2, Federal Trade Commission (Oct. 7, 2019), available at https://www.ftc.gov/system/files/documents/public_events/1535372/transcript_of_coppa_workshop_part_2_1.pdf.

[34] Id. at 6 (emphasis added).

[35] See Michael Munger, To Consumers, All Costs are Transaction Costs, Am. Inst. Econ. Rsch. (June 13, 2023), https://www.aier.org/article/to-consumers-all-costs-are-transaction-costs.

[36] NPRM at 2043.

[37] Id. at 2034, n. 121.

[38] See NetChoice Complaint, NetChoice LLC v. Griffin, No. 5:23-CV-05105, 2023 U.S. Dist. LEXIS 154571 (W.D. Ark. 2023), available at https://netchoice.org/wp-content/uploads/2023/06/NetChoice-v-Griffin_-Complaint_2023-06-29.pdf.

[39] Id. at para. 13.

[40] See id. at para. 14.

[41] See id.

[42] See id. at para. 15.

[43] See id. at para. 16.

[44] See id.

[45] See id. at paras. 17, 19-21.

[46] Sperry, supra note 8.

[47] See NetChoice Complaint, supra note 38, at para. 18.

[48] See, e.g., Mary James & Catherine McNally, The Best Ad Blockers 2024, all about cookies (last updated Feb. 29, 2024), https://allaboutcookies.org/best-ad-blockers.

[49] See, e.g., Parental Controls for Apple, Android, and Other Devices, internet matters (last accessed Mar. 7, 2024), https://www.internetmatters.org/parental-controls/smartphones-and-other-devices.

[50] See, e.g., Brown v. Ent. Merchants Ass’n, 564 U.S. 786, 794-95 (2011); NetChoice, LLC v. Griffin, 2023 WL 5660155, at *17 (W.D. Ark. Aug. 31, 2023) (finding Arkansas’s Act 689 “obviously burdens minors’ First Amendment rights” by “bar[ring] minors from opening accounts on a variety of social media platforms.”).

[51] See NPRM at 2047.

[52] See id. at 2046-47.

[53] Id. at 2047 (“Additionally, the Commission believes that other factors can help elucidate the intended or actual audience of a site or service, including user or third-party reviews and the age of users on similar websites or services.”).


A Law & Economics Approach to Social-Media Regulation


The thesis of this essay is that policymakers must consider what the nature of social media companies as multisided platforms means for regulation. The balance struck by social media companies acting in response to the incentives they face in the market could be upset by regulation that favors the interests of some users over others. Promoting the use of technological and practical means to avoid perceived harms by users themselves would preserve the benefits of social media to society without the difficult tradeoffs of regulation. Part I will introduce the economics of multisided platforms like social media, and how this affects the incentives of these platforms. Social-media platforms, acting within the market process, are usually best positioned to balance the interests of their users, but there could be occasions where the market process fails due to negative externalities. Part II will consider these situations where there are negative externalities due to social media and introduce the least-cost avoider principle. Usually, social-media users are the least-cost avoiders of harms, but sometimes the platforms are better placed to monitor and control harms. This involves a balance, as social-media regulation could result in collateral censorship or otherwise reduce opportunities to speak and receive speech. Part III will then apply the insights from Parts I and II to the areas of privacy, children’s online safety, and speech regulation.

I. Introduction

Policymakers at both the state and federal levels have been actively engaged in recent years with proposals to regulate social media, whether the subject is privacy, children’s online safety, or concerns about censorship, misinformation, and hate speech.[1] While there may not be consensus about precisely why social media is bad, there is broad agreement that the major online platforms are to blame for at least some harms to society. It is also generally recognized, though often not emphasized, that social media brings great value to its users. In other words, there are costs and benefits, and policymakers should be cautious when introducing new laws that would upset the balance that social-media companies must strike in order to serve their users well.

This essay will propose a general approach, informed by the law & economics tradition, to assess when and how social media should be regulated. Part I will introduce the economics of multisided platforms, and how they affect social-media platforms’ incentives. The platforms themselves, acting within the market process, are usually best-positioned to balance the interests of their users, but there could be occasions where the market process fails due to negative externalities. Part II will consider such externalities and introduce the least-cost avoider principle. Usually, social-media users are the least-cost avoiders of harms, but platforms themselves are sometimes better placed to monitor and control harms. This requires a balance, as social-media regulation raises the threat of collateral censorship or otherwise reducing opportunities to speak and receive speech. Part III will apply the insights from Parts I and II to the areas of privacy, children’s online safety, and speech regulation.

The thesis of this essay is that policymakers must consider what social-media companies’ status as multisided platforms means for regulation. The balance struck by social-media companies acting in response to the market incentives they face could be upset by regulation that favors the interests of some users over others. Promoting the use of technological and practical means to avoid perceived harms would allow users to preserve the benefits of social media without the difficult tradeoffs of regulation.

II. The Economics of Social-Media Platforms

Mutually beneficial trade is a bedrock of the market process. Entrepreneurs—including those that act through formal economic institutions like business corporations—seek to discover the best ways to serve consumers. Various types of entities help connect those who wish to buy products or services to those who are trying to sell them. Physical marketplaces, set up to facilitate interactions between buyers and sellers, are common around the world. If those marketplaces fail to serve the interests of those who use them, others will likely arise.

Social-media companies are a virtual example of what economists call multi-sided markets, or platforms.[2] Such platforms derive their name from the fact that they serve at least two different types of customers and facilitate their interaction. Multi-sided platforms have “indirect network effects,” described by one economist as a situation where “participants on one side value being able to interact with participants on the other side… lead[ing] to interdependent demand.”[3] In some situations, a platform may determine it can only raise revenue from one side of the platform if demand on the other side of the platform is high. In such cases, the platform may choose to offer one side free access to the platform to boost such demand, which is subsidized by participants on the other side of the platform.[4] This creates a positive feedback loop in which more participants on one side of the platform leads to more participants on the other.

In this sense, social-media companies are much like newspapers or television in that, by solving a transaction-cost problem,[5] these platforms bring together potential buyers and sellers by providing content to one side and access to consumers on the other side. Recognizing that their value lies in reaching users, these platforms sell advertising and offer access to content at a lower price, often a price of zero (i.e., free). In other words, advertisers subsidize platform users’ access to content.

Therefore, most social-media companies are free for users. Revenue is primarily collected from the other side of the platform—i.e., from advertisers. In effect, social-media companies are attention platforms: They supply content to users, while collecting data for targeted advertisements for businesses that seek access to those users. To be successful, social-media companies must keep enough (and the right type of) users engaged so as to maintain demand for advertising. Social-media companies must curate content that users desire in order to persuade them to spend time on the platform.

But unlike newspapers or television, social-media companies primarily rely on their users to produce content rather than creating their own. Thus, they must also consider how to attract and maintain high-demand content creators, as well as how to match user-generated content to the diverse interests of other users. If they fail to serve the interests of high-demand content creators, those creators may leave the platform, thus reducing time spent on the platform by all users, which thereby reduces the value of advertising. Similarly, if they fail to match content to user interests, those users will be less engaged on the platform, reducing its value to advertisers.

Moreover, this means that social-media companies need to balance the interests of advertisers and other users. Advertisers may desire more data to be collected for targeting, but users may desire less data collection. Similarly, advertisers may desire more ads, while users may prefer fewer ads. Advertisers may prefer content that keeps users engaged on the platform, even if it is harmful for society, whether because it is false or hateful, or because it leads to mental-health issues for minors. On the other hand, brand-conscious advertisers may not want to run ads next to content with which they disagree. Moreover, users may not want to see certain content. Social-media companies need to strike a balance that optimizes their value, recognizing that losing participants on either side would harm the other.

Usually, social-media companies acting within the market process are going to be best-positioned to make decisions on behalf of their users. Thus, they may create community rules that restrict content that would, on net, reduce user engagement.[6] This could include limitations on hate speech and misinformation. On the other hand, if they go too far in restricting content that users consider desirable, that could reduce user engagement and thus value to advertisers. Social-media companies therefore compete on moderation policies, trying to strike the appropriate balance to optimize platform value. A similar principle applies when it comes to privacy policies and protections for minors: social-media companies may choose to compete by providing tools to help users avoid what they perceive as harms, while keeping users on the platform and maintaining value for advertisers.

There may, however, be scenarios where social media produces negative externalities[7] that are harmful to society. A market failure could result, for instance, if platforms have too great of an incentive to allow misinformation or hate speech that keeps users engaged, or to collect too much (or the wrong types of) information for targeted advertising, or to offer up content that is harmful for minors and keeps them hooked to using the platform.

In sum, social-media companies are multi-sided platforms that facilitate interactions between advertisers and users by curating user-generated content that drives attention to their platforms. To optimize the platform’s value, a social-media company must keep users engaged. This will often include privacy policies, content-moderation standards, and special protections for minors. On the other hand, incentives could become misaligned and lead to situations where social-media usage leads to negative externalities due to insufficient protection of privacy, too much hate speech or misinformation, or harms to minors.

III. Negative Social-Media Externalities and the Least-Cost-Avoider Principle

In situations where there are negative externalities from social-media usage, there may be a case for regulation. Any case for regulation must, however, recognize the presence of transaction costs, and consider how platforms and users may respond to changes in those costs. To get regulation right, the burden of avoiding a negative externality should fall on the least-cost avoider.

The Coase Theorem, derived from the work of Nobel-winning economist Ronald Coase[8] and elaborated on in the subsequent literature,[9] helps to explain the issue at hand:

  1. The problem of externalities is bilateral;
  2. In the absence of transaction costs, resources will be allocated efficiently, as the parties bargain to solve the externality problem;
  3. In the presence of transaction costs, the initial allocation of rights does matter; and
  4. In such cases, the burden of avoiding the externality’s harm should be placed on the least-cost avoider, while taking into consideration the total social costs of the institutional framework.

In one of Coase’s examples, the noise from a confectioner using his machinery is a potential cost to the doctor next door, who consequently can’t use his office to conduct certain testing. Simultaneously, the doctor moving his office next door is a potential cost to the confectioner’s ability to use his equipment. In a world of well-defined property rights and low transaction costs, the initial allocation of a right would not matter, because the parties could bargain to overcome the harm in a beneficial manner—i.e., the confectioner could pay the doctor for lost income or to set up sound-proof walls, or the doctor could pay the confectioner to reduce the sound of his machines.[10] But since there are transaction costs that prevent this sort of bargain, it is important whether the initial right is allocated to the doctor or the confectioner. To maximize societal welfare, the cost should be placed on the entity that can avoid the harm at the lowest cost.[11]

Here, social-media companies create incredible value for their users, but they also arguably impose negative externalities in the form of privacy harms, misinformation and hate speech, and harms particular to minors. In the absence of transaction costs, the parties could simply bargain away the harms associated with social-media usage. But since there are transaction costs, it matters whether the burden to avoid harms is placed on the users or the social-media companies. If the burden is wrongly placed, it may end up that the societal benefits of social media will be lost.

For instance, imposing liability on social-media companies risks collateral censorship, which occurs when platforms decide that liability risk is too large and opt to over-moderate or not host user-generated content, or to restrict access to such content either by charging higher prices or excluding those who could be harmed (like minors).[12] Wrongly placing the burden to avoid harms on social-media platforms would thus reduce societal welfare.

On the other hand, there may be situations where social-media companies are the least-cost avoiders. They may be best-placed to monitor and control harms associated with social-media usage when it is difficult or impossible to hold those using their platforms accountable for the harms they cause.[13] If a social-media company allows anonymous or pseudonymous use, for instance, with no realistic possibility of tracking down users who cause harm, illegal conduct could go undeterred. In such cases, placing the burden on social-media users could lead to social media imposing uncompensated harms on society.

Thus, it is important to determine whether the social-media companies or their users are the least-cost avoiders. Placing the burden on the wrong party or parties would harm societal welfare, either by reducing the value of social media or by creating more uncompensated negative externalities.

IV. Applying the Lessons of Law & Economics to Social-Media Regulation

Below, I will examine the areas of privacy, children’s online safety, and content moderation, and consider both the social-media companies’ incentives and whether the platforms or their users are the least-cost avoiders.

A. Privacy

As discussed above, social-media companies are multi-sided platforms that provide content to attract attention from users, while selling information collected from those users for targeted advertising. This leads to the possibility that social-media companies will collect too much information in order to increase revenue from targeted advertising. In other words, as the argument goes, the interests of the paying side of the platform will outweigh the interests of social-media users, thereby imposing a negative externality on them.

Of course, this assumes that the collection and use of information for targeted advertisements is considered a negative externality by social-media users. While this may be true for some, for others, it may be something they care little about or even value, because targeted advertisements are more relevant to them. Moreover, many consumers appear to prefer free content with advertising to paying a subscription fee.[14]

Negative externalities are more likely to arise, however, when users don’t know what data is being collected or how it is being used. Moreover, it is a clear harm if social-media companies misrepresent what they are collecting and how they are using it. Thus, it is generally unobjectionable—at least, in theory—for the Federal Trade Commission or another enforcer to hold social-media companies accountable for their privacy policies.[15]

On the other hand, privacy regulation that requires specific disclosures or verifiable consent before collecting or using data would increase the cost of targeted advertising, thus reducing its value to advertisers, and thereby further reducing the platform’s incentive to curate valuable content for users. For instance, in response to the FTC’s consent agreement with YouTube charging that it violated the Children’s Online Privacy Protection Act (COPPA), YouTube required channel owners producing children’s content to designate their channels as such, and implemented automated processes designed to identify such content.[16] This reduced content creators’ ability to benefit from targeted advertising if their content was directed to children. The result was less content created for children, with poorer matching as well:

Consistent with a loss in personalized ad revenue, we find that child-directed content creators produce 13% less content and pivot towards producing non-child-directed content. On the demand side, views of child-directed channels fall by 22%. Consistent with the platform’s degraded capacity to match viewers to content, we find that content creation and content views become more concentrated among top child-directed YouTube channels.[17]

Alternatively, a social-media company could raise the price it charges to users, as it can no longer use advertising revenue to subsidize users’ access. This is, in fact, exactly what has happened in Europe, as Meta now offers an ad-free version of Facebook and Instagram for $14 a month.[18]

In other words, placing the burden on social-media companies to avoid the perceived harms from the collection and use of information for targeted advertising could lead to less free content available to consumers. This is a significant tradeoff, and not one that most social-media consumers appear willing to make voluntarily.

On the other hand, it appears that social-media users could avoid much of the harm from the collection and use of their data by using available tools, including those provided by social-media companies. For instance, most of the major social-media companies offer two-factor authentication and privacy-checkup tools, as well as the ability to browse the service privately, limit one’s audience, and download and delete one’s data.[19] Social-media users could also use virtual private networks (VPNs) to protect their data privacy while online.[20] Finally, users could just not post private information or could limit interactions with businesses (through likes or clicks on ads) if they want to reduce the amount of information used for targeted advertising.

B. Children’s Online Safety

Some have argued that social-media companies impose negative externalities on minors by serving them addictive content and/or content that results in mental-health harms.[21] They argue that social-media companies benefit from these harms because they are able to then sell data from minors to advertisers.

While it is true that social-media companies want to attract users through engaging content and interfaces, and that they make money through targeted advertising, it is highly unlikely that they are making much money from minors themselves. Very few social-media users under 18 have considerable disposable income or access to payment-card options that would make them valuable to advertisers. Thus, regulations that raise the costs to social-media companies of serving minors, whether through a regulatory duty of care[22] or through age verification and verifiable parental consent,[23] could lead social-media companies to invest more in excluding minors than in creating vibrant and safe online spaces for them.

Federal courts considering age-verification laws have noted there are costs to companies, as well as users, in obtaining this information. In Free Speech Coalition Inc. v. Colmenero,[24] the U.S. District Court in Austin, Texas, considered a law that required age verification before viewing online pornography, and found that the costs of obtaining age verification were high, citing the complaint, which pointed to “several commercial verification services, showing that they cost, at minimum, $40,000.00 per 100,000 verifications.”[25] But just as importantly, the transaction costs in this example also include the subjective costs borne by those who actually go through with verifying their age to access pornography. As the court noted, “the law interferes with the Adult Video Companies’ ability to conduct business, and risks deterring adults from visiting the websites.”[26] Similarly, in NetChoice v. Griffin,[27] the U.S. District Court for the Western District of Arkansas found that a challenged law’s age-verification requirements were “costly” and would put social-media companies covered by the law in the position of needing to take drastic action: either implement age verification, restrict access for Arkansans, or face the possibility of civil and criminal enforcement.[28]
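To put the cited compliance figure in per-user terms, a rough back-of-the-envelope calculation is illustrative (the 1 million-visitor figure below is an assumption for purposes of illustration, not a number from the complaint or the record):

\[
\frac{\$40{,}000}{100{,}000 \text{ verifications}} = \$0.40 \text{ per verification}; \qquad 1{,}000{,}000 \text{ visitors} \times \$0.40 = \$400{,}000
\]

And that is before accounting for repeat visits, appeals of erroneous age determinations, or the users deterred from visiting at all, which are the subjective transaction costs the courts emphasized.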

On the other hand, social-media companies—responding to demand from minor users and their parents—have also exerted considerable effort to reduce minors’ exposure to harmful content. For instance, they have invested in content-moderation policies and their enforcement, including through algorithms, automated tools, and human review, to remove, restrict, or add warnings to content inappropriate for minors.[29] On top of that, social-media companies offer tools to help minors and their parents avoid many of the harms associated with social-media usage.[30] There are also options available at the ISP, router, device, and browser level to protect minors while online. As the court put it in Griffin, “parents may rightly decide to regulate their children’s use of social media—including restricting the amount of time they spend on it, the content they may access, or even those they chat with. And many tools exist to help parents with this.”[31]

In other words, parents and minors working together can use technological and practical means to make marginal decisions about social-media usage at a lower cost than a regulatory environment that would likely lead to social-media companies restricting use by minors altogether.[32]

C. Content Moderation

There have been warring allegations about social-media companies’ incentives when it comes to content moderation. Some claim that salacious misinformation and hate speech drive user engagement, making platforms more profitable for advertisers; others argue that social-media companies engage in too much “censorship” by removing users and speech in a viewpoint-discriminatory way.[33] The U.S. Supreme Court is currently reviewing laws from Florida and Texas that would force social-media companies to carry speech.[34]

Both views fail to take into account that social-media companies are largely just responding to the incentives they face as multi-sided platforms. Social-media companies are solving a Coasean speech problem, wherein some users don’t want to be subject to certain speech from other users. As explained above, social-media companies must balance these interests by setting and enforcing community rules for speech. This may include rules against misinformation and hate speech. On the other hand, social-media companies can’t go too far in restricting high-demand speech, or they will risk losing users. Thus, they must strike a delicate balance.

Laws that restrict the “editorial discretion” of social-media companies may fail First Amendment scrutiny,[35] but they also reduce the companies’ ability to give their customers a valuable product in light of user (and advertiser) demand. For instance, the changes to the moderation standards of X (formerly Twitter) since the platform’s purchase by Elon Musk have led many users and advertisers to exit the platform due to a perceived increase in hate speech and misinformation.[36]

Social-media companies need to be able to moderate as they see fit, free from government interference. Such interference includes not just the forced carriage of speech, but also government efforts to engage in censorship-by-proxy, as has been alleged in Murthy v. Missouri.[37] From the perspective of the First Amendment, government intervention by coercing or significantly encouraging the removal of disfavored speech, even in the name of fighting misinformation, is just as harmful as the forced carriage of speech.[38] But more importantly for our purposes here, such government actions reduce platforms’ value by upsetting the balance that social-media companies strike with respect to their users’ speech interests.

Users can avoid being exposed to unwanted speech by averting their digital eyes from it—i.e., by refusing to interact with it and thereby training social-media companies’ algorithms to serve speech that they prefer. They can also take their business elsewhere by joining a social-media network with speech-moderation policies more to their liking. Voting with one’s digital feet (and eyes) is a much lower-cost alternative than either mandating the carriage of speech or censorship by government actors.

V. Conclusion

Social-media companies are multisided platforms that must curate compelling content while restricting harms to users in order to optimize their value to the advertisers that pay for access. This doesn’t mean they always get it right. But they are generally best-positioned to make those decisions, subject to the market process. Sometimes, there may be negative externalities that aren’t fully internalized. But as Coase taught us, that is only the beginning of the analysis. If social-media users can avoid harms at lower cost than social-media companies, then regulation should not place the burden on social-media companies. There are tradeoffs in social-media regulation, including the possibility that it will result in a less-valuable social-media experience for users.

[1] See, e.g., Mary Clare Jalonick, Congress eyes new rules for tech, social media: What’s under consideration, Associated Press (May 8, 2023), https://www.wvtm13.com/article/whats-under-consideration-congress-eyes-new-rules-for-tech-social-media/43821405#; Khara Boender, Jordan Rodell, & Alex Spyropoulos, The State of Affairs: What Happened in Tech Policy During 2023 State Legislative Sessions?, Project Disco (Jul. 25, 2023), https://www.project-disco.org/competition/the-state-of-affairs-statetech-policy-in-2023 (noting laws passed and proposed addressing consumer data privacy, content moderation, and children’s online safety at the state level).

[2] See, e.g., Jean-Charles Rochet & Jean Tirole, Platform Competition in Two-Sided Markets, 1 J. Euro. Econ. Ass’n 990 (2003).

[3] David S. Evans, Multisided Platforms in Antitrust Practice, at 3 (Oct. 17, 2023), forthcoming, Michael Noel, Ed., Elgar Encyclopedia on the Economics of Competition and Regulation, available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4606511.

[4] For instance, many nightclubs hold “Ladies Night” where ladies get in free in order to attract more men who pay for entrance.

[5] Transaction costs are the additional costs borne in the process of buying or selling, separate and apart from the price of the good or service itself — i.e. the costs of all actions involved in an economic transaction. Where transaction costs are present and sufficiently large, they may prevent otherwise beneficial agreements from being concluded.

[6] See David S. Evans, Governing Bad Behavior by Users of Multi-Sided Platforms, 27 Berkeley Tech. L. J. 1201 (2012); Kate Klonick, The New Governors: The People, Rules, and Processes Governing Online Speech, 131 HARV. L. REV. 1598 (2018).

[7] An externality is a side effect of an activity that is not reflected in the cost of that activity — basically, what occurs when we do something whose consequences affect other people. A negative externality occurs when an activity imposes costs on third parties who are not compensated for them.

[8] See R.H. Coase, The Problem of Social Cost, 3 J. L. & Econ. 1 (1960).

[9] See Steven G. Medema, The Coase Theorem at Sixty, 58 J. Econ. Lit. 1045 (2020).

[10] See Coase, supra note 8, at 8-10.

[11] See id. at 34 (“When an economist is comparing alternative social arrangements, the proper procedure is to compare the total social product yielded by these different arrangements.”).

[12] See Felix T. Wu, Collateral Censorship and the Limits of Intermediary Liability, 87 Notre Dame L. Rev. 293, 295-96 (2011); Geoffrey A. Manne, Ben Sperry & Kristian Stout, Who Moderates the Moderators: A Law & Economics Approach to Holding Online Platforms Accountable Without Destroying the Internet, 49 Rutgers Computer & Tech. L.J. 26, 39 (2022); Ben Sperry, The Law & Economics of Children’s Online Safety: The First Amendment and Online Intermediary Liability, Truth on the Market (May 12, 2023), https://truthonthemarket.com/2023/05/12/the-law-economics-of-childrens-online-safety-the-firstamendment-and-online-intermediary-liability.

[13] See Geoffrey A. Manne, Kristian Stout & Ben Sperry, Twitter v. Taamneh and the Law & Economics of Intermediary Liability, Truth on the Market (Mar. 8, 2023), https://truthonthemarket.com/2023/03/08/twitter-v-taamneh-and-the-law-economics-of-intermediary-liability; Ben Sperry, Right to Anonymous Speech, Part 2: A Law & Economics Approach, Truth on the Market (Sep. 6, 2023), https://truthonthemarket.com/2023/09/06/right-to-anonymous-speech-part-2-a-law-economics-approach.

[14] See, e.g., Matt Kaplan, What Do U.S. Consumers Think About Mobile Advertising?, InMobi (Dec. 15, 2021), https://www.inmobi.com/blog/what-us-consumers-think-about-mobile-advertising (55% of consumers agree or strongly agree that they prefer mobile apps with ads rather than paying to download apps); John Glenday, 65% of US TV viewers will tolerate ads for free content, according to report, The Drum (Apr. 22, 2022), https://www.thedrum.com/news/2022/04/22/65-us-tv-viewers-will-tolerate-ads-free-content-according-report (noting that a report from TiVO found 65% of consumers prefer free TV with ads to paying without ads). Consumers often prefer lower subscription fees with ads to higher subscription fees without ads as well. See, e.g., Toni Fitzgerald, Netflix Gets it Right: Study Confirms People Prefer Paying Less With Ads, Forbes (Apr. 25, 2023), https://www.forbes.com/sites/tonifitzgerald/2023/04/25/netflix-gets-it-right-study-confirms-more-people-prefer-paying-less-with-ads/.

[15] See 15 U.S.C. § 45.

[16] See Garrett A. Johnson, Tesary Lin, James C. Cooper, & Liang Zhong, COPPAcalypse? The YouTube Settlement’s Impact on Kids Content, at 6-7, SSRN (Apr. 26, 2023), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4430334.

[17] Id. at 1.

[18] See Sam Schechner, Meta Plans to Charge $14 a Month for Ad-Free Instagram or Facebook, Wall Street J. (Oct. 3, 2023), https://www.wsj.com/tech/meta-floats-charging-14-a-month-for-ad-free-instagram-or-facebook-5dbaf4d5.

[19] See Christopher Lin, Tools to Protect Your Privacy on Social Media, NetChoice (Nov. 16, 2023), https://netchoice.org/tools-to-protect-your-privacy-on-social-media/.

[20] See, e.g., Chris Stobing, The Best VPN Services for 2024, PC Mag (Jan. 4, 2024), https://www.pcmag.com/picks/the-best-vpn-services.

[21] See, e.g., Jonathan Stempel, Diane Bartz & Nate Raymond, Meta’s Instagram linked to depression, anxiety, insomnia in kids – US state’s lawsuit, Reuters (Oct. 25, 2023), https://www.reuters.com/legal/dozens-us-states-sue-meta-platforms-harming-mental-health-young-people-2023-10-24/ (describing complaint from 33 states alleging Meta “knowingly induced young children and teenagers into addictive and compulsive social media use”).

[22] See, e.g., California Age-Appropriate Design Code Act, AB 2273 (2022), https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202120220AB2273AADC; Kids Online Safety Act, S. 1409, 118th Cong. (2023), as amended and posted by the Senate Committee on Commerce, Science, and Transportation on July 27, 2023, available at https://www.congress.gov/bill/118th-congress/senate-bill/1409 (last accessed Dec. 19, 2023).

[23] See, e.g., Arkansas Act 689 of 2023, the “Social Media Safety Act.”

[24] Free Speech Coal. Inc. v. Colmenero, No. 1:23-CV-917-DAE, 2023 U.S. Dist. LEXIS 154065 (W.D. Tex., Aug. 31, 2023), available at https://storage.courtlistener.com/recap/gov.uscourts.txwd.1172751222/gov.uscourts.txwd.1172751222.36.0.pdf.

[25] Id. at 10.

[26] Id.

[27] NetChoice, LLC v. Griffin, Case No. 5:23-CV-05105 (W.D. Ark., Aug. 31, 2023), available at https://netchoice.org/wpcontent/uploads/2023/08/GRIFFIN-NETCHOICE-GRANTED.pdf.

[28] See id. at 23.

[29] See id. at 18-19.

[30] See id. at 19-20.

[31] Id. at 15.

[32] For more, see Ben Sperry, A Coasean Analysis of Online Age-Verification and Parental-Consent Regimes, at 23 (ICLE Issue Brief, Nov. 9, 2023), https://laweconcenter.org/wp-content/uploads/2023/11/Issue-Brief-Transaction-Costs-of-Protecting-Children-Under-the-First-Amendment-.pdf.

[33] For an example of a hearing where Congressional Democrats argue the former and Congressional Republicans argue the latter, see Preserving Free Speech and Reining in Big Tech Censorship, Libr. of Cong. (Mar. 28, 2023), https://www.congress.gov/event/118th-congress/house-event/115561.

[34] See Moody v. NetChoice, No. 22-277 (challenging Florida’s SB 7072); NetChoice v. Paxton, No. 22-555 (challenging Texas’s HB 20).

[35] See, e.g., Brief of International Center for Law & Economics as Amicus Curiae in Favor of Petitioners in 22-555 and Respondents in 22-277, Moody v. NetChoice, NetChoice v. Paxton, In the Supreme Court of the United States (Dec. 7, 2023), available at https://www.supremecourt.gov/DocketPDF/22/22-277/292986/20231211144416746_Nos.%2022-277%20and%2022-555_Brief_corrected.pdf.

[36] See, e.g., Ryan Mac & Tiffany Hsu, Twitter’s U.S. Ad Sales Plunge 59% as Woes Continue, New York Times (Jun. 5, 2023), https://www.nytimes.com/2023/06/05/technology/twitter-ad-sales-musk.html (“Six ad agency executives who have worked with Twitter said their clients continued to limit spending on the platform. They cited confusion over Mr. Musk’s changes to the service, inconsistent support from Twitter and concerns about the persistent presence of misleading and toxic content on the platform.”); Kate Conger, Tiffany Hsu & Ryan Mac, Elon Musk’s Twitter Faces Exodus of Advertisers and Executives, New York Times (Nov. 1, 2022), https://www.nytimes.com/2022/11/01/technology/elon-musk-twitter-advertisers.html (“At the same time, advertisers — which provide about 90 percent of Twitter’s revenue — are increasingly grappling with Mr. Musk’s ownership of the platform. The billionaire, who is meeting advertising executives in New York this week, has spooked some advertisers because he has said he would loosen Twitter’s content rules, which could lead to a surge in misinformation and other toxic content.”).

[37] See Murthy v. Missouri, No. 23A-243; see also Missouri v. Biden, No. 23-30445, slip op. (5th Cir. Sept. 8, 2023).

[38] See Ben Sperry, Knowledge and Decisions in the Information Age: The Law & Economics of Regulating Misinformation on Social Media Platforms, (ICLE White Paper Sept. 22, 2023), forthcoming 59 Gonz. L. Rev. (2023), available at https://laweconcenter.org/resources/knowledge-and-decisions-in-the-information-age-the-law-economics-of-regulating-misinformation-on-social-media-platforms/.

 


Continue reading
Innovation & the New Economy

NetChoice, the Supreme Court, and the State Action Doctrine

TOTM George Orwell’s “Nineteen Eighty-Four” is frequently invoked when political actors use language to obfuscate what they are doing. Ambiguity in language can allow both sides . . .

George Orwell’s “Nineteen Eighty-Four” is frequently invoked when political actors use language to obfuscate what they are doing. Ambiguity in language can allow both sides to appeal to the same words, like “the First Amendment” or “freedom of speech.” In a sense, the arguments over online speech currently before the U.S. Supreme Court really amount to a debate about whether private actors can “censor” in the same sense as the government.

In the oral arguments in this week’s NetChoice cases, several questions from Justices Clarence Thomas and Samuel Alito suggested that they believed social-media companies engaged in “censorship,” conflating the right of private actors to set rules for their property with government oppression. This is an abuse of language, and completely inconsistent with Supreme Court precedent that differentiates between state and private action.

Read the full piece here.

Continue reading
Innovation & the New Economy

ICLE Amicus to the 9th Circuit in NetChoice v Bonta

Amicus Brief INTEREST OF AMICUS CURIAE[1] The International Center for Law & Economics (“ICLE”) is a nonprofit, non-partisan global research and policy center that builds intellectual foundations . . .

INTEREST OF AMICUS CURIAE[1]

The International Center for Law & Economics (“ICLE”) is a nonprofit, non-partisan global research and policy center that builds intellectual foundations for sensible, economically grounded policy. ICLE promotes the use of law and economics methodologies and economic learning to inform policy debates and has longstanding expertise evaluating law and policy.

ICLE has an interest in ensuring that First Amendment law promotes the public interest by remaining grounded in sensible rules informed by sound economic analysis. ICLE scholars have written extensively on issues related to Internet regulation and free speech, including the interaction of privacy rules and the First Amendment.

SUMMARY OF ARGUMENT

While the District Court issued a preliminary injunction against California’s Age-Appropriate Design Code (AADC), it did so under the commercial speech standard of intermediate scrutiny. Below we argue that the Ninth Circuit should affirm the District Court’s finding that plaintiffs are likely to succeed on the merits in their First Amendment claim, but also make clear that the AADC’s rules that have the effect of restricting the access of minors to lawful speech should be subject to strict scrutiny.

The First Amendment protects an open marketplace of ideas. 303 Creative LLC v. Elenis, 600 U.S. 570, 143 S. Ct. 2298, 2311 (2023) (“‘[I]f there is any fixed star in our constitutional constellation,’ it is the principle that the government may not interfere with ‘an uninhibited marketplace of ideas.’”) (quoting West Virginia Bd. of Ed. v. Barnette, 319 U.S. 624, 642 (1943) and McCullen v. Coakley, 573 U.S. 464, 476 (2014)). In fact, the First Amendment protects speech in this marketplace whether the “government considers… speech sensible and well intentioned or deeply ‘misguided,’ and likely to cause ‘anguish’ or ‘incalculable grief.’”  303 Creative, 143 S. Ct. at 2312 (quoting Hurley v. Irish-American Gay, Lesbian and Bisexual Group of Boston, Inc., 515 U.S. 557, 574 (1995) and Snyder v. Phelps, 562 U.S. 443, 456 (2011)).

The protection of the marketplace of ideas necessarily includes the creation, distribution, purchasing, and receiving of speech. See Brown v. Ent. Merchs. Ass’n, 564 U.S. 786, 792 n.1 (2011) (“Whether government regulation applies to creating, distributing, or consuming speech makes no difference” for First Amendment purposes). In other words, it protects both the suppliers in the marketplace of ideas (creators and distributors) and the consumers (purchasers and receivers).

No less than other speakers, profit-driven firms involved in the creation or distribution of speech are protected by the First Amendment. See 303 Creative LLC v. Elenis, 600 U.S. 570, 600 (2023) (“[T]he First Amendment extends to all persons engaged in expressive conduct, including those who seek profit.”). This includes Internet firms that provide speech platforms. See Reno v. ACLU, 521 U.S. 844, 870 (1997); NetChoice, LLC v. Moody, 34 F.4th 1196, 1213 (11th Cir. 2022).

Even minors have a right to participate in the marketplace of ideas, including as purchasers and receivers. See Brown, 564 U.S. at 794-95 (government has no “free-floating power to restrict ideas to which children may be exposed”). This includes the use of online speech platforms. See NetChoice, LLC v. Griffin, 2023 WL 5660155, at *17 (W.D. Ark. Aug. 31, 2023) (finding Arkansas’s Act 689 “obviously burdens minors’ First Amendment rights” by “bar[ring] minors from opening accounts on a variety of social media platforms”).

This is important because online firms, especially those primarily involved in curating and creating content, are central to the modern marketplace of ideas. See Packingham v. North Carolina, 582 U.S. 98, 107 (2017) (describing the Internet as “the modern public square” where citizens can “explor[e] the vast realms of human thought and knowledge”).

Online firms primarily operate as what economists call “matchmakers” or “multisided platforms.” See David Evans & Richard Schmalensee, Matchmakers: The New Economics of Multisided Platforms 10 (2016). “[M]atchmakers’ raw materials are the different groups of customers that they help bring together. And part of the stuff they sell to members of each group is access to members of the other groups. All of them operate physical or virtual places where members of these different groups get together.  For this reason, they are often called multisided platforms.” Id. In this sense, they are very similar to newspapers and cable operators in attempting to attract attention through interesting content so that advertisers can reach them.

Online platforms bring together advertisers and users—including both speakers and listeners—by curating third-party speech as well as by producing their own content. The goal is to keep users engaged so advertisers can reach them. For many online platforms, advertisers cross-subsidize access to content for users, to the point that it is often free. Online platforms are in this sense “attention platforms” that supply content to their users while collecting data for targeted advertisements for businesses, which then pay for access to those users. To be successful, online platforms must keep enough—and the right type of—users engaged so as to maintain demand for advertising. But if platforms fail to curate and produce interesting content, users will use them less or leave altogether, making it less likely that advertisers will invest in these platforms.

The First Amendment protects this business model because it allows entities that have legally obtained data to use it both for the curation of speech for their users and for targeted advertising. See Sorrell v. IMS Health, Inc., 564 U.S. 552, 570-71 (2011) (finding that there is a “strong argument” that “information is speech for First Amendment purposes” and striking down a law limiting the ability of marketers to use prescriber-identifying information for pharmaceutical sales). The First Amendment also protects the gathering of information when it is “inherently expressive.” Cf. Project Veritas v. Schmidt, 72 F.4th 1043, 1055 (9th Cir. 2023) (citing cases that have found the act of filming or recording is inherently expressive activity). The gathering of online data for targeted advertising is as inherently expressive as the act of filming or recording is for creating media.

Moreover, due to the nature of online speech platforms, the collection and use of data is “inextricably intertwined” with the curation of protected, non-commercial speech. Cf. Riley v. Nat’l Fed’n of the Blind of N.C., 487 U.S. 781, 796 (1988); Dex Media West, Inc. v. City of Seattle, 696 F.3d 952, 958 (9th Cir. 2012).

By restricting use of data, the AADC will prevent online platforms from being able to tailor their products to their users, resulting in less relevant—and in the case of minors, less appropriate—content. Online platforms may also be less likely to effectively monetize through targeted advertisements. Either outcome may force platforms to change their business model, whether by switching to subscriptions or by excluding anyone who could possibly be a minor. Thus, restrictions on the collection and use of data for the curation of content and targeted advertising should be subject to strict scrutiny, as the result of such restrictions will be to restrict minors’ access to lawful online speech.

Under strict scrutiny, California bears the burden of showing it has a compelling governmental interest and that the restriction on speech is narrowly tailored to that interest. It can do neither.

First, California fails to establish a compelling government interest because it has failed to “identify an ‘actual problem’ in need of solving.” Brown, 564 U.S. at 799 (quoting United States v. Playboy Entertainment Group, Inc., 529 U.S. 803, 822-23 (2000)). There is no more evidence of a direct causal link between the use of online platforms subject to the AADC and harm to minors than there was from the video games at issue in Brown. Cf. id. at 799-801. In fact, the best available data does “not support the conclusion that social media causes changes in adolescent health at the population level.” See Nat’l Acad. Sci. Engineering & Med., Social Media and Adolescent Health at 92 (2023).

Second, California’s law is not narrowly tailored because the requirements that restrict minors’ access to lawful content are not the least restrictive means for protecting minors from potentially harmful content. Cf. Playboy, 529 U.S. at 823-25 (finding the voluntary use of blocking devices to restrict access to adult channels less restrictive than mandating the times such content may be made available); Ashcroft v. ACLU, 542 U.S. 656, 667-70 (2004) (finding filtering software a less restrictive alternative than age verification). Parents and minors have technological and practical means available to them that could allow them to avoid the putative harms of Internet use without restricting the access of others to lawful speech. Government efforts to promote the creation and use of such tools are a less restrictive way to promote the safety of minors online.

In sum, the AADC is unconstitutional because it would restrict the ability of minors to participate in the marketplace of ideas. The likely effects of the AADC on covered businesses will be to bar or severely restrict minors’ access to lawful content.

ARGUMENT

California has argued that the AADC regulates only “conduct” or “economic activity” or “data” and thus should not be subject to First Amendment scrutiny. See Ca. Brief at 28. But NetChoice is correct to emphasize that the AADC is content-based, as it is designed to prevent minors from being subject to certain kinds of “harmful” First Amendment-protected speech. See NetChoice Brief at 39-41. As such, the AADC’s rules should be subject to strict scrutiny. In this brief we emphasize a separate reason that the AADC should be subject to strict scrutiny: the restrictions on data gathering for curation of speech and targeted advertising will inevitably lead to less access to lawful online speech platforms for minors.

In Part I we argue that gathering data for the curation of speech and targeted advertising is protected by the First Amendment. In Part II we argue that the collection of data for those purposes is inextricably linked, and thus the AADC’s restrictions on the collection of data for those purposes should be subject to strict scrutiny. In Part III we argue that the AADC fails strict scrutiny, both for a lack of a compelling government interest and because its restrictions are not narrowly tailored.

I. GATHERING DATA FOR THE CURATION OF SPEECH AND TARGETED ADVERTISING IS PROTECTED BY THE FIRST AMENDMENT

Online platforms attract users by curating content and presenting it in an engaging way. Doing this effectively requires data. Moreover, that same data is useful for targeted advertising, the primary revenue source for most online platforms, which operate as multisided platforms. This is a protected business model under First Amendment principles.

First, decisions by communications platforms about how best to present information to their users are protected by the First Amendment. Cf. Miami Herald Pub. Co. v. Tornillo, 418 U.S. 241, 258 (1974) (“The choice of material to go into a newspaper, and the decisions made as to limitations on the size and content of the paper, and treatment of public issues and public officials—whether fair or unfair—constitute the exercise of editorial control and judgment.”). Limitations on the right of a communications platform to curate its own content come only from the marketplace of ideas itself: “The power of a privately owned newspaper to advance its own political, social, and economic views is bounded by… the acceptance of a sufficient number of readers—and hence advertisers—to assure financial success.” Id. at 255 (quoting Columbia Broad. Sys., Inc. v. Democratic Nat’l Comm., 412 U.S. 94, 117 (1973) (plurality)).

Second, the use of data for commercial purposes is protected by the First Amendment. See Sorrell, 564 U.S. at 567 (“While the burdened speech results from an economic motive, so too does a great deal of vital expression.”). No matter how much California wishes it were so, the AADC’s restrictions on the “sales, transfer, and use of” information are not simply regulation of economic activity. Cf. id. at 750. On the contrary, the Supreme Court “has held the creation and dissemination of information are speech within the meaning of the First Amendment.” Id. Among the protected uses of data is creating tailored content, including marketing. See id. at 557-58 (describing the use of “detailing,” where drug salespersons use the prescribing history of doctors to present a particular sales message).

Third, even the collection of information can be protected First Amendment activity. For instance, in Project Veritas, this court found that an audio or video recording “qualifies as speech entitled to the protection of the First Amendment.” See 72 F.4th at 1054. This is because the act of recording itself is “inherently expressive.” Id. at 1055. Recording is necessary to create the speech at issue.

Applying these principles here leads to the conclusion that the targeted advertising-supported business model of online platforms is protected by the First Amendment. Online platforms have a right to determine what to curate and how to display that content on their platforms, as they seek to discover whether it serves their users and advertisers in the marketplace of ideas, much like the newspaper in Tornillo. Using data to better curate content for users and to offer them more relevant advertisements is protected, as in Sorrell. And the collection of data to curate speech and offer users targeted advertisements is as “inherently expressive” as the act of recording is for making a video in Project Veritas.

II. STRICT SCRUTINY SHOULD APPLY TO THE AADC’S RESTRICTIONS ON DATA COLLECTION FOR THE CURATION OF SPEECH AND TARGETED ADVERTISING

The question remains what level of scrutiny the AADC’s restrictions on data collection for curation and targeted advertising should face. The District Court applied only intermediate scrutiny, assuming that this was commercial speech. See Op. at 10-11 (in part because the AADC’s provisions fail intermediate scrutiny anyway). But the court noted that if expression involved commercial and non-commercial speech that is “inextricably intertwined,” then strict scrutiny would apply. See id. at 10. This is precisely the case, as online multisided platforms must have data both to effectively curate content and to offer targeted advertisements which subsidize users’ access. Targeted advertising is inextricably intertwined with the free or reduced-price access of users to these online platforms.

Over time, courts have gained more knowledge of how multisided platforms work, specifically in the antitrust context. See Ohio v. American Express, 138 S. Ct. 2274, 2280-81 (2018) (describing how credit card networks work). But this also has important relevance in the First Amendment context where advertisements often fund the curation of content.

For instance, in Dex Media West, this court considered yellow page directories and found that the protected speech of the phonebooks (i.e. telephone numbers) was inextricably intertwined with the advertisements that help fund it. See 696 F.3d at 956-65. The court found the “[e]conomic reality” that “yellow pages directories depend financially upon advertising does not make them any less entitled to protection under the First Amendment.” Id. at 963-64. The court rejected the district court’s conclusion that “economic dependence was not sufficient to intertwine commercial and noncommercial elements of the publication,” id. at 964, as the same could be said of television stations or newspapers as well, but they clearly receive full First Amendment protection for their speech. The court concluded that:

Ultimately, we do not see a principled reason to treat telephone directories differently from newspapers, magazines, television programs, radio shows, and similar media that does not turn on an evaluation of their contents. A profit motive and the inclusion or creation of noncommercial content in order to reach a broader audience and attract more advertising is present across all of them. We conclude, therefore, that the yellow pages directories are entitled to full First Amendment protection. Id. at 965.

Here, this means the court should consider the interconnected nature of the free or reduced-price access to online content and targeted advertising that is empowered by data collection. Online platforms are, in this sense, indistinguishable “from newspapers, magazines, television programs, radio shows, and similar media…” that curate “noncommercial content in order to reach a broader audience and attract more advertising.” Id. The only constitutional limits on platforms’ editorial discretion arise from the marketplace of ideas itself. Cf. Tornillo, 418 U.S. at 255.

Finding otherwise would have detrimental effects on this business model. Without data collection, online platforms will serve not only less relevant content to users, but also less relevant advertising. This will make the platforms less lucrative for advertisers and lead to upward pricing pressure on the user side of online platforms. Online platforms will be forced to change their business models by either charging fees (or raising them) for access or excluding those users subject to the regulation. Excluding minors from accessing lawful speech clearly implicates the First Amendment and is subject to strict scrutiny. Cf. Brown, 564 U.S. at 794-95, 799 (the Act “is invalid unless California can demonstrate that it passes strict scrutiny”).

III. THE AADC FAILS STRICT SCRUTINY

The District Court determined that the AADC’s provisions would fail under either intermediate or strict scrutiny. This court should affirm the district court, but also make clear that strict scrutiny applies.

A. There Is No Compelling Government Interest

Under strict scrutiny, the government must “specifically identify an ‘actual problem’ in need of solving.” Brown, 564 U.S. at 799 (quoting Playboy, 529 U.S. at 822-23).

In Brown, the Supreme Court found that California’s evidence linking exposure to violent video games and harmful effects on children was “not compelling” because it did “not prove that violent video games cause minors to act aggressively.” Id. at 800 (emphasis in original). At best, there was a limited correlation that was “indistinguishable from effects produced by other media” not subject to the rules. Id. at 800-01.

The same is true here. The literature on the relationship between Internet use and harm to minors simply does not establish causation.

For instance, the National Academies of Science, Engineering, and Medicine has noted that there are both benefits and harms from social media use for adolescents. Nat’l Acad. Sci. Engineering & Med., Social Media and Adolescent Health at 4 (2023) (“[T]he use of social media, like many things in life, may be a constantly shifting calculus of the risky, the beneficial, and the mundane.”). There are some studies that show a very slight correlation between “problematic social media use” and mental health harms for adolescents. See Holly Shannon, et al., Problematic Social Media Use in Adolescents and Young Adults: Systematic Review and Meta-analysis, 9 JMIR Mental Health 1, 2 (2022) (noting “problematic use characterizes individuals who experience addiction-like symptoms as a result of their social media use”). But the “links between social media and health are complex.” Social Media and Adolescent Health at 89.

The reasons for this complexity include the direction of the relationship (i.e., is it because of social media usage that a person is depressed, or does someone use social media because they are depressed?) and whether both social media usage and mental health issues are influenced by other variables. Moreover, it is nearly impossible to find a control group that has not been exposed to social media. As a result, the National Academies’ extensive review of the literature “did not support the conclusion that social media causes changes in adolescent health at the population level.” Id. at 92.

The AADC applies to far more than just social media, however, extending to any “online service, product, or feature” that is “likely to be accessed by children.” See Cal. Civ. Code § 1798.99.30 (b)(4). There is little evidence that general Internet usage is correlated with harm to minors. According to one survey of the international literature, the prevalence of “Problematic Internet Use” among adolescents ranges anywhere from 4% to 20%. See Juan M. Machimbarrena et al., Profiles of Problematic Internet Use and Its Impact on Adolescents’ Health-Related Quality of Life, 16 Int’l J. Environ. Res. Public Health 1, 2 (2019). This level of harmful use suggests the AADC’s reach is overinclusive. Cf. Brown, 564 U.S. at 805 (Even when government ends are legitimate, if “they affect First Amendment rights they must be pursued by means that are neither seriously underinclusive nor seriously overinclusive.”).

Moreover, the rules at issue are also underinclusive, even assuming there was a causal link. The AADC does not extend to the same content delivered offline: content likely to be accessed by children, even if also supported by advertising, would not be subject to those regulations when delivered offline. California has offered no reason to think that accessing the same content while receiving advertising offline would be less harmful to minors. Cf. Brown, 564 U.S. at 801-02 (“California has (wisely) declined to restrict Saturday morning cartoons, the sale of games rated for young children, or the distribution of guns. The consequence is that its regulation is wildly underinclusive when judged against its asserted justification, which in our view is alone enough to defeat it.”).

In sum, California has not established a compelling state interest in protecting minors from harm allegedly associated with Internet usage.

B. The AADC Is Not Narrowly Tailored

Even assuming there is a compelling state interest in protecting minors from harms online, the AADC’s provisions restricting the collection and use of data for curating speech and targeted advertising are not narrowly tailored to that end. They are much more likely to lead to the complete exclusion of minors from online platforms, causing them to forgo the many benefits of Internet usage. See Social Media and Adolescent Health at 4-5 (listing benefits of social media usage for adolescents). A less restrictive alternative would be promoting the use of practical and technological means by parents and minors to avoid the harms associated with Internet usage, or to avoid specifically harmful forms of Internet use.

For instance, the AADC requires covered online platforms to “[e]stimate the age of child users with a reasonable level of certainty appropriate to the risks” or “apply the privacy and data protections afforded to children” under the Act to “all consumers.” Cal. Civ. Code § 1798.99.31(a)(5). These privacy and data protections would severely limit by default the curation of speech and targeted advertising. See Cal. Civ. Code § 1798.99.31(a)(6); (b)(2)-(4). This would reduce the value of the online platforms to all users, who would receive less relevant content and advertisements.

Rather than leading to more privacy protection for minors, such a provision could result in more privacy-invasive practices or the exclusion of minors from the benefits of online platforms altogether. There is simply no foolproof method for estimating a user’s age.

Platforms typically use one of four methods: self-declaration, user-submitted hard identifiers, third-party attestation, and inferential age assurance. See Scott Babwah Brennen & Matt Perault, Keeping Kids Safe Online: How Should Policymakers Approach Age Verification?, at 4 (The Ctr. for Growth and Opportunity at Utah State University and University of North Carolina Ctr. on Tech. Pol’y Paper, Jun. 2023), https://www.thecgo.org/wp-content/uploads/2023/06/Age-Assurance_03.pdf. Each method comes with tradeoffs. While self-declaration allows users to simply lie about their age, other methods can be quite privacy-invasive. For instance, requiring users to submit hard identifiers, like a driver’s license or passport, may enable platforms to more accurately assess age in some circumstances and may make it more difficult for minors to fabricate their age, but it also poses privacy and security risks. It requires platforms to collect and process sensitive data, requires platforms to develop expertise in ID verification, and may create barriers to access for non-minor users who lack an acceptable form of identification. Courts have consistently found age verification requirements to be an unconstitutional barrier to access to online content. See Ashcroft v. ACLU; NetChoice, LLC v. Griffin; NetChoice v. Yost, 2024 WL 555904 (S.D. Ohio, Feb. 12, 2024); Free Speech Coal., Inc. v. Colmenero, 2023 WL 5655712, at *15-16 (W.D. Tex. Aug. 31, 2023) (available age verification services “amplif[y]” privacy concerns and “exacerbate[]” “First Amendment injury,” including chilling effect).

But even age assurance or age estimation comes with downsides. For instance, an online platform could use AI systems to estimate age based on an assessment of the content and behavior associated with a user. But to develop this estimate, platforms must implement technical systems to collect, review, and process user data, including minors’ data. These methods may also result in false positives, where a platform reaches an inaccurate determination that a user is underage, which would result in a different set of privacy defaults under the AADC. See Cal. Civ. Code § 1798.99.31(a)(6); (b)(2)-(4). Errors are sufficiently common that some platforms have instituted appeals mechanisms so that users can contest an age-related barrier. See, e.g., Minimum age appeals on TikTok, TikTok, https://support.tiktok.com/en/safety-hc/account-and-user-safety/minimum-age-appeals-on-tiktok (last accessed Feb. 12, 2024). Not only is the development of such mechanisms costly to online platforms, but it is potentially very costly to those mislabeled as well.

Another possibility is that, to avoid significantly changing business models predicated on curation and targeted advertising, online platforms may restrict access by any users they have reason to believe are minors. Cf. Op. at 8 (noting evidence that “age-based regulations would ‘almost certain[ly] [cause] news organizations and others [to] take steps to prevent those under 18 from accessing online news content, features, or services.’”) (quoting Amicus Curiae Br. of New York Times Co. & Student Press Law Ctr. at 6).

The reason why this is likely flows from an understanding of the economics of multisided markets mentioned above. Restricting the already limited expected revenue from minors through limits on the ability to do targeted advertising, combined with strong civil penalties for failure to live up to the provisions of the AADC with respect to minors, will encourage online platforms to simply exclude them altogether. See Cal. Civ. Code § 1798.99.35(a) (authorizing penalties of up to $7,500 per “affected child”).

Much less restrictive alternatives are possible. California could promote online education for both minors and parents which would allow them to take advantage of widely available technological and practical means to avoid online harms. Cf. Ashcroft, 542 U.S. at 666-68 (finding filtering software is a less restrictive alternative than age verification to protect minors from inappropriate content). Investing in educating the youth in media literacy could be beneficial for avoiding harms associated with problematic Internet use. See Social Media and Adolescent Health at 8-10 (arguing for training and education so young people can be empowered to protect themselves).

If anything, there are more technological ways for parents and minors to work together to avoid online harms today. For instance, there are already tools to monitor and limit how minors use the Internet available from cell carriers and broadband providers, on routers and devices, from third-party applications, and even from online platforms themselves. See Ben Sperry, A Coasean Analysis of Online Age-Verification and Parental-Consent Regimes, at 20-21 (ICLE Issue Brief 2023-11-09), https://laweconcenter.org/wp-content/uploads/2023/11/Issue-Brief-Transaction-Costs-of-Protecting-Children-Under-the-First-Amendment-.pdf. Even when it comes to privacy, educating parents and minors on how to protect their information when online would be a less restrictive alternative than restricting the use of data collection for targeted advertising.

CONCLUSION

The free marketplace of ideas is too important to be restricted, even in the name of protecting children. Minors must be able to benefit from the modern public square that is the Internet. The AADC would throw “the baby out with the bathwater.” Op. at 16. The court should affirm the judgment of the district court.

[1] All parties have consented to the filing of this brief.  See Fed. R. App. P. 29(a)(2).  No counsel for any party authored this brief in whole or in part, no party or party’s counsel has contributed money intended to fund the preparation or submission of the brief, and no individual or organization contributed funding for the preparation and submission of the brief.  See id. 29(a)(4)(E).

Continue reading
Innovation & the New Economy

ICLE’s Amicus Briefs on the Future of Online Speech

TOTM Over the past few months, we at the International Center for Law & Economics (ICLE) have endeavored to bring the law & economics methodology to . . .

Over the past few months, we at the International Center for Law & Economics (ICLE) have endeavored to bring the law & economics methodology to the forefront of several major public controversies surrounding online speech. To date, ICLE has engaged these issues by filing two amicus briefs before the U.S. Supreme Court, and another in Ohio state court.

The basic premise we have outlined is that online platforms ought to retain the right to engage in the marketplace of ideas by exercising editorial discretion, free from government interference. A free marketplace of ideas best serves both the users of these platforms and society at large.

Read the full piece here.

Continue reading
Innovation & the New Economy

Google’s Search Service Is Not a Phone or Rail Company as Ohio AG Yost Contends in Lawsuit

Popular Media Ohio has made the news, but not for the success of Ohio State football or the induction of a new musical act into the Rock . . .

Ohio has made the news, but not for the success of Ohio State football or the induction of a new musical act into the Rock and Roll Hall of Fame. This time, it’s because of Ohio Attorney General Dave Yost’s quixotic effort to have Google’s search engine declared a common carrier.

Traditionally, a common carrier is a business that opens itself indiscriminately to public use. It is on this basis that Ohio, other states, and the federal government all have imposed nondiscrimination requirements on entities like railroads and telephone companies.

Read the full piece here.

 

Continue reading
Innovation & the New Economy

ICLE Amicus to US Supreme Court in Murthy v Missouri

Amicus Brief INTEREST OF AMICUS CURIAE[1] The International Center for Law & Economics (“ICLE”) is a nonprofit, nonpartisan global research and policy center aimed at building the . . .

INTEREST OF AMICUS CURIAE[1]

The International Center for Law & Economics (“ICLE”) is a nonprofit, nonpartisan global research and policy center aimed at building the intellectual foundations for sensible, economically sound policy.  ICLE promotes the use of law-and-economics methods and economic learning to inform policy debates.

ICLE has an interest in ensuring that First Amendment law promotes the public interest, the rule of law, and a rich marketplace of ideas.  To this end, ICLE’s scholars write extensively on social media regulation and free speech.  E.g., Int’l Ctr. for Law & Econ. Am. Br., Moody v. NetChoice, LLC, NetChoice, LLC v. Paxton, Nos. 22-277, 22-555 (Dec. 7, 2023); Ben Sperry, Knowledge and Decisions in the Information Age: The Law & Economics of Regulating Misinformation on Social-Media Platforms, 59 Gonzaga L. Rev. ___ (2024) (forthcoming); Geoffrey Manne, Ben Sperry & Kristian Stout, Who Moderates the Moderators?: A Law & Economics Approach to Holding Online Platforms Accountable Without Destroying the Internet, 49 Rutgers Computer & Tech. L. J. 26 (2022); Internet Law Scholars Am. Br., Gonzalez v. Google LLC, 21-1333 (Jan. 19, 2023); Ben Sperry, An L&E Defense of the First Amendment’s Protection of Private Ordering, Truth on the Market (Apr. 23, 2021), https://bit.ly/49tZ7XD.

ICLE is concerned about government meddling in—and the resulting impoverishment of—the marketplace of ideas.  That meddling is on display in this case—and another case before the Court this Term.  See No. 22-842, Nat’l Rifle Ass’n of Am. v. Vullo (state official coerced insurance companies not to partner with gun-rights organization to cover losses from gun use).  But this case and Vullo merely illustrate a larger problem.  See Backpage.com, LLC v. Dart, 807 F.3d 229 (7th Cir. 2015) (sheriff campaigned to shut down Backpage.com by pressuring Visa and Mastercard to stop processing Backpage transactions); Heartbeat Int’l, Inc. Am. Br. at 4–10, Vullo, supra (collecting examples); Will Duffield, Jawboning Against Speech: How Government Bullying Shapes the Rules of Social Media, Cato Inst. (Sept. 12, 2022) (collecting examples), bit.ly/41NEhjb; Victor Nava, Amazon “censored” COVID-19 vaccine books after “feeling pressure” from Biden White House: docs, New York Post (Feb. 5, 2024), https://bit.ly/3Sq5152.  With this brief, ICLE urges the Court to enforce the Constitution to protect the marketplace of ideas from all such government intrusions.

SUMMARY OF ARGUMENT

The First Amendment protects a public marketplace of ideas free from government interference.

“The First Amendment directs us to be especially skeptical of regulations that seek to keep people in the dark for what the government perceives to be their own good.” Sorrell v. IMS Health Inc., 564 U.S. 552, 577 (2011) (citation omitted).

“Our representative democracy only works if we protect the ‘marketplace of ideas.’  This free exchange facilitates an informed public opinion, which, when transmitted to lawmakers, helps produce laws that reflect the People’s will.  That protection must include the protection of unpopular ideas, for popular ideas have less need for protection.”  Mahanoy Area Sch. Dist. v. B.L., 594 U.S. ___, 141 S. Ct. 2038, 2046 (2021).

Without a free marketplace of ideas, bad ideas persist and fester.  With a free marketplace of ideas, they get challenged and exposed.  When we think of the marketplace, we think of Justice Holmes dissenting in Abrams v. United States, 250 U.S. 616, 630 (1919).  But the insight behind the concept dates back thousands of years, at least to the Hebrew Bible, and has been recognized by, among others, John Milton, the Founders, and John Stuart Mill.  The insight is that the solution for false speech is true speech.  The government may participate in the marketplace of ideas by speaking for itself.  But it ruins the marketplace by coercing speech.

This Court has long stressed the danger of restricting speech on public health, where information can save lives. Several respondents here are elite professors of medicine who dissented from the scientific judgments of government officials. The professors were just the kind of professionals whose views the public needed to make informed decisions.  Instead, the government pressured social media websites to suppress the professors’ views, which the government, at least at the time, saw as outside the mainstream.

Government intervention like this undermines the scientific enterprise.  The goal of science is not to follow the current consensus, but to challenge it with hard data.  For that challenge to happen, the government must not interfere with the open marketplace of ideas, where the current consensus can always yield to a new and better one.

As the “purchasers” in the marketplace of ideas, the people—including respondents here—were stripped of their First Amendment right to make informed decisions on crucial matters of public health. The right to speak includes a corresponding right to receive speech.  Based on the record here, respondent states can likely show that petitioners trampled on their right to receive information and ideas published by websites.  Similarly, respondent individuals will likely be able to show that they have been robbed of their right to hear other suppressed speakers.

Today, the marketplace of ideas is stocked, in part, by social media companies exercising editorial discretion. What distinguishes one site from another is what it will, and will not, publish.  As commentators have noted, in the online world, content moderation is the product.  Social media companies are what economists call multi-sided platforms, which connect advertisers with users by curating third-party speech.  The better platforms become at curating speech, the more users engage, and the more valuable advertising becomes to advertisers and users alike.

At times, keeping users engaged requires removing harmful speech or even disruptive users.  But platforms must strike a balance in their content-moderation policies—allowing enough speech to attract users, but not so much speech that users are driven away.  Operating in the marketplace, social media companies are best placed to strike this balance.

Even if the online marketplace did not operate very efficiently (it does), it could not permissibly be controlled by the government.  The First Amendment forbids any abridgement of speech, including speech on the internet.  The way a website adjusts to the market shows what it thinks deserves “expression, consideration, and adherence,” or is “worthy of presentation” (phrases this Court has used to describe protected editorial discretion).  Pressuring social media companies to take down content changes the content of the platforms’ speech, intrudes on their editorial discretion, and violates the Constitution.

Given the record respondents have compiled, it is likely that they can show coercion by federal officials. The Fifth Circuit agreed, but its test for coercion fell short of the test applied in Bantam Books.  The focus of Bantam Books is not on the subjective understanding of the private actor, but on what the state actors objectively did—namely, was it reasonably understood as attempting to coerce private action?

Here it was.  Indeed, the allegations here include (a) many threats to have social media companies investigated, prosecuted, and regulated if they fail to remove disfavored speech, coupled with (b) extensive use of private meetings, emails, and digital portals to pressure social media companies to remove speech.  That was attempted coercion, and it was unlawful.

The remedy for unlawful coercion is an injunction against, or in some cases, damages from, government actors.  The court below focused the injunction on federal officials.  That was correct.  The marketplace of ideas—now freed from impermissible government intervention by the injunction—leaves its participants free to exercise their editorial discretion as they see fit.  The judgment should be affirmed.

ARGUMENT

I.       The First Amendment protects the marketplace of ideas from government meddling.

A.     A marketplace offering only government-approved ideas is no marketplace, logically and as historically understood.

The First Amendment protects an open marketplace of ideas.  “By allowing all views to flourish, the framers understood, we may test and improve our own thinking both as individuals and as a Nation.”  303 Creative LLC v. Elenis, 600 U.S. 570, 143 S. Ct. 2298, 2311 (2023).  “‘[I]f there is any fixed star in our constitutional constellation,’ it is the principle that the government may not interfere with ‘an uninhibited marketplace of ideas.’”  Id. (quoting West Virginia Bd. of Ed. v. Barnette, 319 U.S. 624, 642 (1943) and McCullen v. Coakley, 573 U.S. 464, 476 (2014)).

“[U]ninhibited” means uninhibited. “[T]he First Amendment protects an individual’s right to speak his mind regardless of whether the government considers his speech sensible and well intentioned or deeply ‘misguided,’ and likely to cause ‘anguish’ or ‘incalculable grief.’” 303 Creative, 143 S. Ct. at 2312 (quoting Hurley v. Irish-American Gay, Lesbian and Bisexual Group of Boston, Inc., 515 U.S. 557, 574 (1995) and Snyder v. Phelps, 562 U.S. 443, 456 (2011)).  “The First Amendment directs us to be especially skeptical of regulations that seek to keep people in the dark for what the government perceives to be their own good.”  Sorrell, 564 U.S. at 577 (citation omitted).  Without zealous protection, unpopular speech may be “chill[ed],” “would-be speakers [may] remain silent,” and “society will lose their contributions to the ‘marketplace of ideas.’”  United States v. Hansen, 599 U.S. 762, 143 S. Ct. 1932, 1939–40 (2023) (quoting Virginia v. Hicks, 539 U.S. 113, 119 (2003)).  Nor do speakers “shed their First Amendment protections by employing the corporate form to disseminate their speech.”  303 Creative, 143 S. Ct. at 2316.

When the marketplace of ideas is impoverished, it is not only “society” that loses (Hansen, 143 S. Ct. at 1939–40); it is democracy itself.  “Our representative democracy only works if we protect the ‘marketplace of ideas.’  This free exchange facilitates an informed public opinion, which, when transmitted to lawmakers, helps produce laws that reflect the People’s will.  That protection must include the protection of unpopular ideas, for popular ideas have less need for protection.”  Mahanoy Area Sch. Dist., 141 S. Ct. at 2046.  “A democratic people must be able to freely generate, debate, and discuss * * * ideas, hopes, and experiences.  They must then be able to transmit their resulting views and conclusions to their elected representatives[.]  Those representatives can respond by turning the people’s ideas into policies.  The First Amendment, by protecting the marketplace and the transmission of ideas, thereby helps to protect the basic workings of democracy itself.”  City of Austin v. Reagan Nat’l Advert. of Austin, LLC, 596 U.S. 61, 142 S. Ct. 1464, 1476–77 (2022) (Breyer, J., concurring) (internal citations and quotation marks omitted).  In short, “[t]he First Amendment was fashioned to assure unfettered interchange of ideas for the bringing about of political and social changes desired by the people.”  Meyer v. Grant, 486 U.S. 414, 421 (1988) (internal citation and quotation marks omitted).

Without a free marketplace of ideas, bad ideas flourish, unchallenged by competition. “[T]ime has upset many fighting faiths”; and “the ultimate good desired is better reached by free trade in ideas—that the best test of truth is the power of the thought to get itself accepted in the competition of the market[.]  That at any rate is the theory of our Constitution.”  Abrams, 250 U.S. at 630 (Holmes, J., dissenting).  With a free marketplace, however, people enjoy the liberty to be wrong—even as their mistaken ideas tend to get exposed.  For this reason, after the divisive presidential election of 1800, winner Thomas Jefferson urged toleration of dissenters.  Even those in favor of changing our form of government, he urged, should be left “undisturbed as monuments of the safety with which error of opinion may be tolerated where reason is left free to combat it.”  First Inaugural Address (Mar. 4, 1801), https://bit.ly/42tAxUt.

Of course, neither Holmes nor Jefferson was the first to recognize that the best ideas emerge from the crucible of competition.  Thousands of years before the American republic, the Hebrew Bible observed that  “[t]he one who states his case first seems right, until the other comes and examines him.”  Prov. 18:17.   Much later, John Milton and John Stuart Mill would sound similar themes.  “Even a false statement may be deemed to make a valuable contribution to public debate, since it brings about ‘the clearer perception and livelier impression of truth, produced by its collision with error.’”  N.Y. Times Co. v. Sullivan, 376 U.S. 254, 279 n.19 (1964) (quoting Mill, On Liberty 15 (1947) and citing Milton, Areopagitica, Prose Works, Vol. II 561 (1959)).

In sum, “[t]he remedy for speech that is false is speech that is true.  This is the ordinary course in a free society.  The response to the unreasoned is the rational; to the uninformed, the enlightened; to the straight-out lie, the simple truth.”  United States v. Alvarez, 567 U.S. 709, 727–28 (2012) (plurality).  “And suppression of speech by the government can make exposure of falsity more difficult, not less so.  Society has the right * * * to engage in open, dynamic, rational discourse.  These ends are not well served when the government seeks to orchestrate public discussion through content-based mandates.”  Id. at 728.  “If there be time to expose through discussion the falsehood and fallacies, to avert the evil by the processes of education, the remedy to be applied is more speech, not enforced silence.”  Whitney v. California, 274 U.S. 357, 377 (1927) (Brandeis, J., concurring).

Of course, the government itself may participate in the marketplace of ideas. Government agencies concerned about health or election misinformation may use social media platforms to broadcast their message.  Those agencies may even amplify and target their counter-speech through advertising campaigns tailored to those most likely to share or receive misinformation—including by creating their own apps or social media websites.

All these steps would combat alleged online misinformation in a way that promotes the marketplace of ideas rather than restricting it.  What is more, presidents may always directly use the bully pulpit to advocate their views.  Pet. Br. 24–25 (listing examples of presidential statements criticizing protected speech).  What the government may not do, as petitioners necessarily concede, is “use its authority to suppress contrary views.”  Id. at 23.  As the record shows, that is exactly what happened in this case.

Finally, protecting the marketplace of ideas from government interference of course does not guarantee that the best ideas win. To the contrary, the marketplace will still see a “good deal of market failure”—if success is measured by the truth winning out. Ronald Coase, The Market for Goods and the Market for Ideas, 64 Am. Econ. Rev. 384, 385 (1974).  But “that different costs and benefits must be balanced does not in itself imply who must balance them,” much less how the balance should be struck.  Thomas Sowell, Knowledge and Decisions 240 (1996).

In the First Amendment, the Founders struck the balance in favor of liberty.  However flawed an open marketplace of ideas may be, they decided, it is better than censorship.  “The liberal defense of free speech is not based on any claim that the market for ideas somehow eliminates error or erases human folly.  It is based on a comparative institutional analysis in which most state interventions make a bad situation worse.”  Roger Koppl, Expert Failure 217 (2018).

B.     As this Court instructs, it is especially crucial that the marketplace of ideas be uninhibited on matters of public health.

It is precisely this judgment of the Founders—that state interventions in the marketplace of ideas “make a bad situation worse” (Koppl, supra, at 217) —that petitioners here ignored.  White House officials pressured websites to take down “[c]laims that have been ‘debunked’ by public health authorities.”  J.A. 98.  So-called misinformation was itself dubbed an “urgent public health crisis.”  J.A. 113.  Indeed, said the Surgeon General, “misinformation poses an imminent threat to the nation’s health and takes away the freedom to make informed decisions.”  J.A. 125 (emphasis added).  These assertions are dead wrong—backwards even.  Public health is the last area in which the government should be deciding “which ideas should prevail.”  Nat’l Inst. of Family & Life Advocates v. Becerra, 138 S. Ct. 2361, 2375 (2018) (“NIFLA”).  “[T]his Court has stressed the danger of content-based regulations ‘in the fields of medicine and public health, where information can save lives.’”  Ibid. (quoting Sorrell, 564 U.S. at 566 (striking down statute restricting publication of pharmacy records)).

Several respondents here are professors of medicine at elite institutions who disagreed with the scientific judgments of government officials.  In other words, they were just the kind of professionals whose views the public needed “to make informed decisions.” J.A. 125.  Instead, the government pressured social media websites to suppress these professionals’ views, which the government at the time viewed as outside the mainstream.

“As with other kinds of speech, regulating the content of professionals’ speech ‘pose[s] the inherent risk that the Government seeks not to advance a legitimate regulatory goal, but to suppress unpopular ideas[.]’”  NIFLA, 138 S. Ct. at 2374 (quoting Turner Broad. Sys., Inc. v. FCC, 512 U.S. 622, 641 (1994)).  “Take medicine, for example.  Doctors help patients make deeply personal decisions, and their candor is crucial.”  NIFLA, 138 S. Ct. at 2374.  Yet “[t]hroughout history, governments have ‘manipulat[ed] the content of doctor-patient discourse’ to increase state power and suppress minorities”:

For example, during the Cultural Revolution, Chinese physicians were dispatched to the countryside to convince peasants to use contraception. In the 1930s, the Soviet government expedited completion of a construction project on the Siberian railroad by ordering doctors to both reject requests for medical leave from work and conceal this government order from their patients.  In Nazi Germany, the Third Reich systematically violated the separation between state ideology and medical discourse. German physicians were taught that they owed a higher duty to the ‘health of the Volk’ than to the health of individual patients. Recently, Nicolae Ceausescu’s strategy to increase the Romanian birth rate included prohibitions against giving advice to patients about the use of birth control devices and disseminating information about the use of condoms as a means of preventing the transmission of AIDS.  Ibid. (quoting Thomas Berg, Toward a First Amendment Theory of Doctor-Patient Discourse and the Right To Receive Unbiased Medical Advice, 74 B.U. L. Rev. 201, 201–202 (1994) (footnotes omitted)).

None of this government interference makes sense if the goal is to discover the truth.  And that is the goal of the scientific enterprise:  to discover the truth by testing hypotheses.  The goal is not to follow the current consensus.  “The notion that scientists should agree with a consensus is contrary to how science advances—scientists challenge each other, ask difficult questions and explore paths untaken.  Expectations of conformance to a consensus undercuts scientific inquiry.  It also lends itself to the weaponization of consensus to delegitimize or deplatform inconvenient views, particularly in highly politicized settings.”  Roger Pielke, Jr., The Weaponization of “Scientific Consensus,” American Enterprise Institute (Feb. 5, 2024), https://bit.ly/3OBH3Tj.

We saw just this politicization during the recent pandemic.  “Reputable scientists and physicians have questioned—and in many cases debunked—the ‘official’ narratives on lockdowns, school closures, border testing, vaccine mandates, endless boosters, bivalent COVID shots, epidemic forecasting, natural immunity, vaccine-induced myocarditis, and more.  * * *  But it’s become untenable for those in charge to defend many of their initial positions.”  Matt Strauss, Marta Shaw, J. Edward Les & Pooya Kazemi, COVID dissent wasn’t always misinformation, but it was censored anyway, National Post (Mar. 1, 2023), https://bit.ly/3SQZ6Yb.  Yet that did not stop many of those in charge, in the meantime, from using government power effectively to censor dissenters.  That is what happened in this case.  As one liberal member of Congress said of the “lab leak” theory of COVID’s origin—itself a key exhibit in the shifting of accepted thinking about COVID—“If you take partisan politics and you mix that with science * * *, it’s a toxic combination.”  Sheryl Gay Stolberg & Benjamin Mueller, Lab Leak or Not? How Politics Shaped the Battle Over Covid’s Origin, New York Times (Mar. 19, 2023) (quoting U.S. Rep. Anna Eshoo).

In sum, “[p]rofessionals might have a host of good-faith disagreements, both with each other and with the government, on many topics in their respective fields.  Doctors and nurses might disagree about the ethics of assisted suicide or the benefits of medical marijuana; lawyers and marriage counselors might disagree about the prudence of prenuptial agreements or the wisdom of divorce; bankers and accountants might disagree about the amount of money that should be devoted to savings or the benefits of tax reform.  ‘[T]he best test of truth is the power of the thought to get itself accepted in the competition of the market,’ and the people lose when the government is the one deciding which ideas should prevail.”  NIFLA, 138 S. Ct. at 2374–75 (quoting Abrams, 250 U.S. at 630 (Holmes, J., dissenting)).  The people lost here.

C.     A marketplace offering only government-approved ideas violates the rights of speakers and listeners, the overlooked “purchasers” in the marketplace.

The people’s loss is constitutionally cognizable.  As the “purchasers” in the marketplace of ideas, the people—including respondents here—were robbed of their First Amendment right to make informed decisions.  After all, the right to speak includes a “reciprocal” right to receive speech.  Va. State Bd. of Pharm. v. Va. Citizens Consumer Council, 425 U.S. 748, 757 (1976); see First Amend. and Internet Law Scholars Am. Br., Moody v. NetChoice LLC, NetChoice LLC v. Paxton, Nos. 22-277, 22-555, at 4–5 (Dec. 6, 2023) (collecting authorities).  “To suppress free speech is a double wrong.  It violates the rights of the hearer as well as those of the speaker.  It is just as criminal to rob a man of his right to speak and hear as it would be to rob him of his money.”  Frederick Douglass, Address: A Plea for Free Speech in Boston (1860), in Great Speeches by Frederick Douglass 48, 50 (2013) (quoted in First Amend. and Internet Law Scholars Am. Br, supra, at 4–5).

Stated differently, “[t]he First Amendment protects ‘speech’ and not just speakers.”  Eugene Volokh, Mark Lemley & Peter Henderson, Freedom of Speech and AI Output, 3 J. Free Speech L. 653, 656 (2023).  As a result, “th[is] Court has long recognized First Amendment rights ‘to hear’ and ‘to receive information and ideas.’”  Id. at 657 & n.11 (citing, among other cases, Kleindienst v. Mandel, 408 U.S. 753, 762–763 (1972) (“In a variety of contexts this Court has referred to a First Amendment right to receive information and ideas”) (internal quotation marks omitted); Stanley v. Georgia, 394 U.S. 557, 564 (1969) (“It is now well established that the Constitution protects the right to receive information and ideas.”); Thomas v. Collins, 323 U.S. 516, 534 (1945) (“That there was restriction upon Thomas’ right to speak and the rights of the workers to hear what he had to say, there can be no doubt.”)).

Based on the record respondents have built, Missouri and Louisiana can likely show that petitioners have trampled on their right to “hear” and to “receive information and ideas” published by websites.  Volokh, supra, at 656–657; Resp. Br. 25–27.  And by the same token, respondent individuals will likely be able to show that they have been robbed of their right to hear other suppressed speakers, “whom [respondents] follow, engage with, and re-post on social media.”  Resp. Br. 22.  The judgment should be affirmed.

II.    Websites stock the online marketplace of ideas by exercising editorial discretion.

By effectively forcing websites to take down certain content, the government here “alte[red] the content of [the websites’] speech.”  NIFLA, 138 S. Ct. at 2371 (internal citation omitted).  Such laws “are presumptively unconstitutional and may be justified only if the government proves that they are narrowly tailored to serve compelling state interests.”  Reed v. Town of Gilbert, 576 U.S. 155, 163 (2015).  “This stringent standard reflects the fundamental principle that governments have no power to restrict expression because of its message, its ideas, its subject matter, or its content.”  NIFLA, 138 S. Ct. at 2371 (internal citation and quotation marks omitted).  Nor is government control necessary in the competitive marketplace of ideas stocked by social media companies.

What distinguishes one site from another is what it publishes and refuses to publish. “[C]ontent moderation is the product.” Thomas Germain, Actually, Everyone Loves Censorship. Even You., GIZMODO (Feb. 22, 2023) (emphasis added), http://bit.ly/3Rge8pI.  As private participants in the marketplace of ideas, social media firms set their own editorial policies and choose which ideas to publish.  “The Free Speech Clause does not prohibit private abridgment of speech.”  Manhattan Cmty. Access Corp. v. Halleck, 139 S. Ct. 1921, 1928 (2019) (emphasis in original).  Even as they openly publish the speech of others, social media platforms do not “lose the ability to exercise what they deem to be appropriate editorial discretion,” because then they would “face the unappetizing choice of allowing all comers or closing the platform altogether.”  Id. at 1931.  In turn, users participate in the marketplace of ideas by choosing which social media website best meets their needs, including through its respective moderation policies.

Social media firms are what economists call “matchmakers” or “multi-sided” platforms.  David Evans & Richard Schmalensee, Matchmakers: The New Economics of Multisided Platforms 10 (2016).  “[M]atchmakers’ raw materials are the different groups of customers that they help bring together.  And part of the stuff they sell to members of each group is access to members of the other groups.  All of them operate physical or virtual places where members of these different groups get together.  For this reason, they are often called multisided platforms.”  Ibid.  Social media firms bring together advertisers and users—including both speakers and listeners—by curating third-party speech.  Curating speech well keeps users engaged so advertisers can reach them.

At times, keeping users engaged requires removing harmful speech, or even removing users who break the rules.  See David Evans, Governing Bad Behavior by Users of Multi-Sided Platforms, 27 Berkeley Tech. L.J. 1201, 1215 (2012).  But a social media company cannot go too far in restricting speech that users value.  Otherwise, users will visit the platform less or even abandon it for other companies in the “attention market”—which includes not only other platforms, but newspapers, magazines, television, games, and apps.  Facing the prospect of fewer engaged users, advertisers will expect lower returns and invest less in the platform.  Eventually, if too many customers flee, the social media company will fail.

Social media companies must also consider brand-conscious advertisers who may not want to be associated with perceived misinformation or other harmful speech.  To take just one example, advertisers reportedly left X after that company loosened its moderation practices.  Ryan Mac, Brooks Barnes & Tiffany Hsu, Advertisers Flee X as Outcry Over Musk’s Endorsement of Antisemitic Post Grows, N.Y. Times (Nov. 17, 2023).  In other words, platforms must strike a balance in their content-moderation policies.  This balance includes creating rules discouraging misinformation if such speech drives away users or advertisers.  As active participants in the marketplace, social media firms are best positioned to discover the best way to serve their users.  See Int’l Ctr. for Law & Economics Am. Br. at 6–11, Moody v. NetChoice LLC, NetChoice LLC v. Paxton, Nos. 22-277, 22-555 (Dec. 7, 2023).  As competition plays out, though, consumers can deliver surprises—and platforms must adjust.  This is the marketplace of ideas in action.

All these product changes happen without government intervention, which, again, would be forbidden in any event. After all, the First Amendment forbids any “abridg[ement]” of speech, no matter where that speech is “publish[ed]” or “disseminat[ed]”—including the online marketplace of ideas. Reno v. ACLU, 521 U.S. 844, 853 (1997); 303 Creative, 600 U.S. at 594.  The way a social media company adjusts to the market shows what it deems “deserving of expression, consideration, and adherence,” or “worthy of presentation.”  Turner, 512 U.S. at 641; Hurley, 515 U.S. at 575.  By forcing platforms to take down content, government coercion “alte[red] the content of [the platforms’] speech.”  NIFLA, 138 S. Ct. at 2371 (internal citation omitted).

When a company “exercises editorial discretion in the selection and presentation of its programming, it engages in speech activity.”  Arkansas Ed. Television Comm’n v. Forbes, 523 U.S. 666, 674 (1998).  “[E]ditorial control” encompasses the “choice of material,” “decisions made as to limitations on the size and content,” and “treatment of public issues[.]”  Miami Herald Pub. Co. v. Tornillo, 418 U.S. 241, 258 (1974).  Any governmental “compulsion to publish that which reason tells them should not be published”—or vice versa—“is unconstitutional.”  Id. at 256 (internal citation and quotation marks omitted).

III. The online marketplace of ideas was impoverished by federal coercion here, and the Court should affirm the injunction insofar as it binds federal officials.

Although social media companies are private actors with a right to editorial discretion, the facts adduced so far in this case, if ultimately established, show coercion by federal officials, and not the exercise of discretion by websites. Relying on an extensive record, “the district court concluded that the officials, via both private and public channels, asked the platforms to remove content, pressed them to change their moderation policies, and threatened them—directly and indirectly—with legal consequences if they did not comply. And it worked—that ‘unrelenting pressure’ forced the platforms to act and take down users’ content.”  J.A. 16–17.

The Fifth Circuit agreed, holding that federal officials likely “ran afoul of the First Amendment by coercing and significantly encouraging social-media platforms to censor disfavored [speech], including by threats of adverse government action like antitrust enforcement and legal reforms.”  J.A. 32 (internal citations and quotation marks omitted).  In reaching this conclusion, the Fifth Circuit adopted a four-part test, ostensibly derived from Bantam Books, Inc. v. Sullivan, 372 U.S. 58 (1963), to tell when government actions aimed at private parties become coercive: (1) the speaker’s word choice and tone; (2) whether the speech was perceived as a threat; (3) the existence of regulatory authority; and, perhaps most importantly, (4) whether the speech refers to adverse consequences.  J.A. 42 (internal citations and quotation marks omitted).

But the Fifth Circuit’s test falls short of the test applied in Bantam Books.  The focus of Bantam Books is not on the subjective understanding of the private actor, but on what the state actors objectively did—namely, was it reasonably understood as attempting to coerce private action.  The Bantam Books test is about the efforts of the state actor to suppress speech, not whether the private actor is in some hyper-literal sense “free” to ignore the state actor.  Surreptitious pressure in the form alleged by respondents is just as much an intervention into the marketplace of ideas as overt censorship.

Consider what happened in Bantam Books.  A legislatively created commission notified book publishers that certain books and magazines were objectionable for sale or distribution.  The commission had no power to sanction publishers or distributors, and there were no bans or seizures of books.  372 U.S. at 66–67.  In fact, the book distributors were technically “free” to ignore the commission’s notices.  Id. at 68 (“It is true * * * that [the distributor] was ‘free’ to ignore the Commission’s notices, in the sense that his refusal to ‘cooperate’ would have violated no law.”).  Nonetheless, this Court held, “the Commission deliberately set about to achieve the suppression of publications deemed ‘objectionable’ and succeeded in its aim.”  Id. at 67.  Particularly important was that the notices could be seen as a threat of prosecution.  See id. at 68–69 (“People do not lightly disregard public officers’ thinly veiled threats to institute criminal proceedings against them if they do not come around[.]  The Commission’s notices, phrased virtually as orders, reasonably understood to be such by the distributor, invariably followed up by police visitations, in fact stopped the circulation of the listed publications[.]  It would be naive to credit the State’s assertion that these blacklists are in the nature of mere legal advice, when they plainly serve as instruments of regulation.”).

Ignoring this lesson of Bantam Books, petitioners focus on the subjective response of social media companies rather than the objective actions of the government.  Petitioners emphasize that media companies did not always censor speech to the degree that federal officials asked.  Br. 39.  But under Bantam Books, that is not the question.  The question is whether the government’s communications could reasonably be seen as a threat.  372 U.S. at 68–69.

They could.  Indeed, the allegations here include (a) many threats to have social media firms investigated, prosecuted, and regulated if they failed to remove disfavored speech, coupled with (b) extensive use of private meetings, emails, and digital portals to pressure firms to remove speech.  Resp. Br. 2–16.  As a result of this pressure, social media firms removed speech against their policies and changed their policies.  Ibid.  Much as in Bantam Books, government pressure suppressed lawful speech.

All this government coercion is a first-order infringement of speech and an impermissible intervention into the marketplace of ideas.  It also destroys the business model of social media websites.  As multisided platforms, these companies must carefully balance users, advertisers, and speech.  Government intervention disrupts this careful balance.  Again, the value proposition of social media websites is that they—as actors in the market—are best situated to curate forums attractive to their users.  Destroying these privately curated forums will chill speech for all Americans.  The Court should find that respondents are likely to succeed on the merits of their First Amendment claim.

As noted, the government is free to use the bully pulpit to persuade—and even to argue publicly that certain content on social media platforms is misinformation that should be demoted or removed. Pet. Br. 23–25 (listing examples of presidential statements criticizing protected speech).  But this does not mean the First Amendment allows coercing private actors into shutting down speech, which is what the facts adduced here show.

The remedy for unlawful government coercion is an injunction against, or in some cases damages from, government actors. Here, the district court and the Fifth Circuit rightly focused the injunction on federal officials.  The marketplace of ideas, now freed from impermissible government intervention, leaves its participants free to exercise their editorial discretion as they see fit.  There is no need to enjoin private actors; indeed, doing so would undermine the same freedom of expression that enjoining coercive government actors protects.  On remand, the injunction should continue to make clear that social media companies remain free to engage in the marketplace of ideas by exercising editorial discretion.  But the government may not press its thumb on the scale by compelling them to censor.

CONCLUSION

The judgment should be affirmed.

[1] No party or counsel for a party authored this brief in whole or in part.  No one other than amicus or its counsel made a monetary contribution to fund preparation or submission of this brief.

Continue reading
Innovation & the New Economy

ICLE Amicus in Ohio v Google

Amicus Brief Interest of Amicus[1] The International Center for Law & Economics (“ICLE”) is a nonprofit, non-partisan global research and policy center aimed at building the intellectual . . .

Interest of Amicus[1]

The International Center for Law & Economics (“ICLE”) is a nonprofit, non-partisan global research and policy center aimed at building the intellectual foundations for sensible, economically grounded policy. ICLE promotes the use of law and economics methodologies and economic learning to inform policy debates and has longstanding expertise evaluating law and policy.

ICLE has an interest in ensuring that First Amendment law promotes the public interest by remaining grounded in sensible rules informed by sound economic analysis. ICLE scholars have written extensively in the areas of free speech, telecommunications, antitrust, and competition policy. This includes white papers, law journal articles, and amicus briefs touching on issues related to the First Amendment and common carriage regulation, and competition policy issues related to alleged self-preferencing by Google in its search results.

Introduction

Google’s mission is to “organize the world’s information and make it universally accessible and useful.” See Our Approach to Search, Google (last accessed Jan. 18, 2024), https://www.google.com/search/howsearchworks/our-approach/. Google does this at zero price, otherwise known as free, to its users. This generates billions of dollars of consumer surplus per year for U.S. consumers. See Avinash Collis, Consumer Welfare in the Digital Economy, in The Global Antitrust Instit. Report on the Digital Economy (2020), available at https://gaidigitalreport.com/2020/08/25/digital-platforms-and-consumer-surplus/.

This incredible deal for users is possible because Google is what economists call a multisided platform. See David S. Evans & Richard Schmalensee, Matchmakers: The New Economics of Multisided Platforms 10 (2016) (“Many of the biggest companies in the world, including… Google… are matchmakers… [M]atchmakers’ raw materials are the different groups of customers that they help bring together. And part of the stuff they sell to members of each group is access to members of the other groups. All of them operate physical or virtual places where members of these different groups get together. For this reason, they are often called multisided platforms.”). On one side of the platform, Google provides answers to users’ queries. On the other side of the platform, advertisers pay for access to Google’s users and, by extension, subsidize the user-side consumption of Google’s free services.

In order to maximize the value of its platform, Google must curate the answers it provides in its search results to the benefit of its users, or it risks losing those users to other search engines. This includes both other general search engines and specialized search engines that focus on one segment of online content (like Yelp or Etsy or Amazon). Losing users would mean the platform becomes less valuable to advertisers.

If users don’t find Google’s answers useful, including answers that may preference other Google products, then they can easily leave and use alternative methods of search. Thus, there are real limitations on how much Google can self-preference before the incentives that allowed it to build a successful platform unravel as users and therefore advertisers leave. In fact, it is highly likely that users of Google search want the integration of direct answers and Google products, and Google provides these results to the benefit of its users. See Geoffrey A. Manne, The Real Reason Foundem Foundered, at 16 (ICLE White Paper 2018), https://laweconcenter.org/wp-content/uploads/2018/05/manne-the_real_reaon_foundem_foundered_2018-05-02-1.pdf (“[N]o one is better positioned than Google itself to ensure that its products are designed to benefit its users”).

Here, as has been alleged without much success in antitrust cases, see United States v. Google, LLC, 2023 WL 4999901, at *20-24 (D. D.C. Aug. 4, 2023) (granting summary judgment in favor of Google on antitrust claims of self-preferencing in search results), the alleged concern is that Google preferences itself at the expense of competitors, and to the detriment of its users. See Complaint (“Google intentionally structures its Results Pages to prioritize Google products over organic search results.”). Ohio asks the court to declare Google a common carrier and subject it to a nondiscrimination requirement that would prevent Google from prioritizing its own products in search results.

The problem, of course, is the First Amendment. Federal district courts have consistently found that the First Amendment protects how providers structure search results. See, e.g., e-ventures Worldwide, LLC v. Google, Inc., 2017 WL 2210029 (M.D. Fla., Feb. 8, 2017); Jian Zhang v. Baidu.com Inc., 10 F. Supp. 3d 433 (S.D. N.Y., Mar. 28, 2014); Langdon v. Google, Inc., 474 F. Supp. 2d 622 (D. Del. 2007); Search King, Inc. v. Google Tech., Inc., 2003 WL 21464568 (W.D. Okla., May 27, 2003).

While Ohio and its amici argue that Google should be considered a common carrier, and thus be subject to a lower standard of review for First Amendment purposes, there is no legal basis for such a conclusion.

First, common carriage is a poor fit for Google’s search product. Courts have rejected monopoly power or being “affected with a public interest” as the proper prerequisites for common carrier status. Ohio, like other jurisdictions, has found that the “fundamental test of common carriage is whether there is a public profession or holding out to serve the public.” Girard v. Youngstown Belt Ry. Co., 134 Ohio St. 3d 79, 89 (2012) (emphasis added). See also Loveless v. Ry. Switching Serv., Inc., 106 Ohio App. 3d 46, 51 (1995) (“The distinctive characteristic of a common carrier is that he undertakes to carry for all people indifferently and hence is regarded in some respects as a public servant.”) (internal quotations omitted). Google simply does not carry information in an undifferentiated way comparable to a railroad carrying passengers or freight. It is rather a service that explicitly differentiates and prioritizes answers to queries by providing individualized responses based upon location, search history, and other factors.

Second, as mentioned above, Google’s search results are protected by the First Amendment, and simply “[l]abeling” Google “a common carrier… has no real First Amendment consequences.” Denver Area Educ. Telecomm. Consortium, Inc. v. FCC, 518 U.S. 727, 825 (1996) (Thomas, J., concurring in the judgment in part and dissenting in part). As this court stated, it is the nondiscrimination requirement sought by Ohio that is subject to First Amendment scrutiny, not the common carriage label itself. See Motion to Dismiss Opinion at 16. And any purported nondiscrimination requirement should be subject to strict scrutiny, as such a requirement would constrain Google’s own speech in the form of its carefully tailored search results, and not simply the speech of others.

Argument

I. Common Carriage Is a Poor Fit as Applied to Google’s Search Product

There is a long history of common carriage regulation in this country. But there has not always been universal agreement on what constitutes the defining feature of a common carrier, with proposed justifications ranging from monopoly power (or natural monopoly) to being “affected with a public interest.” Over time, though, courts and commentators, including Ohio courts, have agreed that common carriage is primarily about holding oneself out to serve the public indiscriminately.

Simply put, Google Search does not hold itself out to, nor does it actually serve, the public indiscriminately by carrying information, either from users or from other digital service providers. It provides individualized and tailored answers to users’ queries, which may include Google products, direct answers, or general information its search crawlers have learned about other service providers on the Internet.

A. Common Carriage Is Not About Monopoly Power or the Public Interest, It’s About Holding Oneself Out to Serve the Public Indiscriminately

In its complaint, Ohio makes much of Google’s market share in search. See Complaint para. 19-32. Amici also argue that the “immense market dominance” of Google makes it a common carrier analogous to telegraphs or telephones. See Claremont Amicus at 6. Similarly, both Ohio and amici argue that Google’s search results are affected by a public interest. See Complaint at 40; Claremont Amicus at 3-4.

Whatever the market share of Google search, common law courts, including those of Ohio, do not find monopoly power to be a part of the definition of common carriage. For instance, the presence of competition for innkeepers did not mean they were not subject to requirements to serve. See Joseph William Singer, No Right to Exclude: Public Accommodations and Private Property, 90 Nw. U. L. Rev. 1283, 1319-20 (1996) (“On the monopoly rationale, it is important to note that none of the antebellum cases bases the duty to serve on the fact of monopoly. Indeed, the presence of competition was never a reason for denying the duty to serve in the antebellum era. In many towns, there were several innkeepers and cities like Boston had dozens of innkeepers. Yet, no lawyer, judge, or treatise writer ever suggested that innkeepers in cities like Boston should be exempt from the duty to serve the public.”). Nor does the presence of monopoly necessarily lead to common carriage treatment under the law. See Blake Reid, Uncommon Carriage, 76 Stan. L. Rev. (forthcoming 2024) (manuscript at 25) (“[F]irms holding effective monopolies or oligopolies in a wide range of sectors, including pharmacies and drug stores, managed healthcare providers, office supply stores, eyeglass sellers, airlines, alcohol distribution, and even candy are not widely regarded or legally treated as common carriers.”). Accordingly, Ohio does not define common carriage in relation to monopoly power. Cf. Kinder Morgan Cochin LLC v. Simonson, 66 N.E.3d 1176, 1182 (Ohio Ct. App. 5th Dist. Ashland County 2016) (failing to mention monopoly as part of the definition of common carrier).

Moreover, while older cases and commentators cite the “affected with a public interest” standard, courts have moved away from it because of its indeterminacy. See Biden v. Knight First Amendment Inst., 141 S. Ct. 1220, 1223 (2021) (Thomas, J., concurring) (this definition is “hardly helpful, for most things can be described as ‘of public interest.’”). See also Christopher S. Yoo, The First Amendment, Common Carriers, and Public Accommodations: Net Neutrality, Digital Platforms, and Privacy, 1 J. of Free Speech L. 463, 468-69 (2021).

Instead, Ohio law defines a common carrier as one that holds itself “out to the public as ready and willing to serve the public indifferently.” See Kinder Morgan Cochin, 66 N.E.3d at 1182; Girard v. Youngstown Belt Ry. Co., 134 Ohio St. 3d 79, 89 (2012); Loveless v. Ry. Switching Serv., Inc., 106 Ohio App. 3d 46, 51 (1995).

B. Google Does Not Offer an Undifferentiated Search Product to Its Users

With this definition in mind, Google is not a common carrier. Google does not offer an undifferentiated service to its users like a pipeline (as in Kinder Morgan Cochin) or railroad (as in Girard or Loveless), or even like a mall offering an escalator to customers (as in May Department Stores Co. v. McBride, 124 Ohio St. 264 (1931)). Nor does it offer to “communicate or transmit” information of “their own design and choosing” to users. See FCC v. Midwest Video Corp., 440 U.S. 689, 701 (1979) (defining common carrier services in the communications context). Instead, it offers tailored search results to its users. See Complaint at paras. 17-18 (noting that search results depend on location); How Search works with your activity, Google (last accessed Jan. 18, 2024), https://support.google.com/websearch/answer/10909618 (“When you search on Google, your past searches and other info are sometimes incorporated to help us give you a more useful experience.”). This is not a common carrier in the communications context. See Midwest Video, 440 U.S. at 701 (“A common carrier does not make ‘individualized decisions, in particular cases, whether and on what terms to deal.’”) (quoting Nat’l Ass’n of Reg. Util. Comm’rs v. FCC, 525 F.2d 630, 641 (D.C. Cir. 1976)).

For instance, if a user searches for restaurants, Google’s algorithm may not only take into consideration the location of the user, but also whether the user previously clicked on particular options when running a similar query, or even if the user visited a particular restaurant’s website. While the results are developed algorithmically, this is much more like answering a question than it is transporting a private communication between two individuals like a telephone or telegraph.

Importantly, users often receive a different result even for the same search. See Why your Google Search results might differ from other people, Google (last accessed Jan. 18, 2024), https://support.google.com/websearch/answer/12412910 (“You may get the same or similar results to someone else who searches on Google Search. But sometimes, Google may give you different results based on things like time, context, or personalized results.”). Google is clearly making “‘individualized’ content- and viewpoint-based decisions” when it comes to search results. Cf. Moody v. NetChoice, 34 F.4th 1196, 1220 (11th Cir. 2022) (quoting Midwest Video, 440 U.S. at 701).

While the court emphasized at the motion to dismiss stage that a reasonable factfinder could find Google offers to hold itself out to the public in its mission “to organize the world’s information and make it universally accessible and useful,” see MTD Opinion at 7, this does not “change [its] status to common carrier[]… unless [it] undertake[s] to carry for all people indifferently.” Loveless, 106 Ohio App. 3d at 52. As the above facts demonstrate, there is no basis for finding that Google search offers an undifferentiated product to its users. The court should find Google is not a common carrier under Ohio law.

II. Google’s Search Results Are Protected by the First Amendment from Common Carriage Nondiscrimination Requirements

Ohio ultimately seeks to restrict the ability of Google to favor its own products in its search results. But this runs into a real constitutional problem: search results are protected by the First Amendment.

Moreover, as this court has previously found, the First Amendment scrutinizes not the label of common carriage, but the burdens which come with it. Here, the nondiscrimination requirement Ohio asks for is what is at issue.

This nondiscrimination requirement is inconsistent with the First Amendment. While this court thought it should be subject to intermediate scrutiny, the First Amendment requires strict scrutiny when speech is compelled. The cases cited by the court are inapposite when a speaker is delivering its own message, i.e., search results, rather than simply hosting the speech of others.

A. Federal District Court Cases Establish Google Search Results Are Protected by the First Amendment

While no appellate court has considered the issue, several federal district courts have recognized search engines have a First Amendment interest in their search results. Some decisions have framed the results themselves as speech. Others have considered the issue as one of editorial judgment. But under either approach, Google Search results are protected by the First Amendment.

For instance, in Jian Zhang v. Baidu.com, 10 F. Supp. 3d 433 (S.D. N.Y. Mar. 28, 2014), the court found that the application of a New York public accommodations law to a Chinese search engine that “censored” pro-democracy speech is inconsistent with the right to editorial discretion. The court found that “there is a strong argument to be made that the First Amendment fully immunizes search-engine results from most, if not all, kinds of civil liability and government regulation.” Id. at 438.  The court noted that “the central purpose of a search engine is to retrieve relevant information from the vast universe of data on the Internet and to organize it in a way that would be most helpful to the searcher. In doing so, search engines inevitably make editorial judgments about what information (or kinds of information) to include in the results and how and where to display that information (for example, on the first page of the search results or later).” Id.  Other courts have similarly found search engines have a right to editorial discretion over their results. See also e-ventures Worldwide, LLC v. Google, Inc., 2017 WL 2210029, at *4 (M.D. Fla. Feb. 8, 2017); Langdon v. Google, Inc., 474 F. Supp. 2d 622, 629-30 (D. Del. 2007).

In this sense, Google’s search results are analogous to the decisions of what to print made by the newspaper in Miami Herald Publishing Co. v. Tornillo, 418 U.S. 241 (1974), or the parade organizer in Hurley v. Irish-American Gay, Lesbian, & Bisexual Group of Boston, 515 U.S. 557 (1995).

At least one court has found that search results themselves are protected opinions. In Search King Inc. v. Google Technology, Inc., 2003 WL 21464568, at *4 (W.D. Okla. May 27, 2003), the court found that search results “are opinions—opinions of the significance of particular web sites as they correspond to a search query. Other search engines express different opinions, as each search engine’s method of determining relative significance is unique.”

Under this line of reasoning, Google’s responses to queries are opinions directing users to what it thinks is the best answer given all the information it has on the user, her behavior, and her preferences. This is in itself protected speech. Cf. Eugene Volokh & Donald M. Falk, Google: First Amendment Protection for Search Results, 8 J. L. Econ. & Pol’y 883, 884 (2012) (“[S]earch engines are speakers… they convey information that the search engine has itself prepared or compiled [and] they direct users to material created by others… Such reporting about others’ speech is itself constitutionally protected speech.”).

In sum, the First Amendment protects Google’s search results.

B. A Common Carriage Label Does Not Change First Amendment Analysis

Amici argued that because Google is a common carrier, the nondiscrimination requirement is merely an economic regulation that is not subject to heightened First Amendment scrutiny. See Claremont Amicus at 17. But the issue here is not simply the label of common carriage, it is the regulatory scheme sought by Ohio. Cf. Denver Area Educ. Telecomm. Consortium, Inc. v. FCC, 518 U.S. 727, 825 (1996) (Thomas, J., concurring in the judgment in part and dissenting in part) (“Labeling leased access a common carrier scheme has no real First Amendment consequences.”); MTD Opinion at 16 (“As for the State’s request for declaratory relief, merely declaring or designating Google Search to be a common carrier does not, of itself, violate the First Amendment or infringe on Google’s constitutional speech rights…. It is the burdens and obligations accompanying that designation that implicate the First Amendment.”).

In other words, when reviewing the nondiscrimination requirement sought by Ohio, the labeling of this as a common carriage obligation does not matter under the First Amendment.

C. The Nondiscrimination Requirement Should Be Subject to Strict Scrutiny

Ohio and amici have characterized the nondiscrimination requirement that comes with common carriage as a content-neutral requirement to host the speech of others. See MTD Opinion at 16; Claremont Amicus at 15, 17. This court agreed that this was possible at the motion to dismiss stage. But the remedy sought is not content-neutral, nor is it dealing purely with the speech of others. As a result, it should be subject to strict scrutiny.

This court found that a “restriction of this type must satisfy intermediate scrutiny” as a “content-neutral restriction on speech.” MTD Opinion at 16. The court compared the situation to Turner Broadcasting System Inc. v. FCC, 512 U.S. 622 (1994). But the nondiscrimination requirement is clearly content-based.

Ohio is asking this court to enjoin Google from prioritizing its own products in its search results. See Complaint at para. 77. The only way to know whether Google is doing that is to consider the content of its search results. See, e.g., Reed v. Town of Gilbert, Ariz., 576 U.S. 155, 163 (2015) (“Government regulation of speech is content based if a law applies to particular speech because of the topic discussed or the idea or message expressed.”). The idea or message expressed here is that Google’s products would be a better answer to an inquiry than the alternatives. By definition, the nondiscrimination requirement is a content-based regulation of speech, and must therefore be subject to strict scrutiny.

Nor is this just an issue of the speech of others. This court stated that “infringing on a private actor’s speech by requiring that actor to host another person’s speech does not always violate the First Amendment.” MTD Opinion at 17. The court cited PruneYard Shopping Ctr. v. Robins, 447 U.S. 74 (1980), Rumsfeld v. Forum for Academic and Institutional Rights, Inc., 547 U.S. 47 (2006), and Red Lion Broadcasting Co. v. FCC, 395 U.S. 367 (1969). But none of these cases deals with a situation analogous to applying nondiscrimination requirements to Google’s search results.

Here, as explained above, Google’s search results are themselves protected speech. Each set of search results, taken as a whole, is Google’s opinion of the best answers, in the best order, to the question a user has posed. Requiring Google to present different results, results in a different order, or results with different degrees of prioritization would impermissibly compel Google to speak, similar to requiring car owners to display license plates bearing the motto “Live Free or Die,” see Wooley v. Maynard, 430 U.S. 705 (1977), or forcing a student to salute the flag and recite the Pledge of Allegiance, see West Virginia State Bd. of Educ. v. Barnette, 319 U.S. 624 (1943). It is, in short, impossible to require “Google [to] carr[y] all responsive search results on an equal basis,” Complaint at 5, without compelling it to speak in ways it does not choose to speak.

Even if Google’s interest in its search results is characterized as editorial discretion over others’ speech, rather than its own speech (a dubious distinction), this case would still be distinguishable from the cases above. Users clearly identify Google with its results, unlike the shopping center with its customers in PruneYard or the law schools with military recruiters in FAIR. See Complaint at paras. 48-50 (alleging that Google was built on users’ expectation that its search algorithm was in some way neutral). This is especially the case when Google is, as alleged, prioritizing its own products in search results. See id. at paras. 64-70. Google clearly believes, and its users appear to agree, that these products are what users want to see. See Complaint at 2 (“Google Search is perceived to deliver the best search results…”). Otherwise, those users could simply use another service. Cf. Zhang, 10 F. Supp. 3d at 441 (a user dissatisfied with search results can simply use another search engine).

Notably, this stands in contrast to the court’s characterization of the speech at issue. See MTD Opinion at 19-20 (“When a user searches a speech by former President Donald Trump on Google Search and that speech is retrieved by Google with a link to the speech on YouTube, no rational person would conclude that Google is associating with President Trump or endorsing what is seen in the video.”). But it is not the content of the links that users associate with Google; it is the search results themselves, which include the order in which each link is presented, the presentation of certain prioritized results in a different format, and the exclusion or deprioritization of results Google thinks the user will not find relevant. A search engine is more than a “passive receptacle or conduit” for the speech of others; the “choice of material” and how it is presented in the search results “constitute the exercise of editorial control and judgment.” Tornillo, 418 U.S. at 258.

In sum, the reasons for subjecting the must-carry provisions in Turner to intermediate scrutiny do not apply here. First, the nondiscrimination requirement sought by Ohio is not content-neutral; indeed, it is precisely Ohio’s dissatisfaction with the specific content Google provides that impels the relief it seeks. Cf. Turner, 512 U.S. at 653-55 (emphasizing the content-neutrality of the must-carry requirements). Second, the requirement would force Google to alter the message conveyed by its search results, which express a clear opinion that its own products are the best answer—an opinion with which Google is identified and which distinguishes it from competing search engines. Cf. id. at 655-56 (finding the must-carry requirements would not force cable operators to alter their own messages or identify them with the speech they carry). Third, Google lacks the ability to prevent its users from accessing information, whether from other general search engines, from specialized search engines, or by typing a website address directly into the browser. Cf. Turner, 512 U.S. at 656 (“When an individual subscribes to cable, the physical connection between the television set and the cable network gives the cable operator bottleneck, or gatekeeper control over most (if not all) of the television programming that is channeled into the subscriber’s home… A cable operator, unlike other speakers in other media, can thus silence the voice of competing speakers with a mere flick of the switch.”). Absent the countervailing justifications that supported intermediate scrutiny in Turner, Ohio’s nondiscrimination requirement must be subject to strict scrutiny.

Finally, while it is true that economic regulation like antitrust law can be consistent with the First Amendment, see Claremont Amicus at 17 (citing Associated Press v. United States, 326 U.S. 1, 20 (1945)), that does not mean every legal restriction on speech so characterized is constitutional. In Associated Press itself, the Supreme Court found the organization in violation of antitrust law, but in footnote 18 made clear that the decree did not “compel AP or its members to permit publication of anything which their ‘reason’ tells them should not be published.” Associated Press, 326 U.S. at 20 n.18. The Court echoed this point in Tornillo, holding that the remedy provided by Florida’s right-of-reply law was unconstitutional government compulsion of speech that violated the newspaper’s right to editorial discretion. See Tornillo, 418 U.S. at 254-58. Restricting Google’s editorial discretion over its search results would be similarly unconstitutional.

Conclusion

Ohio’s attempted end-run around competition law and the First Amendment by declaring Google a common carrier must be rejected by this court. Google is not a common carrier, and the nondiscrimination requirement Ohio requests is inconsistent with the First Amendment.

[1] Amicus states that no counsel for any party authored this brief in whole or in part, and that no entity or person other than amicus and its counsel made any monetary contribution toward the preparation and submission of this brief.

Continue reading
Innovation & the New Economy