
Gus Hurwitz on the Supreme Court’s Murthy Case

Presentations & Interviews

ICLE Director of Law & Economics Programs Gus Hurwitz was a guest on The Cyberlaw Podcast, where he discussed the U.S. Supreme Court’s Murthy v. Missouri free speech case and a unanimous decision by the court on when a public official may use a platform’s tools to suppress critics posting on his or her social-media page. Other topics included AI deepfakes, the congressional bill to force the divestment of TikTok, and the Federal Trade Commission’s lawsuit against Meta. Audio of the full episode is embedded below.

Antitrust & Consumer Protection

Murthy Oral Arguments: Standing, Coercion, and the Difficulty of Stopping Backdoor Government Censorship

TOTM

With Monday’s oral arguments in Murthy v. Missouri, we now have more of a feel for how the U.S. Supreme Court appears to be considering the issues of social-media censorship—in this case, done allegedly at the behest of federal officials.

In the International Center for Law & Economics’ (ICLE) amicus brief in the case, we argued that the First Amendment protects a marketplace of ideas, and government agents can’t intervene in that marketplace by coercing social-media companies into removing disfavored speech. But if the oral arguments are any indication, there are reasons to be skeptical that the Court will uphold the preliminary injunction the district court issued against the government officials (later upheld in a more limited form by the 5th U.S. Circuit Court of Appeals).

Innovation & the New Economy

ICLE Comments to FTC on Children’s Online Privacy Protection Rule NPRM

Regulatory Comments

Introduction

We thank the Federal Trade Commission (FTC) for this opportunity to comment on the notice of proposed rulemaking (NPRM) to update the Children’s Online Privacy Protection Rule (“COPPA Rule”).

The International Center for Law and Economics (ICLE) is a nonprofit, nonpartisan research center whose work promotes the use of law & economics methodologies to inform public-policy debates. We believe that intellectually rigorous, data-driven analysis will lead to efficient policy solutions that promote consumer welfare and global economic growth.[1]

ICLE’s scholars have written extensively on privacy and data-security issues, including those related to children’s online safety and privacy. We also previously filed comments as part of the COPPA Rule Review and will make some of the same points below.[2]

The Children’s Online Privacy Protection Act (COPPA) sought to strike a balance in protecting children without harming the utility of the internet for children. As Sen. Richard Bryan (D-Nev.) put it when he laid out the purpose of COPPA:

The goals of this legislation are: (1) to enhance parental involvement in a child’s online activities in order to protect the privacy of children in the online environment; (2) to enhance parental involvement to help protect the safety of children in online fora such as chatrooms, home pages, and pen-pal services in which children may make public postings of identifying information; (3) to maintain the security of personally identifiable information of children collected online; and (4) to protect children’s privacy by limiting the collection of personal information from children without parental consent. The legislation accomplishes these goals in a manner that preserves the interactivity of children’s experience on the Internet and preserves children’s access to information in this rich and valuable medium.[3]

In other words, COPPA was designed to protect children from online threats by promoting parental involvement in a way that also preserves a rich and vibrant marketplace for children’s content online. Consequently, the pre-2013 COPPA Rule did not define personal information to include persistent identifiers standing alone. It is these persistent identifiers that are critical for the targeted advertising that funds the interactive online platforms and the creation of children’s content the legislation was designed to preserve.

COPPA applies to the “operator of any website or online service” that is either “directed to children that collects personal information from children” or that has “actual knowledge that it is collecting personal information from a child.”[4] These operators must “obtain verifiable parental consent for the collection, use, or disclosure of personal information.” The NPRM, following the mistaken 2013 amendments to the COPPA Rule, continues to define “personal information” to include persistent identifiers that are necessary for the targeted advertising undergirding the internet ecosystem.

Below, we argue that, before the FTC moves further toward restricting platform operators and content creators’ ability to monetize their work through targeted advertising, it must consider the economics of multisided platforms. The current path will lead to less available free content for children and more restrictions on their access to online platforms that depend on targeted advertising. Moreover, the proposed rules are inconsistent with the statutory text of COPPA, as persistent identifiers do not by themselves enable contacting specific individuals. Including them in the definition of “personal information” is also contrary to the statute’s purpose, as it will lead to a less vibrant internet ecosystem for children.

Finally, there are better ways to protect children online, including by promoting the use of available technological and practical solutions to avoid privacy harms. To comply with existing First Amendment jurisprudence regarding online speech, it is necessary to rely on these less-restrictive means to serve the goal of protecting children without unduly impinging on their speech interests online.

I. The Economics of Online Multisided Platforms

Most of the “operators of websites and online services” subject to the COPPA Rule are what economists call multisided markets, or platforms.[5] Such platforms derive their name from the fact that they serve at least two different types of customers and facilitate their interaction. Multisided platforms generate “indirect network effects,” described by one economist as a situation where “participants on one side value being able to interact with participants on the other side… lead[ing] to interdependent demand.”[6]

Online platforms provide content to one side and access to potential consumers on the other side. In order to keep demand high, online platforms often offer free access to users, whose participation is subsidized by those participants on the other side of the platform (such as advertisers) that wish to reach them.[7] This creates a positive feedback loop in which more participants on one side of the platform leads to more participants on the other.
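A minimal way to formalize this interdependence (the notation is ours, introduced purely for exposition, not drawn from the cited literature) is to write each side's payoff as increasing in the other side's participation:

```latex
% Stylized two-sided platform payoffs (hypothetical notation, for exposition only):
%   N_1, N_2   participation on side 1 (users) and side 2 (advertisers)
%   u_i        a participant's standalone value from joining side i
%   p_i        the price the platform charges side i
%   \alpha_{ij} indirect network effect that side j exerts on side i
\begin{aligned}
U_1 &= u_1 + \alpha_{12} N_2 - p_1, \\
U_2 &= u_2 + \alpha_{21} N_1 - p_2, \qquad \alpha_{21} > 0.
\end{aligned}
```

Because advertisers' willingness to pay rises with the number of users they can reach (α21 > 0), the platform can profitably set p1 at or near zero—free user access—and recover its costs through p2, which is the subsidy-and-feedback-loop structure described above. (The users' cross-effect α12 may be small or even negative, since many users merely tolerate ads.)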

This dynamic is also true of platforms with content “directed to children.” Revenue is collected not from those users, but primarily from the other side of the platform—i.e., advertisers who pay for access to the platform’s users. To be successful, online platforms must keep enough—and the right type of—users engaged to maintain demand for advertising.

Moreover, many “operators” under COPPA are platforms that rely on user-generated content. Thus, they must also consider how to attract and maintain high-demand content creators, often accomplished by sharing advertising revenue. If platforms fail to serve the interests of high-demand content creators, those creators may leave the platform, thus reducing its value.

Online platforms acting within the market process are usually the parties best-positioned to make decisions on behalf of platform users. Operators with content directed to children may even compete on privacy policies and protections for children by providing tools that help users avoid what they (or, in this context, their parents and guardians) perceive to be harms, while keeping users on the platform and maintaining value for advertisers.[8]

There may, however, be examples where negative externalities[9] stemming from internet use are harmful to society more broadly. A market failure could result, for instance, if platforms’ incentives lead them to collect too much (or the wrong types of) information for targeted advertising, or to offer up content that is harmful for children or keeps them hooked to using the platform.

In situations where there are negative externalities from internet use, there may be a case to regulate online platforms in various ways. Any case for regulation must, however, acknowledge potential transaction costs, as well as how platforms and users may respond to changes in those costs. To get regulation right, the burden of avoiding a negative externality should fall on the least-cost avoider.

The Coase Theorem, derived from the work of Nobel-winning economist Ronald Coase[10] and elaborated on in the subsequent literature,[11] helps to explain the issue at hand:

  1. The problem of externalities is bilateral;
  2. In the absence of transaction costs, resources will be allocated efficiently, as the parties bargain to solve the externality problem;
  3. In the presence of transaction costs, the initial allocation of rights does matter; and
  4. In such cases, the burden of avoiding the externality’s harm should be placed on the least-cost avoider, while taking into consideration the total social costs of the institutional framework.

In one of Coase’s examples, the noise from a confectioner using his candy-making machine is a potential cost to the doctor next door, who consequently cannot use his office to conduct certain testing. Simultaneously, the doctor moving his office next door to the confectioner is a potential cost to the confectioner’s ability to use his equipment.

In a world of well-defined property rights and low transaction costs, the initial allocation of rights would not matter, because the parties could bargain to overcome the harm in a mutually beneficial manner—i.e., the confectioner could pay the doctor for lost income or to set up sound-proof walls, or conversely, the doctor could pay the confectioner to reduce the sound of his machines.[12] But since there are transaction costs that prevent this sort of bargain, it is important whether the initial right is allocated to the doctor or the confectioner. To maximize societal welfare, the cost should be placed on the entity that can avoid the harm at the lowest cost.[13]
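To make the least-cost-avoider point concrete, consider a stylized version of Coase's example with hypothetical numbers (the figures are ours, not Coase's):

```latex
% Hypothetical figures, for illustration only:
%   H    harm to the doctor from the noise                = \$100 per week
%   a_C  confectioner's cheapest fix (quieter machinery)  = \$40 per week
%   a_D  doctor's cheapest fix (soundproofing his office) = \$70 per week
\begin{aligned}
\text{Social cost if the burden is placed on the confectioner:} &\quad \min(a_C,\,H) = \$40 \\
\text{Social cost if the burden is placed on the doctor:}       &\quad \min(a_D,\,H) = \$70
\end{aligned}
```

With costless bargaining, the efficient outcome (the confectioner abates at $40) emerges no matter who initially holds the right. With prohibitive transaction costs, assigning the burden to the doctor locks in $70 of avoidance cost rather than $40; placing it on the least-cost avoider saves the difference. This is the logic applied below to operators, parents, and children.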

In the context of the COPPA Rule, website operators and online services create incredible value for their users, but they also can, at times, impose negative externalities relevant to children who use their services. In the absence of transaction costs, it would not matter whether operators must obtain verifiable parental consent before collecting, using, or disclosing personal information, or whether the initial burden is placed on parents and children to avoid the harms associated with such collection, use, or disclosure.

But given that there are transaction costs involved in obtaining (and giving) verifiable parental consent,[14] it matters how the law defines personal information (which serves as a proxy for a property right, in Coase’s framing). If personal information is defined too broadly and the transaction costs for providers to gain verifiable parental consent are too high, the result may be that the societal benefits of children’s internet use will be lost, as platform operators restrict access beyond the optimum level.

The threat of liability for platform operators under COPPA also risks excessive collateral censorship.[15] This arguably has already occurred, as operators like YouTube have restricted content creators’ ability to monetize their work through targeted advertising, leading on balance to less children’s content. By wrongly placing the burden on operators to avoid harms associated with targeted advertising, societal welfare is reduced, including the welfare of children who no longer get the benefits of that content.

On the other hand, there are situations where website operators and online services are the least-cost avoiders. For example, they may be the parties best-placed to monitor and control harms associated with internet use in cases where it is difficult or impossible to hold those using their platforms accountable for the harms they cause.[16] In other words, operators should still be held liable under COPPA when they facilitate adults’ ability to message children, or to identify a child’s location without parental consent, in ways that could endanger children.[17] Placing the burden on children or their parents to avoid such harms could allow operators to impose un- or undercompensated harms on society.

Thus, in order to get the COPPA Rule’s balance right, it is important to determine whether it is the operators or their users who are the least-cost avoiders. Placing the burden on the wrong parties would harm societal welfare, either by reducing the value that online platforms confer on their users, or by placing more uncompensated negative externalities on society.

II. Persistent Identifiers and ‘Personal Information’

As mentioned above, under COPPA, a website operator or online service that is either directed to children or that has actual knowledge that it collects personal information from a child must obtain “verifiable parental consent” for the “collection, use or disclosure” of that information.[18] But the NPRM continues to apply the expanded definition of “personal information” to include persistent identifiers from the 2013 amendments.

COPPA defines personal information as “individually identifiable information” collected online.[19] The legislation included examples such as first and last name; home or other physical address; email address; telephone number; and Social Security number.[20] These are all identifiers obviously connected to people’s real identities. COPPA does empower the FTC to determine that other identifiers should be included, but any such identifier must permit “the physical or online contacting of a specific individual”[21] or constitute “information concerning the child or the parents of that child that the website collects online from the child and combines with an identifier described in this paragraph.”[22]

In 2013, the FTC amended the definition of personal information to include:

A persistent identifier that can be used to recognize a user over time and across different Web sites or online services. Such persistent identifier includes, but is not limited to, a customer number held in a cookie, an Internet Protocol (IP) address, a processor or device serial number, or unique device identifier.[23]

The NPRM here continues this error.

Neither IP addresses nor device identifiers alone “permit the physical or online contacting of a specific individual,” as required by 15 U.S.C. § 6501(8)(F). From these pieces of information alone, a website or app cannot determine a person’s identity, or even whether the person is an adult or a child. In order for persistent identifiers, like those relied upon for targeted advertising, to count as personal information under 15 U.S.C. § 6501(8)(G), they must be combined with the other identifiers listed in the statutory definition. In other words, it is only when a persistent identifier is combined with a first and last name, an address, an email address, a phone number, or a Social Security number that it should be considered personal information protected by the statute.
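The technical point can be illustrated with a minimal, purely hypothetical sketch (the code and values below are ours, not any operator’s actual implementation): a persistent identifier is an opaque random token, while the statute’s enumerated identifiers are contact information about a real person.

```python
# Hypothetical illustration of the difference between a persistent identifier and
# the contact information enumerated in 15 U.S.C. 6501(8)(A)-(E).
import uuid

def issue_persistent_identifier() -> str:
    """Mint an opaque token to store in a cookie; it is random, not derived from identity."""
    return str(uuid.uuid4())

# A persistent identifier lets a service recognize the same browser or device over
# time, but it contains no name, address, email, or phone number, so it cannot by
# itself be used to contact a specific individual.
cookie_value = issue_persistent_identifier()

# The statute's enumerated "individually identifiable information," by contrast,
# is contact information (all values here are invented):
personal_information = {
    "first_and_last_name": "Jane Doe",
    "home_address": "123 Main St., Anytown",
    "email": "jane@example.com",
    "telephone": "555-0100",
}

# Only when the opaque token is combined with information like the above does it
# identify, or permit contacting, a specific person.
print(cookie_value, personal_information["email"])
```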

While administrative agencies receive Chevron deference in court challenges when definitions are ambiguous, this text, when illuminated by canons of statutory construction,[24] is clear. The canon of ejusdem generis applies when general words follow an enumeration of two or more things.[25] The general words are taken to apply only to persons or things of the same general kind or class as those mentioned specifically. Persistent identifiers, such as cookies, bear little resemblance to the other examples of “personally identifiable information” listed in the statute, such as first and last name, address, phone, email, or Social Security number. Only when combined with such information could a persistent identifier become personal information.

The NPRM states that the Commission is “not persuaded” by this line of argumentation, pointing back to the same reasoning offered in the 2013 amendments. The NPRM asserts that it is “the reality that at any given moment a specific individual is using that device” which “underlies the very premise behind behavioral advertising.”[26] Moreover, the NPRM reasons that “while multiple people in a single home often use the same phone number, home address, and email address, Congress nevertheless defined these identifiers as ‘individually identifiable information’ in the COPPA statute.”[27] But this reasoning is flawed.

While multiple people regularly share an address, and sometimes even a phone number or email, each of these identifiers allows for contacting an individual person in a way that a persistent identifier simply does not. In each of those cases, bad actors can use such information to send direct messages to people (phone numbers and emails); find their physical location (address); and potentially to cause them harm.

A persistent identifier, on its own, is not the same. Without subpoenaing an internet service provider (ISP) or virtual private network (VPN) provider, a bad actor intending harm could neither locate the person to whom the persistent identifier is assigned nor message that person directly. Persistent identifiers are useful primarily to online platforms in supporting their internal operations (which the NPRM continues to allow) and in serving users targeted advertising.

Moreover, the fact that bills seeking to update COPPA—introduced but never passed by Congress—would have expanded the definition of personal information to include persistent identifiers suggests that the FTC has asserted authority it does not have under the current statute.[28] Under Supreme Court precedent,[29] when considering whether an agency has the authority it claims to issue rules, courts must consider whether Congress has rejected proposals to expand the agency’s jurisdiction in similar ways.

The NPRM also ignores the practical realities of the relationship between parents and children when it comes to devices and internet use. Parental oversight is already built into any type of advertisement (including targeted ads) that children see. Few children can view those advertisements without their parents providing them a device and the internet access to do so. Even fewer children can realistically make their own purchases. Consequently, the NPRM misunderstands targeted advertising in the context of children’s content, which is not based on any knowledge about the users as individuals, but on the browsing and search history of the device they happen to be using.

Children under age 13, in particular, are extremely unlikely to have purchased the devices they use; to have paid for the internet access to use those devices; or to have any disposable income or means to pay for goods and services online. Thus, contrary to the NPRM’s assumptions, the actual “targets” of this advertising—even on websites or online services that host children’s content—are the children’s parents.

This NPRM continues the 2013 amendments’ mistake and will continue to greatly reduce the ability of children’s content to generate revenue through the use of relatively anonymous persistent identifiers. As we describe in the next section, the damage done by the 2013 amendments is readily apparent, and the Commission should take this opportunity to rectify the problem.

III. More Parental Consent, Less Children’s Content

As outlined above, in a world without transaction costs—or, at least, one in which such costs are sufficiently low—verifiable parental consent would not matter, because it would be extremely easy for a bargain to be struck between operators and parents. In the real world, however, transaction costs exist. In fact, despite the FTC’s best efforts under the COPPA Rule, the transaction costs associated with obtaining verifiable parental consent continue to be sufficiently high as to prevent most operators from seeking that consent for persistent identifiers. As we stated in our previous comments, the economics are simple: if content creators lose access to revenue from targeted advertising, there will be less content created from which children can benefit.

FIGURE 1: Supply Curve for Children’s Online Content

The supply curve for children’s online content shifts left as the marginal cost of monetizing it increases. The marginal cost of monetizing such content is driven upward by the higher compliance costs of obtaining verifiable parental consent before serving targeted advertising. This supply shift means that less online content will be created for children.
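A minimal formal sketch of the shift described above (the notation is ours, for exposition only):

```latex
% Q_s  quantity of children's online content supplied
% P    per-unit revenue a creator can earn from that content
% c    per-unit cost of monetizing it, including the compliance cost of
%      obtaining verifiable parental consent before serving targeted ads
\begin{aligned}
Q_s = S(P - c), \qquad S'(\cdot) > 0
\quad\Longrightarrow\quad
\frac{\partial Q_s}{\partial c} = -\,S'(P - c) < 0.
\end{aligned}
```

Holding the revenue available from viewers constant, a higher compliance cost c reduces the quantity of children’s content supplied—the leftward supply shift depicted in Figure 1.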

These results are not speculative at this point. Scholars who have studied the issue have found the YouTube settlement, made pursuant to the 2013 amendments, has resulted in less child-directed online content, due to creators’ inability to monetize that content through targeted advertising. In their working paper “COPPAcalypse? The YouTube Settlement’s Impact on Kids Content,”[30] Garrett Johnson, Tesary Lin, James C. Cooper, & Liang Zhong summarized the issue as follows:

The Children’s Online Privacy Protection Act (COPPA), and its implementing regulations, broadly prohibit operators of online services directed at children under 13 from collecting personal information without providing notice of its data collection and use practices and obtaining verifiable parental consent. Because obtaining verifiable parental consent for free online services is difficult and rarely cost justified, COPPA essentially acts as a de facto ban on the collection of personal information by providers of free child-directed content. In 2013, the FTC amended the COPPA rules to include in the definition of personal information “persistent identifier that can be used to recognize a user over time and across different Web sites or online services,” such as a “customer number held in a cookie . . . or unique device identifier.” This regulatory change meant that, as a practical matter, online operators who provide child-directed content could no longer engage in personalized advertising.

On September 4, 2019, the FTC entered into a consent agreement with YouTube to settle charges that it had violated COPPA. The FTC’s allegations focused on YouTube’s practice of serving personalized advertising on child-directed content at children without obtaining verifiable parental consent. Although YouTube maintains it is a general audience website and users must be at least 13 years old to obtain a Google ID (which makes personalized advertising possible), the FTC complaint alleges that YouTube knew that many of its channels were popular with children under 13, citing YouTube’s own claims to advertisers. The settlement required YouTube to identify child-directed channels and videos and to stop collecting personal information from visitors to these channels. In response, YouTube required channel owners producing [“made-for-kids”] MFK content to designate either their entire channels or specific videos as MFK, beginning on January 1, 2020. YouTube supplemented these self-designations with an automated classifier designed to identify content that was likely directed at children younger than 13. In so doing, YouTube effectively shifted liability under COPPA to the channel owners, who could face up to $42,530 in fines per video if they fail to self-designate and are not detected by YouTube’s classifier.[31]

By requiring verifiable parental consent, the rule change and settlement increased the transaction costs imposed on online platforms that host content created by others. YouTube’s economically rational response was to restrict content creators’ ability to benefit from (considerably more lucrative) personalized advertising. The result was less content created for children, including by driving out less-profitable content creators:

Consistent with a loss in personalized ad revenue, we find that child-directed content creators produce 13% less content and pivot towards producing non-child-directed content. On the demand side, views of child-directed channels fall by 22%. Consistent with the platform’s degraded capacity to match viewers to content, we find that content creation and content views become more concentrated among top child-directed YouTube channels.[32]

This is not the only finding regarding COPPA’s role in reducing the production of content for children. Morgan Reed—president of the App Association, a global trade association for small and medium-sized technology companies—presented extensively at the FTC’s 2019 COPPA Workshop.[33] Reed’s testimony detailed that the transaction costs associated with obtaining verifiable parental consent did little to enhance parental control, but much to reduce the quality and quantity of content directed to children.

It is worth highlighting, in particular, Reed’s repeated use of the words “friction,” “restriction,” and “cost” to describe how COPPA’s institutional features affect the behavior of social-media platforms, parents, and children. While noting that general audience content is “unfettered, meaning that you do not feel restricted by what you can get to, how you do it. It’s easy, it’s low friction. Widely available. I can get it on any platform, in any case, in any context and I can get to it rapidly,” Reed said that COPPA-regulated apps and content are, by contrast, all about:

Friction, restriction, and cost. Every layer of friction you add alters parent behavior significantly. We jokingly refer to it as the over the shoulder factor. If a parent wants access to something and they have to pass it from the back seat to the front seat of the car more than one time, the parent moves on to the next thing. So the more friction you add to an application directed at children the less likely it is that the parent is going to take the steps necessary to get through it because the competition, of course, is as I said, free, unfettered, widely available. Restriction. Kids balk against some of the restrictions. I can’t get to this, I can’t do that. And they say that to the parent. And from the parent’s perspective, fine, I’ll just put in a different age date. They’re participating, they’re parenting but they’re not using the regulatory construction that we all understand.

The COPPA side, expensive, onerous or friction full. We have to find some way around that. Restrictive, fewer features, fewer capabilities, less known or available, and it’s entertaining-ish. …

Is COPPA the barrier? I thought this quote really summed it up. “Seamlessness is expected. But with COPPA, seamlessness is impossible.” And that has been one of the single largest areas of concern. Our folks are looking to provide a COPPA compliant environment. And they’re finding doing VPC is really hard. We want to make it this way, we just walked away. And why do they want to do it? We wanted to create a hub for kids to promote creativity. So these are not folks who are looking to take data and provide interest based advertising. They’re trying to figure out how to do it so they can build an engaging product. Parental consent makes the whole process very complicated. And this is the depressing part. …

We say that VPC is intentional friction. It’s clear from everything we’ve heard in the last two panels that the authors of COPPA, we don’t really want information collected on kids. So friction is intentional. And this is leading to the destruction of general audience applications basically wiping out COPPA apps off the face of the map.[34]

Reed’s use of the word “friction” is particularly enlightening. The economist Mike Munger of Duke University has often described transaction costs as frictions—explaining that, to consumers, all costs are transaction costs.[35] When higher transaction costs are imposed on social-media platforms, end users feel the impact. In this case, the result is that children and parents are left with fewer, and lower-quality, children’s apps and content.

Thus, when the NPRM states that “the Commission [doesn’t] find compelling the argument that the 2013 persistent identifier modification has caused harm by hindering the ability of operators to monetize online content through targeted advertising,”[36] in part because “the 2013 Amendments permit monetization… through providing notice and seeking parental consent for the use of personal information for targeted advertising,”[37] it misses how transaction costs prevent this outcome. The FTC should not ignore the data provided by scholars who have researched the question, nor the direct testimony of app developers.

IV. Lower-Cost Ways to Avoid Harms to Children

Widely available practical and technological means are a lower-cost way to avoid the negative externalities associated with internet use, relative to verifiable-parental-consent laws. As NetChoice put it in the complaint the group filed against Arkansas’ social-media age-verification law, “[p]arents have myriad ways to restrict their children’s access to online services and to keep their children safe on such services.”[38]

NetChoice’s complaint recognized the subjective nature of negative externalities, stating:

Just as people inevitably have different opinions about what books, television shows, and video games are appropriate for minors, people inevitably have different views about whether and to what degree online services are appropriate for minors. While many minors use online services in wholesome and productive ways, online services, like many other technologies, can be abused in ways that may harm minors.[39]

They proceeded to list all the ways that parents can take control and help their children avoid online harms, including with respect to the decisions to buy devices for their children and to set terms for how and when they are permitted to use them.[40] Parents can also choose to use tools offered by cell-phone carriers and broadband providers to block certain apps and sites from their children’s devices, or to control with whom their children can communicate and for how long they can use the devices.[41]

NetChoice also pointed to wireless routers that allow parents to filter and monitor online content;[42] parental controls at the device level;[43] third-party filtering applications;[44] and numerous tools offered by NetChoice members that offer relatively low-cost monitoring and control by parents, or even by teen users acting on their own behalf.[45] Finally, they noted that, in response to market demand,[46] NetChoice members expend significant resources curating content to ensure that it is appropriate.[47]

Similarly, parents can protect their children’s privacy simply by taking control of the devices they allow their children to use. Tech-savvy parents can, if they so choose, install software or use ad-blockers to prevent collection of persistent identifiers.[48] Even less tech-savvy parents can make sure that their children are not subject to ads and tracking simply by monitoring their device usage and ensuring they only use YouTube Kids or other platforms created explicitly for children. In fact, most devices and operating systems now have built-in, easy-to-use controls that enable both monitoring and blocking of children’s access to specific apps and websites.[49]

This litany of less-restrictive means to accomplish the goal of protecting children online bears repeating, because even children have some First Amendment interests in receiving online speech.[50] If a court were to examine the COPPA Rule as a speech regulation that forecloses children’s access to online content, it would be subject to strict scrutiny. This means the rules would need to employ the least-restrictive means available to fulfill the statute’s purpose. Educating parents and children on the available practical and technological means to avoid harms associated with internet use, including the collection of data for targeted advertising, would clearly be a less-restrictive alternative to a de facto ban on targeted advertising.

A less-restrictive COPPA rule could still enhance parental involvement and protect children from predators without significantly impairing the marketplace for children’s online content. Parents already have the ability to review their children’s content-viewing habits on devices they buy for them. A COPPA rule that enhances parental control by requiring verifiable parental consent when children’s personal information—like first and last name, address, phone number, email address, or Social Security number—may be collected or shared obviously makes sense, along with additions like geolocation data. But it is equally obvious that the harms associated with the relatively anonymized collection of persistent identifiers used to support targeted ads can be avoided at lower cost through practical and technological means, without requiring costly verifiable parental consent.

V. Perils of Bringing More Entities Under the COPPA Rule

The costs of the COPPA Rule would be further exacerbated by the NPRM’s proposal to modify the criteria for determining whether a site or service is directed toward children.[51] These proposed changes, particularly the reliance on third-party services and comparisons with “similar websites or online services,” raise significant concerns about both their practical implementation and potential unintended consequences. The latter could include further losses of online content for both children and adults, as content creators drawn into COPPA’s orbit lose access to revenue from targeted advertising.

The FTC’s current practice employs a multi-factor test to ascertain whether a site or service is directed at children under 13. This comprehensive approach considers various elements, including subject matter, visual and audio content, and empirical evidence regarding audience composition.[52] The proposed amendments aim to expand this test by introducing such factors as marketing materials, representations to third parties and, notably, reviews by users or third parties and comparisons with similar websites or services.[53]

The inclusion of third-party reviews and comparisons with similar services as factors in determining a site’s target audience introduces a level of ambiguity and unreliability that would be counterproductive to COPPA’s goals. Without clear standards by which to evaluate reviewers’ competence or authority, relying on third-party reviews would leave operators without a solid foundation upon which to assess compliance. This ambiguity could lead to overcompliance. In particular, online platforms that carry third-party content may err on the side of caution in order to align with the spirit of the rule. This threatens to stifle innovation and free expression by restricting creators’ ability to monetize content that has any chance of being considered “directed to children.” Moreover, to avoid this loss of revenue, content creators could shift their focus exclusively to content clearly aimed only at adults, rather than content that could be interesting to adults and children alike.

Similarly, the proposal to compare operators with “similar websites or online services” is fraught with challenges. The lack of guidance on how to evaluate similarity or to determine which service sets the standard for compliance would increase burdens on operators, with little evidence of tangible realized benefits. It’s also unclear who would make these determinations and how disputes would be resolved, leading to further compliance costs and potential litigation. Moreover, operators may be left in a position where it is impractical to accurately assess the audience of similar services, thereby further complicating compliance efforts.

Given these considerations, the FTC should not include reliance on third-party services or comparisons with similar websites or online services in its criteria for determining whether content is directed at children under 13. These approaches introduce a level of uncertainty and unreliability that could lead to overcompliance, increased costs, and unintended negative impacts on online content and services, including further restrictions on creators of content that interests both adults and children. Instead, the FTC should focus on providing clear, direct guidelines that allow operators to assess their compliance with COPPA confidently, without the need to rely on potentially biased or manipulable third-party assessments. This approach will better serve the FTC’s goal of protecting children’s online privacy, while ensuring a healthy, innovative online ecosystem.

Conclusion

The FTC should reconsider the inclusion of standalone persistent identifiers in the definition of “personal information.” The NPRM continues to enshrine the primary mistake of the 2013 amendments. This change was inconsistent with the purposes and text of the COPPA statute. It already has reduced, and will continue to reduce, the availability of children’s online content.

[1] ICLE has received financial support from numerous companies, organizations, and individuals, including firms with interests both supportive of and in opposition to the ideas expressed in this and other ICLE-supported works. Unless otherwise noted, all ICLE support is in the form of unrestricted, general support. The ideas expressed here are the authors’ own and do not necessarily reflect the views of ICLE’s advisors, affiliates, or supporters.

[2] Much of these comments are adapted from ICLE’s 2019 COPPA Rule Review Comments, available at https://laweconcenter.org/wp-content/uploads/2019/12/COPPA-Comments-2019.pdf; Ben Sperry, A Law & Economics Approach to Social-Media Regulation, CPI TechREG Chronicle (Feb. 29, 2022), https://laweconcenter.org/resources/a-law-economics-approach-to-social-media-regulation; Ben Sperry, A Coasean Analysis of Online Age-Verification and Parental-Consent Regimes (ICLE Issue Brief, Nov. 9, 2023), available at https://laweconcenter.org/wp-content/uploads/2023/11/Issue-Brief-Transaction-Costs-of-Protecting-Children-Under-the-First-Amendment-.pdf.

[3] 144 Cong. Rec. 11657 (1998) (Statement of Sen. Richard Bryan), available at https://www.congress.gov/crec/1998/10/07/CREC-1998-10-07.pdf#page=303.

[4] 15 U.S.C. § 6502(b)(1)(A).

[5] See, e.g., Jean-Charles Rochet & Jean Tirole, Platform Competition in Two-Sided Markets, 1 J. Euro. Econ. Ass’n 990 (2003).

[6] David S. Evans, Multisided Platforms in Antitrust Practice, at 3 (Oct. 17, 2023), forthcoming, Michael Noel, Ed., Elgar Encyclopedia on the Economics of Competition and Regulation, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4606511.

[7] For instance, many nightclubs hold “ladies’ night” events in which female patrons receive free admission or discounted drinks in order to attract more men, who pay full fare for both.

[8] See, e.g., Ben Sperry, Congress Should Focus on Protecting Teens from Real Harms, Not Targeted Ads, The Hill (Feb. 16, 2023), https://thehill.com/opinion/congress-blog/3862238-congress-should-focus-on-protecting-teens-from-real-harms-not-targeted-ads.

[9] An externality is a side effect of an activity that is not reflected in the cost of that activity—basically, what occurs when we do something whose consequences affect other people. A negative externality occurs when a third party does not like the effects of an action.

[10] See Ronald H. Coase, The Problem of Social Cost, 3 J. L. & Econ. 1 (1960).

[11] See Steven G. Medema, The Coase Theorem at Sixty, 58 J. Econ. Lit. 1045 (2020).

[12] See Coase, supra note 8, at 8-10.

[13] See id. at 34 (“When an economist is comparing alternative social arrangements, the proper procedure is to compare the total social product yielded by these different arrangements.”).

[14] See Part III below.

[15] See Felix T. Wu, Collateral Censorship and the Limits of Intermediary Liability, 87 Notre Dame L. Rev. 293, 295-96 (2011); Geoffrey A. Manne, Ben Sperry, & Kristian Stout, Who Moderates the Moderators: A Law & Economics Approach to Holding Online Platforms Accountable Without Destroying the Internet, 49 Rutgers Computer & Tech. L.J. 26, 39 (2022); Ben Sperry, The Law & Economics of Children’s Online Safety: The First Amendment and Online Intermediary Liability, Truth on the Market (May 12, 2023), https://truthonthemarket.com/2023/05/12/the-law-economics-of-childrens-online-safety-the-firstamendment-and-online-intermediary-liability.

[16] See Geoffrey A. Manne, Kristian Stout, & Ben Sperry, Twitter v. Taamneh and the Law & Economics of Intermediary Liability, Truth on the Market (Mar. 8, 2023), https://truthonthemarket.com/2023/03/08/twitter-v-taamneh-and-the-law-economics-of-intermediary-liability; Ben Sperry, Right to Anonymous Speech, Part 2: A Law & Economics Approach, Truth on the Market (Sep. 6, 2023), https://truthonthemarket.com/2023/09/06/right-to-anonymous-speech-part-2-a-law-economics-approach.

[17] See Statement of Commissioner Alvaro M. Bedoya On the Issuance of the Notice of Proposed Rulemaking to Update the Children’s Online Privacy Protection Rule (COPPA Rule), at 3-4 (Dec. 20, 2023), available at https://www.ftc.gov/system/files/ftc_gov/pdf/BedoyaStatementonCOPPARuleNPRMFINAL12.20.23.pdf (listing examples of these types of enforcement actions).

[18] 15 U.S.C. § 6502(b)(1)(A)(ii).

[19] 15 U.S.C. § 6501(8).

[20] 15 U.S.C. § 6501(8)(A)-(E).

[21] 15 U.S.C. § 6501(8)(F).

[22] 15 U.S.C. § 6501(8)(G).

[23] 16 CFR § 312.2 (Personal information)(7).

[24] See Chevron U.S.A. Inc. v. Natural Resources Defense Council, Inc., 467 U. S. 837, 843 n.9 (1984) (“If a court, employing traditional tools of statutory construction, ascertains that Congress had an intention on the precise question at issue, that intention is the law and must be given effect.”).

[25] What is EJUSDEM GENERIS?, The Law Dictionary: Featuring Black’s Law Dictionary Free Online Legal Dictionary 2nd Ed. (last accessed Dec. 9, 2019), https://thelawdictionary.org/ejusdem-generis.

[26] NPRM at 2043.

[27] Id.

[28] See, e.g., Children and Teens’ Online Privacy Protection Act, S. 1418, § 2(a)(3), 118th Cong. (2024).

[29] See FDA v. Brown & Williamson, 529 U.S. 120, 148-50 (2000).

[30] Garrett A. Johnson, Tesary Lin, James C. Cooper, & Liang Zhong, COPPAcalypse? The YouTube Settlement’s Impact on Kids Content, SSRN (Apr. 26, 2023), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4430334.

[31] Id. at 6-7 (emphasis added).

[32] Id. at 1.

[33] The Future of the COPPA Rule: An FTC Workshop Part 2, Federal Trade Commission (Oct. 7, 2019), available at https://www.ftc.gov/system/files/documents/public_events/1535372/transcript_of_coppa_workshop_part_2_1.pdf.

[34] Id. at 6 (emphasis added).

[35] See Michael Munger, To Consumers, All Costs are Transaction Costs, Am. Inst. Econ. Rsch. (June 13, 2023), https://www.aier.org/article/to-consumers-all-costs-are-transaction-costs.

[36] NPRM at 2043.

[37] Id. at 2034, n. 121.

[38] See NetChoice Complaint, NetChoice LLC v. Griffin, NO. 5:23-CV-05105, 2023 U.S. Dist. LEXIS 154571 (W.D. Ark. 2023), available at https://netchoice.org/wp-content/uploads/2023/06/NetChoice-v-Griffin_-Complaint_2023-06-29.pdf.

[39] Id. at para. 13.

[40] See id. at para. 14.

[41] See id.

[42] See id. at para. 15.

[43] See id. at para. 16.

[44] See id.

[45] See id. at paras. 17, 19-21.

[46] Sperry, supra note 8.

[47] See NetChoice Complaint, supra note 36, at para. 18.

[48] See, e.g., Mary James & Catherine McNally, The Best Ad Blockers 2024, all about cookies (last updated Feb. 29, 2024), https://allaboutcookies.org/best-ad-blockers.

[49] See, e.g., Parental Controls for Apple, Android, and Other Devices, internet matters (last accessed Mar. 7, 2024), https://www.internetmatters.org/parental-controls/smartphones-and-other-devices.

[50] See, e.g., Brown v. Ent. Merchants Ass’n, 564 U.S. 786, 794-95 (2011); NetChoice, LLC v. Griffin, 2023 WL 5660155, at *17 (W.D. Ark. Aug. 31, 2023) (finding Arkansas’s Act 689 “obviously burdens minors’ First Amendment rights” by “bar[ring] minors from opening accounts on a variety of social media platforms.”).

[51] See NPRM at 2047.

[52] See id. at 2046-47.

[53] Id. at 2047 (“Additionally, the Commission believes that other factors can help elucidate the intended or actual audience of a site or service, including user or third-party reviews and the age of users on similar websites or services.”).

Data Security & Privacy

A Law & Economics Approach to Social-Media Regulation

Popular Media

The thesis of this essay is that policymakers must consider what the nature of social-media companies as multisided platforms means for regulation. The balance struck by social-media companies acting in response to the incentives they face in the market could be upset by regulation that favors the interests of some users over others. Promoting the use of technological and practical means by which users themselves can avoid perceived harms would preserve the benefits of social media to society without the difficult tradeoffs of regulation. Part I will introduce the economics of multisided platforms like social media, and how those economics affect platforms’ incentives. Social-media platforms, acting within the market process, are usually best positioned to balance the interests of their users, but there could be occasions where the market process fails due to negative externalities. Part II will consider these situations where there are negative externalities due to social media and introduce the least-cost-avoider principle. Usually, social-media users are the least-cost avoiders of harms, but the platforms themselves are sometimes better placed to monitor and control harms. This involves a balance, as social-media regulation raises the threat of collateral censorship or of otherwise reducing opportunities to speak and receive speech. Part III will then apply the insights from Parts I and II to the areas of privacy, children’s online safety, and speech regulation.

I. Introduction

Policymakers at both the state and federal levels have been actively engaged in recent years with proposals to regulate social media, whether the subject is privacy, children’s online safety, or concerns about censorship, misinformation, and hate speech.[1] While there may not be consensus about precisely why social media is bad, there is broad agreement that the major online platforms are to blame for at least some harms to society. It is also generally recognized, though often not emphasized, that social media brings great value to its users. In other words, there are costs and benefits, and policymakers should be cautious when introducing new laws that would upset the balance that social-media companies must strike in order to serve their users well.

This essay will propose a general approach, informed by the law & economics tradition, to assess when and how social media should be regulated. Part I will introduce the economics of multisided platforms, and how those economics affect social-media platforms’ incentives. The platforms themselves, acting within the market process, are usually best-positioned to balance the interests of their users, but there could be occasions where the market process fails due to negative externalities. Part II will consider such externalities and introduce the least-cost-avoider principle. Usually, social-media users are the least-cost avoiders of harms, but platforms themselves are sometimes better placed to monitor and control harms. This requires a balance, as social-media regulation raises the threat of collateral censorship or of otherwise reducing opportunities to speak and receive speech. Part III will apply the insights from Parts I and II to the areas of privacy, children’s online safety, and speech regulation.

The thesis of this essay is that policymakers must consider what social-media companies’ status as multisided platforms means for regulation. The balance struck by social-media companies acting in response to the market incentives they face could be upset by regulation that favors the interests of some users over others. Promoting the use of technological and practical means to avoid perceived harms would allow users to preserve the benefits of social media without the difficult tradeoffs of regulation.

II. The Economics of Social-Media Platforms

Mutually beneficial trade is the bedrock of the market process. Entrepreneurs—including those that act through formal economic institutions like business corporations—seek to discover the best ways to serve consumers. Various types of entities help connect those who wish to buy products or services with those who are trying to sell them. Physical marketplaces—places set up to facilitate interactions between buyers and sellers—are common around the world. If those marketplaces fail to serve the interests of those who use them, others will likely arise.

Social-media companies are a virtual example of what economists call multi-sided markets or platforms.[2] Such platforms derive their name from the fact that they serve at least two different types of customers and facilitate their interaction. Multi-sided platforms have “indirect network effects,” described by one economist as a situation where “participants on one side value being able to interact with participants on the other side… lead[ing] to interdependent demand.”[3] In some situations, a platform may determine it can only raise revenue from one side of the platform if demand on the other side of the platform is high. In such cases, the platform may choose to offer one side free access to the platform to boost such demand, which is subsidized by participants on the other side of the platform.[4] This creates a positive feedback loop in which more participants on one side of the platform leads to more participants on the other.

In this sense, social-media companies are much like newspapers or television in that, by solving a transaction cost problem,[5] these platforms bring together potential buyers and sellers by providing content to one side and access to consumers on the other side. Recognizing that their value lies in reaching users, these platforms sell advertising and offer access to content for a lower price, often at the price of zero (or free). In other words, advertisers subsidize the access to content for platform users.

Therefore, most social-media companies are free for users. Revenue is primarily collected from the other side of the platform—i.e., from advertisers. In effect, social-media companies are attention platforms: They supply content to users, while collecting data for targeted advertisements for businesses who seek access to those users. To be successful, social-media companies must keep enough (and the right type of) users engaged so as to maintain demand for advertising. Social-media companies must curate content that users desire in order to persuade them to spend time on the platform.

But unlike newspapers or television, social-media companies primarily rely on their users to produce content rather than creating their own. Thus, they must also consider how to attract and maintain high-demand content creators, as well as how to match user-generated content to the diverse interests of other users. If they fail to serve the interests of high-demand content creators, those users may leave the platform, thus reducing time spent on the platform by all users, which thereby reduces the value of advertising. Similarly, if they fail to match content to user interests, those users will be less engaged on the platform, reducing its value to advertisers.

Moreover, this means that social-media companies need to balance the interests of advertisers and other users. Advertisers may desire more data to be collected for targeting, but users may desire less data collection. Similarly, advertisers may desire more ads, while users may prefer fewer ads. Advertisers may prefer content that keeps users engaged on the platform, even if it is harmful for society, whether because it is false, hateful, or leads to mental-health issues for minors. On the other hand, brand-conscious advertisers may not want to run ads next to content with which they disagree. Moreover, users may not want to see certain content. Social-media companies need to strike a balance that optimizes their value, recognizing that losing participants on either side would harm the other.

Usually, social-media companies acting within the market process are going to be best-positioned to make decisions on behalf of their users. Thus, they may create community rules that restrict content that would, on net, reduce user engagement.[6] This could include limitations on hate speech and misinformation. On the other hand, if they go too far in restricting content that users consider desirable, that could reduce user engagement and thus value to advertisers. Social-media companies therefore compete on moderation policies, trying to strike the appropriate balance to optimize platform value. A similar principle applies when it comes to privacy policies and protections for minors: social-media companies may choose to compete by providing tools to help users avoid what they perceive as harms, while keeping users on the platform and maintaining value for advertisers.

There may, however, be scenarios where social media produces negative externalities[7] that are harmful to society. A market failure could result, for instance, if platforms have too great of an incentive to allow misinformation or hate speech that keeps users engaged, or to collect too much (or the wrong types of) information for targeted advertising, or to offer up content that is harmful for minors and keeps them hooked to using the platform.

In sum, social-media companies are multi-sided platforms that facilitate interactions between advertisers and users by curating user-generated content that drives attention to their platforms. To optimize the platform’s value, a social-media company must keep users engaged. This will often include privacy policies, content-moderation standards, and special protections for minors. On the other hand, incentives could become misaligned and lead to situations where social-media usage leads to negative externalities due to insufficient protection of privacy, too much hate speech or misinformation, or harms to minors.

III. Negative Social-Media Externalities and the Least-Cost-Avoider Principle

In situations where there are negative externalities from social-media usage, there may be a case for regulation. Any case for regulation must, however, recognize the presence of transaction costs, and consider how platforms and users may respond to changes in those costs. To get regulation right, the burden of avoiding a negative externality should fall on the least-cost avoider.

The Coase Theorem, derived from the work of Nobel-winning economist Ronald Coase[8] and elaborated on in the subsequent literature,[9] helps to explain the issue at hand:

  1. The problem of externalities is bilateral;
  2. In the absence of transaction costs, resources will be allocated efficiently, as the parties bargain to solve the externality problem;
  3. In the presence of transaction costs, the initial allocation of rights does matter; and
  4. In such cases, the burden of avoiding the externality’s harm should be placed on the least-cost avoider, while taking into consideration the total social costs of the institutional framework.

In one of Coase’s examples, the noise from a confectioner using his machinery is a potential cost to the doctor next door, who consequently can’t use his office to conduct certain testing. Simultaneously, the doctor moving his office next door is a potential cost to the confectioner’s ability to use his equipment. In a world of well-defined property rights and low transaction costs, the initial allocation of a right would not matter, because the parties could bargain to overcome the harm in a beneficial manner—i.e., the confectioner could pay the doctor for lost income or to set up sound-proof walls, or the doctor could pay the confectioner to reduce the sound of his machines.[10] But since there are transaction costs that prevent this sort of bargain, it is important whether the initial right is allocated to the doctor or the confectioner. To maximize societal welfare, the cost should be placed on the entity that can avoid the harm at the lowest cost.[11]

Here, social-media companies create incredible value for their users, but they also arguably impose negative externalities in the form of privacy harms, misinformation and hate speech, and harms particular to minors. In the absence of transaction costs, the parties could simply bargain away the harms associated with social-media usage. But since there are transaction costs, it matters whether the burden to avoid harms is placed on users or on the social-media companies. If the burden is wrongly placed, much of the societal benefit of social media may be lost.

For instance, imposing liability on social-media companies risks collateral censorship, which occurs when platforms decide that the liability risk is too large and opt to over-moderate, to decline to host user-generated content, or to restrict access to such content, whether by charging higher prices or by excluding those who could be harmed (like minors).[12] Wrongly placing the burden to avoid harms on social-media platforms would thus reduce societal welfare.

On the other hand, there may be situations where social-media companies are the least-cost avoiders. They may, for instance, be best-placed to monitor and control harms associated with social-media usage when it is difficult or impossible to hold those using their platforms accountable for the harms they cause.[13] If a social-media company allows anonymous or pseudonymous use, with no realistic possibility of tracking down users who cause harms, illegal conduct could go undeterred. In such cases, placing the burden on social-media users could lead to social media imposing uncompensated harms on society.

Thus, it is important to determine whether the social-media companies or their users are the least-cost avoiders. Placing the burden on the wrong party or parties would harm societal welfare, either by reducing the value of social media or by creating more uncompensated negative externalities.

IV. Applying the Lessons of Law & Economics to Social-Media Regulation

Below, I will examine the areas of privacy, children’s online safety, and content moderation, and consider both the social-media companies’ incentives and whether the platforms or their users are the least-cost avoiders.

A. Privacy

As discussed above, social-media companies are multi-sided platforms that provide content to attract attention from users, while selling information collected from those users for targeted advertising. This leads to the possibility that social-media companies will collect too much information in order to increase revenue from targeted advertising. In other words, as the argument goes, the interests of the paying side of the platform will outweigh the interests of social-media users, thereby imposing a negative externality on them.

Of course, this assumes that the collection and use of information for targeted advertisements is considered a negative externality by social-media users. While this may be true for some, for others, it may be something they care little about or even value, because targeted advertisements are more relevant to them. Moreover, many consumers appear to prefer free content with advertising to paying a subscription fee.[14]

Negative externalities do seem more likely to arise, however, when users don't know what data is being collected or how it is being used. Moreover, it is a clear harm if social-media companies misrepresent what they collect and how they use it. Thus, it is generally unobjectionable—at least, in theory—for the Federal Trade Commission or another enforcer to hold social-media companies accountable for their privacy policies.[15]

On the other hand, privacy regulation that requires specific disclosures or verifiable consent before collecting or using data would increase the cost of targeted advertising, thus reducing its value to advertisers and thereby further reducing the platform's incentive to curate valuable content for users. For instance, in response to the FTC's consent agreement with YouTube charging that it violated the Children's Online Privacy Protection Act (COPPA), YouTube required channel owners producing children's content to designate their channels as such, and deployed automated processes designed to identify such content.[16] This reduced content creators' ability to benefit from targeted advertising if their content was directed to children. The result was both less content created for children and poorer matching:

Consistent with a loss in personalized ad revenue, we find that child-directed content creators produce 13% less content and pivot towards producing non-child-directed content. On the demand side, views of child-directed channels fall by 22%. Consistent with the platform’s degraded capacity to match viewers to content, we find that content creation and content views become more concentrated among top child-directed YouTube channels.[17]

Alternatively, a social-media company could charge users more directly, as it can no longer use advertising revenue to subsidize their access. This is, in fact, exactly what has happened in Europe, where Meta now offers an ad-free version of Facebook and Instagram for $14 a month.[18]

In other words, placing the burden on social-media companies to avoid the perceived harms from the collection and use of information for targeted advertising could lead to less free content available to consumers. This is a significant tradeoff, and not one that most social-media consumers appear willing to make voluntarily.

On the other hand, it appears that social-media users could avoid much of the harm from the collection and use of their data by using available tools, including those provided by the social-media companies themselves. For instance, most of the major social-media companies offer two-factor authentication, privacy-checkup tools, and the ability to browse the service privately, limit one's audience, and download or delete one's data.[19] Social-media users could also use virtual private networks (VPNs) to protect their data privacy while online.[20] Finally, users could simply decline to post private information, or could limit their interactions with businesses (through likes or clicks on ads), if they want to reduce the amount of information available for targeted advertising.

B. Children’s Online Safety

Some have argued that social-media companies impose negative externalities on minors by serving them addictive content and/or content that results in mental-health harms.[21] They argue that social-media companies benefit from these harms because they are able to then sell data from minors to advertisers.

While it is true that social-media companies want to attract users through engaging content and interfaces, and that they make money through targeted advertising, it is highly unlikely that they are making much money from minors themselves. Very few social-media users under 18 have considerable disposable income or access to payment-card options that would make them valuable to advertisers. Thus, regulations that raise the costs to social-media companies of serving minors, whether through a regulatory duty of care[22] or through age verification and verifiable parental consent,[23] could lead social-media companies to invest more in excluding minors than in creating vibrant and safe online spaces for them.

Federal courts considering age-verification laws have noted that there are costs to companies, as well as to users, in obtaining this information. In Free Speech Coalition Inc. v. Colmenero,[24] the U.S. District Court for the Western District of Texas considered a law that required age verification before viewing online pornography, and found that the costs of obtaining age verification were high, citing the complaint's reference to "several commercial verification services, showing that they cost, at minimum, $40,000.00 per 100,000 verifications."[25] But just as importantly, the transaction costs in this example also include the subjective costs borne by those who actually go through with verifying their age to access pornography. As the court noted, "the law interferes with the Adult Video Companies' ability to conduct business, and risks deterring adults from visiting the websites."[26] Similarly, in NetChoice v. Griffin,[27] the U.S. District Court for the Western District of Arkansas found that a challenged law's age-verification requirements were "costly" and would put social-media companies covered by the law in the position of having to take drastic action: implement age verification, restrict access for Arkansans, or face the possibility of civil and criminal enforcement.[28]
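
A rough, purely illustrative calculation shows why courts treat these verification costs as substantial. The $40,000-per-100,000 figure comes from the complaint quoted above; the assumed user base of 10 million is a hypothetical chosen only for scale:

% Back-of-the-envelope verification-cost arithmetic (the user count is assumed)
\[
  \frac{\$40{,}000}{100{,}000\ \text{verifications}} = \$0.40\ \text{per verification},
  \qquad
  \$0.40 \times 10{,}000{,}000\ \text{users} \approx \$4{,}000{,}000.
\]
% This figure excludes the subjective costs to users who decline to verify, as
% well as the engineering and compliance costs of operating the system.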

On the other hand, social-media companies—responding to demand from minor users and their parents—have also exerted considerable effort to reduce minors' exposure to harmful content. For instance, they have invested in content-moderation policies and their enforcement, including through algorithms, automated tools, and human review, to remove, restrict, or add warnings to content inappropriate for minors.[29] On top of that, social-media companies offer tools to help minors and their parents avoid many of the harms associated with social-media usage.[30] There are also options available at the ISP, router, device, and browser level to protect minors while online. As the court put it in Griffin, "parents may rightly decide to regulate their children's use of social media—including restricting the amount of time they spend on it, the content they may access, or even those they chat with. And many tools exist to help parents with this."[31]

In other words, parents and minors working together can use technological and practical means to make marginal decisions about social-media usage at a lower cost than a regulatory environment that would likely lead to social-media companies restricting use by minors altogether.[32]

C. Content Moderation

There have been warring allegations about social-media companies' incentives when it comes to content moderation. Some claim that salacious misinformation and hate speech drive user engagement, making the platforms more valuable to advertisers; others argue that social-media companies engage in too much "censorship" by removing users and speech in a viewpoint-discriminatory way.[33] The U.S. Supreme Court is currently reviewing laws from Florida and Texas that would force social-media companies to carry speech.[34]

Both views fail to take into account that social-media companies are largely just responding to the incentives they face as multi-sided platforms. Social-media companies are solving a Coasean speech problem, wherein some users don't want to be subjected to certain speech from other users. As explained above, social-media companies must balance these interests by setting and enforcing community rules for speech, which may include rules against misinformation and hate speech. On the other hand, social-media companies can't go too far in restricting high-demand speech, or they risk losing users. Thus, they must strike a delicate balance.

Laws that restrict the "editorial discretion" of social-media companies may fail First Amendment scrutiny,[35] but they also reduce the companies' ability to give their customers a valuable product in light of user (and advertiser) demand. For instance, the changes in the moderation standards of X (formerly Twitter) in the year since Elon Musk's purchase have led many users and advertisers to exit the platform due to a perceived increase in hate speech and misinformation.[36]

Social-media companies need to be free to moderate as they see fit, free from government interference. Such interference includes not just the forced carriage of speech, but also government efforts to engage in censorship-by-proxy, as has been alleged in Murthy v. Missouri.[37] From the perspective of the First Amendment, government intervention that coerces or significantly encourages the removal of disfavored speech, even in the name of fighting misinformation, is just as harmful as the forced carriage of speech.[38] But more importantly for our purposes here, such government actions reduce platforms' value by upsetting the balance that social-media companies strike with respect to their users' speech interests.

Users can avoid being exposed to unwanted speech by averting their digital eyes from it—i.e., by refusing to interact with it and thereby training social-media companies' algorithms to serve speech they prefer. They can also take their business elsewhere by joining a social-media network with speech-moderation policies more to their liking. Voting with one's digital feet (and eyes) is a much lower-cost alternative than either mandated carriage of speech or government censorship.

V. Conclusion

Social-media companies are multi-sided platforms that must curate compelling content while restricting harms to users in order to optimize their value to the advertisers who pay for access. This doesn't mean they always get it right. But they are generally best-positioned to make those decisions, subject to the market process. Sometimes, there may be negative externalities that aren't fully internalized. But as Coase taught us, that is only the beginning of the analysis. If social-media users can avoid harms at lower cost than social-media companies, then regulation should not place the burden on the companies. There are tradeoffs in social-media regulation, including the possibility that it will result in a less-valuable social-media experience for users.

[1] See, e.g., Mary Clare Jalonick, Congress eyes new rules for tech, social media: What’s under consideration, Associated Press (May 8, 2023), https://www.wvtm13.com/article/whats-under-consideration-congress-eyes-new-rules-for-tech-social-media/43821405#; Khara Boender, Jordan Rodell, & Alex Spyropoulos, The State of Affairs: What Happened in Tech Policy During 2023 State Legislative Sessions?, Project Disco (Jul. 25, 2023), https://www.project-disco.org/competition/the-state-of-affairs-statetech-policy-in-2023 (noting laws passed and proposed addressing consumer data privacy, content moderation, and children’s online safety at the state level).

[2] See, e.g., Jean-Charles Rochet & Jean Tirole, Platform Competition in Two-Sided Markets, 1 J. Euro. Econ. Ass’n 990 (2003).

[3] David S. Evans, Multisided Platforms in Antitrust Practice, at 3 (Oct. 17, 2023), forthcoming, Michael Noel, Ed., Elgar Encyclopedia on the Economics of Competition and Regulation, available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4606511.

[4] For instance, many nightclubs hold “Ladies Night” where ladies get in free in order to attract more men who pay for entrance.

[5] Transaction costs are the additional costs borne in the process of buying or selling, separate and apart from the price of the good or service itself — i.e. the costs of all actions involved in an economic transaction. Where transaction costs are present and sufficiently large, they may prevent otherwise beneficial agreements from being concluded.

[6] See David S. Evans, Governing Bad Behavior by Users of Multi-Sided Platforms, 27 Berkeley Tech. L. J. 1201 (2012); Kate Klonick, The New Governors: The People, Rules, and Processes Governing Online Speech, 131 HARV. L. REV. 1598 (2018).

[7] An externality is a side effect of an activity that is not reflected in the cost of that activity — basically, what occurs when we do something whose consequences affect other people. A negative externality occurs when those effects impose costs on a third party.

[8] See R.H. Coase, The Problem of Social Cost, 3 J. L. & Econ. 1 (1960).

[9] See Steven G. Medema, The Coase Theorem at Sixty, 58 J. Econ. Lit. 1045 (2020).

[10] See Coase, supra note 8, at 8-10.

[11] See id. at 34 (“When an economist is comparing alternative social arrangements, the proper procedure is to compare the total social product yielded by these different arrangements.”).

[12] See Felix T. Wu, Collateral Censorship and the Limits of Intermediary Liability, 87 Notre Dame L. Rev. 293, 295-96 (2011); Geoffrey A. Manne, Ben Sperry & Kristian Stout, Who Moderates the Moderators: A Law & Economics Approach to Holding Online Platforms Accountable Without Destroying the Internet, 49 Rutgers Computer & Tech. L J. 26, 39 (2022); Ben Sperry, The Law & Economics of Children’s Online Safety: The First Amendment and Online Intermediary Liability, Truth on the Market (May 12 2023), https://truthonthemarket.com/2023/05/12/the-law-economics-of-childrens-online-safety-the-firstamendment-and-online-intermediary-liability.

[13] See Geoffrey A. Manne, Kristian Stout & Ben Sperry, Twitter v. Taamneh and the Law & Economics of Intermediary Liability, Truth on the Market (Mar. 8, 2023), https://truthonthemarket.com/2023/03/08/twitter-v-taamneh-and-the-law-economics-of-intermediary-liability; Ben Sperry, Right to Anonymous Speech, Part 2: A Law & Economics Approach, Truth on the Market (Sep. 6, 2023), https://truthonthemarket.com/2023/09/06/right-to-anonymous-speech-part-2-a-law-economics-approach.

[14] See, e.g., Matt Kaplan, What Do U.S. Consumers Think About Mobile Advertising?, InMobi (Dec. 15, 2021), https://www.inmobi.com/blog/what-us-consumers-think-about-mobile-advertising (55% of consumers agree or strongly agree that they prefer mobile apps with ads rather than paying to download apps); John Glenday, 65% of US TV viewers will tolerate ads for free content, according to report, The Drum (Apr. 22, 2022), https://www.thedrum.com/news/2022/04/22/65-us-tv-viewers-will-tolerate-ads-free-content-according-report (noting that a report from TiVO found 65% of consumers prefer free TV with ads to paying without ads). Consumers often prefer lower subscription fees with ads to higher subscription fees without ads as well. See, e.g., Toni Fitzgerald, Netflix Gets it Right: Study Confirms People Prefer Paying Less With Ads, Forbes (Apr. 25, 2023), https://www.forbes.com/sites/tonifitzgerald/2023/04/25/netflix-gets-it-right-study-confirms-more-people-prefer-paying-less-with-ads/.

[15] See 15 U.S.C. § 45.

[16] See Garrett A. Johnson, Tesary Lin, James C. Cooper, & Liang Zhong, COPPAcalypse? The YouTube Settlement’s Impact on Kids Content, at 6-7, SSRN (Apr. 26, 2023), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4430334.

[17] Id. at 1.

[18] See Sam Schechner, Meta Plans to Charge $14 a Month for Ad-Free Instagram or Facebook, Wall Street J. (Oct. 3, 2023), https://www.wsj.com/tech/meta-floats-charging-14-a-month-for-ad-free-instagram-or-facebook-5dbaf4d5.

[19] See Christopher Lin, Tools to Protect Your Privacy on Social Media, NetChoice (Nov. 16, 2023), https://netchoice.org/tools-to-protect-your-privacy-on-social-media/.

[20] See, e.g., Chris Stobing, The Best VPN Services for 2024, PC Mag (Jan. 4, 2024), https://www.pcmag.com/picks/the-best-vpn-services.

[21] See, e.g., Jonathan Stempel, Diane Bartz & Nate Raymond, Meta’s Instagram linked to depression, anxiety, insomnia in kids – US state’s lawsuit, Reuters (Oct. 25, 2023), https://www.reuters.com/legal/dozens-us-states-sue-meta-platforms-harming-mental-health-young-people-2023-10-24/ (describing complaint from 33 states alleging Meta “knowingly induced young children and teenagers into addictive and compulsive social media use”).

[22] See, e.g., California Age-Appropriate Design Code Act, AB 2273 (2022), https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202120220AB2273AADC; Kids Online Safety Act, S. 1409, 118th Cong. (2023), as amended and posted by the Senate Committee on Commerce, Science, and Transportation on July 27, 2023, available at https://www.congress.gov/bill/118th-congress/senate-bill/1409 (last accessed Dec. 19, 2023).

[23] See, e.g., Arkansas Act 689 of 2023, the “Social Media Safety Act.”

[24] Free Speech Coal. Inc. v. Colmenero, No. 1:23-CV-917-DAE, 2023 U.S. Dist. LEXIS 154065 (W.D. Tex., Aug. 31, 2023), available at https://storage.courtlistener.com/recap/gov.uscourts.txwd.1172751222/gov.uscourts.txwd.1172751222.36.0.pdf.

[25] Id. at 10.

[26] Id.

[27] NetChoice, LLC v. Griffin, Case No. 5:23-CV-05105 (W.D. Ark., Aug. 31, 2023), available at https://netchoice.org/wpcontent/uploads/2023/08/GRIFFIN-NETCHOICE-GRANTED.pdf.

[28] See id. at 23.

[29] See id. at 18-19.

[30] See id. at 19-20.

[31] Id. at 15.

[32] For more, see Ben Sperry, A Coasean Analysis of Online Age-Verification and Parental-Consent Regimes, at 23 (ICLE Issue Brief, Nov. 9, 2023), https://laweconcenter.org/wp-content/uploads/2023/11/Issue-Brief-Transaction-Costs-of-Protecting-Children-Under-the-First-Amendment-.pdf.

[33] For an example of a hearing where Congressional Democrats argue the former and Congressional Republicans argue the latter, see Preserving Free Speech and Reining in Big Tech Censorship, Libr. of Cong. (Mar. 28, 2023), https://www.congress.gov/event/118th-congress/house-event/115561.

[34] See Moody v. NetChoice, No. 22-277 (challenging Florida’s SB 7072); NetChoice v. Paxton, No. 22-555 (challenging Texas’s HB 20).

[35] See, e.g., Brief of International Center for Law & Economics as Amicus Curiae in Favor of Petitioners in 22-555 and Respondents in 22-277, Moody v. NetChoice, NetChoice v. Paxton, In the Supreme Court of the United States (Dec. 7, 2023), available at https://www.supremecourt.gov/DocketPDF/22/22-277/292986/20231211144416746_Nos.%2022-277%20and%2022-555_Brief_corrected.pdf.

[36] See e.g. Ryan Mac & Tiffany Hsu, Twitter’s U.S. Ad Sales Plunge 59% as Woes Continue, New York Times (Jun. 5, 2023), https://www.nytimes.com/2023/06/05/technology/twitter-ad-sales-musk.html (“Six ad agency executives who have worked with Twitter said their clients continued to limit spending on the platform. They cited confusion over Mr. Musk’s changes to the service, inconsistent support from Twitter and concerns about the persistent presence of misleading and toxic content on the platform.”); Kate Conger, Tiffany Hsu & Ryan Mac, Elon Musk’s Twitter Faces Exodus of Advertisers and Executives, New York Times (Nov. 1, 2022), https://www.nytimes.com/2022/11/01/technology/elon-musk-twitter-advertisers.html (“At the same time, advertisers — which provide about 90 percent of Twitter’s revenue — are increasingly grappling with Mr. Musk’s ownership of the platform. The billionaire, who is meeting advertising executives in New York this week, has spooked some advertisers because he has said he would loosen Twitter’s content rules, which could lead to a surge in misinformation and other toxic content.”).

[37] See Murthy v. Missouri, No. 23A-243; see also Missouri v. Biden, No. 23-30445, slip op. (5th Cir. Sept. 8, 2023).

[38] See Ben Sperry, Knowledge and Decisions in the Information Age: The Law & Economics of Regulating Misinformation on Social Media Platforms, (ICLE White Paper Sept. 22, 2023), forthcoming 59 Gonz. L. Rev. (2023), available at https://laweconcenter.org/resources/knowledge-and-decisions-in-the-information-age-the-law-economics-of-regulating-misinformation-on-social-media-platforms/.

 


Continue reading
Innovation & the New Economy

NetChoice, the Supreme Court, and the State Action Doctrine

TOTM George Orwell’s “Nineteen Eighty-Four” is frequently invoked when political actors use language to obfuscate what they are doing. Ambiguity in language can allow both sides . . .

George Orwell’s “Nineteen Eighty-Four” is frequently invoked when political actors use language to obfuscate what they are doing. Ambiguity in language can allow both sides to appeal to the same words, like “the First Amendment” or “freedom of speech.” In a sense, the arguments over online speech currently before the U.S. Supreme Court really amount to a debate about whether private actors can “censor” in the same sense as the government.

In the oral arguments in this week’s NetChoice cases, several questions from Justices Clarence Thomas and Samuel Alito suggested that they believed social-media companies engaged in “censorship,” conflating the right of private actors to set rules for their property with government oppression. This is an abuse of language, and completely inconsistent with Supreme Court precedent that differentiates between state and private action.

Read the full piece here.

Continue reading
Innovation & the New Economy

So, Is It a Tech Panic?

Popular Media Having analyzed both bodies of research, it is apparent that the research on social media and teen well-being shares many of the same flaws as . . .

Having analyzed both bodies of research, it is apparent that the research on social media and teen well-being shares many of the same flaws as the violent video game research. They are both largely based on correlational studies, which rely on self-reported data and use poor proxies for the effect they are trying to measure. As we saw in Brown v. Entertainment Merchants Association, this alone is enough to create roadblocks for laws banning or regulating the use of social media by children.

Read the full piece here.

Continue reading
Innovation & the New Economy

Flawed Evidence in the Social Media Debate

Popular Media In our last post, we examined the numerous reasons for which the Supreme Court found the State’s evidence to be lacking in the case of Brown . . .

In our last post, we examined the numerous reasons for which the Supreme Court found the State’s evidence to be lacking in the case of Brown v. Entertainment Merchants Association. This week, we will examine the evidence in the social media debate, keeping in mind the qualities that the Supreme Court found to be problematic in establishing a connection between video games and violence.

Read the full piece here.

Continue reading
Innovation & the New Economy

Evidence in the Violent Video Game Debate

Popular Media As seen in Scalia’s critique, one of the main flaws in the research used by the state of California in Brown v. Entertainment Merchants Association was its lack . . .

As seen in Scalia’s critique, one of the main flaws in the research used by the state of California in Brown v. Entertainment Merchants Association was its lack of causational evidence. Most of the research concerning the relationship between violent video games and aggression relied upon correlational evidence. Correlational studies measure two variables and their relationship to one another. Establishing a correlation between two variables often serves as a starting point for research, but it does not prove causation. Consider the case of ice cream sales and violent crime—as one rises, so does the other. The two are correlated, but no one is seriously considering banning ice cream. That is because, upon deeper analysis, it was determined that there is a third cause that drives an increase in both: higher temperatures.

Read the full piece here.

Continue reading
Innovation & the New Economy

Is the Debate Around Social Media Another Tech Panic?

Popular Media In 2005, California proposed legislation to ban the sale of violent video games to minors. This law was a culmination of growing concerns that violent . . .

In 2005, California proposed legislation to ban the sale of violent video games to minors. This law was the culmination of growing concerns that violent video games were causing children to become more aggressive. Commentators noted that perpetrators of mass shootings, as in the case of Columbine, Heath High School, and Sandy Hook, often played video games considered violent, such as Doom, Grand Theft Auto, and Call of Duty.1 Studies on the connection between video games and aggression came pouring out. In response, policymakers began to introduce laws banning or otherwise regulating the sale of violent video games to minors.

This would seem to be the ideal result. Lawmakers were able to come together and pass a law that addressed the issue at hand. The only problem is that there is little to no evidence that video games, even violent ones, lead to increases in aggressive behavior, let alone that they are a driving factor behind school shootings.

Read the full piece here.

Continue reading
Innovation & the New Economy