Showing Latest Publications

Mercados Digitales: Lecciones Desde Europa

Popular Media

The implementation of the Digital Markets Act (DMA) in Europe can offer some regulatory lessons to those countries that, on this side of the Atlantic, want to emulate the old continent (there is already an initiative in Brazil, and the Andean Community and Mexico have indicated their intention to study the issue).

Read the full piece (in Spanish) here.

Continue reading
Antitrust & Consumer Protection

ICLE on the Merger of Groceries Kroger and Albertsons

Presentations & Interviews

ICLE research on the proposed merger of supermarkets Kroger and Albertsons was cited on an episode of the Econception podcast. The full episode is embedded below. Discussion of the ICLE white paper begins at around the 20:43 mark.

Continue reading
Antitrust & Consumer Protection

How Epic v. Apple Operationalizes Ohio v. Amex

Scholarship

Abstract

The Supreme Court’s landmark decision in Ohio v. American Express (“Amex”) remains central to the enforcement of antitrust laws involving digital markets. Specifically, the decision established a framework to assess business conduct involving transactional, multisided platforms from both an economic and legal perspective. At its crux, the Court in Amex integrated both the relevant market and competitive effects analysis across the two distinct groups who interact on the Amex platform, that is, cardholders and merchants. This unified, integrated approach has been controversial, however. The primary debate is whether the Court’s ruling places an undue burden on plaintiffs under the rule of reason paradigm to meet their burden of production to establish harm to competition. Enter Epic v. Apple (“Epic”): a case involving the legality of various Apple policies governing its iOS App Store, which, like Amex, is a transactional, multisided platform. While both the district court and the Ninth Circuit largely ruled in favor of Apple over Epic, these decisions are of broader interest for their fidelity to Amex. A careful review of the decisions reveals that the Epic courts operationalized Amex in a practical, sensible way. The courts did not engage in extensive balancing across developers and users as some critics of Amex contended would be required. Ultimately, the courts in Epic (a) considered evidence of effects across both groups on the platform and (b) gave equal weight to both the procompetitive and anticompetitive effects evidence, which, this Article contends, are the essential elements of the Amex precedent. Relatedly, the Epic decisions illustrate that the burden of production on plaintiffs in multisided platform cases is not higher than in cases involving regular, single-sided markets. Additionally, both parties, whether litigating single-sided or multi-sided markets, are fully incentivized to bring evidence to bear on all aspects of the case. Finally, this Article details how the integrated Amex approach deftly avoids potential issues involving the out-of-market effects doctrine in antitrust, which limits what type of effects courts can consider in assessing conduct.

Read at SSRN.

Continue reading
Antitrust & Consumer Protection

A Law & Economics Approach to Social-Media Regulation

Popular Media

The thesis of this essay is that policymakers must consider what the nature of social media companies as multisided platforms means for regulation. The balance struck by social media companies acting in response to the incentives they face in the market could be upset by regulation that favors the interests of some users over others. Promoting the use of technological and practical means to avoid perceived harms by users themselves would preserve the benefits of social media to society without the difficult tradeoffs of regulation. Part I will introduce the economics of multisided platforms like social media, and how this affects the incentives of these platforms. Social-media platforms, acting within the market process, are usually best positioned to balance the interests of their users, but there could be occasions where the market process fails due to negative externalities. Part II will consider these situations where there are negative externalities due to social media and introduce the least-cost avoider principle. Usually, social-media users are the least-cost avoiders of harms, but sometimes social media are better placed to monitor and control harms. This involves a balance, as social-media regulation raises the threat of collateral censorship or otherwise reducing opportunities to speak and receive speech. Part III will then apply the insights from Parts I and II to the areas of privacy, children’s online safety, and speech regulation.

I. Introduction

Policymakers at both the state and federal levels have been actively engaged in recent years with proposals to regulate social media, whether the subject is privacy, children’s online safety, or concerns about censorship, misinformation, and hate speech.[1] While there may not be consensus about precisely why social media is bad, there is broad agreement that the major online platforms are to blame for at least some harms to society. It is also generally recognized, though often not emphasized, that social media brings great value to its users. In other words, there are costs and benefits, and policymakers should be cautious when introducing new laws that would upset the balance that social-media companies must strike in order to serve their users well.

This essay will propose a general approach, informed by the law & economics tradition, to assess when and how social media should be regulated. Part I will introduce the economics of multisided platforms, and how this affects social-media platforms’ incentives. The platforms themselves, acting within the market process, are usually best-positioned to balance the interests of their users, but there could be occasions where the market process fails due to negative externalities. Part II will consider such externalities and introduce the least-cost avoider principle. Usually, social-media users are the least-cost avoiders of harms, but platforms themselves are sometimes better placed to monitor and control harms. This requires a balance, as social-media regulation raises the threat of collateral censorship or otherwise reducing opportunities to speak and receive speech. Part III will apply the insights from Parts I and II to the areas of privacy, children’s online safety, and speech regulation.

The thesis of this essay is that policymakers must consider what social-media companies’ status as multisided platforms means for regulation. The balance struck by social-media companies acting in response to the market incentives they face could be upset by regulation that favors the interests of some users over others. Promoting the use of technological and practical means to avoid perceived harms would allow users to preserve the benefits of social media without the difficult tradeoffs of regulation.

II. The Economics of Social-Media Platforms

Mutually beneficial trade is the bedrock of the market process. Entrepreneurs—including those that act through formal economic institutions like business corporations—seek to discover the best ways to serve consumers. Various types of entities help connect those who wish to buy products or services to those who are trying to sell them. Physical marketplaces, set up to facilitate interactions between buyers and sellers, are common around the world. If those marketplaces fail to serve the interests of those who use them, others will likely arise.

Social-media companies are a virtual example of what economists call multi-sided markets or platforms.[2] Such platforms derive their name from the fact that they serve at least two different types of customers and facilitate their interaction. Multi-sided platforms have “indirect network effects,” described by one economist as a situation where “participants on one side value being able to interact with participants on the other side… lead[ing] to interdependent demand.”[3] In some situations, a platform may determine it can only raise revenue from one side of the platform if demand on the other side of the platform is high. In such cases, the platform may choose to offer one side free access to the platform to boost such demand, which is subsidized by participants on the other side of the platform.[4] This creates a positive feedback loop in which more participants on one side of the platform leads to more participants on the other.
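
To make the indirect-network-effect logic concrete, here is a minimal numerical sketch in Python. Every function and figure below (the demand functions, the 1,000-user intercept, the prices) is a hypothetical illustration invented for exposition, not drawn from this essay or the cited literature; it shows only that charging users directly shrinks the user base and, with it, advertiser demand, so giving one side free access can be the more profitable choice.

```python
# Minimal, purely illustrative two-sided platform model (all numbers hypothetical).

def users(user_price):
    """User demand falls as the price charged to users rises."""
    return max(0.0, 1000 - 200 * user_price)

def advertisers(ad_price, n_users):
    """Advertiser demand rises with the size of the user base (indirect network effect)."""
    value_per_user_reached = 0.05 * n_users  # advertisers' willingness to pay grows with users
    return max(0.0, 10 * (value_per_user_reached - ad_price))

def profit(user_price, ad_price):
    n = users(user_price)
    return user_price * n + ad_price * advertisers(ad_price, n)

# Charging users drains the user base and, with it, advertiser demand:
print(profit(user_price=2.0, ad_price=20.0))   # 3200.0
print(profit(user_price=0.0, ad_price=30.0))   # 6000.0 -- free user access, subsidized by ads, earns more
```

The point is only directional: the positive feedback between the two sides is what can make a zero price to users a profit-maximizing choice rather than a giveaway.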

In this sense, social-media companies are much like newspapers or television in that, by solving a transaction cost problem,[5] these platforms bring together potential buyers and sellers by providing content to one side and access to consumers on the other side. Recognizing that their value lies in reaching users, these platforms sell advertising and offer access to content for a lower price, often at the price of zero (or free). In other words, advertisers subsidize the access to content for platform users.

Therefore, most social-media companies are free for users. Revenue is primarily collected from the other side of the platform—i.e., from advertisers. In effect, social-media companies are attention platforms: They supply content to users, while collecting data for targeted advertisements for businesses who seek access to those users. To be successful, social-media companies must keep enough (and the right type of) users engaged so as to maintain demand for advertising. Social-media companies must curate content that users desire in order to persuade them to spend time on the platform.

But unlike newspapers or television, social-media companies primarily rely on their users to produce content rather than creating their own. Thus, they must also consider how to attract and maintain high-demand content creators, as well as how to match user-generated content to the diverse interests of other users. If they fail to serve the interests of high-demand content creators, those users may leave the platform, thus reducing time spent on the platform by all users, which thereby reduces the value of advertising. Similarly, if they fail to match content to user interests, those users will be less engaged on the platform, reducing its value to advertisers.

Moreover, this means that social-media companies need to balance the interests of advertisers and other users. Advertisers may desire more data to be collected for targeting, but users may desire less data collection. Similarly, advertisers may desire more ads, while users may prefer fewer ads. Advertisers may prefer content that keeps users engaged on the platform, even if it is harmful for society, whether because it is false, hateful, or leads to mental-health issues for minors. On the other hand, brand-conscious advertisers may not want to run ads next to content with which they disagree. Moreover, users may not want to see certain content. Social-media companies need to strike a balance that optimizes their value, recognizing that losing participants on either side would harm the other.

Usually, social-media companies acting within the market process are going to be best-positioned to make decisions on behalf of their users. Thus, they may create community rules that restrict content that would, on net, reduce user engagement.[6] This could include limitations on hate speech and misinformation. On the other hand, if they go too far in restricting content that users consider desirable, that could reduce user engagement and thus value to advertisers. Social-media companies therefore compete on moderation policies, trying to strike the appropriate balance to optimize platform value. A similar principle applies when it comes to privacy policies and protections for minors: social-media companies may choose to compete by providing tools to help users avoid what they perceive as harms, while keeping users on the platform and maintaining value for advertisers.

There may, however, be scenarios where social media produces negative externalities[7] that are harmful to society. A market failure could result, for instance, if platforms have too great of an incentive to allow misinformation or hate speech that keeps users engaged, or to collect too much (or the wrong types of) information for targeted advertising, or to offer up content that is harmful for minors and keeps them hooked to using the platform.

In sum, social-media companies are multi-sided platforms that facilitate interactions between advertisers and users by curating user-generated content that drives attention to their platforms. To optimize the platform’s value, a social-media company must keep users engaged. This will often include privacy policies, content-moderation standards, and special protections for minors. On the other hand, incentives could become misaligned and lead to situations where social-media usage leads to negative externalities due to insufficient protection of privacy, too much hate speech or misinformation, or harms to minors.

III. Negative Social-Media Externalities and the Least-Cost-Avoider Principle

In situations where there are negative externalities from social-media usage, there may be a case for regulation. Any case for regulation must, however, recognize the presence of transaction costs, and consider how platforms and users may respond to changes in those costs. To get regulation right, the burden of avoiding a negative externality should fall on the least-cost avoider.

The Coase Theorem, derived from the work of Nobel-winning economist Ronald Coase[8] and elaborated on in the subsequent literature,[9] helps to explain the issue at hand:

  1. The problem of externalities is bilateral;
  2. In the absence of transaction costs, resources will be allocated efficiently, as the parties bargain to solve the externality problem;
  3. In the presence of transaction costs, the initial allocation of rights does matter; and
  4. In such cases, the burden of avoiding the externality’s harm should be placed on the least-cost avoider, while taking into consideration the total social costs of the institutional framework.

In one of Coase’s examples, the noise from a confectioner using his machinery is a potential cost to the doctor next door, who consequently can’t use his office to conduct certain testing. Simultaneously, the doctor moving his office next door is a potential cost to the confectioner’s ability to use his equipment. In a world of well-defined property rights and low transaction costs, the initial allocation of a right would not matter, because the parties could bargain to overcome the harm in a beneficial manner—i.e., the confectioner could pay the doctor for lost income or to set up sound-proof walls, or the doctor could pay the confectioner to reduce the sound of his machines.[10] But since there are transaction costs that prevent this sort of bargain, it is important whether the initial right is allocated to the doctor or the confectioner. To maximize societal welfare, the cost should be placed on the entity that can avoid the harm at the lowest cost.[11]
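
The least-cost-avoider logic can be put in a small worked example. The cost figures below are invented purely for exposition (they appear neither in Coase nor in this essay); the sketch just shows that what matters for total social cost is who can avoid the harm most cheaply, not who "caused" it.

```python
# Hypothetical costs for Coase's confectioner/doctor example (figures invented for exposition).
harm_to_doctor = 100                      # doctor's lost income if the machinery noise continues
soundproofing_by_doctor = 30              # doctor's cost to soundproof his consulting room
quieter_machines_for_confectioner = 60    # confectioner's cost to dampen his machinery

# Total social cost under each possible outcome:
outcomes = {
    "no one avoids the harm": harm_to_doctor,
    "doctor soundproofs": soundproofing_by_doctor,
    "confectioner quiets machines": quieter_machines_for_confectioner,
}

least_cost_avoider = min(outcomes, key=outcomes.get)
print(least_cost_avoider, outcomes[least_cost_avoider])
# -> "doctor soundproofs" 30: with transaction costs blocking a bargain, placing the
#    burden on the doctor (the least-cost avoider here) minimizes total social cost.
```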

Here, social-media companies create incredible value for their users, but they also arguably impose negative externalities in the form of privacy harms, misinformation and hate speech, and harms particular to minors. In the absence of transaction costs, the parties could simply bargain away the harms associated with social-media usage. But since there are transaction costs, it matters whether the burden to avoid harms is placed on the users or the social-media companies. If the burden is wrongly placed, it may end up that the societal benefits of social media will be lost.

For instance, imposing liability on social-media companies risks collateral censorship, which occurs when platforms decide that liability risk is too large and opt to over-moderate or not host user-generated content, or to restrict access to such content either by charging higher prices or excluding those who could be harmed (like minors).[12] Wrongly placing the burden to avoid harms on social-media platforms would thus reduce societal welfare.

On the other hand, there may be situations where social-media companies are the least-cost avoiders. For instance, they may be best-placed to monitor and control harms associated with social-media usage when it is difficult or impossible to hold those using their platforms accountable for harms they cause.[13] If, for example, a social-media company allows anonymous or pseudonymous use, with no realistic possibility of tracking down users who cause harms, illegal conduct could go undeterred. In such cases, placing the burden on social-media users could lead to social media imposing uncompensated harms on society.

Thus, it is important to determine whether the social-media companies or their users are the least-cost avoiders. Placing the burden on the wrong party or parties would harm societal welfare, either by reducing the value of social media or by creating more uncompensated negative externalities.

IV. Applying the Lessons of Law & Economics to Social-Media Regulation

Below, I will examine the areas of privacy, children’s online safety, and content moderation, and consider both the social-media companies’ incentives and whether the platforms or their users are the least-cost avoiders.

A. Privacy

As discussed above, social-media companies are multi-sided platforms that provide content to attract attention from users, while selling information collected from those users for targeted advertising. This leads to the possibility that social-media companies will collect too much information in order to increase revenue from targeted advertising. In other words, as the argument goes, the interests of the paying side of the platform will outweigh the interests of social-media users, thereby imposing a negative externality on them.

Of course, this assumes that the collection and use of information for targeted advertisements is considered a negative externality by social-media users. While this may be true for some, for others, it may be something they care little about or even value, because targeted advertisements are more relevant to them. Moreover, many consumers appear to prefer free content with advertising to paying a subscription fee.[14]

It does seem likely, however, that negative externalities are more likely to arise when users don’t know what data is being collected or how it is being used. Moreover, it is a clear harm if social-media companies misrepresent what they are collecting and how they are using it. Thus, it is generally unobjectionable—at least, in theory—for the Federal Trade Commission or another enforcer to hold social-media companies accountable for their privacy policies.[15]

On the other hand, privacy regulation that requires specific disclosures or verifiable consent before collecting or using data would increase the cost of targeted advertising, thus reducing its value to advertisers, and thereby further reducing the platform’s incentive to curate valuable content for users. For instance, in response to the FTC’s consent agreement with YouTube charging that it violated the Children’s Online Privacy Protection Act (COPPA), YouTube required channel owners producing children’s content to designate their channels as such, and adopted automated processes designed to identify the same.[16] This reduced content creators’ ability to benefit from targeted advertising if their content was directed to children. The result was less content created for children, as well as poorer matching:

  Consistent with a loss in personalized ad revenue, we find that child-directed content creators produce 13% less content and pivot towards producing non-child-directed content. On the demand side, views of child-directed channels fall by 22%. Consistent with the platform’s degraded capacity to match viewers to content, we find that content creation and content views become more concentrated among top child-directed YouTube channels.[17]

Alternatively, a social-media company could raise the price it charges to users, as it can no longer use advertising revenue to subsidize users’ access. This is, in fact, exactly what has happened in Europe, as Meta now offers an ad-free version of Facebook and Instagram for $14 a month.[18]

In other words, placing the burden on social-media companies to avoid the perceived harms from the collection and use of information for targeted advertising could lead to less free content available to consumers. This is a significant tradeoff, and not one that most social-media consumers appear willing to make voluntarily.

On the other hand, it appears that social-media users could avoid much of the harm from the collection and use of their data by using available tools, including those provided by social-media companies. For instance, most of the major social-media companies offer two-factor authentication, privacy-checkup tools, the ability to browse the service privately, to limit audience, and to download and delete data.[19] Social-media users could also use virtual private networks (VPNs) to protect their data privacy while online.[20] Finally, users could just not post private information or could limit interactions with businesses (through likes or clicks on ads) if they want to reduce the amount of information used for targeted advertising.

B. Children’s Online Safety

Some have argued that social-media companies impose negative externalities on minors by serving them addictive content and/or content that results in mental-health harms.[21] They argue that social-media companies benefit from these harms because they are able to then sell data from minors to advertisers.

While it is true that social-media companies want to attract users through engaging content and interfaces, and that they make money through targeted advertising, it is highly unlikely that they are making much money from minors themselves. Very few social-media users under 18 have considerable disposable income or access to payment-card options that would make them valuable to advertisers. Thus, regulations that raise the costs to social-media companies of serving minors, whether through a regulatory duty of care[22] or through age verification and verifiable parental consent,[23] could lead social-media companies to invest more in excluding minors than in creating vibrant and safe online spaces for them.

Federal courts considering age-verification laws have noted there are costs to companies, as well as users, in obtaining this information. In Free Speech Coalition Inc. v. Colmenero,[24] the U.S. District Court in Austin, Texas, considered a law that required age verification before viewing online pornography, and found that the costs of obtaining age verification were high, citing the complaint, which described “several commercial verification services, showing that they cost, at minimum, $40,000.00 per 100,000 verifications.”[25] But just as importantly, the transaction costs in this example also include the subjective costs borne by those who actually go through with verifying their age to access pornography. As the court noted, “the law interferes with the Adult Video Companies’ ability to conduct business, and risks deterring adults from visiting the websites.”[26] Similarly, in NetChoice v. Griffin,[27] the U.S. District Court for the Western District of Arkansas found that a challenged law’s age-verification requirements were “costly” and would put social-media companies covered by the law in the position of needing to take drastic action to either implement age verification, restrict access for Arkansans, or face the possibility of civil and criminal enforcement.[28]

On the other hand, social-media companies—responding to demand from minor users and their parents—have also exerted considerable effort to reduce harmful content being introduced to minors. For instance, they have invested in content-moderation policies and their enforcement, including through algorithms, automated tools, and human review, to remove, restrict, or add warnings to content inappropriate for minors.[29] On top of that, social-media companies offer tools to help minors and their parents avoid many of the harms associated with social-media usage.[30] There are also options available at the ISP, router, device, and browser level to protect minors while online. As the court put it in Griffin, “parents may rightly decide to regulate their children’s use of social media—including restricting the amount of time they spend on it, the content they may access, or even those they chat with. And many tools exist to help parents with this.”[31]

In other words, parents and minors working together can use technological and practical means to make marginal decisions about social-media usage at a lower cost than a regulatory environment that would likely lead to social-media companies restricting use by minors altogether.[32]

C. Content Moderation

There have been warring allegations about social-media companies’ incentives when it comes to content moderation. Some claim that salacious misinformation and hate speech drive user engagement, making platforms more profitable for advertisers; others argue that social-media companies engage in too much “censorship” by removing users and speech in a viewpoint-discriminatory way.[33] The U.S. Supreme Court is currently reviewing laws from Florida and Texas that would force social-media companies to carry speech.[34]

Both views fail to take into account that social-media companies are largely just responding to the incentives they face as multi-sided platforms. Social-media companies are solving a Coasean speech problem, wherein some users don’t want to be subject to certain speech from other users. As explained above, social-media companies must balance these interests by setting and enforcing community rules for speech. This may include rules against misinformation and hate speech. On the other hand, social-media companies can’t go too far in restricting high-demand speech, or they will risk losing users. Thus, they must strike a delicate balance.

Laws that restrict the “editorial discretion” of social-media companies may fail the First Amendment,[35] but they also reduce the companies’ ability to give their customers a valuable product in light of user (and advertiser) demand. For instance, the changes to the moderation standards of X (formerly Twitter) in the year since its purchase by Elon Musk have led many users and advertisers to exit the platform due to a perceived increase in hate speech and misinformation.[36]

Social-media companies need to be able to moderate as they see fit, free from government interference. Such interference includes not just the forced carriage of speech, but also government efforts to engage in censorship-by-proxy, as has been alleged in Murthy v. Missouri.[37] From the perspective of the First Amendment, government intervention by coercing or significantly encouraging the removal of disfavored speech, even in the name of combating misinformation, is just as harmful as the forced carriage of speech.[38] But more importantly for our purposes here, such government actions reduce platforms’ value by upsetting the balance that social-media companies strike with respect to their users’ speech interests.

Users can avoid being exposed to unwanted speech by averting their digital eyes from it—i.e., by refusing to interact with it and thereby training social-media companies’ algorithms to serve speech that they prefer. They can also take their business elsewhere by joining a social-media network with speech-moderation policies more to their liking. Voting with one’s digital feet (and eyes) is a much lower-cost alternative than either mandating the carriage of speech or censorship by government actors.

V. Conclusion

Social-media companies are multisided platforms that must curate compelling content while restricting harms to users in order to optimize their value to the advertisers that pay for access. This doesn’t mean they always get it right. But they are generally best-positioned to make those decisions, subject to the market process. Sometimes, there may be negative externalities that aren’t fully internalized. But as Coase taught us, that is only the beginning of the analysis. If social-media users can avoid harms at lower cost than social-media companies, then regulation should not place the burden on social-media companies. There are tradeoffs in social-media regulation, including the possibility that it will result in a less-valuable social-media experience for users.

[1] See e.g. Mary Clare Jalonick, Congress eyes new rules for tech, social media: What’s under consideration, Associated Press (May 8, 2023), https://www.wvtm13.com/article/whats-under-consideration-congress-eyes-new-rules-for-tech-social-media/43821405#;  Khara Boender, Jordan Rodell, & Alex Spyropoulos, The State of Affairs: What Happened in Tech Policy During 2023 State Legislative Sessions?, Project Disco (Jul. 25, 2023), https://www.project-disco.org/competition/the-state-of-affairs-statetech-policy-in-2023 (noting laws passed and proposed addressing consumer data privacy, content moderation, and children’s online safety at the state level).

[2] See e.g. Jean-Charles Rochet & Jean Tirole, Platform Competition in Two-Sided Markets, 1 J. Euro. Econ. Ass’n 990 (2003).

[3] David S. Evans, Multisided Platforms in Antitrust Practice, at 3 (Oct. 17, 2023), forthcoming, Michael Noel, Ed., Elgar Encyclopedia on the Economics of Competition and Regulation, available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4606511.

[4] For instance, many nightclubs hold “Ladies Night” where ladies get in free in order to attract more men who pay for entrance.

[5] Transaction costs are the additional costs borne in the process of buying or selling, separate and apart from the price of the good or service itself — i.e. the costs of all actions involved in an economic transaction. Where transaction costs are present and sufficiently large, they may prevent otherwise beneficial agreements from being concluded.

[6] See David S. Evans, Governing Bad Behavior by Users of Multi-Sided Platforms, 27 Berkeley Tech. L. J. 1201 (2012); Kate Klonick, The New Governors: The People, Rules, and Processes Governing Online Speech, 131 HARV. L. REV. 1598 (2018).

[7] An externality is a side effect of an activity that is not reflected in the cost of that activity — basically, what occurs when we do something whose consequences affect other people. A negative externality occurs when a third party does not like the effects of an action.

[8] See R.H. Coase, The Problem of Social Cost, 3 J. L. & Econ. 1 (1960).

[9] See Steven G. Medema, The Coase Theorem at Sixty, 58 J. Econ. Lit. 1045 (2020).

[10] See Coase, supra note 8, at 8-10.

[11] See id. at 34 (“When an economist is comparing alternative social arrangements, the proper procedure is to compare the total social product yielded by these different arrangements.”).

[12] See Felix T. Wu, Collateral Censorship and the Limits of Intermediary Liability, 87 Notre Dame L. Rev. 293, 295-96 (2011); Geoffrey A. Manne, Ben Sperry & Kristian Stout, Who Moderates the Moderators: A Law & Economics Approach to Holding Online Platforms Accountable Without Destroying the Internet, 49 Rutgers Computer & Tech. L J. 26, 39 (2022); Ben Sperry, The Law & Economics of Children’s Online Safety: The First Amendment and Online Intermediary Liability, Truth on the Market (May 12 2023), https://truthonthemarket.com/2023/05/12/the-law-economics-of-childrens-online-safety-the-firstamendment-and-online-intermediary-liability.

[13] See Geoffrey A. Manne, Kristian Stout & Ben Sperry, Twitter v. Taamneh and the Law & Economics of Intermediary Liability, Truth on the Market (Mar. 8, 2023), https://truthonthemarket.com/2023/03/08/twitter-v-taamneh-and-the-law-economics-of-intermediary-liability; Ben Sperry, Right to Anonymous Speech, Part 2: A Law & Economics Approach, Truth on the Market (Sep. 6, 2023), https://truthonthemarket.com/2023/09/06/right-to-anonymous-speech-part-2-a-law-economics-approach.

[14] See, e.g., Matt Kaplan, What Do U.S. consumers Think About Mobile Advertising?, InMobi (Dec. 15, 2021), https://www.inmobi.com/blog/what-us-consumers-think-about-mobile-advertising (55% of consumers agree or strongly agree that they prefer mobile apps with ads rather than paying to download apps); John Glenday, 65% of US TV viewers will tolerate ads for free content, according to report, The Drum (Apr. 22, 2022), https://www.thedrum.com/news/2022/04/22/65-us-tv-viewers-will-tolerate-ads-free-content-according-report (noting that a report from TiVO found 65% of consumers prefer free TV with ads to paying without ads). Consumers often prefer lower subscription fees with ads to higher subscription fees without ads as well. See e.g. Toni Fitzgerald, Netflix Gets it Right: Study Confirms People Prefer Paying Less With Ads, Forbes (Apr. 25, 2023), https://www.forbes.com/sites/tonifitzgerald/2023/04/25/netflix-gets-it-right-study-confirms-more-people-prefer-paying-less-with-ads/.

[15] See 15 U.S.C. § 45.

[16] See Garrett A. Johnson, Tesary Lin, James C. Cooper, & Liang Zhong, COPPAcalypse? The YouTube Settlement’s Impact on Kids Content, at 6-7, SSRN (Apr. 26, 2023), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4430334.

[17] Id. at 1.

[18] See Sam Schechner, Meta Plans to Charge $14 a Month for Ad-Free Instagram or Facebook, Wall Street J. (Oct. 3, 2023), https://www.wsj.com/tech/meta-floats-charging-14-a-month-for-ad-free-instagram-or-facebook-5dbaf4d5.

[19] See Christopher Lin, Tools to Protect Your Privacy on Social Media, NetChoice (Nov. 16, 2023), https://netchoice.org/tools-to-protect-your-privacy-on-social-media/.

[20] See e.g. Chris Stobing, The Best VPN Services for 2024, PC Mag (Jan. 4, 2024), https://www.pcmag.com/picks/the-best-vpn-services.

[21] See e.g. Jonathan Stempel, Diane Bartz & Nate Raymond, Meta’s Instagram linked to depression, anxiety, insomnia in kids – US state’s lawsuit, Reuters (Oct. 25, 2023), https://www.reuters.com/legal/dozens-us-states-sue-meta-platforms-harming-mental-health-young-people-2023-10-24/ (describing complaint from 33 states alleging Meta “knowingly induced young children and teenagers into addictive and compulsive social media use”).

[22] See e.g. California Age-Appropriate Design Code Act, AB 2273 (2022), https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202120220AB2273AADC; Kids Online Safety Act, S. 1409, 118th Cong. (2023), as amended and posted by the Senate Committee on Commerce, Science, and Transportation on July 27, 2023, available at  https://www.congress.gov/bill/118th-congress/senate-bill/1409 (last accessed Dec. 19, 2023).

[23] See e.g. Arkansas Act 689 of 2023, the “Social Media Safety Act.”

[24] Free Speech Coal. Inc. v. Colmenero, No. 1:23-CV-917-DAE, 2023 U.S. Dist. LEXIS 154065 (W.D. Tex., Aug. 31, 2023), available at https://storage.courtlistener.com/recap/gov.uscourts.txwd.1172751222/gov.uscourts.txwd.1172751222.36.0.pdf.

[25] Id. at 10.

[26] Id.

[27] NetChoice LLC. v. Griffin, Case No. 5:23-CV-05105 (W.D. Ark., Aug. 31, 2023), available at https://netchoice.org/wpcontent/uploads/2023/08/GRIFFIN-NETCHOICE-GRANTED.pdf.

[28] See id. at 23.

[29] See id. at 18-19.

[30] See id. at 19-20.

[31] Id. at 15.

[32] For more, see Ben Sperry, A Coasean Analysis of Online Age-Verification and Parental-Consent Regimes, at 23 (ICLE Issue Brief, Nov. 9, 2023), https://laweconcenter.org/wp-content/uploads/2023/11/Issue-Brief-Transaction-Costs-of-Protecting-Children-Under-the-First-Amendment-.pdf.

[33] For an example of a hearing where Congressional Democrats argue the former and Congressional Republicans argue the latter, see Preserving Free Speech and Reining in Big Tech Censorship, Libr. of Cong. (Mar. 28, 2023), https://www.congress.gov/event/118th-congress/house-event/115561.

[34] See Moody v. NetChoice, No. 22-555 (challenging Florida’s SB 7072); NetChoice v. Paxton, No. 22-277 (challenging Texas’s HB 20).

[35] See e.g. Brief of International Center for Law & Economics as Amicus Curiae in Favor of Petitioners in 22-555 and Respondents in 22-277, Moody v. NetChoice, NetChoice v. Paxton, In the Supreme Court of the United States (Dec. 7, 2023), available at https://www.supremecourt.gov/DocketPDF/22/22-277/292986/20231211144416746_Nos.%2022-277%20and%2022-555_Brief_corrected.pdf.

[36] See e.g. Ryan Mac & Tiffany Hsu, Twitter’s U.S. Ad Sales Plunge 59% as Woes Continue, New York Times (Jun. 5, 2023), https://www.nytimes.com/2023/06/05/technology/twitter-ad-sales-musk.html (“Six ad agency executives who have worked with Twitter said their clients continued to limit spending on the platform. They cited confusion over Mr. Musk’s changes to the service, inconsistent support from Twitter and concerns about the persistent presence of misleading and toxic content on the platform.”); Kate Conger, Tiffany Hsu & Ryan Mac, Elon Musk’s Twitter Faces Exodus of Advertisers and Executives, New York Times (Nov. 1, 2022), https://www.nytimes.com/2022/11/01/technology/elon-musk-twitter-advertisers.html (“At the same time, advertisers — which provide about 90 percent of Twitter’s revenue — are increasingly grappling with Mr. Musk’s ownership of the platform. The billionaire, who is meeting advertising executives in New York this week, has spooked some advertisers because he has said he would loosen Twitter’s content rules, which could lead to a surge in misinformation and other toxic content.”).

[37] See Murthy v. Missouri, No.23A-243; see also Missouri v. Biden, No. 23-30445, slip op. (5th Cir. Sept. 8, 2023).

[38] See Ben Sperry, Knowledge and Decisions in the Information Age: The Law & Economics of Regulating Misinformation on Social Media Platforms, (ICLE White Paper Sept. 22, 2023), forthcoming 59 Gonz. L. Rev. (2023), available at https://laweconcenter.org/resources/knowledge-and-decisions-in-the-information-age-the-law-economics-of-regulating-misinformation-on-social-media-platforms/.

 


Continue reading
Innovation & the New Economy

NetChoice, the Supreme Court, and the State Action Doctrine

TOTM

George Orwell’s “Nineteen Eighty-Four” is frequently invoked when political actors use language to obfuscate what they are doing. Ambiguity in language can allow both sides to appeal to the same words, like “the First Amendment” or “freedom of speech.” In a sense, the arguments over online speech currently before the U.S. Supreme Court really amount to a debate about whether private actors can “censor” in the same sense as the government.

In the oral arguments in this week’s NetChoice cases, several questions from Justices Clarence Thomas and Samuel Alito suggested that they believed social-media companies engaged in “censorship,” conflating the right of private actors to set rules for their property with government oppression. This is an abuse of language, and completely inconsistent with Supreme Court precedent that differentiates between state and private action.

Read the full piece here.

Continue reading
Innovation & the New Economy

SEPs: The West Need Not Cede to China


TL;DR

Background: Policymakers on both sides of the Atlantic are contemplating new regulations on standard-essential patents (SEPs). While the European Union (EU) is attempting to pass legislation toward that end, U.S. authorities like the Department of Commerce and U.S. Patent and Trademark Office are examining the issues and potentially contemplating their own reforms to counteract changes made by the EU.

But… These efforts would ultimately hand an easy geopolitical win to rivals like China. Not only do the expected changes risk harming U.S. and EU innovators and the standardization procedures upon which they rely, but they lend legitimacy to concerning Chinese regulatory responses that clearly and intentionally place a thumb on the scale in favor of domestic firms. The SEP ecosystem is extremely complex, and knee-jerk regulations may create a global race to the bottom that ultimately harms the very firms and consumers they purport to protect.

KEY TAKEAWAYS

EUROPEAN LEGISLATION, GLOBAL REACH

In April 2023, the EU published its “Proposal for a Regulation on Standard Essential Patents.” The proposal seeks to improve transparency by creating a register of SEPs (and accompanying essentiality checks), and to accelerate the diffusion of these technologies by, among other things, implementing a system of nonbinding arbitration of aggregate royalties and “fair, reasonable, and non-discriminatory” (FRAND) terms. 

But while the proposal nominally applies only to European patents, its effects would be far broader. Notably, the opinions on aggregate royalties and FRAND terms would apply worldwide. European policymakers would thus rule (albeit in nonbinding fashion) on the appropriate royalties to be charged around the globe. This would further embolden foreign jurisdictions to respond in kind, often without the guardrails and independence that have traditionally served to cabin policymakers in the West.

CHINA’S EFFORTS TO BECOME A ‘CYBER GREAT POWER’

Chinese policymakers have long considered SEPs to be of vital strategic importance, and have taken active steps to protect Chinese interests in this space. The latest move came from the Chongqing First Intermediate People’s Court in a dispute between Chinese firm Oppo and Finland’s Nokia. In a controversial December 2023 ruling, the court limited the maximum FRAND royalties that Nokia could charge Oppo for use of Nokia’s SEPs pertaining to the 5G standard.

Unfortunately, the ruling appears obviously biased toward Chinese interests. In calculating the royalties that Nokia could charge Oppo, the court applied a sizable discount in China. It’s been reported that, in reaching its conclusion, the court defined an aggregate royalty rate for all 5G patents, and divided the proceeds by the number of patents each firm held—a widely discredited metric.

The court’s ruling has widely been seen as a protectionist move, which has elicited concern from western policymakers. It appears to set a dangerous precedent in which geopolitical considerations will begin to play an increasingly large role in the otherwise highly complex and technical field of SEP policy.

TRANSPARENCY, AGGREGATE ROYALTY MANDATES, AND FRAND DETERMINATIONS

Leaving aside how China may respond, the EU’s draft regulation will likely be detrimental to innovators. The regulation would create a system of government-run essentiality checks and nonbinding royalty arbitrations. The goal would be to improve transparency and verify that patents declared “standard essential” truly qualify for that designation.

This system would, however, be both costly and difficult to operate. It would require such a large number of qualified experts to serve as evaluators and conciliators that it may prove exceedingly difficult (or impossible) to find them. The sheer volume of work required for these experts would likely be insurmountable, with the costs borne by industry players. Inventors would also be precluded from seeking out injunctions while arbitration is ongoing. Ultimately, while nonbinding, the system may lead to a de facto royalty cap that lowers innovation.

Finally, it’s unclear whether this form of coordinated information sharing and collective royalty setting may give rise to collusion at various points in the value chain. This threatens both to harm consumers and to deter firms from commercializing standardized technologies. 

In short, these kinds of top-down initiatives likely fail to capture the nuances of individualized patents and standards. They may also add confusion and undermine the incentives that drive affordable innovation.

WESTERN POLICYMAKERS MUST RESIST CHINA’S INDUSTRIAL POLICY

The bottom line is that the kinds of changes under consideration by both U.S. and EU policymakers may undermine innovation in the West. SEP entrepreneurs have been successful because they have been able to monetize their innovations. If authorities take steps that needlessly imbalance the negotiation process between innovators and implementers—as Chinese courts have started to do and Europe’s draft regulation may unintentionally achieve—it will harm both U.S. and EU leadership in intellectual-property-intensive industries. In turn, this would accelerate China’s goal of becoming “a cyber great power.”

For more on this issue, see the ICLE issue brief “FRAND Determinations Under the EU SEP Proposal: Discarding the Huawei Framework,” as well as the “ICLE Comments to USPTO on Issues at the Intersection of Standards and Intellectual Property.”

Continue reading
Intellectual Property & Licensing

Getting Merger Guidelines Right

Scholarship

Abstract

This paper is on the new Merger Guidelines. It makes several arguments. First, that the Guidelines should be understood as existing in a political equilibrium. Second, that the new structural presumption of the Merger Guidelines (HHI = 1,800) is too strict, and that an economically reasonable revision in the structural presumption would have increased rather than decreased the threshold. Whereas the new Guidelines lowers the threshold to HHI 1,800 from HHI 2,500, an economically reasonable revision would have increased the threshold to HHI 3,200. I justify this argument using a bare-bones model of Cournot competition. Third, it seems unlikely, as an empirical matter, that merger enforcement under the existing Guidelines is socially desirable. Fourth, that federal merger enforcement raises serious constitutional issues, originally discussed in 1904, and that it may be time now, in view of the new Guidelines, to return to these foundational constitutional questions.
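
For context on the HHI figures the abstract discusses, here is a brief illustrative calculation. The `hhi` helper and the market shares below are hypothetical, chosen only to show the mechanics: the Herfindahl-Hirschman Index is the sum of the squared market shares of all firms in the market (in percentage points), so a merger mechanically raises it.

```python
# HHI = sum of squared market shares, with shares expressed in percentage points.
# The market shares below are hypothetical, used only to illustrate the thresholds.

def hhi(shares_percent):
    return sum(s ** 2 for s in shares_percent)

pre_merger = [30, 25, 20, 15, 10]    # five firms
post_merger = [30, 25, 35, 10]       # the 20% and 15% firms merge

print(hhi(pre_merger))    # 2250
print(hhi(post_merger))   # 2850 -> above the 1,800 post-merger level the new Guidelines treat as presumptively problematic
```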

Read at SSRN.

Continue reading
Antitrust & Consumer Protection

Questions Arise on SB 1596: The Right to Repair Bill

Popular Media

The Oregon Senate earlier this month approved SB 1596, the so-called “right to repair” bill. This legislation now awaits consideration in the Oregon House, with a hearing of the House Committee on Business and Labor scheduled for Wednesday.

While motivated by good intentions, this legislation risks unintended consequences that could ultimately harm consumers. Lawmakers should proceed cautiously.

Read the full piece here.

Continue reading
Intellectual Property & Licensing

Has Law Become Stagnant?

Popular Media

My last post provided an overview of my draft article The Cost of Justice at the Dawn of AI and explained the basic logic of Baumol’s cost disease for the practice of law. Just as in any other market, if the productivity of lawyers increases at a slower rate than the rest of the economy, legal services will become more expensive. And if a technology like artificial intelligence leads legal productivity to increase at a faster rate than the rest of the economy, then legal services will become cheaper.
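
A small worked sketch of that logic follows. The growth rates and horizon are hypothetical assumptions, not figures from the post or the draft article; the sketch assumes wages economy-wide track average productivity growth, so a sector whose own productivity grows more slowly sees its relative price rise.

```python
# Baumol's cost disease with hypothetical growth rates: unit cost ~ wage / productivity,
# and wages economy-wide are assumed to track overall productivity growth.

economy_productivity_growth = 0.03   # 3% per year across the economy (assumed)
legal_productivity_growth = 0.01     # 1% per year for legal services (assumed, slower)
years = 20

wage = 1.0
legal_productivity = 1.0
for _ in range(years):
    wage *= 1 + economy_productivity_growth
    legal_productivity *= 1 + legal_productivity_growth

relative_price_of_legal_services = wage / legal_productivity
print(round(relative_price_of_legal_services, 2))   # ~1.48: legal services become ~48% pricier in relative terms
```

Flip the two growth rates (say, because AI raises legal productivity above the economy-wide average) and the same arithmetic drives the relative price of legal services down instead.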

Continue reading
Innovation & the New Economy