Publications in Internet Governance
Scholarship
The growing economic importance of technical standards has heightened the need for a better understanding of why they succeed or fail. While existing literature has scrutinized the role of public governance, particularly in the realms of regulation, antitrust, and intellectual property, to date legal scholars have largely overlooked the role of private organizational and contractual lawyering in determining the path of technical standardization.
In this Article, we explore this dimension through a case study of the effects of private organizational governance and contracting practices on the fortunes of a nascent Internet security standard. The standard, known as Resource Public Key Infrastructure (“RPKI”), is designed to increase the trustworthiness of information about Internet routing. Through analysis of private organizational and contractual documents, semi-structured interviews with participants in the Internet operations industry, and attendance and participation in key industry conferences, we gained an embedded perspective on the role that private lawyering played in shaping would-be adopters’ perceptions and decisions regarding the technical standard.
According to our interviewees, contract and organizational bureaucracy mattered greatly. Notably, we found that the terms of contractual agreements prevented some potential adopters from experimenting with the technology and deterred others from proposing that their organizations adopt the technology. This was due to the perceived costs of involving organizational lawyers in technology-adoption decisions. In addition, contract terms deterred actors from increasing the functional value of the standard via complementary innovation and the development of complementary information services. Remarkably, even the basic mechanisms for presenting and assenting to contract terms chilled prospects for adoption. Regarding organization, we found that stark differences of governance and mission between key North American and European nonprofits contributed to different patterns of adoption. Taken together, these findings reveal the continuing importance of old-school transaction-cost engineering even in the most technical realms of Internet operation and standardization.
ICLE White Paper A comprehensive survey of the law & economics of online intermediary liability, which concludes that any proposed reform of Section 230 must meaningfully reduce the incidence of unlawful or tortious online content such that its net benefits outweigh its net costs.
A quarter-century since its enactment as part of the Communications Decency Act of 1996, a growing number of lawmakers have been seeking reforms to Section 230. In the 116th Congress alone, 26 bills were introduced to modify the law’s scope or to repeal it altogether. Indeed, we have learned much in the last 25 years about where Section 230 has worked well and where it has not.
Although the current Section 230 reform debate popularly—and politically—revolves around when platforms should be forced to host certain content politically favored by one faction (i.e., conservative speech) or when they should be forced to remove certain content disfavored by another (i.e., alleged “misinformation” or hate speech), this paper does not discuss, nor even entertain, such reform proposals. Rather, such proposals are (and should be) legal non-starters under the First Amendment.
Indeed, such reforms are virtually certain to harm, not improve, social welfare: As frustrating as imperfect content moderation may be, state-directed speech codes are much worse. Moreover, the politicized focus on curbing legal and non-tortious speech undermines the promise of making any progress on legitimate issues: The real gains to social welfare will materialize from reforms that better align the incentives of online platforms with the social goal of deterring or mitigating illegal or tortious conduct.
Section 230 contains two major provisions: (1) that an online service provider will not be treated as the speaker or publisher of the content of a third party, and (2) that actions taken by an online service provider to moderate the content hosted by its services will not trigger liability. In essence, Section 230 has come to be seen as a broad immunity provision insulating online platforms from liability for virtually all harms caused by user-generated content hosted by their services, including when platforms might otherwise be deemed to be implicated because of the exercise of their editorial control over that content.
To the extent that the current legal regime permits social harms online that exceed concomitant benefits, it should be reformed to deter those harms if such reform can be accomplished at sufficiently low cost. The salient objection to Section 230 reform is not one of principle, but of practicality: are there effective reforms that would address the identified harms without destroying (or excessively damaging) the vibrant Internet ecosystem by imposing punishing, open-ended legal liability? We believe there are.
First and foremost, we believe that Section 230(c)(1)’s intermediary-liability protections for illegal or tortious conduct by third parties can and should be conditioned on taking reasonable steps to curb such conduct, subject to procedural constraints that will prevent a tide of unmeritorious litigation.
This basic principle is not without its strenuous and thoughtful detractors, of course. A common set of objections to Section 230 reform has grown out of legitimate concerns that the economic and speech gains that have accompanied the rise of the Internet over the last three decades would be undermined or reversed if Section 230’s liability shield were weakened. Our paper thus establishes a proper framework for evaluating online intermediary liability and evaluates the implications of the common objections to Section 230 reform within that context. Indeed, it is important to take those criticisms seriously, as they highlight many of the pitfalls that could attend imprudent reforms. We examine these criticisms both to find ways to incorporate them into an effective reform agenda, and to highlight where the criticisms themselves are flawed.
Our approach is rooted in the well-established law & economics analysis of liability rules and civil procedure, which we use to introduce a framework for understanding the tradeoffs faced by online platforms under differing legal standards with differing degrees of liability for the behavior and speech of third-party users. This analysis is bolstered by a discussion of common law and statutory antecedents that allow us to understand how courts and legislatures have been able to develop appropriate liability regimes for the behavior of third parties in different, but analogous, contexts. Ultimately, and drawing on this analysis, we describe the contours of our recommended duty-of-care standard, along with a set of necessary procedural reforms that would help to ensure that we retain as much of the value of user-generated content as possible, while encouraging platforms to better police illicit and tortious content on their services.
An important goal of civil tort law is to align individual incentives with social welfare such that costly behavior is deterred and individuals are encouraged to take optimal levels of precaution against risks of injury. Not uncommonly, the law even holds intermediaries—persons or businesses that have a special relationship with offenders or victims—accountable when they are the least-cost avoider of harms, even when those harms result from the actions of third parties.
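The incentive-alignment and least-cost-avoider logic above has a standard formalization in tort law: the Learned Hand formula from United States v. Carroll Towing Co. (2d Cir. 1947), under which a failure to take a precaution is negligent when the burden of the precaution is less than the expected loss it would avert. A conventional statement of the formula (offered here as background; it is not drawn from this paper) is:

```latex
% Learned Hand formula (United States v. Carroll Towing Co., 1947):
%   B = burden (cost) of taking the precaution
%   P = probability of the harm occurring absent the precaution
%   L = magnitude of the loss if the harm occurs
B < P \cdot L
\quad \Longrightarrow \quad
\text{failure to take the precaution is negligent.}
```

On this view, the "optimal level of precaution" is the one at which the marginal burden of additional care just equals the marginal reduction in expected harm.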
Against this background, the near-complete immunity granted to online platforms by Section 230 for harms caused by platform users is a departure from normal rules governing intermediary behavior. This immunity has certainly yielded benefits in the form of more user-generated online content and the ability of platforms to moderate without fear of liability. But it has also imposed costs to the extent that broad immunity fails to ensure that illegal and tortious conduct are optimally deterred online.
The crucial question for any proposed reform of Section 230 is whether it could pass a cost-benefit test—that is, whether it is likely to meaningfully reduce the incidence of unlawful or tortious online content while sufficiently addressing the objections to the modification of Section 230 immunity, such that its net benefits outweigh its net costs. In the context of both criminal and tort law generally, this balancing is sought through a mix of direct and collateral enforcement actions that, ideally, minimizes the total costs of misconduct and enforcement. Section 230, as it is currently construed, however, eschews entirely the possibility of collateral liability, foreclosing an important mechanism for properly adjusting the overall liability scheme.
But there is no sound reason to think this must be so. While many objections to Section 230 reform—that is, to the imposition of any amount of intermediary liability—are well-founded, they also frequently suffer from overstatement or unsupported suppositions about the magnitude of harm. At the same time, some of the expressed concerns are either simply misplaced or serve instead as arguments for broader civil-procedure reform (or decriminalization), rather than as defenses of the particularized immunity afforded by Section 230 itself.
Unfortunately, the usual course of discussion typically fails to acknowledge the tradeoffs that Section 230—and its reform—requires. These tradeoffs embody value judgments about the quantity and type of speech that should exist online, how individuals threatened by tortious and illegal conduct online should be protected, how injured parties should be made whole, and what role online platforms should have in helping to negotiate these tradeoffs. This paper’s overarching goal, even more important than any particular recommendation, is to make explicit what these tradeoffs entail.
Of central importance to the approach taken in this paper, our proposals presuppose a condition frequently elided by defenders of the Section 230 status quo, although we believe nearly all of them would agree with the assertion: that there is actual harm—violations of civil law and civil rights, violations of criminal law, and tortious conduct—that occurs on online platforms and that imposes real costs on individuals and society at large. Our proposal proceeds on the assumption, in other words, that there are very real, concrete benefits that would result from demanding greater accountability from online intermediaries, even if that also leads to “collateral censorship” of some lawful speech.
It is necessary to understand that the baseline standard for speech and conduct—both online and offline—is not “anything goes,” but rather self-restraint, enforced primarily by incentives for deterrence. Just as the law may deter some amount of speech, so too is speech deterred by fear of reprisal, threat of social sanction, and people’s baseline sense of morality. Some of this “lost” speech will reflect over-deterrence, but one hopes that most deterred speech will be of the harmful or, at least, low-value sort (or else, the underlying laws and norms should be changed). Moreover, not even the most valuable speech is of infinite value, such that any change in a legal regime that results in relatively less speech can be deemed per se negative.
A proper evaluation of the merits of an intermediary-liability regime must therefore consider whether user liability alone is insufficient to deter bad actors, either because it is too costly to pursue remedies against users directly, or because the actions of platforms serve to make it less likely that harmful speech or conduct is deterred. The latter concern, in other words, is that intermediaries may—intentionally or not—facilitate harmful speech that would otherwise be deterred (self-censored) were it not for the operation of the platform.
Arguably, the incentives offered by each of the forces for self-restraint are weakened in the context of online platforms. Certainly everyone is familiar with the significantly weaker operation of social norms in the more attenuated and/or pseudonymous environment of online social interaction. While this environment facilitates more legal speech and conduct than in the offline world, it also facilitates more illegal and tortious speech and conduct. Similarly, fear of reprisal (i.e., self-help) is often attenuated online, not least because online harms are often a function of the multiplier effect of online speech: it is frequently not the actions of the original malfeasant actor, but those of neutral actors amplifying that speech or conduct, that cause harm. In such an environment, the culpability of the original actor is surely mitigated and may be lost entirely. Likewise, in the normal course, victims of tortious or illegal conduct and law enforcers acting on their behalf are the primary line of defense against bad actors. But the relative anonymity/pseudonymity of online interactions may substantially weaken this defense.
Many argue, nonetheless, that holding online intermediaries responsible for failing to remove offensive content would lead to a flood of lawsuits that would ultimately overwhelm service providers, and sub-optimally diminish the value these firms provide to society—a so-called “death by ten thousand duck-bites.” Relatedly, firms that face potentially greater liability would be forced to internalize some increased—possibly exorbitant—degree of compliance costs even if litigation never materialized.
There is certainly some validity to these concerns. Given the sheer volume of content online and the complexity, imprecision, and uncertainty of moderation processes, even very effective content-moderation algorithms will fail to prevent all actionable conduct, which could result in many potential claims. At the same time, it can be difficult to weed out unlawful conduct without inadvertently over-limiting lawful activity.
But many of the unique features of online platforms also cut against the relaxation of legal standards online. Among other things—and in addition to the attenuated incentives for self-restraint mentioned above—where traditional (offline) media primarily host expressive content, online platforms facilitate a significant volume of behavior and commerce that isn’t purely expressive. Tortious and illegal content tends to be less susceptible to normal deterrence online than in other contexts, as individuals can hide behind varying degrees of anonymity. Even users who are neither anonymous nor pseudonymous can sometimes prove challenging to reach with legal process. And, perhaps most importantly, online content is disseminated both faster and more broadly than offline media.
At the same time, an increase in liability risk for online platforms may lead not to insurmountable increases in litigation costs, but to other changes that may be less privately costly to a platform than litigation, and which may be socially desirable. Among these changes may be an increase in preemptive moderation; smaller, more specialized platforms and/or tighter screening of platform participants on the front end (both of which are likely to entail stronger reputational and normative constraints); the establishment of more effective user-reporting and harm-mitigation mechanisms; the development and adoption of specialized insurance offerings; or any number of other possible changes.
Thus the proper framework for evaluating potential reforms to Section 230 must include the following considerations: To what degree would shifting the legal rules governing platform liability increase litigation costs, increase moderation costs, constrain the provision of products and services, increase “collateral censorship,” and impede startup formation and competition, all relative to the status quo, not to some imaginary ideal state? Assessing the marginal changes in all these aspects entails, first, determining how they are affected by the current regime. It then requires identifying both the direction and magnitude of change that would result from reform. Next, it requires evaluating the corresponding benefits that legal change would bring in increasing accountability for tortious or criminal conduct online. And, finally, it necessitates hazarding a best guess of the net effect. Virtually never is this requisite analysis undertaken with any real degree of rigor. Our paper aims to correct that.
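The marginal comparison described above can be caricatured as a toy accounting exercise. Everything in the snippet below—the cost and benefit categories and the numbers attached to them—is a hypothetical placeholder chosen purely for illustration; the paper itself proposes no such quantification, and in practice each term would be an empirical estimate relative to the status quo:

```python
# Toy sketch of the marginal cost-benefit framework for evaluating a
# Section 230 reform proposal. All categories and figures below are
# hypothetical placeholders; the point is the structure of the
# comparison (marginal changes relative to the status quo), not the values.

def net_effect(marginal_costs, marginal_benefits):
    """Return the net social effect of a reform relative to the status quo."""
    return sum(marginal_benefits.values()) - sum(marginal_costs.values())

# Hypothetical marginal costs a reform might impose.
costs = {
    "litigation": 5.0,             # added litigation expense
    "moderation": 3.0,             # added content-moderation expense
    "collateral_censorship": 2.0,  # lawful speech deterred
    "entry_barriers": 1.5,         # startup formation and competition impeded
}

# Hypothetical marginal benefits of the same reform.
benefits = {
    "deterred_illegal_conduct": 9.0,  # unlawful or tortious conduct prevented
    "victim_compensation": 4.0,       # injured parties made whole
}

# A reform passes the cost-benefit test only if the net effect is positive.
print(net_effect(costs, benefits))
```

The hard part, of course, is not the subtraction but estimating the direction and magnitude of each term—which is precisely the analysis the paper argues is virtually never undertaken with rigor.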
What is called for is a properly scoped reform that applies the same political, legal, economic, and other social preferences offline as online, aimed at ensuring that we optimally deter illegal content without losing the benefits of widespread user-generated content. Properly considered, there is no novel conflict between promoting the flow of information and protecting against tortious or illegal conduct online. While the specific mechanisms employed to mediate between these two principles online and offline may differ—and, indeed, while technological differences can alter the distribution of costs and benefits in ways that must be accounted for—the fundamental principles that determine the dividing line between actionable and illegal or tortious content offline can and should be respected online, as well. Indeed, even Google has argued for exactly this sort of parity, recently calling on the Canadian government to “take care to ensure that their proposal does not risk creating different legal standards for online and offline environments.”
Keeping in mind the tradeoffs embedded in Section 230, we believe that, in order to more optimally mitigate truly harmful conduct on Internet platforms, intermediary-liability law should develop a “duty-of-care” standard that obliges service providers to reasonably protect their users and others from the foreseeable illegal or tortious acts of third parties. As a guiding principle, we should not hold online platforms vicariously liable for the speech of third parties, both because of the sheer volume of user-generated content online and the generally attenuated relationship between online platforms and users, as well as because of the potentially large costs to overly chilling free expression online. But we should place at least the same burden to curb unlawful behavior on online platforms that we do on traditional media operating offline.
Nevertheless, we hasten to add that this alone would likely be deficient: adding an open-ended duty of care to the current legal system could generate a volume of litigation that few, if any, platform providers could survive. Instead, any new duty of care should be tempered by procedural reforms designed to ensure that only meritorious litigation survives beyond a pre-discovery motion to dismiss.
Procedurally, Section 230 immunity protects service providers not just from liability for harm caused by third-party content, but also from having to incur substantial litigation costs. Concern for judicial economy and operational efficiency are laudable, of course, but such concerns are properly addressed toward minimizing the costs of litigation in ways that do not undermine the deterrent and compensatory effects of meritorious causes of action. While litigation costs that exceed the minimum required to properly assign liability are deadweight losses to be avoided, the cost of liability itself—when properly found—ought to be borne by the party best positioned to prevent harm. Thus, a functional regime will attempt to accurately balance excessive litigation costs against legitimate and necessary liability costs.
In order to achieve this balance, we recommend that, while online platforms should be responsible for adopting reasonable practices to mitigate illegal or tortious conduct by their users, they should not face liability for communication torts (e.g., defamation) arising out of user-generated content unless they fail to remove content they knew or should have known was defamatory. Further, we propose that Section 230(c)(2)’s safe harbor should remain in force and that, unlike for traditional media operating offline, the act of reasonable content moderation by online platforms should not, by itself, create liability exposure.
In sum, we propose that Section 230 should be reformed to incorporate the following high-level elements, encompassing two major components: first, a proposal to alter the underlying intermediary-liability rules to establish a “duty of care” requiring adherence to certain standards of conduct with respect to user-generated content; and second, a set of procedural reforms that are meant to phase in the introduction of the duty of care and its refinement by courts and establish guardrails governing litigation of the duty.
Online intermediaries should operate under a duty of care to take appropriate measures to prevent or mitigate foreseeable harms caused by their users’ conduct.
Section 230(c)(1) should not preclude intermediary liability when an online service provider fails to take reasonable care to prevent non-speech-related tortious or illegal conduct by its users.
As an exception to the general reasonableness rule above, Section 230(c)(1) should preclude intermediary liability for communication torts arising out of user-generated content unless an online service provider fails to remove content it knew or should have known was defamatory.
Section 230(c)(2) should provide a safe harbor from liability when an online service provider does take reasonable steps to moderate unlawful conduct. In this way, an online service provider would not be held liable simply for having let harmful content slip through, despite its reasonable efforts.
The act of moderation should not give rise to a presumption of knowledge. Taking down content may indicate that an online service provider knows it is unlawful, but it should not, by itself, make the provider liable for failing to remove the same or similar content wherever else it arises.
But Section 230 should contemplate “red-flag” knowledge, such that a failure to remove content will not be deemed reasonable if an online service provider knows or should have known that it is illegal or tortious. Because the Internet creates exceptional opportunities for the rapid spread of harmful content, a reasonableness obligation that applies only ex ante may be insufficient. Rather, it may be necessary to impose certain ex post requirements for harmful content that was reasonably permitted in the first instance, but that should nevertheless be removed given sufficient notice.
In order to effect the safe harbor for reasonable moderation practices that nevertheless result in harmful content, we propose the establishment of “certified” moderation standards under the aegis of a multi-stakeholder body convened by an overseeing government agency. Compliance with these standards would operate to foreclose litigation at an early stage against online service providers in most circumstances. If followed, a defendant could provide its certified moderation practices as a “certified answer” to any complaint alleging a cause of action arising out of user-generated content. Compliant practices will merit dismissal of the case, effecting a safe harbor for such practices.
In litigation, after a defendant answers a complaint with its certified moderation practices, the burden would shift to the plaintiff to adduce sufficient evidence to show that the certified standards were not actually adhered to. Such evidence should be more than mere res ipsa loquitur; it must be sufficient to demonstrate that the online service provider should have been aware of a harm or potential harm, that it had the opportunity to cure or prevent it, and that it failed to do so. Such a claim would need to meet a heightened pleading requirement, as for fraud, requiring particularity.
Finally, we believe any executive or legislative oversight of this process should be explicitly scheduled to sunset. Once the basic system of intermediary liability has had some time to mature, it should be left to courts to further manage and develop the relevant common law.
Our proposal does not demand perfection from online service providers in their content-moderation decisions—only that they make reasonable efforts. What is appropriate for YouTube, Facebook, or Twitter will not be the same as what’s appropriate for a startup social-media site, a web-infrastructure provider, or an e-commerce platform. A properly designed duty-of-care standard should be flexible and account for the scale of a platform, the nature and size of its user base, and the costs of compliance, among other considerations. Indeed, this sort of flexibility is a benefit of adopting a “reasonableness” standard, such as is found in common law negligence. Allowing courts to apply the flexible common law duty of reasonable care would also enable the jurisprudence to evolve with the changing nature of online intermediaries, the problems they pose, and the moderating technologies that become available.
Read the full working paper here.
TL;DR
Legal history offers examples of areas where attempting to apply liability directly to bad actors is likely to be ineffective, but where certain related parties might be able to either control the bad actors or mitigate the damage they cause. In such cases, the common law has long embraced indirect or vicarious liability, holding one party liable for wrongs committed by another. The purpose of this kind of indirect liability is to align incentives where they can be most useful by placing responsibility on the least-cost avoider.
The immunity from liability granted to online platforms by Section 230 of the Communications Decency Act is a departure from normal rules governing intermediary behavior. It is impossible to know exactly how a robust common law of online intermediary liability would have developed in a world where Section 230 immunity never existed.
Lessons can be drawn from how the offline world has dealt with third-party liability, especially when an intermediary operates under a duty of care. The common law offers several examples of duties that business owners owe to their customers or, sometimes, to the outside world. Central among these is the legal obligation to take reasonable steps to curb harm from the use of a business’ goods and services. If the business has created a situation or environment that puts people at risk, it has an obligation to mitigate that risk. It also can have obligations to prevent risk of harm to customers or others with whom it has entered into a relationship, even if the business did not directly create the risk.
Read the full explainer here.
TL;DR
The Communications Decency Act of 1996’s Section 230 holds that the law will not treat online service providers as speakers or publishers of third-party content, and that actions the providers take to moderate content hosted by their services will not trigger liability. A quarter-century later, a growing number of lawmakers seek reforms to Section 230. In the 116th Congress alone, 26 bills were introduced to modify the law’s scope or to repeal it altogether.
While the current debate popularly centers on whether platforms should be forced to host certain content or when they should be forced to remove other content, such reforms are virtually certain to harm, not improve, social welfare: As frustrating as imperfect content moderation may be, state-directed speech codes are much worse.
The real gains to social welfare will materialize from reforms that better align the incentives of online platforms with the social goal of deterring or mitigating illegal or tortious conduct. To the extent that the current legal regime permits social harms online that exceed concomitant benefits, it should be reformed to deter those harms if such reform can be accomplished at sufficiently low cost.
TL;DR
President Joe Biden has called for “future-proof” broadband infrastructure as part of his Build Back Better plan, and some members of the U.S. Senate want the Federal Communications Commission (FCC) to update its definition of broadband to require both download and upload speeds of at least 100 Mbps. States like California have likewise advanced bills to prioritize funding for infrastructure that supports 100 Mbps or greater download speeds. It is widely believed that the FCC will update the definition of broadband from the 2015 standard of 25 Mbps download/3 Mbps upload speeds.
Studies of U.S. broadband usage suggest that typical consumers do not need upload speeds to be as fast as download speeds. Moreover, they typically require download speeds of less than 100 Mbps. Linking public funding to a required symmetrical 100 Mbps speed tier, or using that tier as a benchmark to define adequate broadband deployment, would have negative consequences for broadband buildout.
TOTM
The European Commission recently issued a formal Statement of Objections (SO) in which it charges Apple with antitrust breach. In a nutshell, the commission argues that Apple prevents app developers—in this case, Spotify—from using in-app purchase systems (IAPs) other than Apple’s own, and from steering users toward other, cheaper payment methods on another site. This, the commission says, results in higher prices for consumers in the audio streaming and ebook/audiobook markets.
Read the full piece here.
ICLE Issue Brief
The COVID-19 pandemic has highlighted the resilience of U.S. broadband infrastructure, the extent to which we rely on that infrastructure, and the geographies and communities where broadband build-out lags behind. As the extent and impact of the digital divide has been made clearer, there is renewed interest in the best ways to expand broadband access to better serve all Americans.
At ICLE, we would caution policymakers to eschew calls to address the digital divide simply by throwing vast sums of money at the problem. They should, instead, pursue a principled approach designed to encourage entry in new regions, while avoiding poorly managed subsidies and harmful price controls that would discourage investment and innovation by incumbent internet service providers (ISPs). Here is how to do that.
Read the full brief here.
TOTM
In his recent concurrence in Biden v. Knight, Justice Clarence Thomas sketched a roadmap for how to regulate social-media platforms. The animating factor for Thomas, much like for other conservatives, appears to be a sense that Big Tech has exhibited anti-conservative bias in its moderation decisions, most prominently by excluding former President Donald Trump from Twitter and Facebook. The opinion has predictably been greeted warmly by conservative champions of social-media regulation, who believe it shows how states and the federal government can proceed on this front.
Read the full piece here.
Amicus Brief ICLE supports the appeal filed by ACA Connects et al. seeking review of the district court’s denial of a preliminary injunction. As detailed herein, the district court failed to consider economic and empirical realities that militate in favor of finding irreparable harm to the Appellants’ members. Moreover, the same economic and empirical realities tip the balance of equities in favor of the Appellants, and establish that the public interest is in granting a preliminary injunction against enforcement of the California Internet Consumer Protection and Net Neutrality Act of 2018.
In 2018, the FCC issued its Restoring Internet Freedom Order, 33 FCC Rcd. 311 (2018) [“2018 Order”], which returned broadband Internet access service (“broadband”) to a classification as a Title I information service. The FCC determined that a “light touch” regulatory regime was necessary to promote investment in broadband. Id. ¶¶ 1-2. While removing the “no-blocking” and “no-throttling” rules previously imposed under the 2015 Open Internet Order, Protecting and Promoting the Open Internet, Report and Order on Remand, Declaratory Ruling, and Order, 30 FCC Rcd. 5601 (2015) [“2015 Order”], the FCC also removed the “general conduct” standard—an open-ended regulatory catch-all that would permit the FCC to examine any conduct of broadband providers that it deemed potentially threatening to Internet openness. Cf. 2018 Order ¶¶ 239-245. Yet, notably, the FCC elected to keep a version of the 2015 Order’s transparency rule in place, which requires broadband providers to disclose any blocking, throttling, paid prioritization, or similar conduct. Id.
In retaining the transparency rule, the FCC noted that the FTC and state attorneys general are in a position to prevent anticompetitive consumer harm through the enforcement of consumer protection and antitrust laws. See 2018 Order ¶ 142. Thus, the overarching goal of the 2018 Order was to ensure that business conduct that could benefit consumers was not foreclosed by regulatory fiat, as it would have been under the 2015 Order, while empowering the FCC, FTC, and state attorneys general to identify and address discrete consumer harms.
The Mozilla court noted that the FCC could invoke conflict preemption principles in order to prevent inconsistent state laws from interfering with the 2018 Order. Mozilla Corp. v. FCC, 940 F.3d 1, 85 (D.C. Cir. 2019) (per curiam). Without such preemption, a patchwork of inconsistent state laws would confuse compliance efforts and drive up broadband deployment costs. Cf. id. Relying as it does on a common carriage approach to regulating the Internet, and fragmenting the regulation of broadband providers between the federal and state levels, SB-822 is at odds with the purpose of the 2018 Order.
The district court found the balance of the equities and the public interest both weighed in favor of California’s enforcement of SB-822, stating the law “provides crucial protections for California’s economy, democracy, and society as a whole,” Transcript of Proceedings, American Cable Ass’n v. Becerra, No. 2:18-cv-02684 (E.D. Cal. Feb. 23, 2021) (ER-7–78) [“Tr.”], and that a preliminary injunction would “negatively impact the State of California more than [it would benefit] the ISP companies.” Id. at 69. In denying the motion for a preliminary injunction, the court also found the Appellants failed to show a likelihood of success on the merits. Id. at 67.
The district court wrongly concluded the balance of equities tips in favor of Defendant-Appellee, the state of California, and incorrectly assumed that the Appellants’ members would not suffer irreparable harm. The economics underlying broadband deployment, combined with competition and consumer protection law, provide adequate protection to consumers and firms in the marketplace without enforcement of SB-822. And, because of the sovereign immunity California enjoys under the Eleventh Amendment, the damages the Appellants’ members would suffer cannot be remedied. On the other hand, enforcement of this law will significantly harm the Appellants’ members as well as the public by allowing states to create a patchwork of inconsistent laws and bans on consumer welfare-enhancing conduct like zero-rating.
The district court made crucial errors in its analysis when balancing the equities.
First, when evaluating the likelihood of ISPs acting in ways that would reduce Internet openness, it failed to consider the economic incentives that militate against this outcome.
ISPs operate as multi-sided platforms—their ability to draw consumers and edge providers to both sides of their platforms depends on behavior that comports with consumer expectations. Both broadband consumers and edge providers demand openness, and there is no reason to expect ISPs to systematically subvert those desires and risk losing revenue and suffering reputational harm. Contrary to the district court’s characterization, the good behavior of ISPs is not attributable to scrutiny during the pendency of the current litigation; rather, it is a rational response to consumer demand and part of a course of conduct that has existed for decades.
Second, the district court discounted the legal backdrop that would both hold ISPs to their promises and prevent them from committing competitive harms.
All of the major ISPs have made public promises to refrain from blocking, throttling, or engaging in paid prioritization. See infra Part I(A) at 17. Further, the FCC’s 2018 Order creates a transparency regime that would prevent ISPs from covertly engaging in the practices SB-822 seeks to prevent. The FTC’s Section 5 authority over “unfair or deceptive acts or practices” empowers that agency to pursue ISPs that make such promises and break them, while state attorneys general can also bring enforcement actions under state consumer protection laws. 2018 Order ¶¶ 140-41.
In addition to the consumer protection enforcement noted above, antitrust law provides a well-developed set of legal rules to prevent ISPs from engaging in anticompetitive conduct. These rules would prevent ISPs from entering into anticompetitive agreements with each other, or with edge providers, that harm competition, as well as prevent anticompetitive unilateral conduct.
In summary, the district court failed to properly balance the equities and, in so doing, sanctioned net harm to the public interest. Both the underlying economic incentives and existing laws ensure ISPs will continue to provide broadband service that meets consumer expectations. By contrast, SB-822, which goes further than even the 2015 Order, inflicts substantial harm on the public interest by presumptively banning practices, like zero-rating, that increase consumer welfare without harming competition.