
Showing 9 of 221 Results for "net neutrality"

ICLE Brief for D.C. Circuit in State of New York v. Facebook

Amicus Brief – In this amicus brief for the U.S. Court of Appeals for the D.C. Circuit, ICLE and a dozen scholars of law & economics address the broad consensus disfavoring how New York and other states seek to apply the “unilateral refusal to deal” doctrine in an antitrust case against Facebook.

United States Court of Appeals
for the District of Columbia Circuit

STATE OF NEW YORK, et al.,
Plaintiffs-Appellants,
v.
FACEBOOK, INC.,
Defendant-Appellee.

ON APPEAL FROM THE UNITED STATES DISTRICT COURT
FOR THE DISTRICT OF COLUMBIA
No. 1:20-cv-03589-JEB (Hon. James E. Boasberg)

BRIEF OF INTERNATIONAL CENTER FOR
LAW AND ECONOMICS AND SCHOLARS OF LAW
AND ECONOMICS AS AMICUS CURIAE SUPPORTING
DEFENDANT-APPELLEE FACEBOOK, INC. AND AFFIRMANCE

 

STATEMENT OF THE AMICUS CURIAE

Amici are leading scholars of economics, telecommunications, and/or antitrust. Their scholarship reflects years of experience and publications in these fields.

Amici’s expertise and academic perspectives will aid the Court in deciding whether to affirm in three respects. First, amici provide an explanation of key economic concepts underpinning how economists understand the welfare effects of a monopolist’s refusal to deal voluntarily with a competitor and why that supports affirmance here. Second, amici offer their perspective on the limited circumstances that might justify penalizing a monopolist’s unilateral refusal to deal—and why this case is not one of them. Third, amici explain why the District Court’s legal framework was correct and why a clear standard is necessary when analyzing alleged refusals to deal.

SUMMARY OF ARGUMENT

This brief addresses the broad consensus in the academic literature disfavoring a theory underlying the plaintiffs’ case—the “unilateral refusal to deal” doctrine. The States allege that Facebook restricted access to an input (Facebook’s Platform) in order to prevent third parties from using that access to export Facebook data to competitors or compete directly with Facebook. But a unilateral refusal to deal involves more than an allegation that a monopolist refuses to enter into a business relationship with a rival.

Mainstream economists and competition law scholars are skeptical of imposing liability, even on a monopolist, based solely on its choice of business partners. The freedom of firms to choose their business partners is a fundamental tenet of the free market economy, and the mechanism by which markets produce the greatest welfare gains. Thus, cases compelling business dealings should be confined to particularly delineated circumstances.

In Part I below, amici describe why it is generally inefficient for courts to compel economic actors to deal with one another. Such “solutions” are generally unsound in theory and unworkable in practice, in that they ask judges to operate as regulators over the defendant’s business.

In Part II, amici explain why Aspen Skiing—the Supreme Court’s most prominent precedent permitting liability for a monopolist’s unilateral refusal to deal—went too far and should not be expanded as the States and some of their amici propose.

In Part III, amici explain that the District Court correctly held that the conduct at issue here does not constitute a refusal to deal under Aspen Skiing. A unilateral refusal to deal should trigger antitrust liability only where a monopolist turns down more profitable dealings with a competitor in an effort to drive that competitor’s exit or to disable its ability to compete, thereby allowing the monopolist to recoup its losses by increasing prices in the future. But the States’ allegations do not describe that scenario.

In Part IV, amici explain that the District Court properly considered and dismissed the States’ “conditional dealing” argument. The States’ allegations are correctly addressed under the rubric of a refusal to deal—not exclusive dealing or otherwise. The States’ desire to mold their allegations into different legal theories highlights why courts should use a strict, clear standard to analyze refusals to deal.

Read the full brief here.


The Digital Markets Act and EU Antitrust Enforcement: Double & Triple Jeopardy

ICLE White Paper – The European Union’s Digital Markets Act will intersect with EU and national-level competition law in ways that subject tech platforms to the risk of double jeopardy and conflicting decisions for the same activity.

Executive Summary

In contrast to its stated aims to promote a Digital Single Market across the European Union, the proposed Digital Markets Act (DMA) could serve to fragment Europe’s legal framework even further, largely due to overlaps with competition law. This paper provides an analytical overview of areas where conflicts would inevitably arise from dual application of the DMA and European and national-level antitrust rules. It counsels fully centralizing the DMA’s enforcement at the EU level to avoid further fragmentation, as well as constraining the law’s scope by limiting its application to a few large platform ecosystems.

Introduction

The Digital Markets Act (DMA) has entered the last and decisive stage of its approval process. With the Council of the European Union having reached consensus on its general approach[1] and the European Parliament having adopted amendments,[2] the DMA proposal has moved into the inter-institutional negotiations known as the “trilogue.”

The DMA has spurred a lively debate since it initially was proposed by the European Commission in December 2020.[3] This deliberative process has touched on all the proposal’s features, including its aims and scope, the regulations and rule-based approach it would adopt, and the measure’s institutional design. However, given the positions expressed by the Council and the Parliament, the rationale for DMA intervention and the proposal’s relationship with antitrust law remain relevant topics for exploration.

The DMA is grounded explicitly on the notion that competition law alone is insufficient to effectively address the challenges and systemic problems posed by the digital platform economy. Indeed, the scope of antitrust is limited to certain instances of market power (e.g., dominance on specific markets) and of anti-competitive behavior.[4] Further, its enforcement occurs ex post and requires extensive investigation on a case-by-case basis of what are often very complex sets of facts.[5] Moreover, it may not effectively address the challenges to well-functioning markets posed by the conduct of gatekeepers, who are not necessarily dominant in competition-law terms.[6] As a result, proposals such as the DMA invoke regulatory intervention to complement traditional antitrust rules by introducing a set of ex ante obligations for online platforms designated as gatekeepers. This also allows enforcers to dispense with the laborious process of defining relevant markets, proving dominance, and measuring market effects.

The DMA’s framers declare that the law aims to protect different legal interests than antitrust rules do. That is, rather than seeking to protect undistorted competition on any given market, the DMA looks to ensure that markets where gatekeepers are present remain contestable and fair, independent of the actual, likely, or presumed effects of the conduct of a given gatekeeper.[7] Accordingly, the relevant legal basis for the DMA is found not in Article 103 of the Treaty on the Functioning of the European Union (TFEU), which is intended to implement antitrust rules pursuant to Articles 101 and 102 TFEU, but rather in Article 114 TFEU, covering “Common Rules on Competition, Taxation and Approximation of Laws.” Further, from an institutional-design perspective, the DMA opts for centralized implementation and enforcement at the EU level, rather than the traditional decentralized or parallel antitrust enforcement at the national level.

Because the intent of the DMA is to serve as a complementary regulatory scheme, traditional antitrust rules will remain applicable. However, the application of those rules must not undermine the obligations imposed on gatekeepers under the forthcoming DMA regulation or, in particular, efforts to make the DMA’s application uniform and effective.[8]

Despite claims that the DMA is not an instrument of competition law[9] and thus would not affect how antitrust rules apply in digital markets, the forthcoming regime appears to blur the line between regulation and antitrust by mixing their respective features and goals. Indeed, the DMA shares the same aims and protects the same legal interests as competition law.[10] Further, its list of prohibitions is effectively a synopsis of past and ongoing antitrust cases.[11] Therefore, the proposal can be described as a sector-specific competition law,[12] or a shift toward a more regulatory approach to competition law—one that is designed to allow assessments to be made more quickly and through a more simplified process.[13]

Acknowledging the continuum between competition law and the DMA, the European Competition Network (ECN) and some EU member states (self-anointed “friends of an effective DMA”) have proposed empowering national competition authorities (NCAs) to enforce DMA obligations.[14] Under this approach, while the European Commission would remain primarily responsible for enforcing the DMA and would have sole jurisdiction for designating gatekeepers or granting exemptions, NCAs would be permitted to enforce the DMA’s obligations and to use investigative and monitoring powers at their own initiative. According to supporters of this approach, the concurrent competence of the Commission and NCAs is needed to avoid the risks of conflicting decisions or remedies that would undermine the effectiveness and coherence of both the DMA and antitrust law (and, ultimately, the integrity of the internal market).[15]

These risks have been heightened by the fact that Germany (one of the “friends of an effective DMA”) subsequently empowered its NCA, the Bundeskartellamt, to intervene at an early stage in cases where it finds that competition is threatened by large digital companies—in essence, granting the agency a regulatory tool that is functionally equivalent to the DMA.[16] Further, several member states are preparing to apply national rules on relative market power and economic dependence to large digital platforms, with the goal of correcting perceived imbalances of bargaining power between online platforms and business users.[17] As a result of these intersections among the DMA, national and European antitrust rules, and national laws on superior bargaining power, a digital platform may be subject to cumulative proceedings for the very same conduct, facing risks of double (or even triple and quadruple) jeopardy.[18]

The aim of this paper is to guide the reader through the jungle of potentially overlapping rules that will affect European digital markets in the post-DMA world. It attempts to demonstrate that, despite significant concerns about both the DMA’s content and its rationale, full centralization of its enforcement at EU level will likely be needed to reduce fragmentation and ensure harmonized implementation of the rules. Frictions with competition law would be further confined by narrowing the DMA’s scope to ecosystem-related issues, thereby limiting its application to the few large platforms that are able to orchestrate an ecosystem.

The paper is structured as follows. Section II analyzes the intersection between the DMA and competition law. Section III examines the DMA’s enforcement structure and the solutions advanced to safeguard cooperation and coordination with member states. Section IV illustrates the arguments supporting full centralization of DMA enforcement and the need to narrow its scope. Section V concludes.

Read the full white paper here.

[1] Proposal for a Regulation of the European Parliament and of the Council on Contestable and Fair Markets on the Digital Sector (Digital Markets Act) – General Approach, Council of the European Union (Nov. 16, 2021), available at https://data.consilium.europa.eu/doc/document/ST-13801-2021-INIT/en/pdf.

[2] Amendments Adopted on the Proposal for a Regulation of the European Parliament and of the Council on Contestable and Fair Markets in the Digital Sector (Digital Markets Act), European Parliament (Dec. 15, 2021), https://www.europarl.europa.eu/doceo/document/TA-9-2021-12-15_EN.html.

[3] Proposal for a Regulation on Contestable and Fair Markets in the Digital Sector (Digital Markets Act), European Commission (Dec. 15, 2020), available at https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52020PC0842&from=en.

[4] Ibid., Recital 5.

[5] Ibid.

[6] Ibid.

[7] Ibid., Recital 10.

[8] Ibid., Recital 9 and Article 1(5).

[9] Margrethe Vestager, Competition in a Digital Age, speech to the European Internet Forum (Mar. 17, 2021), https://ec.europa.eu/commission/commissioners/2019-2024/vestager/announcements/competition-digital-age_en.

[10] Heike Schweitzer, The Art to Make Gatekeeper Positions Contestable and the Challenge to Know What Is Fair: A Discussion of the Digital Markets Act Proposal, 3 ZEuP 503 (Jun. 11, 2021).

[11] Cristina Caffarra and Fiona Scott Morton, The European Commission Digital Markets Act: A Translation, Vox EU (Jan. 5, 2021), https://voxeu.org/article/european-commission-digital-markets-act-translation.

[12] Nicolas Petit, The Proposed Digital Markets Act (DMA): A Legal and Policy Review, 12 J. Eur. Compet. Law Pract. 529 (May 11, 2021).

[13] Marco Cappai and Giuseppe Colangelo, Taming Digital Gatekeepers: The More Regulatory Approach to Antitrust Law, 41 Comput. Law Secur. Rev. 1 (Apr. 9, 2021).

[14] How National Competition Agencies Can Strengthen the DMA, European Competition Network (Jun. 22, 2021), available at https://ec.europa.eu/competition/ecn/DMA_joint_EU_NCAs_paper_21.06.2021.pdf; Strengthening the Digital Markets Act and Its Enforcement, German Federal Ministry for Economic Affairs and Energy, French Ministère de l’Économie, des Finances et de la Relance, and Dutch Ministry of Economic Affairs and Climate Policy (May 27, 2021), available at https://www.bmwi.de/Redaktion/DE/Downloads/XYZ/zweites-gemeinsames-positionspapier-der-friends-of-an-effective-digital-markets-act.pdf?__blob=publicationFile&v=4.

[15] European Competition Network, supra note 14, 6-7.

[16] See Section 19a of the GWB Digitalization Act (Jan. 18, 2021), https://www.bundesrat.de/SharedDocs/beratungsvorgaenge/2021/0001-0100/0038-21.html.

[17] See, e.g., German GWB Digitalization Act, supra note 16; See, also, Belgian Royal Decree of 31 July 2020 Amending Books I and IV of the Code of Economic Law as Concerns the Abuse of Economic Dependence, Belgian Official Gazette (Jul. 19, 2020), http://www.ejustice.just.fgov.be/cgi_loi/change_lg.pl?language=fr&la=F&cn=2019040453&table_name=loi.

[18] Marco Cappai and Giuseppe Colangelo, A Unified Test for the European Ne Bis in Idem Principle: The Case Study of Digital Markets Regulation, SSRN working paper (Oct. 27, 2021), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3951088.


The Return of (De Facto) Rate Regulation: Title II Will Slow Broadband Deployment and Access

TOTM – President Joe Biden’s nomination of Gigi Sohn to serve on the Federal Communications Commission (FCC)—scheduled for a second hearing before the Senate Commerce Committee Feb. 9—has been met with speculation that it presages renewed efforts at the FCC to enforce net neutrality. A veteran of tech policy battles, Sohn served as counselor to former FCC Chairman Tom Wheeler at the time of the commission’s 2015 net-neutrality order.

Read the full piece here.

 


Privacy and Security Implications of Regulation of Digital Services in the EU and in the US

Scholarship – Written for the Transatlantic Technology Law Forum (TTLF) Working Paper Series, ICLE Senior Scholar Mikołaj Barczentewicz assesses privacy and security risks raised by U.S. and EU legislative proposals to regulate digital platforms.

The attached is a part of the Transatlantic Technology Law Forum’s (TTLF) Working Paper Series, which presents original research on technology- and business-related law and policy issues of the European Union and the United States. TTLF is a joint initiative of Stanford Law School and the University of Vienna School of Law.

Abstract

The goal of this project is to assess the data privacy and security implications of the “new wave” of legislation on digital services—both in the United States and in the EU. In the European Union, the proposals for the Digital Services Act and the Digital Markets Act include provisions that have potentially significant security and privacy implications, like interoperability obligations for online platforms or provisions for data access for researchers. Similar provisions, e.g., on interoperability, are included in bills currently being considered by the U.S. Congress (e.g., in Rep. David Cicilline’s American Choice and Innovation Online Act and in Sen. Amy Klobuchar’s American Innovation and Choice Online Act). Some stakeholders are advocating that the EU and U.S. legislatures go even further than currently contemplated in a direction that could potentially have negative security and privacy consequences—especially on interoperability. I aim to assess whether the legislative proposals in their current form adequately address potential privacy and security risks, and what changes in the proposed legislation might help to alleviate the risks.

Introduction

Increasing information privacy and security through the law is notoriously difficult, even if that is the explicit goal of legislation. Thus, perhaps we should instead expect the law at least not to unintentionally decrease the level of privacy and security. Unfortunately, pursuing even seemingly unrelated policy aims through legislation may have that negative effect. In this paper, I analyze several legislative proposals from the EU and from the United States belonging to the new “techlash” wave. All those bills purport to improve the situation of consumers or the competitiveness of digital markets. However, as I argue, they would all have negative and unaddressed consequences in terms of information privacy and security.

On the EU side, I consider the Digital Services Act (DSA) and the Digital Markets Act (DMA) proposals. The DSA and the DMA have been proceeding through the EU legislative process with unexpected speed and, given what looks like significant political momentum, it is possible that they will become law. On the U.S. side, I look at Rep. David Cicilline’s (D-R.I.) American Choice and Innovation Online Act, Rep. Mary Gay Scanlon’s (D-Pa.) Augmenting Compatibility and Competition by Enabling Service Switching (ACCESS) Act, Sen. Amy Klobuchar’s (D-Minn.) American Innovation and Choice Online Act, and Sen. Richard Blumenthal’s (D-Conn.) Open App Markets Act.

I chose to focus on three regulatory solutions: (1) mandating interoperability, (2) mandating device neutrality (the possibility of sideloading applications), and (3) compulsory data access (by vetted researchers or by authorities). The first two models are shared by most of the discussed legislative proposals, other than the DSA. The last one is only included in the DSA.

Read the full paper here.


Geoff Manne on Net Neutrality

CNBC – ICLE President Geoffrey Manne was quoted by CNBC in a piece about efforts at the Federal Communications Commission to invoke Title II to regulate net neutrality. You can read the full piece here.

Geoffrey Manne, president of the International Center for Law & Economics, which has opposed Title II reclassification for ISPs, said social justice-centered arguments could have more resonance now. That could include minimal price regulation requiring ISPs to offer a low-cost tier of service, an outcome that, “I don’t think would be the end of the world,” Manne said.

 


Sam Bowman on Amazon’s Italian Fine

TechMonitor – ICLE Director of Competition Policy Sam Bowman was quoted by TechMonitor in a story about Amazon’s €1.13 billion fine by Italy’s antitrust regulator. The full story is available here.

But not everyone is convinced. Sam Bowman, director of competition policy at the International Center for Law & Economics think tank, says Amazon argues that FBA offers a more reliable service to consumers than other delivery options, and notes that there is no suggestion in the Italian ruling that consumers were harmed by Amazon’s behaviour. “It’s a philosophical question of whether Amazon has the right to prioritise services in this way if it wants to,” he says. “What this kind of ruling does is reduce Amazon’s role to a facilitator of the network between customers and businesses.”

Bowman says the role of platforms like Amazon goes beyond that of an intermediary and argues they are useful for consumers who are not confident accessing digital markets. “They bring order to the chaos of the internet,” he says. “For a lot of people, navigating that chaos is very difficult, takes a lot of time and carries a lot of risk. The platform is not just a conduit, it applies rules and quasi-regulations on the market it creates. We hope those rules will benefit customers, and if not they will shop elsewhere. The logic of this decision is that Amazon does not have the right to apply its own rules, and is simply a downpipe between consumers and sellers.”

…But Bowman believes the impact of the ruling – and the DMA – could be that platforms such as Amazon withdraw their own products from the European market and take a more neutral position. “Neutrality sounds quite appealing, but in terms of usability it may make things worse for consumers,” he says. “A platform like eBay is much more open and neutral than Amazon, but not necessarily better. I think a consequence [of the self-preferencing ban] will be the eBay-ification of a lot of tech platforms.”

Bowman says this change could take some time, depending on how the final DMA takes shape. “At the moment the way it is written seems very prescriptive about what can and can’t be done. We don’t know if this will lead to companies changing what they do overnight, or whether they will continue as usual and wait to discover how the European commission interprets these rules through rulings or lawsuits.”

Businesses using Amazon’s marketplace in the UK are likely to see fewer changes, Bowman says, as the country’s proposed legislation for regulating digital platforms is less prescriptive, focusing on outcomes rather than strict rules. “The UK approach is likely to be a lot softer, with the companies developing a relationship with the regulator,” he says. “I’m not convinced this will do much for competition, but the ambiguity means it is less likely to generate unsatisfactory outcomes where ‘good’ practices are banned because they don’t comply with the rules.”


Who Moderates the Moderators?: A Law & Economics Approach to Holding Online Platforms Accountable Without Destroying the Internet

ICLE White Paper – A comprehensive survey of the law & economics of online intermediary liability, which concludes that any proposed reform of Section 230 must meaningfully reduce the incidence of unlawful or tortious online content such that its net benefits outweigh its net costs.

Executive Summary

A quarter-century after Section 230 was enacted as part of the Communications Decency Act of 1996, a growing number of lawmakers have been seeking to reform it. In the 116th Congress alone, 26 bills were introduced to modify the law’s scope or to repeal it altogether. Indeed, we have learned much in the last 25 years about where Section 230 has worked well and where it has not.

Although the current Section 230 reform debate popularly—and politically—revolves around when platforms should be forced to host certain content politically favored by one faction (i.e., conservative speech) or when they should be forced to remove certain content disfavored by another (i.e., alleged “misinformation” or hate speech), this paper does not discuss, nor even entertain, such reform proposals. Rather, such proposals are (and should be) legal non-starters under the First Amendment.

Indeed, such reforms are virtually certain to harm, not improve, social welfare: As frustrating as imperfect content moderation may be, state-directed speech codes are much worse. Moreover, the politicized focus on curbing legal and non-tortious speech undermines the promise of making any progress on legitimate issues: The real gains to social welfare will materialize from reforms that better align the incentives of online platforms with the social goal of deterring or mitigating illegal or tortious conduct.

Section 230 contains two major provisions: (1) that an online service provider will not be treated as the speaker or publisher of the content of a third party, and (2) that actions taken by an online service provider to moderate the content hosted by its services will not trigger liability. In essence, Section 230 has come to be seen as a broad immunity provision insulating online platforms from liability for virtually all harms caused by user-generated content hosted by their services, including when platforms might otherwise be deemed to be implicated because of the exercise of their editorial control over that content.

To the extent that the current legal regime permits social harms online that exceed concomitant benefits, it should be reformed to deter those harms if such reform can be accomplished at sufficiently low cost. The salient objection to Section 230 reform is not one of principle, but of practicality: are there effective reforms that would address the identified harms without destroying (or excessively damaging) the vibrant Internet ecosystem by imposing punishing, open-ended legal liability? We believe there are.

First and foremost, we believe that Section 230(c)(1)’s intermediary-liability protections for illegal or tortious conduct by third parties can and should be conditioned on taking reasonable steps to curb such conduct, subject to procedural constraints that will prevent a tide of unmeritorious litigation.

This basic principle is not without its strenuous and thoughtful detractors, of course. A common set of objections to Section 230 reform has grown out of legitimate concerns that the economic and speech gains that have accompanied the rise of the Internet over the last three decades would be undermined or reversed if Section 230’s liability shield were weakened. Our paper thus establishes a proper framework for evaluating online intermediary liability and evaluates the implications of the common objections to Section 230 reform within that context. Indeed, it is important to take those criticisms seriously, as they highlight many of the pitfalls that could attend imprudent reforms. We examine these criticisms both to find ways to incorporate them into an effective reform agenda, and to highlight where the criticisms themselves are flawed.

Our approach is rooted in the well-established law & economics analysis of liability rules and civil procedure, which we use to introduce a framework for understanding the tradeoffs faced by online platforms under differing legal standards with differing degrees of liability for the behavior and speech of third-party users. This analysis is bolstered by a discussion of common law and statutory antecedents that allow us to understand how courts and legislatures have been able to develop appropriate liability regimes for the behavior of third parties in different, but analogous, contexts. Ultimately, and drawing on this analysis, we describe the contours of our recommended duty-of-care standard, along with a set of necessary procedural reforms that would help to ensure that we retain as much of the value of user-generated content as possible, while encouraging platforms to better police illicit and tortious content on their services.

The Law & Economics of Online Intermediary Liability

An important goal of civil tort law is to align individual incentives with social welfare such that costly behavior is deterred and individuals are encouraged to take optimal levels of precaution against risks of injury. Not uncommonly, the law even holds intermediaries—persons or businesses that have a special relationship with offenders or victims—accountable when they are the least-cost avoider of harms, even when those harms result from the actions of third parties.

Against this background, the near-complete immunity granted to online platforms by Section 230 for harms caused by platform users is a departure from normal rules governing intermediary behavior. This immunity has certainly yielded benefits in the form of more user-generated online content and the ability of platforms to moderate without fear of liability. But it has also imposed costs to the extent that broad immunity fails to ensure that illegal and tortious conduct is optimally deterred online.

The crucial question for any proposed reform of Section 230 is whether it could pass a cost-benefit test—that is, whether it is likely to meaningfully reduce the incidence of unlawful or tortious online content while sufficiently addressing the objections to the modification of Section 230 immunity, such that its net benefits outweigh its net costs. In the context of both criminal and tort law generally, this balancing is sought through a mix of direct and collateral enforcement actions that, ideally, minimizes the total costs of misconduct and enforcement. Section 230, as it is currently construed, however, eschews entirely the possibility of collateral liability, foreclosing an important mechanism for properly adjusting the overall liability scheme.
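One way to make this balancing explicit is as a simple decision rule, sketched here in standard law & economics terms; the notation is illustrative and does not appear in the paper itself:

\[
\text{adopt reform } R \quad\Longleftrightarrow\quad
\underbrace{\Delta H(R)}_{\text{harm deterred}} \;>\;
\underbrace{\Delta L(R)}_{\text{added litigation and compliance costs}} \;+\;
\underbrace{\Delta S(R)}_{\text{lost lawful speech and services}}
\]

On this schematic view, Section 230 as currently construed sets collateral liability to zero no matter how the terms on either side change, which is why the paper treats the possibility of collateral liability as an adjustable lever rather than a foregone conclusion.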

But there is no sound reason to think this must be so. While many objections to Section 230 reform—that is, to the imposition of any amount of intermediary liability—are well-founded, they also frequently suffer from overstatement or unsupported suppositions about the magnitude of harm. At the same time, some of the expressed concerns are either simply misplaced or serve instead as arguments for broader civil-procedure reform (or decriminalization), rather than as defenses of the particularized immunity afforded by Section 230 itself.

Unfortunately, the usual course of discussion typically fails to acknowledge the tradeoffs that Section 230—and its reform—requires. These tradeoffs embody value judgments about the quantity and type of speech that should exist online, how individuals threatened by tortious and illegal conduct online should be protected, how injured parties should be made whole, and what role online platforms should have in helping to negotiate these tradeoffs. This paper’s overarching goal, even more important than any particular recommendation, is to make explicit what these tradeoffs entail.

Of central importance to the approach taken in this paper, our proposals presuppose a condition frequently elided by defenders of the Section 230 status quo, although we believe nearly all of them would agree with the assertion: that there is actual harm—violations of civil law and civil rights, violations of criminal law, and tortious conduct—that occurs on online platforms and that imposes real costs on individuals and society at large. Our proposal proceeds on the assumption, in other words, that there are very real, concrete benefits that would result from demanding greater accountability from online intermediaries, even if that also leads to “collateral censorship” of some lawful speech.

It is necessary to understand that the baseline standard for speech and conduct—both online and offline—is not “anything goes,” but rather self-restraint enforced primarily by incentives for deterrence. Just as the law may deter some amount of speech, so too is speech deterred by fear of reprisal, threat of social sanction, and people’s baseline sense of morality. Some of this “lost” speech will be over-deterred, but one hopes that most deterred speech will be of the harmful or, at least, low-value sort (or else, the underlying laws and norms should be changed). Moreover, not even the most valuable speech is of infinite value, such that any change in a legal regime that results in relatively less speech can be deemed per se negative.

A proper evaluation of the merits of an intermediary-liability regime must therefore consider whether user liability alone is insufficient to deter bad actors, either because it is too costly to pursue remedies against users directly, or because the actions of platforms serve to make it less likely that harmful speech or conduct is deterred. The latter concern, in other words, is that intermediaries may—intentionally or not—facilitate harmful speech that would otherwise be deterred (self-censored) were it not for the operation of the platform.

Arguably, the incentives offered by each of the forces for self-restraint are weakened in the context of online platforms. Certainly everyone is familiar with the significantly weaker operation of social norms in the more attenuated and/or pseudonymous environment of online social interaction. While this environment facilitates more legal speech and conduct than in the offline world, it also facilitates more illegal and tortious speech and conduct. Similarly, fear of reprisal (i.e., self-help) is often attenuated online, not least because online harms are often a function of the multiplier effect of online speech: it is frequently not the actions of the original malfeasant actor, but those of neutral actors amplifying that speech or conduct, that cause harm. In such an environment, the culpability of the original actor is surely mitigated and may be lost entirely. Likewise, in the normal course, victims of tortious or illegal conduct and law enforcers acting on their behalf are the primary line of defense against bad actors. But the relative anonymity/pseudonymity of online interactions may substantially weaken this defense.

Many argue, nonetheless, that holding online intermediaries responsible for failing to remove offensive content would lead to a flood of lawsuits that would ultimately overwhelm service providers, and sub-optimally diminish the value these firms provide to society—a so-called “death by ten thousand duck-bites.” Relatedly, firms that face potentially greater liability would be forced to internalize some increased—possibly exorbitant—degree of compliance costs even if litigation never materialized.

There is certainly some validity to these concerns. Given the sheer volume of content online and the complexity, imprecision, and uncertainty of moderation processes, even very effective content-moderation algorithms will fail to prevent all actionable conduct, which could result in many potential claims. At the same time, it can be difficult to weed out unlawful conduct without inadvertently over-limiting lawful activity.

But many of the unique features of online platforms also cut against the relaxation of legal standards online. Among other things—and in addition to the attenuated incentives for self-restraint mentioned above—where traditional (offline) media primarily host expressive content, online platforms facilitate a significant volume of behavior and commerce that isn’t purely expressive. Tortious and illegal content tends to be less susceptible to normal deterrence online than in other contexts, as individuals can hide behind varying degrees of anonymity. Even users who are neither anonymous nor pseudonymous can sometimes prove challenging to reach with legal process. And, perhaps most importantly, online content is disseminated both faster and more broadly than offline media.

At the same time, an increase in liability risk for online platforms may lead not to insurmountable increases in litigation costs, but to other changes that may be less privately costly to a platform than litigation, and which may be socially desirable. Among these changes may be an increase in preemptive moderation; smaller, more specialized platforms and/or tighter screening of platform participants on the front end (both of which are likely to entail stronger reputational and normative constraints); the establishment of more effective user-reporting and harm-mitigation mechanisms; the development and adoption of specialized insurance offerings; or any number of other possible changes.

Thus the proper framework for evaluating potential reforms to Section 230 must include the following considerations: To what degree would shifting the legal rules governing platform liability increase litigation costs, increase moderation costs, constrain the provision of products and services, increase “collateral censorship,” and impede startup formation and competition, all relative to the status quo, not to some imaginary ideal state? Assessing the marginal changes in all these aspects entails, first, determining how they are affected by the current regime. It then requires identifying both the direction and magnitude of change that would result from reform. Next, it requires evaluating the corresponding benefits that legal change would bring in increasing accountability for tortious or criminal conduct online. And, finally, it necessitates hazarding a best guess of the net effect. Virtually never is this requisite analysis undertaken with any real degree of rigor. Our paper aims to correct that.

A Proposal for Reform

What is called for is a properly scoped reform that applies the same political, legal, economic, and other social preferences offline as online, aimed at ensuring that we optimally deter illegal content without losing the benefits of widespread user-generated content. Properly considered, there is no novel conflict between promoting the flow of information and protecting against tortious or illegal conduct online. While the specific mechanisms employed to mediate between these two principles online and offline may differ—and, indeed, while technological differences can alter the distribution of costs and benefits in ways that must be accounted for—the fundamental principles that determine the dividing line between acceptable and illegal or tortious content offline can and should be respected online, as well. Indeed, even Google has argued for exactly this sort of parity, recently calling on the Canadian government to “take care to ensure that their proposal does not risk creating different legal standards for online and offline environments.”

Keeping in mind the tradeoffs embedded in Section 230, we believe that, in order to more optimally mitigate truly harmful conduct on Internet platforms, intermediary-liability law should develop a “duty-of-care” standard that obliges service providers to reasonably protect their users and others from the foreseeable illegal or tortious acts of third parties. As a guiding principle, we should not hold online platforms vicariously liable for the speech of third parties, both because of the sheer volume of user-generated content online and the generally attenuated relationship between online platforms and users, as well as because of the potentially large costs of overly chilling free expression online. But we should place at least the same burden to curb unlawful behavior on online platforms that we do on traditional media operating offline.

Nevertheless, we hasten to add that this alone would likely be deficient: adding an open-ended duty of care to the current legal system could generate a volume of litigation that few, if any, platform providers could survive. Instead, any new duty of care should be tempered by procedural reforms designed to ensure that only meritorious litigation survives beyond a pre-discovery motion to dismiss.

Procedurally, Section 230 immunity protects service providers not just from liability for harm caused by third-party content, but also from having to incur substantial litigation costs. Concerns for judicial economy and operational efficiency are laudable, of course, but such concerns are properly addressed toward minimizing the costs of litigation in ways that do not undermine the deterrent and compensatory effects of meritorious causes of action. While litigation costs that exceed the minimum required to properly assign liability are deadweight losses to be avoided, the cost of liability itself—when properly found—ought to be borne by the party best positioned to prevent harm. Thus, a functional regime will attempt to accurately balance excessive litigation costs against legitimate and necessary liability costs.

In order to achieve this balance, we recommend that, while online platforms should be responsible for adopting reasonable practices to mitigate illegal or tortious conduct by their users, they should not face liability for communication torts (e.g., defamation) arising out of user-generated content unless they fail to remove content they knew or should have known was defamatory.  Further, we propose that Section 230(c)(2)’s safe harbor should remain in force and that, unlike for traditional media operating offline, the act of reasonable content moderation by online platforms should not, by itself, create liability exposure.

In sum, we propose that Section 230 should be reformed to incorporate the following high-level elements, encompassing two major components: first, a proposal to alter the underlying intermediary-liability rules to establish a “duty of care” requiring adherence to certain standards of conduct with respect to user-generated content; and second, a set of procedural reforms that are meant to phase in the introduction of the duty of care and its refinement by courts and establish guardrails governing litigation of the duty.

Proposed Basic Liability Rules

Online intermediaries should operate under a duty of care to take appropriate measures to prevent or mitigate foreseeable harms caused by their users’ conduct.

Section 230(c)(1) should not preclude intermediary liability when an online service provider fails to take reasonable care to prevent non-speech-related tortious or illegal conduct by its users.

As an exception to the general reasonableness rule above, Section 230(c)(1) should preclude intermediary liability for communication torts arising out of user-generated content unless an online service provider fails to remove content it knew or should have known was defamatory.

Section 230(c)(2) should provide a safe harbor from liability when an online service provider does take reasonable steps to moderate unlawful conduct. In this way, an online service provider would not be held liable simply for having let harmful content slip through, despite its reasonable efforts.

The act of moderation should not give rise to a presumption of knowledge. Taking down content may indicate an online service provider knows it is unlawful, but it does not establish that the online service provider should necessarily be liable for a failure to remove it anywhere the same or similar content arises.

But Section 230 should contemplate “red-flag” knowledge, such that a failure to remove content will not be deemed reasonable if an online service provider knows or should have known that it is illegal or tortious. Because the Internet creates exceptional opportunities for the rapid spread of harmful content, a reasonableness obligation that applies only ex ante may be insufficient. Rather, it may be necessary to impose certain ex post requirements for harmful content that was reasonably permitted in the first instance, but that should nevertheless be removed given sufficient notice.

Proposed Procedural Reforms

In order to effect the safe harbor for reasonable moderation practices that nevertheless result in harmful content, we propose the establishment of “certified” moderation standards under the aegis of a multi-stakeholder body convened by an overseeing government agency. Compliance with these standards would operate to foreclose litigation at an early stage against online service providers in most circumstances. If followed, a defendant could provide its certified moderation practices as a “certified answer” to any complaint alleging a cause of action arising out of user-generated content. Compliant practices will merit dismissal of the case, effecting a safe harbor for such practices.

In litigation, after a defendant answers a complaint with its certified moderation practices, the burden would shift to the plaintiff to adduce sufficient evidence to show that the certified standards were not actually adhered to. Such evidence should be more than mere res ipsa loquitur; it must be sufficient to demonstrate that the online service provider should have been aware of a harm or potential harm, that it had the opportunity to cure or prevent it, and that it failed to do so. Such a claim would need to meet a heightened pleading requirement, as for fraud, requiring particularity.

Finally, we believe any executive or legislative oversight of this process should be explicitly scheduled to sunset. Once the basic system of intermediary liability has had some time to mature, it should be left to courts to further manage and develop the relevant common law.

Our proposal does not demand perfection from online service providers in their content-moderation decisions—only that they make reasonable efforts. What is appropriate for YouTube, Facebook, or Twitter will not be the same as what’s appropriate for a startup social-media site, a web-infrastructure provider, or an e-commerce platform. A properly designed duty-of-care standard should be flexible and account for the scale of a platform, the nature and size of its user base, and the costs of compliance, among other considerations. Indeed, this sort of flexibility is a benefit of adopting a “reasonableness” standard, such as is found in common law negligence. Allowing courts to apply the flexible common law duty of reasonable care would also enable the jurisprudence to evolve with the changing nature of online intermediaries, the problems they pose, and the moderating technologies that become available.

Read the full working paper here.


ICLE Amicus Brief in Sanofi-Aventis U.S. v. Mylan Inc.

Amicus Brief – A brief of amici curiae from the International Center for Law & Economics and other notable law & economics scholars in the 10th Circuit case of Sanofi v. Mylan.

INTRODUCTION

Sanofi is seeking to overturn the district court’s grant of summary judgment in favor of Mylan, which held that Mylan’s EpiPen rebate agreements (loyalty discounts) did not foreclose Sanofi from competing in the market for epinephrine auto-injectors. As this brief argues, finding in favor of Sanofi would mark a misguided departure from the error-cost framework that has been the linchpin of modern antitrust enforcement. Loyalty discounts – and the lower prices they bring – routinely benefit consumers. The Court accordingly should not endorse a dubious theory of harm that does not adequately distinguish between procompetitive and anticompetitive behavior, as doing so would chill firms’ incentives to compete on price.

Anticompetitive (that is, consumer-harming) strategies capable of foreclosing even efficient competitors are difficult – often impossible – to distinguish from vigorous competition (which benefits consumers). Courts are compelled to rely on a limited set of observable parameters to infer whether a firm’s behavior falls under one or the other category. This process entails significant pitfalls. See Geoffrey A. Manne & Joshua D. Wright, If Search Neutrality Is the Answer, What’s the Question?, 2012 Colum. Bus. L. Rev. 151, 184-85 (“The key challenge facing any proposed analytical framework for evaluating monopolization claims is distinguishing pro-competitive from anticompetitive conduct. Antitrust errors are inevitable because much of what is potentially actionable conduct under the antitrust laws frequently actually benefits consumers, and generalist judges are called upon to identify anticompetitive conduct with imperfect information.”).

When it comes to allegedly anticompetitive lowering of prices – predation, discounts, and rebates – low prices themselves are the posited mechanism for anticompetitive foreclosure and thus a key component of the liability regimes pertaining to pricing practices. Yet low prices are also precisely the consumer benefit that antitrust law ordinarily seeks to preserve, especially when these low prices are sustained in the long run. In almost every circumstance, rebates and discounts represent welfare-enhancing price competition; nevertheless, economic theory teaches that strategic pricing can be anticompetitive. As Judge Easterbrook described, “[l]ow prices and large plants may be competitive and beneficial, or they may be exclusionary and harmful. We need a way to distinguish competition from exclusion without penalizing competition.” Frank H. Easterbrook, The Limits of Antitrust, 63 Tex. L. Rev. 1, 26 (1984). In short, false positives in these settings may be especially costly because they penalize consumer-benefiting low prices.

The challenge for courts is distinguishing between robust competition and anticompetitive conduct when a primary indicator of both – low prices – is the same. Although the dividing line will always be imperfect such that it is not always clear when anticompetitive conduct is occurring, the academic literature and the courts have established guiding rules and standards designed to minimize error, maximize ease of administration, and protect consumer welfare. Sanofi’s approach, by contrast, would increase the risks of wrongly imposing antitrust liability and, in turn, harming consumers, while being more difficult to administer.

Read the full amicus brief here.


Antitrust Dystopia and Antitrust Nostalgia: Alarmist Theories of Harm in Digital Markets and Their Origins

Scholarship – Dystopian thinking is pervasive within the antitrust community. Unlike entrepreneurs, antitrust scholars and policy makers often lack the imagination to see how competition will emerge and enable entrants to overthrow seemingly untouchable incumbents.

Introduction

The dystopian novel is a powerful literary genre. It has given us such masterpieces as Nineteen Eighty-Four, Brave New World, Fahrenheit 451, and Animal Farm. Though these novels often shed light on some of the risks that contemporary society faces and the zeitgeist of the time when they were written, they almost always systematically overshoot the mark (whether intentionally or not) and severely underestimate the radical improvements commensurate with the technology (or other causes) that they fear. Nineteen Eighty-Four, for example, presciently saw in 1949 the coming ravages of communism, but it did not guess that markets would prevail, allowing us all to live freer and more comfortable lives than any preceding generation. Fahrenheit 451 accurately feared that books would lose their monopoly as the foremost medium of communication, but it completely missed the unparalleled access to knowledge that today’s generations enjoy. And while Animal Farm portrayed a metaphorical world where increasing inequality is inexorably linked to totalitarianism and immiseration, global poverty has reached historic lows in the twenty-first century, and this is likely also true of global inequality. In short, for all their literary merit, dystopian novels appear to be terrible predictors of the quality of future human existence. The fact that popular depictions of the future often take the shape of dystopias is more likely reflective of the genre’s entertainment value than of society’s impending demise.

But dystopias are not just a literary phenomenon; they are also a powerful force in policy circles. For example, in the early 1970s, the so-called Club of Rome published an influential report titled The Limits to Growth. The report argued that absent rapid and far-reaching policy shifts, the planet was on a clear path to self-destruction:

If the present growth trends in world population, industrialization, pollution, food production, and resource depletion continue unchanged, the limits to growth on this planet will be reached sometime within the next one hundred years. The most probable result will be a rather sudden and uncontrollable decline in both population and industrial capacity.

Halfway through the authors’ 100-year timeline, however, available data suggests that their predictions were way off the mark. While the world’s economic growth has continued at a breakneck pace, extreme poverty, famine, and the depletion of natural resources have all decreased tremendously.

For all its inaccurate and misguided predictions, dire tracts such as The Limits to Growth perhaps deserve some of the credit for the environmental movements that followed. But taken at face value, the dystopian future along with the attendant policy demands put forward by works like The Limits to Growth would have had cataclysmic consequences for, apparently, extremely limited gain. The policy incentive is to strongly claim impending doom. There’s no incentive to suggest “all is well,” and little incentive even to offer realistic, caveated predictions.

As we argue in this Article, antitrust scholarship and commentary is also afflicted by dystopian thinking. Today, antitrust pessimists have set their sights predominantly on the digital economy—“big tech” and “big data”—alleging a vast array of potential harms. Scholars have argued that the data created and employed by the digital economy produces network effects that inevitably lead to tipping and more concentrated markets. In other words, firms will allegedly accumulate insurmountable data advantages and thus thwart competitors for extended periods of time. Some have gone so far as to argue that this threatens the very fabric of western democracy. Other commentators have voiced fears that companies may implement abusive privacy policies to shortchange consumers. It has also been said that the widespread adoption of pricing algorithms will almost inevitably lead to rampant price discrimination and algorithmic collusion. Indeed, “pollution” from data has even been likened to the environmental pollution that spawned The Limits to Growth: “If indeed ‘data are to this century what oil was to the last one,’ then—[it’s] argue[d]—data pollution is to our century what industrial pollution was to the last one.”

Some scholars have drawn explicit parallels between the emergence of the tech industry and famous dystopian novels. Professor Shoshana Zuboff, for instance, refers to today’s tech giants as “Big Other.” In an article called “Only You Can Prevent Dystopia,” one New York Times columnist surmised:

The new year is here, and online, the forecast calls for several seasons of hell. Tech giants and the media have scarcely figured out all that went wrong during the last presidential election—viral misinformation, state-sponsored propaganda, bots aplenty, all of us cleaved into our own tribal reality bubbles—yet here we go again, headlong into another experiment in digitally mediated democracy.

I’ll be honest with you: I’m terrified . . . There’s a good chance the internet will help break the world this year, and I’m not confident we have the tools to stop it.

Parallels between the novel Nineteen Eighty-Four and the power of large digital platforms were also plain to see when Epic Games launched an antitrust suit against Apple and its App Store in August 2020. Indeed, Epic Games released a short video clip parodying Apple’s famous “1984” ad (which upon its release was itself widely seen as a critique of the tech incumbents of the time).

Similarly, a piece in the New Statesman, titled “Slouching Towards Dystopia: The Rise of Surveillance Capitalism and the Death of Privacy,” concluded that: “Our lives and behaviour have been turned into profit for the Big Tech giants—and we meekly click ‘Accept.’ How did we sleepwalk into a world without privacy?”

Finally, a piece published in the online magazine Gizmodo asked a number of experts whether we are “already living in a tech dystopia.” Some of the responses were alarming, to say the least:

I’ve started thinking of some of our most promising tech, including machine learning, as like asbestos: … it’s really hard to account for, much less remove, once it’s in place; and it carries with it the possibility of deep injury both now and down the line.

. . . .

We live in a world saturated with technological surveillance, democracy-negating media, and technology companies that put themselves above the law while helping to spread hate and abuse all over the world.

Yet the most dystopian aspect of the current technology world may be that so many people actively promote these technologies as utopian.

Antitrust pessimism is not a new phenomenon, and antitrust enforcers and scholars have long been fascinated with—and skeptical of—high-tech markets. From early interventions against the champions of the Second Industrial Revolution (oil, railways, steel, etc.), through mid-twentieth-century innovations such as telecommunications and early computing (most notably the RCA, IBM, and Bell Labs consent decrees in the US), to today’s technology giants, each wave of innovation has been met with a rapid response from antitrust authorities, copious intervention-minded scholarship, and a flood of pessimistic press coverage. This is hardly surprising, given that the adoption of antitrust statutes was in part a response to the emergence of those large corporations that came to dominate the Second Industrial Revolution (despite the numerous radical innovations that these firms introduced in the process). Especially for unilateral conduct issues, it has long been innovative firms that have drawn the lion’s share of cases, scholarly writings, and press coverage.

Underlying this pessimism is a pervasive assumption that new technologies will somehow undermine the competitiveness of markets, imperil innovation, and entrench dominant technology firms for decades to come. This is a form of antitrust dystopia. For its proponents, the future ushered in by digital platforms will be a bleak one—despite abundant evidence that information technology and competition in technology markets have played significant roles in the positive transformation of society. This tendency was highlighted by economist Ronald Coase:

[I]f an economist finds something—a business practice of one sort or another—that he does not understand, he looks for a monopoly explanation. And as in this field we are very ignorant, the number of ununderstandable practices tends to be rather large, and the reliance on a monopoly explanation, frequent.

“The fear of the new—and the assumption that ‘ununderstandable practices’ emerge from anticompetitive impulses and generate anticompetitive effects—permeates not only much antitrust scholarship, but antitrust doctrine as well.” While much antitrust doctrine is capable of accommodating novel conduct and innovative business practices, antitrust law—like all common law-based legal regimes—is inherently backward looking: it primarily evaluates novel arrangements with reference to existing or prior structures, contracts, and practices, often responding to any deviations with “inhospitality.” As a result, there is a built-in “nostalgia bias” throughout much of antitrust that casts a deeply skeptical eye upon novel conduct.

“The upshot is that antitrust scholarship often emphasizes the risks that new market realities create for competition, while idealizing the extent to which previous market realities led to procompetitive outcomes.” Against this backdrop, our Article argues that the current wave of antitrust pessimism is premised on particularly questionable assumptions about competition in data-intensive markets.

Part I lays out the theory and identifies the sources and likely magnitude of both the dystopia and nostalgia biases. Having examined various expressions of these two biases, the Article argues that their proponents ultimately seek to apply a precautionary principle within the field of antitrust enforcement, an approach made most evident in critics’ calls for authorities to shift the burden of proof in a subset of proceedings.

Part II discusses how these arguments play out in the context of digital markets. It argues that economic forces may counteract many of the ills that allegedly plague these markets—and thus undermine the case for implementing a form of precautionary antitrust enforcement. For instance, because data is ultimately just information, it will prove exceedingly difficult for firms to hoard it for extended periods. Instead, a more plausible risk is that firms will underinvest in the generation of data. Likewise, the main challenge for digital-economy firms is not so much to obtain data as to create valuable goods and hire talented engineers who can draw insights from the data those goods generate. Recent empirical findings suggest, for example, that data-economy firms benefit far less than is often claimed from data network effects or increasing returns to scale.

Part III reconsiders the United States v. Microsoft Corp. antitrust litigation—the most important precursor to today’s “big tech” antitrust enforcement efforts—and shows how it undermines, rather than supports, pessimistic antitrust thinking. Many of the fears raised at the time never materialized (for reasons unrelated to antitrust intervention). Pessimists missed the emergence of key developments that would erode Microsoft’s market position, and they greatly overestimated Microsoft’s ability to thwart its competitors. Those circumstances—particularly those revolving around the alleged “applications barrier to entry”—have uncanny analogues in the data markets of today. We thus explain how and why the Microsoft case should serve as a cautionary tale for current enforcers confronted with dystopian antitrust theories.

In short, the Article exposes a form of bias within the antitrust community. Unlike entrepreneurs, antitrust scholars and policy makers often lack the imagination to see how competition will emerge and enable entrants to overthrow seemingly untouchable incumbents. New technologies are particularly prone to this bias because there is a shorter history of competition to draw on, and thus less tangible evidence of incumbent attrition in these specific markets. The digital future is almost certainly far less bleak than many antitrust critics have suggested, and far less bleak than the current swath of interventions aimed at reining in “big tech” presumes. This does not mean that antitrust authorities should throw caution to the wind. Instead, policy makers should strive to maintain existing enforcement thresholds, which exclude interventions based solely on highly speculative theories of harm.

Read the full white paper here.
