Showing 9 of 102 Publications in Innovation

Guiding Principles & Legislative Checklist for Broadband Subsidies

ICLE Issue Brief

President Joe Biden in November 2021 signed the Infrastructure Investment and Jobs Act. Among other provisions, the law allocated $42.45 billion toward last-mile broadband development, with the National Telecommunications and Information Administration (NTIA) directed to administer those funds through the newly created Broadband Equity, Access & Deployment (BEAD) program. The BEAD program will provide broadband grants to states, which may then subgrant the money to public and private telecommunications providers.

Serious analysis of the proper roles for government and the private sector in reaching the unserved is a prerequisite for the successful rollout of broadband-infrastructure spending. Public investment in broadband infrastructure should focus on the cost-effective provision of Internet access to those who don’t have it, rather than on subsidizing competition in areas that already do.

Read the full checklist here.

Telecommunications & Regulated Utilities

Regulating Payment-Card Fees: International Best Practices and Lessons for Costa Rica

ICLE Issue Brief

Executive Summary

In 2020, the Legislative Assembly of Costa Rica passed Legislative Decree 9831, which granted the Central Bank of Costa Rica (BCCR) authority to regulate payment-card fees. BCCR subsequently developed a regulation that set maximum fees for acquiring and issuing banks, which came into force Nov. 24, 2020. In BCCR’s November 2021 ordinary review of those price controls, the central bank set out a framework to limit further the fees charged on domestic cards and to introduce limits on fees charged on foreign cards.

This brief considers the international experience with interchange and acquisition fees, reviewing both theoretical and empirical evidence. It finds that international best practices require that payment networks be considered dynamic two-sided markets, and therefore, that assessments account for the effects of regulation on both sides of the market: merchants and consumers. In contrast, BCCR’s analysis focuses primarily on static costs that affect merchants, with little attention to the effects on consumers, let alone the dynamic effects. Consequently, BCCR’s proposed maximum interchange and acquisition fees would interfere with the efficient operation of the payment-card market in ways that are likely to harm consumers. Specifically, losses by issuing and acquiring banks are likely to be passed on to consumers in the form of higher banking and card fees, and less investment in improvements. Less wealthy consumers are likely to be hit hardest.

Based on the evidence available, international best practices entail:

  • As far as possible, allowing the market to determine interchange fees and acquisition fees;
  • Acknowledging that payment networks are two-sided markets in which one side (usually merchants) typically subsidizes the other side, thereby increasing system effectiveness;
  • Not benchmarking fees, especially against countries that have price controls in place; and
  • Not imposing price controls on fees on foreign cards.

Read the full issue brief here.

Financial Regulation & Corporate Governance

How Not to Promote US Innovation

TOTM

President Joe Biden’s July 2021 executive order set forth a commitment to reinvigorate U.S. innovation and competitiveness. The administration’s efforts to pass the America COMPETES Act would appear to further demonstrate a serious intent to pursue these objectives.

Read the full piece here.

Antitrust & Consumer Protection

ICLE Amicus Brief in Fleites v. MindGeek

Amicus Brief An ICLE amicus brief filed in U.S. District Court in California supporting a motion to dismiss, arguing that holding Visa collaterally liable would generate massive social costs.

The attached was submitted Jan. 17, 2022, by the International Center for Law & Economics (ICLE) to the U.S. District Court for the Central District of California, Southern Division, as a proposed amicus brief in the case of Fleites v. MindGeek, in support of co-defendant Visa Inc.’s motion to dismiss.


Visa sits outside the boundaries of liability contemplated by statutes like RICO and TVPRA. At the very outer boundaries, liability for indirect actors under these statutes is analogous to the sorts of collateral liability sometimes found in other statutes and in common law tort.[1] But the nature of the relationship between Visa and the alleged direct actors in this case, dictated by the mechanics of payment networks, does not support the traditional economic and policy rationales for assigning collateral liability. This amicus brief elucidates the law and economics of collateral liability and applies it to the circumstances of Visa’s alleged participation in the alleged enterprises at issue. As discussed further below, the general principles of collateral liability counsel strongly against holding Visa liable for the harms suffered by Plaintiffs. To hold otherwise would be sure to generate a massive amount of social cost that would outweigh the potential deterrent or compensatory gains sought.

Read the full brief here.

[1] This amicus brief uses the term “collateral liability” to encompass a range of theories of civil liability aimed at secondary actors not directly responsible for causing harm. Thus, the term contemplates causes of action like premises liability for third-party injury, distributor liability for defamation, civil aiding and abetting liability for fraud, contributory and inducement liability for copyright infringement, and various theories of vicarious liability under the doctrine of respondeat superior. See generally Reinier Kraakman, Third-Party Liability, in 3 THE NEW PALGRAVE DICTIONARY OF ECONOMICS AND THE LAW 583 (Peter Newman ed., 1998).



Innovation & the New Economy

The BIF Offers a Good First Step for Broadband, but the Devil Will Be in the Details

TOTM

Capping months of inter-chamber legislative wrangling, President Joe Biden on Nov. 15 signed the $1 trillion Infrastructure Investment and Jobs Act (also known as the bipartisan infrastructure framework, or BIF), which sets aside $65 billion of federal funding for broadband projects. While there is much to praise about the package’s focus on broadband deployment and adoption, whether that money will be well-spent depends substantially on how the law is implemented and whether the National Telecommunications and Information Administration (NTIA) adopts adequate safeguards to avoid waste, fraud, and abuse.

Read the full piece here.

Telecommunications & Regulated Utilities

Who Moderates the Moderators?: A Law & Economics Approach to Holding Online Platforms Accountable Without Destroying the Internet

ICLE White Paper A comprehensive survey of the law & economics of online intermediary liability, which concludes that any proposed reform of Section 230 must meaningfully reduce the incidence of unlawful or tortious online content such that its net benefits outweigh its net costs.

Executive Summary

In the quarter-century since its enactment as part of the Communications Decency Act of 1996, a growing number of lawmakers have sought reforms to Section 230. In the 116th Congress alone, 26 bills were introduced to modify the law’s scope or to repeal it altogether. Indeed, we have learned much over those 25 years about where Section 230 has worked well and where it has not.

Although the current Section 230 reform debate popularly—and politically—revolves around when platforms should be forced to host certain content politically favored by one faction (i.e., conservative speech) or when they should be forced to remove certain content disfavored by another (i.e., alleged “misinformation” or hate speech), this paper does not discuss, nor even entertain, such reform proposals. Rather, such proposals are (and should be) legal non-starters under the First Amendment.

Indeed, such reforms are virtually certain to harm, not improve, social welfare: As frustrating as imperfect content moderation may be, state-directed speech codes are much worse. Moreover, the politicized focus on curbing legal and non-tortious speech undermines the promise of making any progress on legitimate issues: The real gains to social welfare will materialize from reforms that better align the incentives of online platforms with the social goal of deterring or mitigating illegal or tortious conduct.

Section 230 contains two major provisions: (1) that an online service provider will not be treated as the speaker or publisher of the content of a third party, and (2) that actions taken by an online service provider to moderate the content hosted by its services will not trigger liability. In essence, Section 230 has come to be seen as a broad immunity provision insulating online platforms from liability for virtually all harms caused by user-generated content hosted by their services, including when platforms might otherwise be deemed to be implicated because of the exercise of their editorial control over that content.

To the extent that the current legal regime permits social harms online that exceed concomitant benefits, it should be reformed to deter those harms if such reform can be accomplished at sufficiently low cost. The salient objection to Section 230 reform is not one of principle, but of practicality: are there effective reforms that would address the identified harms without destroying (or excessively damaging) the vibrant Internet ecosystem by imposing punishing, open-ended legal liability? We believe there are.

First and foremost, we believe that Section 230(c)(1)’s intermediary-liability protections for illegal or tortious conduct by third parties can and should be conditioned on taking reasonable steps to curb such conduct, subject to procedural constraints that will prevent a tide of unmeritorious litigation.

This basic principle is not without its strenuous and thoughtful detractors, of course. A common set of objections to Section 230 reform has grown out of legitimate concerns that the economic and speech gains that have accompanied the rise of the Internet over the last three decades would be undermined or reversed if Section 230’s liability shield were weakened. Our paper thus establishes a proper framework for evaluating online intermediary liability and evaluates the implications of the common objections to Section 230 reform within that context. Indeed, it is important to take those criticisms seriously, as they highlight many of the pitfalls that could attend imprudent reforms. We examine these criticisms both to find ways to incorporate them into an effective reform agenda, and to highlight where the criticisms themselves are flawed.

Our approach is rooted in the well-established law & economics analysis of liability rules and civil procedure, which we use to introduce a framework for understanding the tradeoffs faced by online platforms under differing legal standards with differing degrees of liability for the behavior and speech of third-party users. This analysis is bolstered by a discussion of common law and statutory antecedents that allow us to understand how courts and legislatures have been able to develop appropriate liability regimes for the behavior of third parties in different, but analogous, contexts. Ultimately, and drawing on this analysis, we describe the contours of our recommended duty-of-care standard, along with a set of necessary procedural reforms that would help to ensure that we retain as much of the value of user-generated content as possible, while encouraging platforms to better police illicit and tortious content on their services.

The Law & Economics of Online Intermediary Liability

An important goal of civil tort law is to align individual incentives with social welfare such that costly behavior is deterred and individuals are encouraged to take optimal levels of precaution against risks of injury. Not uncommonly, the law even holds intermediaries—persons or businesses that have a special relationship with offenders or victims—accountable when they are the least-cost avoider of harms, even when those harms result from the actions of third parties.

Against this background, the near-complete immunity granted to online platforms by Section 230 for harms caused by platform users is a departure from normal rules governing intermediary behavior. This immunity has certainly yielded benefits in the form of more user-generated online content and the ability of platforms to moderate without fear of liability. But it has also imposed costs to the extent that broad immunity fails to ensure that illegal and tortious conduct is optimally deterred online.

The crucial question for any proposed reform of Section 230 is whether it could pass a cost-benefit test—that is, whether it is likely to meaningfully reduce the incidence of unlawful or tortious online content while sufficiently addressing the objections to the modification of Section 230 immunity, such that its net benefits outweigh its net costs. In the context of both criminal and tort law generally, this balancing is sought through a mix of direct and collateral enforcement actions that, ideally, minimizes the total costs of misconduct and enforcement. Section 230, as it is currently construed, however, eschews entirely the possibility of collateral liability, foreclosing an important mechanism for properly adjusting the overall liability scheme.

But there is no sound reason to think this must be so. While many objections to Section 230 reform—that is, to the imposition of any amount of intermediary liability—are well-founded, they also frequently suffer from overstatement or unsupported suppositions about the magnitude of harm. At the same time, some of the expressed concerns are either simply misplaced or serve instead as arguments for broader civil-procedure reform (or decriminalization), rather than as defenses of the particularized immunity afforded by Section 230 itself.

Unfortunately, the usual course of discussion typically fails to acknowledge the tradeoffs that Section 230—and its reform—requires. These tradeoffs embody value judgments about the quantity and type of speech that should exist online, how individuals threatened by tortious and illegal conduct online should be protected, how injured parties should be made whole, and what role online platforms should have in helping to negotiate these tradeoffs. This paper’s overarching goal, even more important than any particular recommendation, is to make explicit what these tradeoffs entail.

Of central importance to the approach taken in this paper, our proposals presuppose a condition frequently elided by defenders of the Section 230 status quo, although we believe nearly all of them would agree with the assertion: that there is actual harm—violations of civil law and civil rights, violations of criminal law, and tortious conduct—that occurs on online platforms and that imposes real costs on individuals and society at large. Our proposal proceeds on the assumption, in other words, that there are very real, concrete benefits that would result from demanding greater accountability from online intermediaries, even if that also leads to “collateral censorship” of some lawful speech.

It is necessary to understand that the baseline standard for speech and conduct—both online and offline—is not “anything goes,” but rather self-restraint enforced primarily by incentives for deterrence. Just as the law may deter some amount of speech, so too is speech deterred by fear of reprisal, threat of social sanction, and people’s baseline sense of morality. Some of this “lost” speech will be over-deterred, but one hopes that most deterred speech will be of the harmful or, at least, low-value sort (or else, the underlying laws and norms should be changed). Moreover, not even the most valuable speech is of infinite value, such that any change in a legal regime that results in relatively less speech can be deemed per se negative.

A proper evaluation of the merits of an intermediary-liability regime must therefore consider whether user liability alone is insufficient to deter bad actors, either because it is too costly to pursue remedies against users directly, or because the actions of platforms serve to make it less likely that harmful speech or conduct is deterred. The latter concern, in other words, is that intermediaries may—intentionally or not—facilitate harmful speech that would otherwise be deterred (self-censored) were it not for the operation of the platform.

Arguably, the incentives offered by each of the forces for self-restraint are weakened in the context of online platforms. Certainly everyone is familiar with the significantly weaker operation of social norms in the more attenuated and/or pseudonymous environment of online social interaction. While this environment facilitates more legal speech and conduct than in the offline world, it also facilitates more illegal and tortious speech and conduct. Similarly, fear of reprisal (i.e., self-help) is often attenuated online, not least because online harms are often a function of the multiplier effect of online speech: it is frequently not the actions of the original malfeasant actor, but those of neutral actors amplifying that speech or conduct, that cause harm. In such an environment, the culpability of the original actor is surely mitigated and may be lost entirely. Likewise, in the normal course, victims of tortious or illegal conduct and law enforcers acting on their behalf are the primary line of defense against bad actors. But the relative anonymity/pseudonymity of online interactions may substantially weaken this defense.

Many argue, nonetheless, that holding online intermediaries responsible for failing to remove offensive content would lead to a flood of lawsuits that would ultimately overwhelm service providers, and sub-optimally diminish the value these firms provide to society—a so-called “death by ten thousand duck-bites.” Relatedly, firms that face potentially greater liability would be forced to internalize some increased—possibly exorbitant—degree of compliance costs even if litigation never materialized.

There is certainly some validity to these concerns. Given the sheer volume of content online and the complexity, imprecision, and uncertainty of moderation processes, even very effective content-moderation algorithms will fail to prevent all actionable conduct, which could result in many potential claims. At the same time, it can be difficult to weed out unlawful conduct without inadvertently over-limiting lawful activity.

But many of the unique features of online platforms also cut against the relaxation of legal standards online. Among other things—and in addition to the attenuated incentives for self-restraint mentioned above—where traditional (offline) media primarily host expressive content, online platforms facilitate a significant volume of behavior and commerce that isn’t purely expressive. Tortious and illegal content tends to be less susceptible to normal deterrence online than in other contexts, as individuals can hide behind varying degrees of anonymity. Even users who are neither anonymous nor pseudonymous can sometimes prove challenging to reach with legal process. And, perhaps most importantly, online content is disseminated both faster and more broadly than offline media.

At the same time, an increase in liability risk for online platforms may lead not to insurmountable increases in litigation costs, but to other changes that may be less privately costly to a platform than litigation, and which may be socially desirable. Among these changes may be an increase in preemptive moderation; smaller, more specialized platforms and/or tighter screening of platform participants on the front end (both of which are likely to entail stronger reputational and normative constraints); the establishment of more effective user-reporting and harm-mitigation mechanisms; the development and adoption of specialized insurance offerings; or any number of other possible changes.

Thus the proper framework for evaluating potential reforms to Section 230 must include the following considerations: To what degree would shifting the legal rules governing platform liability increase litigation costs, increase moderation costs, constrain the provision of products and services, increase “collateral censorship,” and impede startup formation and competition, all relative to the status quo, not to some imaginary ideal state? Assessing the marginal changes in all these aspects entails, first, determining how they are affected by the current regime. It then requires identifying both the direction and magnitude of change that would result from reform. Next, it requires evaluating the corresponding benefits that legal change would bring in increasing accountability for tortious or criminal conduct online. And, finally, it necessitates hazarding a best guess of the net effect. Virtually never is this requisite analysis undertaken with any real degree of rigor. Our paper aims to correct that.

A Proposal for Reform

What is called for is a properly scoped reform that applies the same political, legal, economic, and other social preferences offline as online, aimed at ensuring that we optimally deter illegal content without losing the benefits of widespread user-generated content. Properly considered, there is no novel conflict between promoting the flow of information and protecting against tortious or illegal conduct online. While the specific mechanisms employed to mediate between these two principles online and offline may differ—and, indeed, while technological differences can alter the distribution of costs and benefits in ways that must be accounted for—the fundamental principles that determine the dividing line between actionable and illegal or tortious content offline can and should be respected online, as well. Indeed, even Google has argued for exactly this sort of parity, recently calling on the Canadian government to “take care to ensure that their proposal does not risk creating different legal standards for online and offline environments.”

Keeping in mind the tradeoffs embedded in Section 230, we believe that, in order to more optimally mitigate truly harmful conduct on Internet platforms, intermediary-liability law should develop a “duty-of-care” standard that obliges service providers to reasonably protect their users and others from the foreseeable illegal or tortious acts of third parties. As a guiding principle, we should not hold online platforms vicariously liable for the speech of third parties, both because of the sheer volume of user-generated content online and the generally attenuated relationship between online platforms and users, as well as because of the potentially large costs to overly chilling free expression online. But we should place at least the same burden to curb unlawful behavior on online platforms that we do on traditional media operating offline.

Nevertheless, we hasten to add that this alone would likely be deficient: adding an open-ended duty of care to the current legal system could generate a volume of litigation that few, if any, platform providers could survive. Instead, any new duty of care should be tempered by procedural reforms designed to ensure that only meritorious litigation survives beyond a pre-discovery motion to dismiss.

Procedurally, Section 230 immunity protects service providers not just from liability for harm caused by third-party content, but also from having to incur substantial litigation costs. Concerns for judicial economy and operational efficiency are laudable, of course, but such concerns are properly addressed by minimizing the costs of litigation in ways that do not undermine the deterrent and compensatory effects of meritorious causes of action. While litigation costs that exceed the minimum required to properly assign liability are deadweight losses to be avoided, the cost of liability itself—when properly found—ought to be borne by the party best positioned to prevent harm. Thus, a functional regime will attempt to accurately balance excessive litigation costs against legitimate and necessary liability costs.

In order to achieve this balance, we recommend that, while online platforms should be responsible for adopting reasonable practices to mitigate illegal or tortious conduct by their users, they should not face liability for communication torts (e.g., defamation) arising out of user-generated content unless they fail to remove content they knew or should have known was defamatory. Further, we propose that Section 230(c)(2)’s safe harbor should remain in force and that, unlike for traditional media operating offline, the act of reasonable content moderation by online platforms should not, by itself, create liability exposure.

In sum, we propose that Section 230 should be reformed to incorporate the following high-level elements, encompassing two major components: first, a proposal to alter the underlying intermediary-liability rules to establish a “duty of care” requiring adherence to certain standards of conduct with respect to user-generated content; and second, a set of procedural reforms that are meant to phase in the introduction of the duty of care and its refinement by courts and establish guardrails governing litigation of the duty.

Proposed Basic Liability Rules

  • Online intermediaries should operate under a duty of care to take appropriate measures to prevent or mitigate foreseeable harms caused by their users’ conduct.
  • Section 230(c)(1) should not preclude intermediary liability when an online service provider fails to take reasonable care to prevent non-speech-related tortious or illegal conduct by its users.
  • As an exception to the general reasonableness rule above, Section 230(c)(1) should preclude intermediary liability for communication torts arising out of user-generated content unless an online service provider fails to remove content it knew or should have known was defamatory.
  • Section 230(c)(2) should provide a safe harbor from liability when an online service provider does take reasonable steps to moderate unlawful conduct. In this way, an online service provider would not be held liable simply for having let harmful content slip through, despite its reasonable efforts.
  • The act of moderation should not give rise to a presumption of knowledge. Taking down content may indicate an online service provider knows it is unlawful, but it does not establish that the provider should necessarily be liable for a failure to remove the same or similar content wherever it arises.
  • But Section 230 should contemplate “red-flag” knowledge, such that a failure to remove content will not be deemed reasonable if an online service provider knows or should have known that it is illegal or tortious. Because the Internet creates exceptional opportunities for the rapid spread of harmful content, a reasonableness obligation that applies only ex ante may be insufficient. Rather, it may be necessary to impose certain ex post requirements for harmful content that was reasonably permitted in the first instance, but that should nevertheless be removed given sufficient notice.

Proposed Procedural Reforms

In order to effect the safe harbor for reasonable moderation practices that nevertheless result in harmful content, we propose the establishment of “certified” moderation standards under the aegis of a multi-stakeholder body convened by an overseeing government agency. Compliance with these standards would operate to foreclose litigation at an early stage against online service providers in most circumstances. If followed, a defendant could provide its certified moderation practices as a “certified answer” to any complaint alleging a cause of action arising out of user-generated content. Compliant practices will merit dismissal of the case, effecting a safe harbor for such practices.

In litigation, after a defendant answers a complaint with its certified moderation practices, the burden would shift to the plaintiff to adduce sufficient evidence to show that the certified standards were not actually adhered to. Such evidence should be more than mere res ipsa loquitur; it must be sufficient to demonstrate that the online service provider should have been aware of a harm or potential harm, that it had the opportunity to cure or prevent it, and that it failed to do so. Such a claim would need to meet a heightened pleading requirement, as for fraud, requiring particularity.

Finally, we believe any executive or legislative oversight of this process should be explicitly scheduled to sunset. Once the basic system of intermediary liability has had some time to mature, it should be left to courts to further manage and develop the relevant common law.

Our proposal does not demand perfection from online service providers in their content-moderation decisions—only that they make reasonable efforts. What is appropriate for YouTube, Facebook, or Twitter will not be the same as what’s appropriate for a startup social-media site, a web-infrastructure provider, or an e-commerce platform. A properly designed duty-of-care standard should be flexible and account for the scale of a platform, the nature and size of its user base, and the costs of compliance, among other considerations. Indeed, this sort of flexibility is a benefit of adopting a “reasonableness” standard, such as is found in common law negligence. Allowing courts to apply the flexible common law duty of reasonable care would also enable the jurisprudence to evolve with the changing nature of online intermediaries, the problems they pose, and the moderating technologies that become available.

Read the full working paper here.

Innovation & the New Economy

A Path Forward for Section 230 Reform

TL;DR


The liability protections granted to intermediaries under Section 230(c)(1) of the Communications Decency Act of 1996 can and should be conditioned on platforms taking reasonable steps to curb harmful conduct. Online platforms should operate under a duty of care obligating them to adopt reasonable content-moderation practices regarding illegal or tortious third-party content.


Platforms should not bear excessive costs for conduct that does not and should not give rise to liability, while they should internalize the costs of responding to actual harms and meritorious litigation. This will require reforms to civil procedure, a regulatory agency to oversee creation of a duty of care, and implementation of a “safe harbor” or presumption of reasonableness.

Read the full explainer here.

Innovation & the New Economy

Comments on the Advance Notice of Proposed Rulemaking, Re: Executive Order 13984, ‘Taking Additional Steps To Address the National Emergency With Respect to Significant Malicious Cyber-Enabled Activities’

Regulatory Comments Intro and summary As one of his final acts in office, former President Donald Trump signed Executive Order 13984 (the EO), “Taking Additional Steps To . . .

Intro and summary

As one of his final acts in office, former President Donald Trump signed Executive Order 13984 (the EO), “Taking Additional Steps To Address the National Emergency With Respect to Significant Malicious Cyber-Enabled Activities.” The EO directed the Secretary of Commerce to “propose for notice and comment regulations that require United States IaaS providers to verify the identity of a foreign person that obtains an Account.”

In its related advance notice of proposed rulemaking (ANPRM), the U.S. Commerce Department notes that:

…foreign persons obtain or offer for resale IaaS accounts (Accounts) with U.S. IaaS providers, and then use these Accounts to conduct malicious cyber-enabled activities against U.S. interests. Malicious actors then destroy evidence of their prior activities and transition to other services.

This pattern makes it extremely difficult to track and obtain information on foreign malicious cyber actors and their activities in a timely manner, especially if U.S. IaaS providers do not maintain updated information and records of their customers or the lessees and sub-lessees of those customers.

The rule of law is frustrated when courts and law enforcement are unable to locate those who commit illegal acts. Other legal frictions may arise when the law fails to deter illegal behavior or to offer incentives for firms to adopt socially optimal business practices. These concerns are particularly acute online, because the Internet hosts a large volume of activity from anonymous or otherwise difficult-to-locate users.

The Internet’s ability to facilitate anonymous or pseudonymous communications, of course, also continues a long tradition of anonymous speech being protected under U.S. constitutional law. The ANPRM acknowledges this tension when it asks “[c]an the Department implement the requirement to verify a foreign person’s identity… while minimizing the impact on U.S. persons’ opening or using such Accounts, or will the application of the requirements to foreign persons in practice necessitate the application of that requirement across all customers?” But anonymity is just one value among many that must be weighed when crafting regulatory policy—particularly with respect to enforcing criminal law and upholding national security. Thus, even if the EO has some effect on U.S. business customers, that alone ought not foreclose implementation of effective identity-verification requirements.

Further, it is important to consider how the incentives service providers face align with optimal social policy. In particular, Infrastructure-as-a-Service (IaaS) providers may not adequately internalize the social costs that stem from their making anonymous or pseudonymous accounts available to the public. Public policy may be necessary to correct such misalignment. While the EO focuses narrowly on the use of IaaS by foreign actors, there are broader problems associated with the anonymous use of Internet-connected services. As such, the Administration, the U.S. Commerce Department, and Congress should consider broader “know your business customer” (KYBC) requirements.

But while IaaS providers’ potential misalignment of incentives is a proper subject for regulatory and legislative action, policy should be carefully calibrated to encourage compliance with broader criminal and national-security goals, while still permitting the vibrant IaaS industry to continue to thrive. The law must shape incentives such that responsibility to deal with illicit activity is placed where it is appropriate. Overly broad regulatory requirements can become burdensome, accrue more costs than benefits, and ultimately chill entry of new firms.

Thus, as described in more detail below, the EO is correct to require basic identity verification by IaaS providers, subject to some caveats. The goal of these regulations should be to collect the optimal amount of information about bad actors with the least interference in the operations of firms subject to the requirements. Accordingly, the Department must weigh how much benefit it realistically expects to obtain from any given level of compliance. Notably, the overwhelming majority of IaaS accounts will belong to law-abiding users. The process is thus largely about identifying outliers, and regulatory intervention must be tempered by the recognition that IaaS firms are constrained in the degree to which they can assist in furthering legitimate law-enforcement ends.

The requirements ought to be designed to obtain the optimal level of information that law enforcement and courts would need in most, but not all, cases. A minimal set of initial verification requirements, paired with an ongoing obligation to re-verify user identities, ought to resolve most problems associated with anonymous users.

Moreover, it would be highly inadvisable to prescribe specific technological measures that providers must use. Providers should be free to implement what they consider to be appropriate identity-verification systems, so long as those systems elicit the needed information. Relatedly, IaaS providers are bound by the requirements of laws like the EU’s General Data Protection Regulation (GDPR) and therefore need the flexibility to design their systems to comply both with the Department’s final rules as well as various privacy regimes to which they are subject.

Read the full comments here.

Data Security & Privacy


Scholarship A joint publication of ICLE and The Entrepreneurs Network makes the case that the U.K. government's plan to crack down on Big Tech mergers would harm the British start-up ecosystem.

Executive Summary

The British government is consulting on whether to lower the burden of proof needed by the Competition and Markets Authority (CMA) to block mergers and acquisitions involving large tech companies deemed to have strategic market status (SMS) in some activity. This is likely to include companies like Google and Facebook, but the scope may grow over time.

Under the current regime, the CMA uses a two-step process. At Phase 1, the CMA assesses whether a deal has a ‘realistic prospect of a substantial lessening of competition’. If so, the merger is referred to Phase 2, where it is assessed in depth by an independent panel, and remedied or blocked if it is deemed to carry a greater than 50 per cent chance of substantially lessening competition.

The reforms proposed by the government would stop any deal involving an SMS firm that creates a ‘realistic prospect’ of reducing competition, a threshold the courts have defined as a ‘greater than fanciful’ chance.

In practice, this could amount to a de facto ban on UK acquisitions by Big Tech firms and any other companies designated as having strategic market status.

Mergers and acquisitions are normally good or neutral for competition, and there is little evidence that the bulk of SMS firms’ mergers have harmed competition.

Although the static benefits of mergers are widely acknowledged, the dynamic benefits are less well-understood. We highlight four key ways in which mergers and acquisitions can enhance competition by increasing dynamic efficiency:

Acquisition is a key route to exit for entrepreneurs

  • Startup formation and venture-capital investment are extremely sensitive to the availability of exits, the vast majority of which come through acquisition rather than listing on a stock market. In the US, more than half (58%) of startup founders expect to be acquired at some point.
  • According to data provider Beauhurst, only nine equity-backed startups exited through IPO in 2019. By contrast, eight British equity-backed startups were acquired last year by Microsoft, Google, Facebook, Amazon, and Apple alone.
  • Cross-country studies find that restrictions on takeovers can have strong negative effects on VC activity. Countries that pass pro-takeover laws see a 40-50% growth in VC activity compared to others.
  • Nine out of ten UK VCs believe that the ability to be acquired is ‘very important’ to the health of Britain’s startup ecosystem. Half of those surveyed said they would ‘significantly reduce’ the amount they invested if the ability to exit through M&A was restricted.

Acquisitions enable a ‘market for corporate control’

  • M&A allows companies with specific skills, such as navigating regulatory processes or scaling products, to acquire startups and unlock value that would otherwise not be realised in the absence of a takeover.

Acquisitions can reduce transaction costs between complementary products

  • M&A can encourage the development of complementary products that might not be able to find a market without the ability to be bought and integrated by an incumbent.
  • In the presence of network effects or high switching costs, takeovers can be a way to develop incremental improvements and add them to incumbent products, where those improvements would not be attractive enough on their own to win users away from the incumbent.

Acquisitions can support inter-platform competition

  • Competition in digital markets often takes place between platforms that hold a strong position in one market and move into another, sometimes using their advantage in the original market to gain a foothold in the new one. These moves often target markets currently dominated by another digital platform, increasing the competition those incumbents face.
  • Acquisitions can accelerate this kind of inter-platform competition. Instead of starting from scratch, platforms can use mergers to gain a foothold in the new market, and do so more rapidly and perhaps more effectively than if they had to develop the product in-house.
  • There are many examples of this kind of behaviour: Google’s acquisition of Android increased competition faced by Apple’s iPhone; Apple’s acquisition of Beats by Dre increased competition faced by Spotify; Walmart’s acquisition of Jet increased competition faced by Amazon in e-commerce; and myriad acquisitions by Google, Amazon, and Microsoft in cloud computing have strengthened the competition each of them faces from the others.

The UK risks becoming a global outlier

  • There is a serious risk that the US and EU will not follow suit on merger regulation. Although the EU’s Digital Markets Act is highly restrictive in some ways, it does not propose any changes to the EU’s standards of merger control beyond changes to notification thresholds.
  • It is also unlikely that the US will follow suit. Although a bill has been brought forward in Congress, it may struggle to pass without bipartisan support. In the last Congress, between 2019 and 2020, only 2% of the 16,601 pieces of legislation that were introduced were ultimately passed into law.

The Government’s theories of harm caused by tech mergers are under-evidenced, hard to action, and do not require a change in the burden of proof to be effectively incorporated into the CMA’s merger review process.

The Government should instead consider a more moderate approach that retains the balance of probabilities approach, but that attempts to drive competition by supporting startups and entrepreneurs, and gives the CMA the tools it needs to do the best job it can within the existing burden of proof.

  • To support startups, the government should: streamline venture-capital tax breaks such as EIS and SEIS; lift the EMI caps to £100M and 500 employees, to make it easier for scale-ups to attract world-class talent; and reform the pensions charge cap to unlock more of the £1tn of capital in Defined Contribution pension schemes for investment in startups.
  • The CMA should be better equipped to challenge deals that are potentially anti-competitive with lower and mandatory notification thresholds for SMS firms, alongside additional resourcing to bring the cases it believes may threaten competition.
  • Most importantly, any new SMS mergers regime should be limited to the activities given SMS designation, not the firms as a whole, to avoid limiting the use of M&A to increase inter-platform competition.

Read the full white paper here.

Antitrust & Consumer Protection