
Who Moderates the Moderators?: A Law & Economics Approach to Holding Online Platforms Accountable Without Destroying the Internet

ICLE White Paper A comprehensive survey of the law & economics of online intermediary liability, which concludes that any proposed reform of Section 230 must meaningfully reduce the incidence of unlawful or tortious online content such that its net benefits outweigh its net costs.

Executive Summary

A quarter-century after its enactment as part of the Communications Decency Act of 1996, Section 230 has become the target of a growing number of lawmakers seeking reform. In the 116th Congress alone, 26 bills were introduced to modify the law’s scope or to repeal it altogether. Indeed, we have learned much in the last 25 years about where Section 230 has worked well and where it has not.

Although the current Section 230 reform debate popularly—and politically—revolves around when platforms should be forced to host certain content politically favored by one faction (i.e., conservative speech) or when they should be forced to remove certain content disfavored by another (i.e., alleged “misinformation” or hate speech), this paper does not discuss, or even entertain, such reform proposals. Such proposals are (and should be) legal non-starters under the First Amendment.

Indeed, such reforms are virtually certain to harm, not improve, social welfare: As frustrating as imperfect content moderation may be, state-directed speech codes are much worse. Moreover, the politicized focus on curbing legal and non-tortious speech undermines the promise of making any progress on legitimate issues: The real gains to social welfare will materialize from reforms that better align the incentives of online platforms with the social goal of deterring or mitigating illegal or tortious conduct.

Section 230 contains two major provisions: (1) that an online service provider will not be treated as the speaker or publisher of the content of a third party, and (2) that actions taken by an online service provider to moderate the content hosted by its services will not trigger liability. In essence, Section 230 has come to be seen as a broad immunity provision insulating online platforms from liability for virtually all harms caused by user-generated content hosted by their services, including when platforms might otherwise be deemed to be implicated because of the exercise of their editorial control over that content.

To the extent that the current legal regime permits social harms online that exceed concomitant benefits, it should be reformed to deter those harms if such reform can be accomplished at sufficiently low cost. The salient objection to Section 230 reform is not one of principle, but of practicality: Are there effective reforms that would address the identified harms without destroying (or excessively damaging) the vibrant Internet ecosystem by imposing punishing, open-ended legal liability? We believe there are.

First and foremost, we believe that Section 230(c)(1)’s intermediary-liability protections for illegal or tortious conduct by third parties can and should be conditioned on taking reasonable steps to curb such conduct, subject to procedural constraints that will prevent a tide of unmeritorious litigation.

This basic principle is not without its strenuous and thoughtful detractors, of course. A common set of objections to Section 230 reform has grown out of legitimate concerns that the economic and speech gains that have accompanied the rise of the Internet over the last three decades would be undermined or reversed if Section 230’s liability shield were weakened. Our paper thus establishes a proper framework for evaluating online intermediary liability and evaluates the implications of the common objections to Section 230 reform within that context. Indeed, it is important to take those criticisms seriously, as they highlight many of the pitfalls that could attend imprudent reforms. We examine these criticisms both to find ways to incorporate them into an effective reform agenda, and to highlight where the criticisms themselves are flawed.

Our approach is rooted in the well-established law & economics analysis of liability rules and civil procedure, which we use to introduce a framework for understanding the tradeoffs faced by online platforms under differing legal standards with differing degrees of liability for the behavior and speech of third-party users. This analysis is bolstered by a discussion of common law and statutory antecedents that allow us to understand how courts and legislatures have been able to develop appropriate liability regimes for the behavior of third parties in different, but analogous, contexts. Ultimately, and drawing on this analysis, we describe the contours of our recommended duty-of-care standard, along with a set of necessary procedural reforms that would help to ensure that we retain as much of the value of user-generated content as possible, while encouraging platforms to better police illicit and tortious content on their services.

The Law & Economics of Online Intermediary Liability

An important goal of civil tort law is to align individual incentives with social welfare such that costly behavior is deterred and individuals are encouraged to take optimal levels of precaution against risks of injury. Not uncommonly, the law even holds intermediaries—persons or businesses that have a special relationship with offenders or victims—accountable when they are the least-cost avoider of harms, even when those harms result from the actions of third parties.

Against this background, the near-complete immunity granted to online platforms by Section 230 for harms caused by platform users is a departure from normal rules governing intermediary behavior. This immunity has certainly yielded benefits in the form of more user-generated online content and the ability of platforms to moderate without fear of liability. But it has also imposed costs to the extent that broad immunity fails to ensure that illegal and tortious conduct are optimally deterred online.

The crucial question for any proposed reform of Section 230 is whether it could pass a cost-benefit test—that is, whether it is likely to meaningfully reduce the incidence of unlawful or tortious online content while sufficiently addressing the objections to the modification of Section 230 immunity, such that its net benefits outweigh its net costs. In the context of both criminal and tort law generally, this balancing is sought through a mix of direct and collateral enforcement actions that, ideally, minimizes the total costs of misconduct and enforcement. Section 230, as it is currently construed, however, eschews entirely the possibility of collateral liability, foreclosing an important mechanism for properly adjusting the overall liability scheme.

But there is no sound reason to think this must be so. While many objections to Section 230 reform—that is, to the imposition of any amount of intermediary liability—are well-founded, they also frequently suffer from overstatement or unsupported suppositions about the magnitude of harm. At the same time, some of the expressed concerns are either simply misplaced or serve instead as arguments for broader civil-procedure reform (or decriminalization), rather than as defenses of the particularized immunity afforded by Section 230 itself.

Unfortunately, the usual course of discussion typically fails to acknowledge the tradeoffs that Section 230—and its reform—requires. These tradeoffs embody value judgments about the quantity and type of speech that should exist online, how individuals threatened by tortious and illegal conduct online should be protected, how injured parties should be made whole, and what role online platforms should have in helping to negotiate these tradeoffs. This paper’s overarching goal, even more important than any particular recommendation, is to make explicit what these tradeoffs entail.

Of central importance to the approach taken in this paper, our proposals presuppose a condition frequently elided by defenders of the Section 230 status quo, although we believe nearly all of them would agree with the assertion: that there is actual harm—violations of civil law and civil rights, violations of criminal law, and tortious conduct—that occurs on online platforms and that imposes real costs on individuals and society at large. Our proposal proceeds on the assumption, in other words, that there are very real, concrete benefits that would result from demanding greater accountability from online intermediaries, even if that also leads to “collateral censorship” of some lawful speech.

It is necessary to understand that the baseline standard for speech and conduct—both online and offline—is not “anything goes,” but rather self-restraint enforced primarily by incentives for deterrence. Just as the law may deter some amount of speech, so too is speech deterred by fear of reprisal, threat of social sanction, and people’s baseline sense of morality. Some of this “lost” speech will be over-deterred, but one hopes that most deterred speech will be of the harmful or, at least, low-value sort (or else, the underlying laws and norms should be changed). Moreover, not even the most valuable speech is of infinite value, such that any change in a legal regime that results in relatively less speech can be deemed per se negative.

A proper evaluation of the merits of an intermediary-liability regime must therefore consider whether user liability alone is insufficient to deter bad actors, either because it is too costly to pursue remedies against users directly, or because the actions of platforms serve to make it less likely that harmful speech or conduct is deterred. The latter concern, in other words, is that intermediaries may—intentionally or not—facilitate harmful speech that would otherwise be deterred (self-censored) were it not for the operation of the platform.

Arguably, the incentives offered by each of the forces for self-restraint are weakened in the context of online platforms. Certainly everyone is familiar with the significantly weaker operation of social norms in the more attenuated and/or pseudonymous environment of online social interaction. While this environment facilitates more legal speech and conduct than in the offline world, it also facilitates more illegal and tortious speech and conduct. Similarly, fear of reprisal (i.e., self-help) is often attenuated online, not least because online harms are often a function of the multiplier effect of online speech: it is frequently not the actions of the original malfeasant actor, but those of neutral actors amplifying that speech or conduct, that cause harm. In such an environment, the culpability of the original actor is surely mitigated and may be lost entirely. Likewise, in the normal course, victims of tortious or illegal conduct and law enforcers acting on their behalf are the primary line of defense against bad actors. But the relative anonymity/pseudonymity of online interactions may substantially weaken this defense.

Many argue, nonetheless, that holding online intermediaries responsible for failing to remove offensive content would lead to a flood of lawsuits that would ultimately overwhelm service providers, and sub-optimally diminish the value these firms provide to society—a so-called “death by ten thousand duck-bites.” Relatedly, firms that face potentially greater liability would be forced to internalize some increased—possibly exorbitant—degree of compliance costs even if litigation never materialized.

There is certainly some validity to these concerns. Given the sheer volume of content online and the complexity, imprecision, and uncertainty of moderation processes, even very effective content-moderation algorithms will fail to prevent all actionable conduct, which could result in many potential claims. At the same time, it can be difficult to weed out unlawful conduct without inadvertently over-limiting lawful activity.

But many of the unique features of online platforms also cut against the relaxation of legal standards online. Among other things—and in addition to the attenuated incentives for self-restraint mentioned above—where traditional (offline) media primarily host expressive content, online platforms facilitate a significant volume of behavior and commerce that isn’t purely expressive. Tortious and illegal content tends to be less susceptible to normal deterrence online than in other contexts, as individuals can hide behind varying degrees of anonymity. Even users who are neither anonymous nor pseudonymous can sometimes prove challenging to reach with legal process. And, perhaps most importantly, online content is disseminated both faster and more broadly than offline media.

At the same time, an increase in liability risk for online platforms may lead not to insurmountable increases in litigation costs, but to other changes that may be less privately costly to a platform than litigation, and which may be socially desirable. Among these changes may be an increase in preemptive moderation; smaller, more specialized platforms and/or tighter screening of platform participants on the front end (both of which are likely to entail stronger reputational and normative constraints); the establishment of more effective user-reporting and harm-mitigation mechanisms; the development and adoption of specialized insurance offerings; or any number of other possible changes.

Thus the proper framework for evaluating potential reforms to Section 230 must include the following considerations: To what degree would shifting the legal rules governing platform liability increase litigation costs, increase moderation costs, constrain the provision of products and services, increase “collateral censorship,” and impede startup formation and competition, all relative to the status quo, not to some imaginary ideal state? Assessing the marginal changes in all these aspects entails, first, determining how they are affected by the current regime. It then requires identifying both the direction and magnitude of change that would result from reform. Next, it requires evaluating the corresponding benefits that legal change would bring in increasing accountability for tortious or criminal conduct online. And, finally, it necessitates hazarding a best guess of the net effect. Virtually never is this requisite analysis undertaken with any real degree of rigor. Our paper aims to correct that.
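In stylized terms (the notation here is our own shorthand, not the paper's), the test requires that a reform's marginal reduction in harm exceed the sum of the marginal costs it induces, each measured against the status quo:

```latex
% Stylized cost-benefit condition for a Section 230 reform (our notation):
%   \Delta H      : reduction in harm from unlawful or tortious content
%   \Delta C_{lit}: added litigation costs
%   \Delta C_{mod}: added moderation and compliance costs
%   \Delta C_{cc} : cost of "collateral censorship" of lawful speech
%   \Delta C_{ent}: costs to startup formation and competition
\Delta H \;>\; \Delta C_{\mathrm{lit}} + \Delta C_{\mathrm{mod}} + \Delta C_{\mathrm{cc}} + \Delta C_{\mathrm{ent}}
```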

A Proposal for Reform

What is called for is a properly scoped reform that applies the same political, legal, economic, and other social preferences offline as online, aimed at ensuring that we optimally deter illegal content without losing the benefits of widespread user-generated content. Properly considered, there is no novel conflict between promoting the flow of information and protecting against tortious or illegal conduct online. While the specific mechanisms employed to mediate between these two principles online and offline may differ—and, indeed, while technological differences can alter the distribution of costs and benefits in ways that must be accounted for—the fundamental principles that determine the dividing line between permissible and illegal or tortious content offline can and should be respected online, as well. Indeed, even Google has argued for exactly this sort of parity, recently calling on the Canadian government to “take care to ensure that their proposal does not risk creating different legal standards for online and offline environments.”

Keeping in mind the tradeoffs embedded in Section 230, we believe that, in order to more optimally mitigate truly harmful conduct on Internet platforms, intermediary-liability law should develop a “duty-of-care” standard that obliges service providers to reasonably protect their users and others from the foreseeable illegal or tortious acts of third parties. As a guiding principle, we should not hold online platforms vicariously liable for the speech of third parties, both because of the sheer volume of user-generated content online and the generally attenuated relationship between online platforms and users, as well as because of the potentially large costs to overly chilling free expression online. But we should place at least the same burden to curb unlawful behavior on online platforms that we do on traditional media operating offline.

Nevertheless, we hasten to add that this alone would likely be deficient: adding an open-ended duty of care to the current legal system could generate a volume of litigation that few, if any, platform providers could survive. Instead, any new duty of care should be tempered by procedural reforms designed to ensure that only meritorious litigation survives beyond a pre-discovery motion to dismiss.

Procedurally, Section 230 immunity protects service providers not just from liability for harm caused by third-party content, but also from having to incur substantial litigation costs. Concerns for judicial economy and operational efficiency are laudable, of course, but they are properly addressed by minimizing the costs of litigation in ways that do not undermine the deterrent and compensatory effects of meritorious causes of action. While litigation costs that exceed the minimum required to properly assign liability are deadweight losses to be avoided, the cost of liability itself—when properly found—ought to be borne by the party best positioned to prevent harm. Thus, a functional regime will attempt to accurately balance excessive litigation costs against legitimate and necessary liability costs.

In order to achieve this balance, we recommend that, while online platforms should be responsible for adopting reasonable practices to mitigate illegal or tortious conduct by their users, they should not face liability for communication torts (e.g., defamation) arising out of user-generated content unless they fail to remove content they knew or should have known was defamatory. Further, we propose that Section 230(c)(2)’s safe harbor should remain in force and that, unlike for traditional media operating offline, the act of reasonable content moderation by online platforms should not, by itself, create liability exposure.

In sum, we propose that Section 230 should be reformed to incorporate the following high-level elements, encompassing two major components: first, a proposal to alter the underlying intermediary-liability rules to establish a “duty of care” requiring adherence to certain standards of conduct with respect to user-generated content; and second, a set of procedural reforms that are meant to phase in the introduction of the duty of care and its refinement by courts and establish guardrails governing litigation of the duty.

Proposed Basic Liability Rules

Online intermediaries should operate under a duty of care to take appropriate measures to prevent or mitigate foreseeable harms caused by their users’ conduct.

Section 230(c)(1) should not preclude intermediary liability when an online service provider fails to take reasonable care to prevent non-speech-related tortious or illegal conduct by its users.

As an exception to the general reasonableness rule above, Section 230(c)(1) should preclude intermediary liability for communication torts arising out of user-generated content unless an online service provider fails to remove content it knew or should have known was defamatory.

Section 230(c)(2) should provide a safe harbor from liability when an online service provider does take reasonable steps to moderate unlawful conduct. In this way, an online service provider would not be held liable simply for having let harmful content slip through, despite its reasonable efforts.

The act of moderation should not give rise to a presumption of knowledge. Taking down a piece of content may indicate that an online service provider knows it is unlawful, but it does not follow that the provider should be liable for failing to remove the same or similar content wherever else it arises.

But Section 230 should contemplate “red-flag” knowledge, such that a failure to remove content will not be deemed reasonable if an online service provider knows or should have known that it is illegal or tortious. Because the Internet creates exceptional opportunities for the rapid spread of harmful content, a reasonableness obligation that applies only ex ante may be insufficient. Rather, it may be necessary to impose certain ex post requirements for harmful content that was reasonably permitted in the first instance, but that should nevertheless be removed given sufficient notice.

Proposed Procedural Reforms

In order to effect the safe harbor for reasonable moderation practices that nevertheless result in harmful content, we propose the establishment of “certified” moderation standards under the aegis of a multi-stakeholder body convened by an overseeing government agency. Compliance with these standards would foreclose litigation against online service providers at an early stage in most circumstances. A defendant that adhered to them could plead its certified moderation practices as a “certified answer” to any complaint alleging a cause of action arising out of user-generated content; compliant practices would merit dismissal of the case, effecting a safe harbor for such practices.

In litigation, after a defendant answers a complaint with its certified moderation practices, the burden would shift to the plaintiff to adduce sufficient evidence to show that the certified standards were not actually adhered to. Such evidence should be more than mere res ipsa loquitur; it must be sufficient to demonstrate that the online service provider should have been aware of a harm or potential harm, that it had the opportunity to cure or prevent it, and that it failed to do so. Such a claim would need to meet a heightened pleading requirement, as for fraud, requiring particularity.

Finally, we believe any executive or legislative oversight of this process should be explicitly scheduled to sunset. Once the basic system of intermediary liability has had some time to mature, it should be left to courts to further manage and develop the relevant common law.

Our proposal does not demand perfection from online service providers in their content-moderation decisions—only that they make reasonable efforts. What is appropriate for YouTube, Facebook, or Twitter will not be the same as what’s appropriate for a startup social-media site, a web-infrastructure provider, or an e-commerce platform. A properly designed duty-of-care standard should be flexible and account for the scale of a platform, the nature and size of its user base, and the costs of compliance, among other considerations. Indeed, this sort of flexibility is a benefit of adopting a “reasonableness” standard, such as is found in common law negligence. Allowing courts to apply the flexible common law duty of reasonable care would also enable the jurisprudence to evolve with the changing nature of online intermediaries, the problems they pose, and the moderating technologies that become available.

Read the full working paper here.


A Path Forward for Section 230 Reform

TL;DR

Background…

The liability protections granted to intermediaries under Section 230(c)(1) of the Communications Decency Act of 1996 can and should be conditioned on platforms taking reasonable steps to curb harmful conduct. Online platforms should operate under a duty of care obligating them to adopt reasonable content-moderation practices regarding illegal or tortious third-party content.

But…

Platforms should not bear excessive costs for conduct that does not and should not give rise to liability, while they should internalize the costs of responding to actual harms and meritorious litigation. This will require reforms to civil procedure, a regulatory agency to oversee creation of a duty of care, and implementation of a “safe harbor” or presumption of reasonableness.

Read the full explainer here.


Comments on the Advanced Notice Of Proposed Rulemaking, Re: Executive Order 13984, ‘Taking Additional Steps To Address the National Emergency With Respect to Significant Malicious Cyber-Enabled Activities’

Regulatory Comments

Intro and summary

As one of his final acts in office, former President Donald Trump signed Executive Order 13984 (the EO), “Taking Additional Steps To Address the National Emergency With Respect to Significant Malicious Cyber-Enabled Activities.” The EO directed the Secretary of Commerce to “propose for notice and comment regulations that require United States IaaS providers to verify the identity of a foreign person that obtains an Account.”

In its related advance notice of proposed rulemaking (ANPRM), the U.S. Commerce Department notes that:

…foreign persons obtain or offer for resale IaaS accounts (Accounts) with U.S. IaaS providers, and then use these Accounts to conduct malicious cyber-enabled activities against U.S. interests. Malicious actors then destroy evidence of their prior activities and transition to other services.

This pattern makes it extremely difficult to track and obtain information on foreign malicious cyber actors and their activities in a timely manner, especially if U.S. IaaS providers do not maintain updated information and records of their customers or the lessees and sub-lessees of those customers.

The rule of law is frustrated when courts and law enforcement are unable to locate those who commit illegal acts. Other legal frictions may arise when the law fails to deter illegal behavior or to offer incentives for firms to adopt socially optimal business practices. These concerns are particularly acute online, because the Internet hosts a large volume of activity from anonymous or otherwise difficult-to-locate users.

The Internet’s ability to facilitate anonymous or pseudonymous communications, of course, also continues a long tradition of anonymous speech being protected under U.S. constitutional law. The ANPRM acknowledges this tension when it asks “[c]an the Department implement the requirement to verify a foreign person’s identity… while minimizing the impact on U.S. persons’ opening or using such Accounts, or will the application of the requirements to foreign persons in practice necessitate the application of that requirement across all customers?” But anonymity is just one value among many that must be weighed when crafting regulatory policy—particularly with respect to enforcing criminal law and upholding national security. Thus, even if the EO has some effect on U.S. business customers, that alone ought not foreclose implementation of effective identity-verification requirements.

Further, it is important to consider how the incentives service providers face align with optimal social policy. In particular, Infrastructure-as-a-Service (IaaS) providers may not adequately internalize the social costs that stem from their making anonymous or pseudonymous accounts available to the public. Public policy may be necessary to correct such misalignment. While the EO focuses narrowly on the use of IaaS by foreign actors, there are broader problems associated with the anonymous use of Internet-connected services. As such, the Administration, the U.S. Commerce Department, and Congress should consider broader “know your business customer” (KYBC) requirements.

But while IaaS providers’ potential misalignment of incentives is a proper subject for regulatory and legislative action, policy should be carefully calibrated to encourage compliance with broader criminal and national-security goals, while still permitting the vibrant IaaS industry to continue to thrive. The law must shape incentives such that responsibility to deal with illicit activity is placed where it is appropriate. Overly broad regulatory requirements can become burdensome, accrue more costs than benefits, and ultimately chill entry of new firms.

Thus, as described in more detail below, the EO is correct to require basic identity verification by IaaS providers, subject to some caveats. The goal of these regulations should be to collect the optimal amount of information about bad actors with the least interference in the operations of firms subject to the requirements. The Department must therefore weigh how much benefit it realistically expects to obtain from any given level of compliance. Notably, the overwhelming majority of IaaS accounts will belong to law-abiding users. The process is thus largely about identifying outliers, and regulatory intervention must be tempered in recognition that IaaS firms are constrained in the degree to which they can assist in furthering legitimate law-enforcement ends.

The requirements ought to be designed to obtain the optimal level of information that law enforcement and courts would need in most, but not all, cases. A minimal set of initial verification requirements, paired with an ongoing obligation to re-verify user identities, ought to resolve most problems associated with anonymous users.
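To make this concrete, here is a minimal sketch of what a provider-side verification record with an ongoing re-verification obligation might look like. It is purely illustrative: the field names, the one-year re-verification interval, and the US/foreign test are our assumptions, not requirements drawn from the EO or the ANPRM.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical re-verification interval; neither the EO nor the ANPRM
# specifies one.
REVERIFY_AFTER = timedelta(days=365)

@dataclass
class AccountVerification:
    """Minimal identity record an IaaS provider might retain per account."""
    account_id: str
    legal_name: str        # verified name of the customer, lessee, or sub-lessee
    country: str           # ISO 3166-1 alpha-2 code, e.g. "DE"
    verified_at: datetime  # when identity evidence was last checked
    evidence_ref: str      # pointer to the stored verification evidence

    def is_foreign(self) -> bool:
        # The EO targets foreign persons; U.S. persons may fall outside
        # the verification requirement.
        return self.country != "US"

    def needs_reverification(self, now: datetime) -> bool:
        # The ongoing obligation: re-verify stale records rather than
        # relying solely on checks performed when the account was opened.
        return now - self.verified_at > REVERIFY_AFTER

# Example: flag a foreign account whose record has gone stale.
record = AccountVerification(
    account_id="acct-123",
    legal_name="Example GmbH",
    country="DE",
    verified_at=datetime(2020, 1, 15),
    evidence_ref="kyc/acct-123/v1",
)
if record.is_foreign() and record.needs_reverification(datetime(2021, 5, 1)):
    print(f"Re-verify identity for {record.account_id}")
```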

Moreover, it would be highly inadvisable to prescribe specific technological measures that providers must use. Providers should be free to implement what they consider to be appropriate identity-verification systems, so long as those systems elicit the needed information. Relatedly, IaaS providers are bound by the requirements of laws like the EU’s General Data Protection Regulation (GDPR) and therefore need the flexibility to design their systems to comply both with the Department’s final rules and with the various privacy regimes to which they are subject.

Read the full comments here.


Better Together: The Procompetitive Effects of Mergers in Tech

Scholarship A joint publication of ICLE and The Entrepreneurs Network makes the case that the U.K. government's plan to crack down on Big Tech mergers would harm the British start-up ecosystem.

Executive Summary

The British government is consulting on whether to lower the burden of proof needed by the Competition and Markets Authority (CMA) to block mergers and acquisitions involving large tech companies deemed to have strategic market status (SMS) in some activity. This is likely to include companies like Google and Facebook, but the scope may grow over time.

Under the current regime, the CMA uses a two-step process. At Phase 1, the CMA assesses whether or not a deal has a ‘realistic prospect of a substantial lessening of competition’. If so, the merger is referred to Phase 2, where it is assessed in depth by an independent panel, and remedied or blocked if it is deemed to carry a greater than 50 per cent chance of substantially lessening competition.

The reforms proposed by the government would stop any deal involving an SMS firm that creates a ‘realistic prospect’ of reducing competition. This has been defined by courts as being a ‘greater than fanciful’ chance.

In practice, this could amount to a de facto ban on acquisitions by Big Tech firms in the UK, and any others designated as having strategic market status.
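A rough formalization of the proposed change (with epsilon standing in for the undefined ‘greater than fanciful’ threshold, an assumption of ours rather than a number from the consultation):

```latex
% Current regime: a deal is blocked or remedied only if, after Phase 2,
% the panel finds a better-than-even chance of a substantial lessening
% of competition (SLC):
\text{block}_{\text{current}} \iff P(\text{SLC}) > 0.5
% Proposed SMS regime: a deal is blocked at the Phase 1 screen, i.e.,
% whenever the chance of an SLC is merely "greater than fanciful":
\text{block}_{\text{proposed}} \iff P(\text{SLC}) > \epsilon, \quad 0 < \epsilon \ll 0.5
```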

Mergers and acquisitions are normally good or neutral for competition, and there is little evidence that the bulk of SMS firms’ mergers have harmed competition.

Although the static benefits of mergers are widely acknowledged, the dynamic benefits are less well-understood. We highlight four key ways in which mergers and acquisitions can enhance competition by increasing dynamic efficiency:

Acquisition is a key route to exit for entrepreneurs

  • Startup formation and venture capital investment are extremely sensitive to the availability of exits, the vast majority of which are through acquisition as opposed to listing on a stock market. In the US, more than half (58%) of startup founders expect to be acquired at some point.
  • According to data provider Beauhurst, only nine equity-backed startups exited through IPO in 2019. By contrast, eight British equity-backed startups were acquired last year by Microsoft, Google, Facebook, Amazon, and Apple alone.
  • Cross-country studies find that restrictions on takeovers can have strong negative effects on VC activity. Countries that pass pro-takeover laws see a 40-50% growth in VC activity compared to others.
  • Nine out of ten UK VCs believe that the ability to be acquired is ‘very important’ to the health of Britain’s startup ecosystem. Half of those surveyed said they would ‘significantly reduce’ the amount they invested if the ability to exit through M&A was restricted.

Acquisitions enable a ‘market for corporate control’

  • M&A allows companies with specific skills, such as navigating regulatory processes or scaling products, to acquire startups and unlock value that would otherwise not be realised in the absence of a takeover.

Acquisitions can reduce transaction costs between complementary products

  • M&A can encourage the development of complementary products that might not be able to find a market without the ability to be bought and integrated by an incumbent.
  • In the presence of network effects or high switching costs, takeovers can be a way for incremental improvements to be developed and added to incumbent products when those improvements would not, by themselves, be sufficiently attractive to compete users away from those products.

Acquisitions can support inter-platform competition

  • Competition in digital markets often takes place between digital platforms that have a strong position in one market and move into another market, sometimes using their advantage in the original market to gain a foothold in the new one. This often involves them moving into markets that are currently dominated by another digital platform, increasing competition faced by these companies.
  • Acquisitions can accelerate this kind of inter-platform competition. Instead of starting from scratch, platforms can use mergers to gain a foothold in the new market, and do so more rapidly and perhaps more effectively than if they had to develop the product in-house.
  • There are many examples of this kind of behaviour: Google’s acquisition of Android increased competition faced by Apple’s iPhone; Apple’s acquisition of Beats by Dre increased competition faced by Spotify; Walmart’s acquisition of Jet increased competition faced by Amazon in e-commerce; and myriad acquisitions by Google, Amazon, and Microsoft in cloud computing have strengthened the competition each of them faces from the others.

The UK risks becoming a global outlier

  • There is a serious risk that the US and EU do not follow suit on merger regulation. Although the EU’s Digital Markets Act is highly restrictive in some ways, it does not propose any changes to the EU’s standards of merger control besides changes to notification thresholds.
  • It is also unlikely that the US will follow suit. Although a bill has been brought forward in Congress, it may struggle to pass without bipartisan support. In the last Congress, between 2019 and 2020, only 2% of the 16,601 pieces of legislation that were introduced were ultimately passed into law.

The Government’s theories of harm caused by tech mergers are under-evidenced, hard to action, and do not require a change in the burden of proof to be effectively incorporated into the CMA’s merger review process.

The Government should instead consider a more moderate approach that retains the balance of probabilities approach, but that attempts to drive competition by supporting startups and entrepreneurs, and gives the CMA the tools it needs to do the best job it can within the existing burden of proof.

  • To support startups, the government should: streamline venture capital tax breaks such as EIS and SEIS, lift the EMI caps to £100M and 500 employees to make it easier for scale-ups to attract world-class talent, and implement reforms to the pensions charge cap to unlock more of the £1tn capital in Defined Contribution pension schemes for investment in startups.
  • The CMA should be better equipped to challenge deals that are potentially anti-competitive with lower and mandatory notification thresholds for SMS firms, alongside additional resourcing to bring the cases it believes may threaten competition.
  • Most importantly, any new SMS mergers regime should be limited to the activities given SMS designation, not the firms as a whole, to avoid limiting the use of M&A to increase inter-platform competition.

Read the full white paper here.


European Commission Objections to App Store Rules Lack Empirical Support

TOTM

The European Commission recently issued a formal Statement of Objections (SO) in which it charges Apple with breaching antitrust rules. In a nutshell, the commission argues that Apple prevents app developers—in this case, Spotify—from using in-app purchase systems (IAPs) other than Apple’s own, and from steering users towards other, cheaper payment methods on other sites. This, the commission says, results in higher prices for consumers in the audio streaming and ebook/audiobook markets.

Read the full piece here.


Impact of the Durbin Amendment’s Cap on Interchange Fees

TL;DR

Background…

The Dodd-Frank Act of 2010 set price controls for debit-card interchange fees charged by banks with more than $10 billion in assets. Known colloquially as the “Durbin Amendment” after Sen. Dick Durbin, who sponsored the original proposal, the provision was supposed to cut costs for customers and merchants by cutting the interchange fees charged by large banks roughly in half.

But…

Covered banks and credit unions have recouped these losses by eliminating free checking accounts, raising minimum balance requirements, and charging higher maintenance fees. While retailers have seen cost reductions as a result of the Durbin Amendment, there is little evidence those savings have been passed on to consumers. 

However…

In recent years, some lawmakers have signaled interest in limiting interchange fees on credit-card transactions, as well. Some elements of the retail sector likewise sought a cap on credit-card interchange fees as part of COVID-19 relief legislation in 2020. Sen. Durbin himself also recently suggested the Durbin Amendment’s cap on interchange fees should be extended to credit cards. The predictable result would be a reduction in credit and rewards programs made available to consumers.

Read the full explainer here.


ICLE’s Principles for the Future of Broadband Infrastructure

ICLE Issue Brief

The COVID-19 pandemic has highlighted the resilience of U.S. broadband infrastructure, the extent to which we rely on that infrastructure, and the geographies and communities where broadband build-out lags behind. As the extent and impact of the digital divide has been made clearer, there is renewed interest in the best ways to expand broadband access to better serve all Americans.

At ICLE, we would caution policymakers to eschew calls to address the digital divide simply by throwing vast sums of money at the problem. They should, instead, pursue a principled approach designed to encourage entry in new regions, while avoiding poorly managed subsidies and harmful price controls that would discourage investment and innovation by incumbent internet service providers (ISPs). Here is how to do that.

  • To the extent it is necessary at all, public investment in broadband infrastructure should focus on providing internet access to those who don’t have it, rather than subsidizing competition in areas that already do.
  • Highly prescriptive mandates—like requiring a particular technology or requiring symmetrical speeds—will be costly and likely to skew infrastructure spending away from those in unserved areas.
  • There may be very limited cases where municipal broadband is an effective and efficient solution to a complete absence of broadband infrastructure, but policymakers must narrowly tailor any such proposals to avoid displacing private investment or undermining competition.
  • Consumer-directed subsidies should incentivize broadband buildout and, where necessary, guarantee the availability of minimum levels of service reasonably comparable to those in competitive markets.
  • Firms that take government funding should be subject to reasonable obligations. Competitive markets should be subject to lighter-touch obligations.

Read the full brief here.


What You Need to Know About the EU’s New AI Regulation

TOTM

The European Commission this week published its proposed Artificial Intelligence Regulation, setting out new rules for “artificial intelligence systems” used within the European Union. The regulation—the commission’s attempt to limit pernicious uses of AI without discouraging its adoption in beneficial cases—casts a wide net in defining AI to include essentially any software developed using machine learning. As a result, a host of software may fall under the regulation’s purview.

Read the full piece here.

Open Letter by Public Interest Organizations in Favor of Direct EV Sales and Service

Written Testimonies & Filings The signatories of this letter represent a broad range of public interest organizations that urge that any state laws still prohibiting car companies from selling their cars directly to consumers, or opening service centers for those vehicles, be amended to permit direct sales and service of EVs.

We, the signatories of this letter, represent a broad range of public interest organizations. Our individual interests include such diverse matters as environmental protection, fair competition, consumer protection, economic growth and workforce development, and technology and innovation. Some of us frequently find ourselves on different sides of public policy debates. However, today we find common ground on an issue of considerable public importance concerning sales of electric vehicles (“EVs”). Specifically, we urge that any state laws still prohibiting car companies from selling their cars directly to consumers, or opening service centers for those vehicles, be amended to permit direct sales and service of EVs.
