
ICLE Comments on FTC/DOJ Merger Enforcement RFI

Regulatory Comments The FTC and DOJ's RFI on whether and how to update the antitrust agencies’ merger-enforcement guidelines is based on several faulty premises and appears to presuppose a preferred outcome.

Executive Summary

Our comments in response to the agencies’ merger guidelines RFI are broken into two parts. The first raises concerns regarding the agencies’ ultimate intentions as reflected in the RFI, the authority of the assumptions undergirding it, and the agencies’ (mis)understanding of the role of merger guidelines. The second part responds to several of the most pressing and problematic substantive questions raised in the RFI.

With respect to the (for lack of a better term) “process” elements of the agencies’ apparent intended course of action, we argue that the RFI is based on several faulty premises which, if left unchecked, will taint any subsequent soft law proposals based thereon:

First, the RFI seems to presuppose a particular, preferred outcome and does not generally read like an objective request for the best information necessary to reach optimal results. Although some of the language is superficially neutral, the overarching tone is (as Doug Melamed put it) “very tendentious”: the RFI seeks information to support a broad invigoration of merger enforcement. While some certainly contend that strengthening merger-enforcement standards is appropriate, merger guidelines that start from that position can hardly be relied upon by courts as a source of information with which to differentiate among difficult cases, if and when that may be warranted.

Indeed, the RFI misconstrues the role of merger guidelines, which is to reflect the state of the art in a certain area of antitrust and not to artificially push the accepted scope of knowledge and practice toward a politically preferred and tenuous frontier. The RFI telegraphs an attempt by the agencies to pronounce as settled what are hotly disputed, sometimes stubbornly unresolved issues among experts, all to fit a preconceived political agenda. This not only overreaches the FTC’s and DOJ’s powers, but it also risks galvanizing opposition from the courts, thereby undermining the utility of adopting guidelines in the first place.

Second, underlying the RFI and the agencies’ apparently intended course of action is the uncritical acceptance of a popular, but highly contentious, narrative positing that there is an inexorable trend toward increased concentration, caused by lax antitrust enforcement, that has caused significant harm to the economy. As we explain, however, every element of this narrative withers under closer scrutiny. Rather, the root causes of increased concentration (if it is happening in the first place) are decidedly uncertain; concentration is decreasing in the local markets in which consumers actually make consumption decisions; and there is evidence that, because much increased concentration has been caused by productivity advances rather than anticompetitive conduct, consumers likely benefit from it.

Lastly, the RFI assumes that the current merger-control laws and tools are no longer fit for purpose. Specifically, the agencies imply that current enforcement thresholds and longstanding presumptions, such as the HHI levels that trigger enforcement, allow too many anticompetitive mergers to slip through the cracks. We contend that this kind of myopic thinking fails to apply the relevant error-cost framework. In merger enforcement, as in antitrust law generally, it is not appropriate to focus narrowly on one set of errors in guiding legal and policy reform. Instead, general-purpose tools and presumptions should be assessed with an eye toward reducing the totality of errors, rather than reducing those arising in one segment at the expense of another.
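For concreteness, the sketch below (ours, not the RFI's) shows the HHI arithmetic these presumptions turn on, using the concentration thresholds from the agencies' 2010 Horizontal Merger Guidelines; the example market shares are invented.

```python
# A minimal sketch of the HHI screens referenced above. Thresholds are from
# the 2010 Horizontal Merger Guidelines; the market shares are invented.

def hhi(shares):
    """Herfindahl-Hirschman Index: the sum of squared market shares (in points)."""
    return sum(s ** 2 for s in shares)

pre_merger = [30, 25, 20, 15, 10]    # five firms, shares in percentage points
post_merger = [30 + 25, 20, 15, 10]  # firms 1 and 2 merge

pre, post = hhi(pre_merger), hhi(post_merger)
delta = post - pre

print(f"pre-merger HHI:  {pre}")    # 2250 (moderately concentrated)
print(f"post-merger HHI: {post}")   # 3750 (highly concentrated, above 2500)
print(f"delta:           {delta}")  # 1500

# Under the 2010 Guidelines, a post-merger HHI above 2,500 combined with an
# increase of more than 200 points is presumed likely to enhance market power.
if post > 2500 and delta > 200:
    print("structural presumption triggered")
```

The error-cost point is that where these numerical cutoffs are drawn determines both how many benign mergers are flagged and how many harmful ones are missed; calibrating them requires weighing both error types together.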

Substantively, our comments address the following issues:

First, the RFI is concerned with the state of merger enforcement in labor markets (and “monopsony” markets more broadly). While some discussion may be welcome regarding new guidelines for how agencies and courts might begin to approach mergers that affect labor markets, several considerations raise the specter that aiming for specific outcomes in labor markets may undermine the standards that support proper merger enforcement overall: the paucity of past actions in this area (the vast bulk of which have been in a single industry: hospitals); the significant dearth of scholarly analysis of relevant-market definition in labor markets; and, above all, the fundamental complexities raised for the proper metrics of harm in mergers that affect multiple markets. If the agencies are to apply merger-control rules to monopsony markets, they must make clear that the relevant market to analyze is the output market, and not (only) the input market. Ultimately, this is the only way to separate mergers that generate efficiencies from those that create monopsony power, since both have the effect of depressing input prices. If antitrust law is to stay grounded in the consumer welfare standard, as it should, it must avoid blocking consumer-facing mergers simply because they decrease the price of an input. The issue of monopsony is further complicated by the fact that many inputs are highly substitutable across a wide range of industries, rendering the relevant market even more difficult to pin down than in traditional product markets.
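A stylized numerical illustration of that point, with invented numbers: both merger types depress the input price, so the input market alone cannot distinguish them; only the output market reveals which is which.

```python
# Stylized illustration (not from the comments): an efficiency-enhancing
# merger and a monopsony-creating merger can both depress input prices,
# so the output market is what separates them. All numbers are invented.

scenarios = {
    # (input price change, output quantity change) after the merger
    "efficiency gains": (-0.05, +0.08),  # inputs cheaper, output expands
    "monopsony power":  (-0.05, -0.06),  # inputs cheaper, output contracts
}

for name, (d_input_price, d_output) in scenarios.items():
    # Input-price test alone: an identical signal in both cases.
    input_signal = "input price falls" if d_input_price < 0 else "input price rises"
    # Output-market test: efficiencies expand output; monopsony restricts it.
    verdict = "likely procompetitive" if d_output > 0 else "possible monopsony concern"
    print(f"{name}: {input_signal}, output {d_output:+.0%} -> {verdict}")
```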

Second, there is not enough evidence to create the presumption of a negative relationship between market concentration and innovation, or between market concentration and investment. In fact, as we show, it may often be the case that the opposite is true. The agencies should thus be wary of drawing any premature conclusions—let alone establishing any legal presumptions—on the connection between market structure and non-price effects, such as innovation and investment.

Third, the RFI blurs what has hitherto been a clear demarcation—and rightly so—between vertical and horizontal mergers by stretching the meaning of “potential competition” beyond any reasonable limits. In so doing, it ascribes stringent theories of harm, based on far-fetched hypotheticals, to otherwise neutral or benign business conduct. This “horizontalization” of vertical mergers, if allowed to translate into policy, is likely to have chilling effects on procompetitive merger activity to the detriment of consumers and, ultimately, society as a whole. As we show, there is no legal or empirical justification to abandon the time-honored differentiation between horizontal and vertical mergers, or to impose a heightened burden of proof on the latter. The 2018 AT&T/Time Warner merger litigation illustrates this.

Fourth, and despite some facially attractive rhetoric, data should not receive any special treatment under the merger rules. Instead, it should be treated like any other intangible asset, such as reputation, IP, or know-how.

Finally, the notion of “attention markets” is not ready to be applied in a merger-control context, as the attention-market scholarship fails to offer objective, let alone quantifiable, criteria that might enable authorities to identify firms that are unique competitors for user attention.

Read the full comments here.


ICLE Brief for D.C. Circuit in State of New York v Facebook

Amicus Brief In this amicus brief for the U.S. Court of Appeals for the D.C. Circuit, ICLE and a dozen scholars of law & economics address the broad consensus disfavoring how New York and other states seek to apply the “unilateral refusal to deal” doctrine in an antitrust case against Facebook.

United States Court of Appeals
for the District of Columbia Circuit

STATE OF NEW YORK, et al.,
Plaintiffs-Appellants,
v.
FACEBOOK, INC.,
Defendant-Appellee.

ON APPEAL FROM THE UNITED STATES DISTRICT COURT
FOR THE DISTRICT OF COLUMBIA
No. 1:20-cv-03589-JEB (Hon. James E. Boasberg)

BRIEF OF INTERNATIONAL CENTER FOR
LAW AND ECONOMICS AND SCHOLARS OF LAW
AND ECONOMICS AS AMICUS CURIAE SUPPORTING
DEFENDANT-APPELLEE FACEBOOK, INC. AND AFFIRMANCE

 

STATEMENT OF THE AMICUS CURIAE

Amici are leading scholars of economics, telecommunications, and/or antitrust. Their scholarship reflects years of experience and publications in these fields.

Amici’s expertise and academic perspectives will aid the Court in deciding whether to affirm in three respects. First, amici provide an explanation of key economic concepts underpinning how economists understand the welfare effects of a monopolist’s refusal to deal voluntarily with a competitor and why that supports affirmance here. Second, amici offer their perspective on the limited circumstances that might justify penalizing a monopolist’s unilateral refusal to deal—and why this case is not one of them. Third, amici explain why the District Court’s legal framework was correct and why a clear standard is necessary when analyzing alleged refusals to deal.

SUMMARY OF ARGUMENT

This brief addresses the broad consensus in the academic literature disfavoring a theory underlying the plaintiffs’ case—the “unilateral refusal to deal” doctrine. The States allege that Facebook restricted access to an input (Facebook’s Platform) in order to prevent third parties from using that access to export Facebook data to competitors or compete directly with Facebook. But a unilateral refusal to deal involves more than an allegation that a monopolist refuses to enter into a business relationship with a rival.

Mainstream economists and competition-law scholars are skeptical of imposing liability, even on a monopolist, based solely on its choice of business partners. The freedom of firms to choose their business partners is a fundamental tenet of the free-market economy, and the mechanism by which markets produce the greatest welfare gains. Thus, cases compelling business dealings should be confined to narrowly delineated circumstances.

In Part I below, amici describe why it is generally inefficient for courts to compel economic actors to deal with one another. Such “solutions” are generally unsound in theory and unworkable in practice, in that they ask judges to operate as regulators over the defendant’s business.

In Part II, amici explain why Aspen Skiing—the Supreme Court’s most prominent precedent permitting liability for a monopolist’s unilateral refusal to deal—went too far and should not be expanded as the States’ and some of their amici propose.

In Part III, amici explain that the District Court correctly held that the conduct at issue here does not constitute a refusal to deal under Aspen Skiing. A unilateral refusal to deal should trigger antitrust liability only where a monopolist turns down more profitable dealings with a competitor in an effort to drive that competitor’s exit or to disable its ability to compete, thereby allowing the monopolist to recoup its losses by increasing prices in the future. But the States’ allegations do not describe that scenario.

In Part IV, amici explain that the District Court properly considered and dismissed the States’ “conditional dealing” argument. The States’ allegations are correctly addressed under the rubric of a refusal to deal—not exclusive dealing or otherwise. The States’ desire to mold their allegations into different legal theories highlights why courts should use a strict, clear standard to analyze refusals to deal.

Read the full brief here.


Privacy and Security Implications of Regulation of Digital Services in the EU and in the US

Scholarship Written for the Transatlantic Technology Law Forum (TTLF) Working Paper Series, ICLE Senior Scholar Mikołaj Barczentewicz assesses privacy and security risks raised by U.S. and EU legislative proposals to regulate digital platforms.

The attached is part of the Transatlantic Technology Law Forum’s (TTLF) Working Paper Series, which presents original research on technology- and business-related law and policy issues of the European Union and the United States. TTLF is a joint initiative of Stanford Law School and the University of Vienna School of Law.

Abstract

The goal of this project is to assess the data privacy and security implications of the “new wave” of legislation on digital services—both in the United States and in the EU. In the European Union, the proposals for the Digital Services Act and the Digital Markets Act include provisions that have potentially significant security and privacy implications, like interoperability obligations for online platforms or provisions for data access for researchers. Similar provisions, e.g., on interoperability, are included in bills currently being considered by the U.S. Congress (e.g., in Rep. David Cicilline’s American Choice and Innovation Online Act and in Sen. Amy Klobuchar’s American Innovation and Choice Online Act). Some stakeholders are advocating that the EU and U.S. legislatures go even further than currently contemplated in a direction that could potentially have negative security and privacy consequences—especially on interoperability. I aim to assess whether the legislative proposals in their current form adequately address potential privacy and security risks, and what changes in the proposed legislation might help to alleviate those risks.

Introduction

Increasing information privacy and security through the law is notoriously difficult, even if that is the explicit goal of legislation. Thus, perhaps we should instead expect the law at least not to unintentionally decrease the level of privacy and security. Unfortunately, pursuing even seemingly unrelated policy aims through legislation may have that negative effect. In this paper, I analyze several legislative proposals from the EU and from the United States belonging to the new “techlash” wave. All those bills purport to improve the situation of consumers or the competitiveness of digital markets. However, as I argue, they would all have negative and unaddressed consequences in terms of information privacy and security.

On the EU side, I consider the Digital Services Act (DSA) and the Digital Markets Act (DMA) proposals. The DSA and the DMA have been proceeding through the EU legislative process with unexpected speed, and given what looks like significant political momentum, it is possible that they will become law. On the U.S. side, I look at Rep. David Cicilline’s (D-R.I.) American Choice and Innovation Online Act, Rep. Mary Gay Scanlon’s (D-Pa.) Augmenting Compatibility and Competition by Enabling Service Switching (ACCESS) Act, Sen. Amy Klobuchar’s (D-Minn.) American Innovation and Choice Online Act, and Sen. Richard Blumenthal’s (D-Conn.) Open App Markets Act.

I chose to focus on three regulatory solutions: (1) mandating interoperability, (2) mandating device neutrality (the possibility of sideloading applications), and (3) compulsory data access (by vetted researchers or by authorities). The first two models are shared by most of the discussed legislative proposals, other than the DSA. The last one is only included in the DSA.

Read the full paper here.


Mandatory Interoperability Is Not a ‘Super Tool’ for Platform Competition

On both sides of the Atlantic, 2021 has seen legislative and regulatory proposals to mandate that various digital services be made interoperable with others. Several bills to do so have been proposed in Congress; the EU’s proposed Digital Markets Act would mandate interoperability in certain contexts for “gatekeeper” platforms; and the UK’s competition regulator will be given powers to require interoperability as part of a suite of “pro-competitive interventions” that it is hoped will increase competition in digital markets.

The European Commission plans to require Apple to use USB-C charging ports on iPhones to allow interoperability among different chargers (to save, the Commission estimates, two grams of waste per European per year). Demands for various forms of interoperability have been at the center of at least two major lawsuits: Epic’s case against Apple and a separate lawsuit against Apple by the maker of an app called Coronavirus Reporter. In July, a group of pro-intervention academics published a white paper calling interoperability “the ‘Super Tool’ of Digital Platform Governance.”

What is meant by the term “interoperability” varies widely. It can refer to relatively narrow interventions in which user data from one service is made directly portable to other services, rather than the user having to download and later re-upload it. At the other end of the spectrum, it could mean regulations to require virtually any vertical integration be unwound. (Should a Tesla’s engine be “interoperable” with the chassis of a Land Rover?) And in between are various proposals for specific applications of interoperability—some product working with another made by another company.
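At the narrow end of that spectrum, here is a minimal sketch of what one-off data portability might look like; the services, fields, and format are all invented for illustration.

```python
# A minimal, hypothetical sketch of narrow data portability: one service
# exports a user's data in a structured format that a competitor can ingest
# directly, sparing the user a manual download and re-upload. The services,
# fields, and format here are invented.
import json

def export_user_data(user_id):
    # Stand-in for an incumbent service's export endpoint.
    return json.dumps({
        "user": user_id,
        "contacts": ["alice", "bob"],
        "posts": [{"text": "hello", "ts": "2021-07-01"}],
    })

def import_user_data(blob):
    # Stand-in for a competing service's import endpoint.
    data = json.loads(blob)
    return f"imported {len(data['contacts'])} contacts, {len(data['posts'])} posts"

print(import_user_data(export_user_data("user-123")))  # direct service-to-service transfer
```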

Why Isn’t Everything Interoperable?

The world is filled with examples of interoperability that arose through the (often voluntary) adoption of standards. Credit card companies oversee massive interoperable payments networks; screwdrivers are interoperable with screws made by other manufacturers, although different standards exist; many U.S. colleges accept credits earned at other accredited institutions. The containerization revolution in shipping is an example of interoperability leading to enormous efficiency gains, with a government subsidy to encourage the adoption of a single standard.

And interoperability can emerge over time. Microsoft Word used to be maddeningly non-interoperable with other word processors. Once OpenOffice entered the market, Microsoft patched its product to support OpenOffice files; Word documents now work slightly better with products like Google Docs, as well.

But there are also lots of things that could be interoperable but aren’t, like the Tesla motors that can’t easily be removed and added to other vehicles. The charging cases for Apple’s AirPods and Sony’s wireless earbuds could, in principle, be shaped to be interoperable. Medical records could, in principle, be standardized and made interoperable among healthcare providers, and it’s easy to imagine some of the benefits that could come from being able to plug your medical history into apps like MyFitnessPal and Apple Health. Keurig pods could, in principle, be interoperable with Nespresso machines. Your front door keys could, in principle, be made interoperable with my front door lock.

The reason not everything is interoperable like this is because interoperability comes with costs as well as benefits. It may be worth letting different earbuds have different designs because, while it means we sacrifice easy interoperability, we gain the ability for better designs to be brought to market and for consumers to have choice among different kinds. We may find that, while digital health records are wonderful in theory, the compliance costs of a standardized format might outweigh those benefits.

Manufacturers may choose to sell an expensive device with a relatively cheap upfront price tag, relying on consumer “lock in” for a stream of supplies and updates to finance the “full” price over time, provided the consumer likes it enough to keep using it.

Interoperability can remove a layer of security. I don’t want my bank account to be interoperable with any payments app, because it increases the risk of getting scammed. What I like about my front door lock is precisely that it isn’t interoperable with anyone else’s key. Lots of people complain about popular Twitter accounts being obnoxious, rabble-rousing, and stupid; it’s not difficult to imagine the benefits of a new, similar service that wanted everyone to start from the same level and so did not allow users to carry their old Twitter following with them.

There thus may be particular costs that prevent interoperability from being worth the tradeoff, such as that:

  1. It might be too costly to implement and/or maintain.
  2. It might prescribe a certain product design and prevent experimentation and innovation.
  3. It might add too much complexity and/or confusion for users, who may prefer not to have certain choices.
  4. It might increase the risk of something not working, or of security breaches.
  5. It might prevent certain pricing models that increase output.
  6. It might compromise some element of the product or service that benefits specifically from not being interoperable.

In a market that is functioning reasonably well, we should be able to assume that competition and consumer choice will discover the desirable degree of interoperability among different products. If there are benefits to making your product interoperable with others that outweigh the costs of doing so, that should give you an advantage over competitors and allow you to compete them away. If the costs outweigh the benefits, the opposite will happen—consumers will choose products that are not interoperable with each other.

In short, we cannot infer from the absence of interoperability that something is wrong, since we frequently observe that the costs of interoperability outweigh the benefits.

Of course, markets do not always lead to optimal outcomes. In cases where a market is “failing”—e.g., because competition is obstructed, or because there are important externalities that are not accounted for by the market’s prices—certain goods may be under-provided. In the case of interoperability, this can happen if firms struggle to coordinate upon a single standard, or because firms’ incentives to establish a standard are not aligned with the social optimum (i.e., interoperability might be optimal and fail to emerge, or vice versa).
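A toy version of that coordination problem, with invented payoffs: two firms each pick a standard, matching is rewarded, and both all-A and all-B outcomes are stable, so the market can settle on the worse standard or fail to settle at all.

```python
# A toy standards-coordination game (payoffs invented): both firms adopting
# A and both adopting B are Nash equilibria, so voluntary coordination can
# land on either standard - or fail to converge.
payoffs = {  # (firm1 choice, firm2 choice) -> (firm1 payoff, firm2 payoff)
    ("A", "A"): (3, 3),
    ("B", "B"): (2, 2),
    ("A", "B"): (0, 0),
    ("B", "A"): (0, 0),
}

def is_nash(c1, c2):
    p1, p2 = payoffs[(c1, c2)]
    # Neither firm can gain by unilaterally switching standards.
    best1 = all(p1 >= payoffs[(alt, c2)][0] for alt in "AB")
    best2 = all(p2 >= payoffs[(c1, alt)][1] for alt in "AB")
    return best1 and best2

equilibria = [cell for cell in payoffs if is_nash(*cell)]
print(equilibria)  # [('A', 'A'), ('B', 'B')] - multiple equilibria, so the
                   # market is not guaranteed to coordinate on the better one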

But the analysis cannot stop here: just because a market might not be functioning well and does not currently provide some form of interoperability, we cannot assume that it would provide interoperability if it were functioning well.

Interoperability for Digital Platforms

Since we know that many clearly functional markets and products do not provide all forms of interoperability that we could imagine them providing, it is perfectly possible that many badly functioning markets and products would still not provide interoperability, even if they did not suffer from whatever has obstructed competition or effective coordination in that market. In these cases, imposing interoperability would destroy value.

It would therefore be a mistake to assume that more interoperability in digital markets would be better, even if you believe that those digital markets suffer from too little competition. Let’s say, for the sake of argument, that Facebook/Meta has market power that allows it to keep its subsidiary WhatsApp from being interoperable with other competing services. Even then, we still would not know if WhatsApp users would want that interoperability, given the trade-offs.

A look at smaller competitors like Telegram and Signal, which we have no reason to believe have market power, demonstrates that they also are not interoperable with other messaging services. Signal is run by a nonprofit, and thus has little incentive to obstruct users for the sake of market power. Why does it not provide interoperability? I don’t know, but I would speculate that the security risks and technical costs of doing so outweigh the expected benefit to Signal’s users. If that is true, it seems strange to assume away the potential costs of making WhatsApp interoperable, especially if those costs may relate to things like security or product design.

Interoperability and Contact-Tracing Apps

A full consideration of the trade-offs is also necessary to evaluate the lawsuit that Coronavirus Reporter filed against Apple. Coronavirus Reporter was a COVID-19 contact-tracing app that Apple rejected from the App Store in March 2020. Its makers are now suing Apple for, they say, stifling competition in the contact-tracing market. Apple’s defense is that it only allowed COVID-19 apps from “recognised entities such as government organisations, health-focused NGOs, companies deeply credentialed in health issues, and medical or educational institutions.” In effect, by barring it from the App Store, and offering no other way to install the app, Apple denied Coronavirus Reporter interoperability with the iPhone. Coronavirus Reporter argues Apple should be punished for doing so.

No doubt, Apple’s decision did reduce competition among COVID-19 contact tracing apps. But increasing competition among COVID-19 contact-tracing apps via mandatory interoperability might have costs in other parts of the market. It might, for instance, confuse users who would like a very straightforward way to download their country’s official contact-tracing app. Or it might require access to certain data that users might not want to share, preferring to let an intermediary like Apple decide for them. Narrowing choice like this can be valuable, since it means individual users don’t have to research every single possible option every time they buy or use some product. If you don’t believe me, turn off your spam filter for a few days and see how you feel.

In this case, the potential costs of the access that Coronavirus Reporter wants are obvious: while it may have had the best contact-tracing service in the world, sorting it from other, less reliable and/or less scrupulous apps may have been difficult, and the risk to users may have outweighed the benefits. As Apple and Facebook/Meta constantly point out, the security risks involved in making their services more interoperable are not trivial.

It isn’t competition among COVID-19 apps that is important, per se. As ever, competition is a means to an end, and maximizing it in one context—via, say, mandatory interoperability—cannot be judged without knowing the trade-offs that maximization requires. Even if we thought of Apple as a monopolist over iPhone users—ignoring the fact that Apple’s iPhones obviously are substitutable with Android devices to a significant degree—it wouldn’t follow that the more interoperability, the better.

A ‘Super Tool’ for Digital Market Intervention?

The Coronavirus Reporter example may feel like an “easy” case for opponents of mandatory interoperability. Of course we don’t want anything calling itself a COVID-19 app to have totally open access to people’s iPhones! But what’s vexing about mandatory interoperability is that it’s very hard to sort the sensible applications from the silly ones, and most proposals don’t even try. The leading U.S. House proposal for mandatory interoperability, the ACCESS Act, would require that platforms “maintain a set of transparent, third-party-accessible interfaces (including application programming interfaces) to facilitate and maintain interoperability with a competing business or a potential competing business,” based on APIs designed by the Federal Trade Commission.

The only nod to the costs of this requirement comes in provisions that further require platforms to set “reasonably necessary” security standards, and a provision to allow the removal of third-party apps that don’t “reasonably secure” user data. No other costs of mandatory interoperability are acknowledged at all.
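To make the point concrete, here is a purely hypothetical sketch of the kind of "third-party-accessible interface" the Act's language seems to contemplate, with the "reasonably secure" requirement reduced to a token check; the endpoint, token scheme, and data shape are all invented.

```python
# A purely hypothetical sketch of a mandated interoperability interface: a
# platform-exposed API that a "competing business" can call, gated by a
# token check standing in for the Act's "reasonably secure" requirement.
# The endpoint, tokens, and data shape are invented for illustration.
from http.server import BaseHTTPRequestHandler, HTTPServer
import json

AUTHORIZED_COMPETITORS = {"token-abc123"}  # vetted third parties (invented)

class InteropAPI(BaseHTTPRequestHandler):
    def do_GET(self):
        token = self.headers.get("Authorization", "")
        if token not in AUTHORIZED_COMPETITORS:
            self.send_response(403)  # the "reasonably secure" gate
            self.end_headers()
            return
        if self.path == "/interop/v1/user-graph":
            body = json.dumps({"user": "u1", "connections": ["u2", "u3"]})
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body.encode())
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), InteropAPI).serve_forever()
```

Even this toy raises the questions the Act leaves to the FTC: who designs the schema, who vets the tokens, and who bears the cost when the interface constrains a product change.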

The same goes for the even more substantive proposals for mandatory interoperability. Released in July 2021, “Equitable Interoperability: The ‘Super Tool’ of Digital Platform Governance” is co-authored by some of the most esteemed competition economists in the business. While it details obscure points about matters like how chat groups might work across interoperable chat services, it is virtually silent on any of the costs or trade-offs of its proposals. Indeed, the first “risk” the report identifies is that regulators might be too slow to impose interoperability in certain cases! It reads like interoperability has been asked what its biggest weaknesses are in a job interview.

Where the report does acknowledge trade-offs—for example, interoperability making it harder for a service to monetize its user base, who can just bypass ads on the service by using a third-party app that blocks them—it just says that the overseeing “technical committee or regulator may wish to create conduct rules” to decide.

Ditto with the objection that mandatory interoperability might limit differentiation among competitors – like, for example, how imposing the old micro-USB standard on Apple might have stopped us from getting the Lightning port. Again, they punt: “We recommend that the regulator or the technical committee consult regularly with market participants and allow the regulated interface to evolve in response to market needs.”

But if we could entrust this degree of product design to regulators, weighing the costs of a feature against its benefits, we wouldn’t need markets or competition at all. And the report just assumes away many other obvious costs: “the working hypothesis we use in this paper is that the governance issues are more of a challenge than the technical issues.” Despite its illustrious panel of co-authors, the report fails to grapple with the most basic counterargument possible: its proposals have costs as well as benefits, and it’s not straightforward to decide which is bigger than which.

Strangely, the report includes a section that “looks ahead” to “Google’s Dominance Over the Internet of Things.” This, the report says, stems from the company’s “market power in device OS’s [that] allows Google to set licensing conditions that position Google to maintain its monopoly and extract rents from these industries in future.” The report claims this inevitability can only be avoided by imposing interoperability requirements.

The authors completely ignore that a smart home interoperability standard has already been developed, backed by a group of 170 companies that include Amazon, Apple, and Google, as well as SmartThings, IKEA, and Samsung. It is open source and, in principle, should allow a Google Home speaker to work with, say, an Amazon Ring doorbell. In markets where consumers really do want interoperability, it can emerge without a regulator requiring it, even if some companies have apparent incentive not to offer it.

If You Build It, They Still Might Not Come

Much of the case for interoperability interventions rests on the presumption that the benefits will be substantial. It’s hard to know how powerful network effects really are in preventing new competitors from entering digital markets, and none of the more substantial reports cited by the “Super Tool” report really tries to find out.

In reality, the cost of switching among services or products is never zero. Simply pointing out that particular costs—such as network effect-created switching costs—happen to exist doesn’t tell us much. In practice, many users are happy to multi-home across different services. I use at least eight different messaging apps every day (Signal, WhatsApp, Twitter DMs, Slack, Discord, Instagram DMs, Google Chat, and iMessage/SMS). I don’t find it particularly costly to switch among them, and have been happy to adopt new services that seemed to offer something new. Discord has built a thriving 150-million-user business, despite these switching costs. What if people don’t actually care if their Instagram DMs are interoperable with Slack?

None of this is to argue that interoperability cannot be useful. But it is often overhyped, and it is difficult to do in practice (because of those annoying trade-offs). After nearly five years, Open Banking in the UK—cited by the “Super Tool” report as an example of what it wants for other markets—still isn’t really finished in terms of functionality. It has required an enormous amount of time and investment by all parties involved and has yet to deliver obvious benefits in terms of consumer outcomes, let alone greater competition among the current accounts that have been made interoperable with other services. (My analysis of the lessons of Open Banking for other services is here.) Phone-number portability, which is also cited by the “Super Tool” report, is another example of how hard even simple interventions can be to get right.

The world is filled with cases where we could imagine some benefits from interoperability but choose not to have them, because the costs are greater still. None of this is to say that interoperability mandates can never work, but their benefits can be oversold, especially when their costs are ignored. Many of mandatory interoperability’s more enthusiastic advocates should remember that such trade-offs exist—even for policies they really, really like.


Issue Brief: The Great Transatlantic Data Disruption

ICLE Issue Brief A new issue brief published jointly by ICLE and the Progressive Policy Institute looks at looming threats to transatlantic data flows between the U.S. and EU that power an estimated $333 billion in annual trade of digitally enabled services.

(This issue brief is a joint publication of the International Center for Law & Economics and the Progressive Policy Institute)

Executive Summary

Data is, logically enough, one of the pillars supporting the modern digital economy. It is, however, not terribly useful on its own. Only once it has been collected, analyzed, combined, and deployed in novel ways does data obtain its highest utility. This is to say, a large part of the value of data is its ability to flow throughout the global connected economy in real time, permitting individuals and firms to develop novel insights that would not otherwise be possible, and to operate at a higher level of efficiency and safety.

Although the global transmission of data is critical to every industry and scientific endeavor, those data flows increasingly run into barriers of various sorts when they seek to cross national borders. Most typically, these barriers take the form of data-localization requirements.

Data localization is an umbrella term that refers to a variety of requirements that nations set to govern how data is created, stored, and transmitted within their jurisdiction. The aim of data-localization policies is to restrict the flow of data across a nation’s borders, often justified on grounds of protecting national security interests and/or sensitive information about citizens.

Data-localization requirements have in recent years been at the center of a series of legal disputes between the United States and the European Union (EU) that potentially threaten the future of transatlantic data flows. In October 2015, in a decision known as Schrems I, the Court of Justice of the European Union (CJEU) overturned the International Safe Harbor Privacy Principles, which had for the prior 15 years governed customer data transmitted between the United States and the EU. The principles were replaced in February 2016 by a new framework agreement known as the EU–US Privacy Shield, until the CJEU declared that, too, to be invalid in a July 2020 decision known as Schrems II. (Both complaints were brought by Austrian privacy advocate Max Schrems.)

The current threatened disruption to transatlantic data flows highlights the size of the problem caused by data-localization policies. According to one estimate, transatlantic trade generates upward of $5.6 trillion in annual commercial sales, of which at least $333 billion is related to digitally enabled services.[3] Some estimates suggest that moderate increases in data-localization requirements would result in a €116 billion reduction in exports from the EU.

One difficulty in precisely quantifying the full impact of strict data-localization practices is that the list of industries engaged in digitally enabled trade extends well beyond those that explicitly trade in data. This is because “it is increasingly difficult to separate services and goods with the rise of the ‘Internet of Things’ and the greater bundling of goods and services. At the same time, goods are being substituted by services … further shifting the regulatory boundaries between what is treated as goods and services.” Thus, there is reason to believe that the true value of digitally enabled trade to the global economy is underestimated.

Moreover, as we discuss infra, there is reason to suspect that data flows and digitally enabled trade have contributed a good deal of unmeasured economic activity that partially offsets the lower-than-expected measured productivity growth seen in both the European Union and the United States over the last decade and a half. In particular, heavy investment in research and development by firms globally has facilitated substituting the relatively more efficient work of employees at firms for unpaid labor by individuals. And global data flows have facilitated the creation of larger, more efficient worldwide networks that optimize time use by firms and individuals, and the development of resilient networks that can withstand shocks to the system like the COVID-19 pandemic.

In the Schrems II decision, the court found that provisions of U.S. national security law and the surveillance powers it grants to intelligence agencies do not protect the data of EU citizens sufficiently to justify deeming U.S. laws as providing adequate protection (known as an “adequacy” decision). In addition to a national “adequacy” decision, the EU General Data Protection Regulation (GDPR) also permits firms that wish to transfer data to the United States to rely on “standard contractual clauses” (SCC) that guarantee protection of citizen data. However, a prominent view in European policy circles—voiced, for example, by the European Parliament—is that, after Schrems II, no SCC can provide a lawful basis for data transfers to the United States.

Shortly after the Schrems II decision, the Irish Data Protection Commission (IDPC) issued a preliminary draft decision against Facebook that proposed to invalidate the company’s SCCs, largely on the same grounds that the CJEU used when invalidating the Privacy Shield. This matter is still pending, but a decision from the IDPC is expected imminently, with the worst-case result being an order that Facebook suspend all transatlantic data transfers that depend upon SCCs. Narrowly speaking, the IDPC decision only immediately affects Facebook. However, if the draft decision is finalized, the SCCs of every other firm that transfers data across the Atlantic may be subject to invalidation under the same legal reasoning.

Although this increasingly restrictive legal environment for data flows has been building for years, the problems are now breaking into public view, as national DPAs grapple with the language of the GDPR and the Schrems decisions. The Hamburg DPA recently issued a public warning that the use of the popular video-conference application Zoom violates GDPR. The Portuguese DPA issued a resolution forbidding its National Institute of Statistics from transferring census data to the U.S.-based Cloudflare, because the SCCs in the contract between the two entities were deemed insufficient in light of Schrems II.

The European Data Protection Supervisor has initiated a program to “monitor compliance of European institutions, bodies, offices and agencies (EUIs) with the ‘Schrems II’ Judgement.” As part of this program, it opened an investigation into Amazon and Microsoft in order to determine if Microsoft’s Office 365 and the cloud-hosting services offered by both Amazon and Microsoft are compatible with GDPR post-Schrems II. Max Schrems, who brought the original complaint against Facebook, has, through his privacy-activist group, submitted at least 100 complaints in August 2020 alone, which will undoubtedly result in scores of cases across multiple industries.

The United States and European Union are currently negotiating a replacement for the Privacy Shield agreement that would allow data flows between the two economic regions to continue. But EU representatives have warned that, in order to comply with GDPR, nontrivial legislative changes will likely be necessary in the United States, particularly in the sensitive area of national-security monitoring. In effect, the European Union and the United States are being forced to rethink the boundaries of national law in the context of a digital global economy.

This issue brief first reviews the relevant literature on the importance of digital trade, as well as the difficulties in adequately measuring it. One implication of these measurement difficulties is that the impact of disruptions to data flows and digital trade is likely to be far greater than even the large effects discovered through traditional measurement suggest.

We then discuss the importance of network resilience, and the productivity or quasi-productivity gains that digital networks and data flows provide. After a review of the current policy and legal challenges facing digital trade and data flows, we finally urge the U.S. and EU negotiating parties to consider longer-term trade and policy changes that take seriously the role of data flows in the world economy.

Read the full issue brief here.


What You Need to Know About the EU’s New AI Regulation

TOTM The European Commission this week published its proposed Artificial Intelligence Regulation, setting out new rules for “artificial intelligence systems” used within the European Union.

The European Commission this week published its proposed Artificial Intelligence Regulation, setting out new rules for “artificial intelligence systems” used within the European Union. The regulation—the commission’s attempt to limit pernicious uses of AI without discouraging its adoption in beneficial cases—casts a wide net in defining AI to include essentially any software developed using machine learning. As a result, a host of software may fall under the regulation’s purview.
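To illustrate how wide that net is (our toy, not the Commission's): under a definition that sweeps in machine-learning and statistical approaches generally, even a few lines of home-grown statistical learning like the keyword perceptron below would arguably qualify as an "artificial intelligence system"; the training data and features are invented.

```python
# An illustrative toy: a tiny perceptron spam scorer trained on keyword
# features. Under a definition covering software developed using machine
# learning, even this arguably falls in scope. Data and features invented.

samples = [({"free": 1, "meeting": 0}, 1),   # spam
           ({"free": 0, "meeting": 1}, 0),   # not spam
           ({"free": 1, "meeting": 1}, 1),
           ({"free": 0, "meeting": 0}, 0)]
weights = {"free": 0.0, "meeting": 0.0}
bias = 0.0

for _ in range(10):                     # a handful of perceptron passes
    for features, label in samples:
        score = bias + sum(weights[k] * v for k, v in features.items())
        pred = 1 if score > 0 else 0
        for k, v in features.items():   # update weights on mistakes
            weights[k] += (label - pred) * v
        bias += (label - pred)

print(weights, bias)
print("spam?", bias + weights["free"] > 0)  # classify a "free"-only email
```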

Read the full piece here.


Why Data Interoperability Is Harder Than It Looks: The Open Banking Experience

Scholarship Many people hope that data interoperability can increase competition by making it easier for customers to switch and multi-home across different products.

Many people hope that data interoperability can increase competition, by making it easier for customers to switch and multi-home across different products. The UK’s Open Banking is the most important example of such a remedy imposed by a competition authority, but the experience demonstrates that such remedies are unlikely to be straightforward. The experience of Open Banking suggests that such remedies should be applied with focus and patience, may require ongoing regulatory oversight to work, and may be best suited to particular kinds of market where, like retail banking, the products are relatively homogeneous. But even then, they may not deliver the outcomes that many hope for.

Data portability and interoperability tools allow customers to easily move their data between competing services, either on a one-off or an ongoing basis. Some see these tools as offering the potential to strengthen competition in digital markets; customers who feel locked in to services that they have provided data to might be more likely to switch to competitors if they could move that data more easily. This would be particularly true, advocates hope, where network effects grant existing services value that new rivals cannot emulate or where one of the barriers to switching services is the cost of re-entering personal data.

The UK’s Open Banking system is one of the most mature and important examples of this kind of policy in practice. As such, the UK’s experience to date may offer useful clues as to the potential for similar policies in other markets, for which the UK’s Furman Report has cited Open Banking as a model. But fans of interoperability sometimes gloss over the difficulties and limitations that Open Banking has faced, which are just as important as the potential benefits.
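For readers unfamiliar with the mechanics, here is a stylized sketch of the kind of ongoing access Open Banking enables; the endpoint path loosely mirrors the UK Open Banking read APIs, but the host, token, and response handling are invented.

```python
# A stylized sketch of ongoing Open Banking-style access: a licensed third
# party fetches a customer's account data over a standardized API instead
# of the customer re-keying it. The endpoint path is modeled loosely on the
# UK Open Banking read APIs; the host, token, and response shape are invented.
import json
import urllib.request

def fetch_accounts(access_token):
    req = urllib.request.Request(
        "https://api.examplebank.co.uk/open-banking/v3.1/aisp/accounts",
        headers={"Authorization": f"Bearer {access_token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# In the real scheme, the token comes from an OAuth consent flow in which
# the customer explicitly authorizes the third party's access, e.g.:
# accounts = fetch_accounts(token)["Data"]["Account"]
```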

In this article, I argue that Open Banking provides lessons that should both give hope to optimists about data portability and interoperability and temper some of the enthusiasm for applying them too broadly and readily.

I draw on my experiences as part of the team that produced the industry review “Open Banking: Preparing For Lift Off” in 2019. That report concluded that Open Banking, though promising, needed several additional reforms to succeed, a few of which I discuss in this piece. I was also the co-author of a white paper that argued for an Open Banking-like remedy in the UK’s retail electricity market, which I discuss briefly below. All views expressed here are my own.

I argue that there are three main lessons to draw from Open Banking for considerations of similar remedies in other markets:

  1. Implementation is difficult and iterative, and probably requires de facto regulatory oversight if it is to be implemented effectively, with all the attendant costs and risks that entails.
  2. The outcomes that interoperability produces may differ from those policymakers have in mind, and may not mean more switching of core services.
  3. If Open Banking does succeed, it will be thanks to features of the UK banking market that may not be present in other markets where similar interoperability is being proposed.

I conclude that Open Banking has not yet led to noticeably stronger competition in the UK banking sector. Implementation challenges suggest that taking an equivalent approach to other markets would require more time, investment and effort than many advocates of interoperability requirements usually concede and may not deliver the anticipated benefits. To the extent that Open Banking is to be a model, it would be best applied as a focused approach in markets that bear particular characteristics and where the costs are outweighed by the benefits, rather than a blanket measure that can be applied to every market where customer data matters.

Read the full white paper here.


The Digital Markets Act

TL;DR The European Union has unveiled draft legislation that seeks to tame so-called “gatekeeper” Big Tech firms. If passed into law, this Digital Markets Act (“DMA”) would create a list of “dos and don’ts” by which the platforms must abide, such as allowing interoperability with third parties and sharing data with rivals.

Background…

The European Union has unveiled draft legislation that seeks to tame so-called “gatekeeper” Big Tech firms. If passed into law, this Digital Markets Act (“DMA”) would create a list of “dos and don’ts” by which the platforms must abide, such as allowing interoperability with third parties and sharing data with rivals. In short, the DMA would give the European Commission significant powers to tell tech companies how to run their businesses.

But…

The DMA essentially shifts competition enforcement against gatekeeper platforms away from an “effects” analysis that weighs costs and benefits to a “blacklist” approach that proscribes all listed practices as harmful. This will constrain platforms’ ability to experiment with new products and make changes to existing ones, limiting their ability to innovate and compete.

Read the full explainer here.


The DOJ’s Antitrust Case Against Google

TL;DR The Department of Justice and a few Republican state attorneys general have filed an antitrust suit against Google. But… The DOJ case will struggle.

Background…

The Department of Justice and a few Republican state attorneys general have filed an antitrust suit against Google. The complaint alleges that Google’s deals with Android smartphone manufacturers, Apple, and third-party browsers to make Google Search their default general search engine are anticompetitive, harming consumers by denying Google’s competitors the scale and data they need to compete.

But… 

The DOJ case will struggle. Nothing in these deals limits the ability of users to switch from Google to another search engine if they want to, and switching is trivially easy. Nor do the deals constrain Android smartphone makers from pre-installing competing search engines alongside Google. In fact, consumers benefit from these deals because they mean lower handset prices and greater incentive for Google to invest in Android. Moreover, the competition among general search engines to secure these default positions isn’t constrained by Google, and that competition should encourage all search providers to invest in their products.

Read the full explainer here.
