New Merger Guidelines Are As Expected. That’s Not a Compliment.

Fifteen months after the close of the comment period, we finally have the release of the draft merger guidelines by the Federal Trade Commission (FTC) and the U.S. Justice Department (DOJ).

While there is a lot to digest in the 51-page document with over 100 (largely stale) footnotes, the broad picture is clear: the goal of this document is to stop more mergers. Period.

What Is a Barrier to Entry?

Why do monopolies exist? Many textbooks point to barriers to entry as a cause of monopolies.

Tyler Cowen and Alex Tabarrok’s textbook says: “In addition to patents, government regulation and economies of scale, monopolies may be created whenever there is a significant barrier to entry, something that raises the cost to new firms of entering the industry.” Greg Mankiw’s textbook goes as far as to say: “The fundamental cause of monopoly is barriers to entry.”

ICLE Response to the FTC’s Cloud Computing RFI

Introduction

The cloud-computing industry has undergone a transformation in recent years, driven by innovation, competition, and unprecedented demand for information-technology (IT) services. These comments assess the state of competition in this burgeoning field, and we thank the Federal Trade Commission (FTC) for the opportunity to respond to this request for information (RFI).

Competition among industry players within cloud computing is intense. It is crucial, however, to remember that, as ubiquitous as cloud-service providers might be, they must compete not just with each other but also with the internal IT capabilities of large enterprises. In other words, while the cloud-computing sector has been growing in importance within the IT ecosystem, it remains just one aspect of that ecosystem. Traditional, on-premises IT infrastructure continues to hold sway within many businesses, with internal IT teams designing solutions uniquely tailored to the specific needs of their organizations.

In this context, cloud providers present an attractive proposition. They offer companies the opportunity to take advantage of gains from specialization to outsource some or all of their IT services to expert entities. This decision between outsourcing and maintaining in-house operations is a typical business consideration, and its outcome will vary depending on a particular company’s individual capabilities. Nonetheless, it is clear that the advent of cloud computing has significantly expanded the range of available IT options.

While both the fast-moving nature of the cloud-computing industry and its intense competition have catalyzed numerous benefits, there are also some reasons for concern. For example, the shortage of computer chips has affected many segments of IT, including cloud computing. Despite this near-term headwind, the market’s general trends remain positive. These include an explosion in the variety of available software services and breakthroughs in hardware, both accompanied by a dramatic fall in prices.

Given this energetic landscape, it is crucial that regulatory bodies like the FTC exercise caution in any potential interventions. After all, this vibrant and rapidly evolving industry stands as testament to the power of competitive forces to drive progress and deliver value.

I.        The Evolving Landscape of Cloud Computing

The digital landscape is undergoing a profound transformation, as businesses around the globe increasingly transition from on-premises IT solutions to cloud services. This shift represents a significant evolution in the way that organizations manage their data, execute their operations, and leverage technology to gain competitive advantage. The market’s competitive dynamics are frequently misunderstood, however, which often leads to misconceptions about the role and impact of cloud computing in the broader IT ecosystem.

Traditionally, cloud computing has been divided into three “layers”:

  1. Infrastructure as a Service (IaaS) offers virtualized computing resources via the internet. It comprises the essential components of computing infrastructure, such as virtual machines (VMs), storage, and networking. In an IaaS environment, organizations retain greater control and responsibility for managing operating systems, runtime environments, and applications that run on the infrastructure.
  2. Platform as a Service (PaaS) provides developers with a platform and environment for the development, testing, and deployment of applications. PaaS encompasses a runtime environment, development tools, and various services like databases, messaging queues, and identity management. By abstracting the underlying infrastructure, PaaS allows developers to focus on building applications without the burden of infrastructure management.
  3. Software as a Service (SaaS) delivers software applications over the internet. SaaS enables users to access and use applications directly, without the need for installation or maintenance. The service provider assumes responsibility for managing the underlying infrastructure, platform, and application stack, providing users with a hassle-free experience. This is the most common way that users interact with the cloud, even if they are unaware of it.

One misconception about cloud computing is that it is a novel technology dominated by the “big three” companies of Amazon, Google, and Microsoft. In fact, cloud computing is merely one component of IT services, which used to be provided exclusively on-premises. Investments in cloud computing still represent a relatively small portion of global IT spending, with one report putting the total at 7%,[1] while another suggests it may be as much as 12%.[2] Whatever the precise figure, there clearly remains a sizeable opportunity for the sector to grow.

It’s also important to remember that before the advent of cloud computing, the IT landscape was dominated by a different set of players, some of which—including IBM, Hewlett-Packard, and Oracle—remain prominent today. It is therefore critical to acknowledge that cloud services have not replaced these entities, but have instead expanded the market and introduced new competitors and service offerings.

If we narrow our focus from all cloud-computing services to one of its three layers, such as IaaS, we can see that it is teeming with competition. Numerous competitors—including Amazon, Google, Alibaba, Microsoft, IBM, OVHcloud, Digital Ocean, Oracle, Deutsche Telekom, Huawei, and others—all vie for consumers. According to industry reports, in 2021 alone, these competitors showcased remarkable growth, with Microsoft growing 51%, Alibaba 42%, Google 64%, and Huawei 56%.[3]

Amid this robust competition, the dominance of established players like Amazon’s AWS has been declining. According to Gartner data for IaaS, AWS’s market share dipped from 45% in 2019[4] to 39% in 2021,[5] signaling a continuing evolution in the industry’s competitive dynamics. If we expand the market and look at IaaS, PaaS, and hosted private-cloud services (which are a subset of IaaS), Amazon’s market share has been steady, while Microsoft and Google have made huge gains in the past few years (see Figure 1 below).[6] These are exactly the dynamics we would expect from a vibrant industry: firms succeeding in some segments but not others, and market shares shifting over time.

It is important to note that these “shares” are for the broad, colloquial sense of “a market,” and not for a relevant market in the antitrust sense. But even assuming, for the sake of argument, that it were a relevant market, concentration would not appear to be a concern. According to Synergy Research Group’s Q1 2023 numbers for IaaS, Amazon had a 32% market share, with Microsoft at 23%, Google at 10%, Alibaba at 4%, and IBM at 3%.[7] If we consider all other firms in the market to be a single entity, the highest possible Herfindahl-Hirschman Index (HHI) for this market (a proxy for all cloud computing) would be 2462.

Even though that is a large overestimate of the true market concentration, it still produces an HHI that is in the “moderately concentrated” range, according to the 2010 Merger Guidelines.[8] If the remaining 28% of the market were divided up among 28 firms, the HHI would drop to 1706. And neither of these figures accounts for the vast swath of IT spending that occurs outside the cloud, which suggests that competition in the market is far more vigorous than the HHI would imply.
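
The HHI figures above follow from simple arithmetic: the index is the sum of squared market shares, expressed in percentage points. A minimal Python sketch reproduces both numbers from the Synergy shares cited above, first treating the residual 28% as a single firm and then splitting it among 28 firms of 1% each:

    # HHI = sum of squared market shares (in percentage points)
    shares = [32, 23, 10, 4, 3]           # AWS, Microsoft, Google, Alibaba, IBM (Synergy, Q1 2023)
    residual = 100 - sum(shares)          # 28% held by all other firms

    hhi_upper_bound = sum(s ** 2 for s in shares) + residual ** 2
    print(hhi_upper_bound)                # 2462: residual treated as one firm

    hhi_fragmented = sum(s ** 2 for s in shares) + residual * 1 ** 2
    print(hhi_fragmented)                 # 1706: residual split among 28 firms with 1% each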

By contrast, it is difficult even to conceive of the SaaS layer of cloud computing as a “market” in any meaningful sense. SaaS comprises an extremely varied set of productivity and collaboration tools, such as Microsoft Office 365, Google Workspace (formerly G Suite), and Slack; content management systems (CMS) like WordPress, Wix, and Squarespace; video-conferencing and communication platforms, such as Zoom, Microsoft Teams, and Slack; and cloud-gaming platforms like Microsoft xCloud and PlayStation Now. Like IaaS, SaaS has experienced dramatic expansion, with more than 30,000 providers in operation. Major players include most of the already mentioned companies, as well as industry giants like Cisco, Dell, Salesforce, Databricks, Heroku, Snowflake, Adobe, and Atlassian, among others.

To answer RFI Question #1 regarding the extent to which cloud providers specialize within a layer or operate at multiple layers, all of the major players in IaaS and PaaS (Amazon, Microsoft, Google, etc.) also offer SaaS, but not the other way around. SaaS is a much larger layer, in terms of both the number of companies and amount of revenue. It is the largest cloud-computing segment and has experienced exponential growth, with cloud-software sales escalating from $31 billion in 2015 to an impressive $103 billion in 2020 (see Figure 2 below).

Other research confirms the dominance of SaaS within cloud computing. Grand View Research finds: “The SaaS segment dominated the industry in 2022 and accounted for the highest share of more than 53.95% of the overall revenue.”[9] Other analyses find similar divisions among the layers.[10]

II.      Competitive Dynamics and Innovation in Cloud Computing

RFI Question #3 asks: “What are the competitive dynamics within and across the different layers of cloud computing?” These will vary by layer. In particular, any analysis of competition within SaaS would have to examine a particular subset of the layer. The subset of personal-storage services, for example, sees competition among Dropbox, Apple’s iCloud, Microsoft’s OneDrive, Google Drive, and many others. For video conferencing, we have competition among Zoom, Microsoft’s Teams, Google Meet, Apple’s FaceTime, Cisco’s Webex, and more.

Question #3 continues: “How does service quality vary between providers operating at one layer vs. providers operating at multiple layers?” While we cannot say much about the competitive dynamics within SaaS overall, as it is not a single, coherent market, we can work through the implications between SaaS and the other layers. Some of the major players in SaaS also provide IaaS and PaaS. Such multi-layer providers are not the norm, however. For example, Zoom (like most SaaS companies) does not provide the other layers, so it may not exert direct influence in those layers. SaaS is simply much broader. To the extent there is a competitive connection between SaaS and the other layers, it is indirect and manifests through demand for other services. In the other direction, falling prices for IaaS and PaaS increase competition among SaaS providers.

A.      Price Trends for Cloud Services

Beyond the newsworthy stories of big companies switching cloud providers, we see aggregate-level outcomes that indicate competition. Prices are dropping (with a recent exception that we discuss below) and quantity is increasing, both at rapid rates. We have already documented the rapid rise in revenue generated by these markets. Industry forecasts continue to predict significant growth in the coming years. Gartner, for example, forecasts 23% growth for 2023 alone.[11] These revenue-growth trends are particularly remarkable in the face of rapidly falling prices.

RFI Question #7 asks: “What are the trends in pricing practices used by cloud providers?” According to Amazon’s blog, the company reduced prices 107 times between AWS’s launch in 2006 and 2021.[12] For one comparison, in November 2010, the cost of Amazon’s “Simple Storage Service” (S3) was cut to $0.140 per GB per month.[13] In May 2023, the monthly cost was $0.023 per GB, a drop of more than 80% in roughly 12.5 years.[14] Over the same period, the consumer price index rose nearly 40%.[15] Google Cloud’s standard storage prices are similarly at $0.020-$0.023 per GB per month.[16]
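
As a rough illustration of what those figures imply, the cited price cut corresponds to an average annual decline of roughly 13-14%, compounded over the period. A back-of-the-envelope Python sketch, using only the prices and dates cited above and treating the interval as 12.5 years:

    # Implied average annual decline in S3 standard storage prices, Nov. 2010 to May 2023
    p_2010, p_2023 = 0.140, 0.023                        # $ per GB per month, as cited above
    years = 12.5

    total_drop = 1 - p_2023 / p_2010                     # ~0.836, i.e., "more than 80%"
    annual_rate = 1 - (p_2023 / p_2010) ** (1 / years)
    print(f"total decline: {total_drop:.1%}")            # ~83.6%
    print(f"implied annual decline: {annual_rate:.1%}")  # ~13.5% per year, compounded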

Byrne, Corrado, & Sichel conduct the most systematic study of AWS prices, but only for the period 2009-2016.[17] Looking at storage (S3), database management (RDS), and computing services (EC2), they found:

prices for S3 storage fall at an average annual rate of more than 17 percent over the full sample. Over sub-periods, the pattern is the same as that for EC2 prices. Prices fell at an annual average rate of about 12 percent from the beginning of 2009 to the end of 2013. Then, in early 2014, just as Microsoft had entered the market to sufficient degree that they were posting their cloud prices on the Internet, AWS began cutting prices more rapidly. That started with the big price drop in early 2014, and over the period from the start of 2014 to the end of 2016, S3 prices fell at an average annual rate of about 25 percent.[18]

The timing and magnitude of price drops was similar for Amazon’s RDS, a data-management system that involves both storage and computing abilities. Overall, “quality-adjusted prices for RDS instances fall at an average annual rate of more than 11 percent over the full sample.”[19]

RFI Question #4 asks: “What practices do cloud providers use to enhance or secure their position in any layer of cloud computing?” Lowering prices, especially for storage, is a major way that cloud providers compete. Examining the timing of the most extreme drops in AWS’s prices, it is clear that competition from Microsoft pushed down prices at AWS.

The story is more complicated when it comes to on-demand compute instances. Each cloud-computing provider offers many different tiers of instances, depending on the customer’s needs for memory, network performance, operating systems, and other criteria. Moreover, those tiers and offerings have changed over time, so any price comparison needs to be quality-adjusted.

Again, the systematic analysis by Byrne, Corrado, & Sichel for 2009-2016 shows a longer-term decline in prices.[20] They find:

quality-adjusted prices for EC2 instances fall at an average annual rate of about 7 percent over the full sample. Interestingly, prices fell at an annual average rate of about 5 percent from the beginning of 2009 to the end of 2013. Then, in early 2014, just as Microsoft had entered the market to sufficient degree that they were posting their cloud prices on the Internet (and shortly before Google started doing the same), AWS began cutting prices more rapidly. That started with the big price drop in early 2014, and over the period from the start of 2014 to the end of 2016, EC2 prices fell at an average annual rate of 10.5 percent.

More recently, some industry reports suggest that computing prices might not have fallen. For example, Liftr Insights estimates there was a 2.5% increase in 2022 for cloud-instance prices—though their methodology is unclear, especially around adjusting for quality improvements and the introduction of new products over time. In any case, this number should be taken with a pinch of salt. The study shows significant variation by provider.[21] For example, there was a 23.0% increase in 2022 in average prices for on-demand compute instances at AWS, while Azure saw a 9.1% decline.[22] These price variations suggest there may be significant price dispersion in the market, or that there were important and asymmetrical product-quality variations during the observed time period. In either case, the diverging price paths suggest that the report may be missing important parameters of competition, or simply that its price measures are not accurate.

Even on its own terms, the Liftr report does not paint an unambiguously negative picture of cloud competition over recent years. First, AWS still has lower prices than competitors. As Liftr Insights writes in their news release, “despite all these increases and decreases in prices, Azure prices have been higher (on average) than AWS prices for three years.”[23] This suggests that AWS may have cut prices at an unsustainable pace that competitors did not match, and later decided to reverse course.

A more important factor to keep in mind is that the recent period of flat or rising prices is not unique to cloud computing. A January 2022 report that calculated the cost of computations by looking at the price of computing hardware found:

The price of computations in gigaFLOPS has not decreased since 2017. Similarly, cloud GPU prices have remained constant for Amazon Web Services since at least 2017 and Google Cloud since at least 2019. Although more advanced chips have been introduced in that time—with the primary example being Nvidia’s A100 GPU, released in 2020—they only offer five percent more FLOPS per dollar than the V100 that was released in 2017.[24]

For all of its success, the cloud-computing industry is not immune to the combination of widespread demand for chips and more recent supply shortages. The supply of the chips needed to run computations has barely kept up with demand, which has caused prices to remain flat.

B.      Customer Exit Options

While falling prices are evidence of strong competitive pressures within an industry, one may be concerned about barriers to switching service providers that, as RFI Question #10 suggests, may serve as a form of customer lock-in.

The major SaaS companies can still exert competitive pressure on the other layers through entry and exit. For example, Dropbox decided in 2017 to leave AWS and build its own infrastructure,[25] illustrating two things that matter for thinking about customer lock-in. First, it shows that exit is an option for customers. As explained above, cloud companies always compete with on-premises servers. Second, there is real harm to Amazon’s bottom line when it does not satisfy customers. Dropbox is a company with $2 billion in annual revenue.[26] While we do not know the nature of the Dropbox-Amazon negotiations, it seems implausible that Dropbox’s IT managers simply went on the AWS website to buy storage. Companies of Dropbox’s size will negotiate on price and quality/reliability terms. In this case, Dropbox decided its own servers would be superior.

On the flip side, cloud providers also attract large customers by convincing them that increased reliability and decreased costs justify a switch from on-premises servers to the cloud. Netflix migrated some services to AWS after originally having only its own servers. It partnered with Amazon despite Amazon Prime Video being a competitor to Netflix in the streaming-video space.[27] Importantly, Netflix was able to partially exit and to mix and match its own Open Connect with AWS to ensure that its streaming “never goes down,” as The Verge put it.

This sort of user entry and exit is to be expected from a healthy market. The turnover in these contracts also speaks to the influence that the IaaS and PaaS layers can have on SaaS.[28] IaaS can greatly improve SaaS systems’ offerings, but they always have to compete with the option of on-premises servers.

Do these examples illustrate that switching among different cloud providers is as easy as flipping a light switch? No; moving data is neither immediate nor free. There are real time and financial costs involved, and we should ultimately expect prices to reflect those underlying costs. Back in 2015, moving 100 terabytes of data from an on-premises server to an AWS server could take 100 days. Amazon subsequently introduced a service called “Snowball” that involves physically shipping rugged storage appliances between customers and Amazon’s data centers. With this service, that same transfer could take only two days.[29]
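
The 100-day figure is roughly what network arithmetic would predict. A simple Python sketch, assuming a sustained wide-area throughput of about 100 Mbps (an illustrative assumption; actual enterprise links in 2015 varied widely):

    # Approximate time to move 100 TB over a sustained 100 Mbps link (assumed rate)
    data_bits = 100e12 * 8        # 100 TB (decimal terabytes) expressed in bits
    throughput_bps = 100e6        # 100 Mbps sustained; an illustrative assumption
    seconds = data_bits / throughput_bps
    print(seconds / 86400)        # ~92.6 days, on the order of the 100 days cited above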

For many SaaS-storage options, such as Dropbox and Google Drive, the cost of moving data is not explicitly charged, as personal files are usually too small to bother charging for transfers, provided that the data is not moving across the country to a different data center. But for storage offered as part of IaaS, such as Google Virtual Private Cloud and Amazon S3, customers face explicit fees associated with moving data around.

Any transfer of data involves the data leaving one location (“egress traffic”) and entering another location (“ingress traffic”). Amazon,[30] Microsoft,[31] and Google[32] do not charge any fees for ingress traffic, even though incoming traffic is costly. Cloud providers instead recoup the costs of moving data through egress fees, including fees for moving within one provider or to somewhere else on the internet.

The pricing for egress fees varies depending on the type of transfer. For example, Google Cloud charges $0.01 per GiB[33] for an “Egress from a Google Cloud region in the US or Canada to another Google Cloud region in the US or Canada,” but $0.08 for an “Egress to a Google Cloud region on another continent (excludes Oceania).”[34] Amazon offers 100 GB of free transfer out to the internet each month, and thereafter charges $0.09 per GB for the first 10 TB per month.[35]
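
To put those rates in concrete terms, a hypothetical customer moving 5 TB out of AWS to the public internet in a single month would pay on the order of a few hundred dollars under the pricing cited above. A minimal sketch, treating 1 TB as 1,000 GB and ignoring tiers beyond the first 10 TB:

    # Illustrative AWS egress charge for 5 TB out to the internet in one month,
    # using the rates cited above: first 100 GB free, then $0.09 per GB (first 10 TB tier)
    transfer_gb = 5_000           # 5 TB, treated as 5,000 GB for simplicity
    free_gb = 100
    rate_per_gb = 0.09
    cost = max(transfer_gb - free_gb, 0) * rate_per_gb
    print(f"${cost:,.2f}")        # $441.00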

Beyond explicit egress charges for switching providers, policymakers may be concerned about other compatibility costs that could generate consumer lock-in. This is not a major issue for pure data storage or computing power. The major cloud providers allow users to run programs on open-source Linux instances. As one moves further from the commodity-like products of storage and computing, however, the switching costs become more real, depending on which precise service customers are using.

For example, for video editing, whether one is using editing software on one’s local machine or through cloud services, all the major editing software will input and output standardized files. If you are in the middle of an edit, however, that file format is often unique. Is that a switching cost? Probably not in any sense that is relevant to the FTC, but it is on par with the switching costs experienced once you enter a grocery store. The competitive pressures are to attract customers to enter the store, or to start using the software.

For less trivial examples, one could worry about the costs to a large company of switching from one cloud SQL-database provider (part of the PaaS layer) to another—e.g., from Amazon RDS to Microsoft Azure. SQL itself isn’t “open source” in the way that a software application or operating system might be. There are, however, numerous database systems that utilize SQL, and many of these are open source. Examples include MySQL, PostgreSQL, and SQLite; all are open-source relational database-management systems that use SQL as their standard language. Conversely, there are also proprietary, closed-source database systems that use SQL, such as Microsoft SQL Server and Oracle Database. No matter the system, again, changing providers is not as easy as flipping a light switch or dragging and dropping files on Google Drive.
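
To make the point concrete, consider a hedged sketch of what part of such a migration can look like in application code: a standard SQL query typically ports unchanged between open-source systems, while the driver and connection details change. The connection parameters below are hypothetical placeholders, and a real migration would also involve moving the data itself, rewriting any vendor-specific SQL, and retesting.

    # Hypothetical example: the same standard SQL query run against PostgreSQL and MySQL.
    # Only the driver and connection details differ; the query text is unchanged.
    QUERY = "SELECT customer_id, SUM(amount) FROM orders GROUP BY customer_id"

    import psycopg2                        # PostgreSQL driver
    pg_conn = psycopg2.connect(host="old-host", dbname="sales", user="app", password="...")
    with pg_conn.cursor() as cur:
        cur.execute(QUERY)
        pg_rows = cur.fetchall()

    import mysql.connector                 # MySQL driver
    my_conn = mysql.connector.connect(host="new-host", database="sales", user="app", password="...")
    cur = my_conn.cursor()
    cur.execute(QUERY)
    my_rows = cur.fetchall()

Proprietary dialects raise those costs further, since stored procedures and vendor-specific functions must be rewritten rather than merely reconnected.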

But we must always ask, compared to what? Changes to major IT operations have always been costly. Transferring large amounts of data is costly, as noted above. That’s why companies have dedicated, full-time IT staff to handle such issues. Putting something on the cloud does not magically make it free to do whatever one wants, but cloud computing does expand the range of options available to customers for any given product.

While the above discussion frames such questions as an either/or decision, many users “multi-home” or use multiple providers. According to one survey, 70% of companies that use cloud providers use multiple providers.[36] This flexibility in selection allows customers to cherry-pick services from various providers and assign different providers for distinct workloads. Such an approach inherently amplifies the level of competition within the cloud industry. Again, it is worth contrasting this with on-premises IT services. The apparent ease of multi-homing suggests that other compatibility issues are not a major hindrance to competitive pressures and that there is still robust competition for consumers.

III.    Downstream Competitive Benefits of Cloud Computing

RFI Question #4 asks: “What are the effects of those practices on competition, including on cloud providers who do not operate at multiple layers?”

The biggest impact of competition on prices within the cloud-computing sector may not be observed directly by consumers; rather, it manifests in generating the infrastructure and platforms that allow other businesses to compete. This includes not only the various types of SaaS that people usually associate with cloud computing, such as Zoom and Dropbox, but also general online products and services that have become feasible due to the lower cost of cloud computing.

Every industry has been affected by cloud computing, as many large-scale enterprises are adopting cloud-based technologies to effectively manage and reduce their expenses.[37] As noted above, for the 2009-2016 period, Byrne, Corrado, & Sichel find that prices fell annually by 7% for computing power, 12% for database services, and 17% for storage.[38]

These cost reductions enhance competition in many sectors, not just “tech” sectors. For example, according to Grand View Research, the banking, financial services, and insurance (BFSI) sector has the largest share of cloud computing by end-use (see Figure 3 below).[39] As of 2017, a large BFSI company like JP Morgan Chase required 40,000 IT employees and an annual budget of more than $9 billion.[40] As companies moved more of those IT services to the cloud, they have saved money and opened new possibilities.

Cloud computing has also enabled such advances as mobile banking, digital wallets, and payment services. Services like Apple Pay, Google Pay, and PayPal allow users to make secure transactions with their smartphones, eliminating the need for physical credit cards or cash. By leveraging cloud computing, mobile-payment services can benefit from increased scalability, flexibility, security, and accessibility, ensuring the smooth and secure operation of payment transactions while providing a seamless user experience. These options and the competitive pressures they unleash are now feasible, given the drastic drop in cloud prices.

And banking is just one industry. Every industry has experienced the effects of cloud computing increasing competition in those downstream industries. For a recent example, American Airlines in May 2022 transitioned its customer-facing applications from an internal server to IBM Cloud. Again, for decades, companies have had to manage their own IT processes. The move by American was intended to enhance the airline’s digital self-service tools and offer customers improved access. By leveraging the open and flexible IBM cloud platform, American was able to modernize its technology stack, embrace DevOps principles, and achieve greater agility in its operations.[41]

This ease is especially important for startups. According to one IT-industry advocacy group, “cloud hosting and computing services, like AWS, Azure, Google Cloud, or others”[42] is one of the three most popular categories of services used by startups, alongside code repositories (like Github) and communication and collaboration tools (like Zoom or Slack). By their numbers, 69% of startups are using “cloud computing and database services,” because these “have lowered barriers for startups by enabling them to innovate without needing to worry about building the hardware physical infrastructure themselves.”[43] Whether the number is 60% or 80%, the important thing to recognize is that cloud providers now provide options to startups in the most remote parts of the country on par with those in Silicon Valley.

One aspect of the market’s evolution that may get lost in the discussions about price and exit is the increased security benefits that cloud computing can provide for downstream firms—especially for smaller startups without the means for a dedicated IT team. Oracle conducted a survey of “341 CEOs and CIOs, at firms between 500 to 10,000 employees, making between 100M to 999M dollars in revenue annually in a variety of industries, and located across the United States.”[44] They found that 66% of the C-suite officials selected “security” as one of the “biggest benefits of cloud computing for your organization today,” while only 41% chose “cost reduction.”[45] Any policy proposals that seek changes in the cloud-computing market must take the potential impacts to security seriously.

VII.  Cloud Computing and Artificial Intelligence

Among the downstream services that have received considerable attention are artificial intelligence (AI) and machine learning (ML). Training large AI and language models requires an expensive, up-front training period. The costs are often not released to the public, but we can make rough calculations.

A cutting-edge GPU such as the popular Nvidia A100, released in 2020, costs about $10,000.[46] Meta’s largest language model used 2,048 Nvidia A100s. If Meta bought GPUs specifically for training this model, the cost would be more than $20 million and the training would take about 21 days. If it instead used dedicated prices from AWS, the cost would be over $2.4 million.[47] Cloud computing allows the cost of GPUs to be split across many users. Training large models is a perfect use case for renting processing power, since it is a large, episodic cost that companies do not incur every day.
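
Those figures can be roughly reconstructed with back-of-the-envelope arithmetic. The Python sketch below assumes the GPUs are rented in 8-GPU instances (the configuration AWS uses for its A100 instances) at an effective discounted rate of roughly $19 per instance-hour; that rate is an assumption for illustration, since actual reserved and negotiated prices vary.

    # Rough reconstruction of the buy-vs.-rent comparison cited above
    gpus = 2_048
    gpu_price = 10_000                    # ~$10,000 per Nvidia A100, as cited above
    print(gpus * gpu_price)               # $20,480,000 to buy the GPUs outright

    gpus_per_instance = 8                 # AWS's A100 instances bundle 8 GPUs each
    hours = 21 * 24                       # ~21 days of training, as cited above
    rate_per_instance_hour = 19.0         # assumed effective discounted rate; actual prices vary
    rent_cost = (gpus / gpus_per_instance) * hours * rate_per_instance_hour
    print(round(rent_cost))               # ~$2.45 million, in line with the $2.4 million figure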

While $2 million to $20 million is a large investment, it is not beyond the capabilities of many large tech companies. For a comparison, Meta has spent more than $30 billion on the Metaverse.[48] And the returns to innovation in this space are large. OpenAI’s ChatGPT reached 100 million active users just two months after its release, making it the fastest-growing user base ever.[49] And that came from a relatively unknown company.

Much of the attention around AI involves the big companies (OpenAI, Google, Meta, Microsoft). But the cloud increases the availability of AI models for smaller companies, thereby increasing competition in the AI space, just as it previously did in teleconferencing, mobile payments, and IT services. For example, Amazon’s Bedrock is a marketplace for “generative AI applications with foundation models (FMs)” that “makes FMs from leading AI startups and Amazon available via an API, so you can choose from a wide range of FMs to find the model that is best suited for your use case.”[50]

Similarly, Microsoft’s Azure provides tools for customers to use AI models for their unique situations by, for example, offering models to help with quality control in manufacturing.[51] Azure also offers large-scale language models—including ChatGPT, as well as others—on a pay-as-you-go model, similar to other cloud services.[52]

Cloud computing may allow AI to flourish, but AI also affects the cloud-computing markets. One concern in AI is the limited supply of suitable AI chips. For years, there have been news stories about chip shortages and their impact on car production, computer availability, and more. That supply constraint could have a major impact on cloud computing. As noted above, computing prices have stopped falling, and in some cases are rising, both on the cloud and off.

In response to rising prices, as well as the development of AI and ML, demand has grown for new types of chips within cloud computing. Many providers are transitioning from traditional CPU chips to specialized chips, such as graphics processing units (GPUs), which were traditionally used for graphics rendering but are increasingly used for AI and ML. For example, Google’s AI Infrastructure is built around Google’s Tensor Processing Units (TPUs) and cloud GPUs.[53] Amazon introduced a new type of chip, known as Trainium, which was specifically engineered to train machine-learning algorithms and to compete with similar products from Nvidia.

To respond to RFI Question #4, new hardware—such as Google’s TPUs or Amazon’s Trainium—is one way that cloud providers could enhance or secure their position across multiple layers. By offering a better product, they secure their position in IaaS, as well as in any SaaS that runs on that hardware.

Conclusion

The landscape of the cloud-computing industry is marked by intense competition and rapid innovation, influenced by the increasing demand for IT services. It’s important to acknowledge the dual nature of this competition: cloud providers must contend not only with one another, but also with the established internal IT capabilities of large firms.

Cloud computing is a relatively new entrant to the IT sector. Traditional, on-premises IT infrastructure remains a significant player in this arena, with cloud computing forming one crucial component of the broader IT ecosystem. Firms’ ability to choose, combine, or even switch among services from multiple cloud providers further underscores the industry’s competitive dynamics. As we move forward, enhanced service offerings, technological innovation, and reduced costs for consumers appear set to continue, fueled by these competitive pressures. We could always imagine some perfect policy remedy that would make these trends of falling prices, increasing quantity, and increased innovation even more pronounced. But given what we have seen, the future of cloud computing appears bright, and we can eagerly anticipate exciting developments on the horizon.

[1] Bill Whyman, Secrets From Cloud Computing’s First Stage: An Action Agenda for Government and Industry, Information Technology and Innovation Foundation (Jun. 1, 2021), https://itif.org/publications/2021/06/01/secrets-cloud-computings-first-stage-action-agenda-government-and-industry.

[2] Glenn Solomon, The Cloud Is Still a Multibillion-Dollar Opportunity. Here’s Why, Forbes (Jan. 4, 2023), https://www.forbes.com/sites/glennsolomon/2023/01/04/the-cloud-is-still-a-multibillion-dollar-opportunity-heres-why.

[3] Press Release, Gartner Says Worldwide IaaS Public Cloud Services Market Grew 41.4% in 2021, Gartner (Jun. 2, 2022), https://www.gartner.com/en/newsroom/press-releases/2022-06-02-gartner-says-worldwide-iaas-public-cloud-services-market-grew-41-percent-in-2021.

[4] Id.

[5] Id.

[6] Cloud Spending Growth Rate Slows But Q4 Still Up By $10 Billion from 2021; Microsoft Gains Market Share, Synergy Research Group (Feb. 6, 2023), https://www.srgresearch.com/articles/cloud-spending-growth-rate-slows-but-q4-still-up-by-10-billion-from-2021-microsoft-gains-market-share.

[7] Felix Richter, Big Three Dominate the Global Cloud Market, Statista (Apr. 28, 2023), https://www.statista.com/chart/18819/worldwide-market-share-of-leading-cloud-infrastructure-service-providers.

[8] U.S. Department of Justice & FTC, Horizontal Merger Guidelines § 5.3 (2010).

[9] Cloud Computing Market Size, Share & Trends Analysis Report By Service (SaaS, IaaS), By End-use (BFSI, Manufacturing), By Deployment (Private, Public), By Enterprise Size (Large, SMEs), And Segment Forecasts, 2023 – 2030, Grand View Research, https://www.grandviewresearch.com/industry-analysis/cloud-computing-industry (last visited Jun. 18, 2023).

[10] Minjau Song, Trend and Developments in Cloud Computing and On-Premise IT Solutions, Alliance for Digital Innovation (Dec. 2021), available at https://alliance4digitalinnovation.org/wp-content/uploads/2021/12/Brattle-Cloud-Computing-Whitepaper_Dec-2021-2.pdf, at 17.

[11] Press Release, Gartner Forecasts Worldwide Public Cloud End-User Spending to Reach Nearly $600 Billion in 2023, Gartner (Apr. 19, 2023), https://www.gartner.com/en/newsroom/press-releases/2023-04-19-gartner-forecasts-worldwide-public-cloud-end-user-spending-to-reach-nearly-600-billion-in-2023.

[12] Bowen Wang, Amazon EC2 – 15 Years of Optimizing and Saving Your IT Costs, Amazon (Aug. 17, 2021), https://aws.amazon.com/blogs/aws-cost-management/amazon-ec2-15th-years-of-optimizing-and-saving-your-it-costs.

[13] Alexia Tsotsis, Amazon Slashes AWS S3 Prices Up to 19%, TechCrunch (Nov. 1, 2010), https://techcrunch.com/2010/11/01/aws-s3-2.

[14] Amazon S3 pricing, Amazon, https://aws.amazon.com/s3/pricing (last visited Jun. 15, 2023).

[15] Consumer Price Index for All Urban Consumers: All Items in U.S. City Average, FRED, https://fred.stlouisfed.org/series/CPIAUCSL (last accessed Jun. 15, 2023).

[16] Cloud Storage price, Google, https://cloud.google.com/storage/pricing (last visited Jun. 15, 2023).

[17] David Byrne et al., The Rise of Cloud Computing: Minding Your P’s Q’s and K’s, National Bureau of Economic Research (Working Paper No. 25188, 2018).

[18] Id. at 22.

[19] Id. at 22.

[20] Id. at 20.

[21] Press Release, Liftr Insights Data Highlights Increases in AWS Prices While Microsoft Azure Prices Have Been Decreasing, Liftr Insights (Feb. 7, 2023), https://liftrinsights.com/news-releases/aws-and-azure-cloud-pricing-moving-in-different-directions-as-shown-by-liftr-insights-data.

[22] Id.

[23] Id.

[24] Andrew Lohn & Micah Musser, AI and Compute: How Much Longer Can Computing Power Drive Artificial Intelligence Progress, Center for Security and Emerging Technology (Jan. 2022), https://cset.georgetown.edu/publication/ai-and-compute.

[25] Ron Miller, Why Dropbox Decided to Drop AWS and Build Its Own Infrastructure and Network, TechCrunch (Sep. 15, 2017), https://techcrunch.com/2017/09/15/why-dropbox-decided-to-drop-aws-and-build-its-own-infrastructure-and-network.

[26] Press Release, Dropbox Announces Fourth Quarter and Fiscal 2022 Results, Dropbox (Feb. 16, 2023), https://dropbox.gcs-web.com/news-releases/news-release-details/dropbox-announces-fourth-quarter-and-fiscal-2022-results.

[27] See, e.g., Shirsha Datta, What Led Netflix to Shut Their Own Data Centers and Migrate to AWS?, Medium (Sep. 22, 2020), https://shirshadatta2000.medium.com/what-led-netflix-to-shut-their-own-data-centers-and-migrate-to-aws-bb38b9e4b965; Vaishnavi Katgaonkar, How Does Netflix Work?, Medium (Sep. 9, 2020), https://medium.com/@katgaonkarvaishnavi10/how-does-netflix-work-425e0fd06055; Netflix on AWS, Amazon, https://aws.amazon.com/solutions/case-studies/innovators/netflix (last visited Jun. 15, 2023).

[28] Catie Keck, A Look Under the Hood of the Most Successful Streaming Service on the Planet, The Verge (Nov. 17, 2021), https://www.theverge.com/22787426/netflix-cdn-open-connect.

[29] Yevgeniy Sverdlik, AWS Finds Way to Move a Lot of Data to Cloud Faster – by Putting It on a Shipping Truck, DataCenter Knowledge (Oct. 7, 2015), https://www.datacenterknowledge.com/archives/2015/10/07/aws-speeds-up-data-migration-to-cloud-using-shipping-trucks.

[30] Amazon, supra note 14.

[31] Bandwidth Pricing, Azure, https://azure.microsoft.com/en-us/pricing/details/bandwidth (last visited Jun. 15, 2023).

[32] All Network Pricing, Google Cloud, https://cloud.google.com/vpc/network-pricing (last visited June 15, 2023).

[33] 1 gibibyte (GiB) equals 1.074 gigabytes (GB).

[34] Google Cloud, supra note 32.

[35] Amazon, supra note 14.

[36] 2023 State of the Cloud Report, Flexera, https://info.flexera.com/CM-REPORT-State-of-the-Cloud#view-report (last visited Jun. 15, 2023).

[37] Grand View Research, supra note 9.

[38] Byrne et al., supra note 17.

[39] Grand View Research, supra note 9.

[40] Kim S. Nash, J.P. Morgan Chase Names New CIO as Dana Deasy Exits, Wall Street Journal (Sep. 7, 2017), https://www.wsj.com/articles/j-p-morgan-chase-names-new-cio-as-dana-deasy-exits-1504822667.

[41] Id.

[42] Tools To Compete: Lower Costs, More Resources, and the Symbiosis of the Tech Ecosystem, CCIA Research Center and Engine (Jan. 25, 2023), https://research.ccianet.org/reports/tools-to-compete, at 6.

[43] Id. at 16.

[44] Security in the Age of AI, Oracle, available at https://www.oracle.com/a/ocom/docs/data-security-report.pdf, at 3 (last visited Jun. 15, 2023).

[45] Id. at 6.

[46] Jonathan Vanian, ChatGPT And Generative AI Are Booming, but the Costs Can Be Extraordinary, CNBC (Mar. 13, 2023), https://www.cnbc.com/2023/03/13/chatgpt-and-generative-ai-are-booming-but-at-a-very-expensive-price.html.

[47] Id.

[48] Jyoti Mann, Meta Has Spent $36 Billion Building the Metaverse but Still Has Little to Show for It, While Tech Sensations Such as the iPhone, Xbox, and Amazon Echo Cost Way Less, Business Insider (Oct. 29, 2022), https://www.businessinsider.com/meta-lost-30-billion-on-metaverse-rivals-spent-far-less-2022-10.

[49] Krystal Hu, ChatGPT Sets Record for Fastest-Growing User Base – Analyst Note, Reuters (Feb. 2, 2023), https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01.

[50] Amazon Bedrock, Amazon, https://aws.amazon.com/bedrock (last visited Jun. 15, 2023).

[51] Azure AI, Microsoft, https://azure.microsoft.com/en-us/solutions/ai/#overview (last visited Jun. 15, 2023).

[52] Price Details, Microsoft, https://azure.microsoft.com/en-us/pricing/details/cognitive-services/openai-service/#pricing (last visited Jun. 15, 2023).

[53] AI Infrastructure, Google Cloud, https://cloud.google.com/ai-infrastructure (last visited Jun. 15, 2023).

Leave the Golf Leagues Alone

After nearly two years of litigation and intense competition for the world’s top golfers, the PGA Tour and LIV Golf have agreed to create a new, as-yet-unnamed, for-profit joint entity. Most headlines about the deal have focused on the ethical and geopolitical problems that accompany the PGA’s joining forces with LIV’s sponsor, the Saudi Arabian Public Investment Fund. Within policy circles, the pseudo-merger has also stirred concerns regarding potential antitrust violations and harm to competition should the major golf leagues join forces as planned. The Justice Department (DOJ) has announced an investigation into the merger.

ICLE Response to the AI Accountability Policy Request for Comment

I. Introduction: How Do You Solve a Problem Like ‘AI’?

On behalf of the International Center for Law & Economics (ICLE), we thank the National Telecommunications and Information Administration (NTIA) for the opportunity to respond to this AI Accountability Policy Request for Comment (RFC).

A significant challenge that emerges in discussions concerning accountability and regulation for artificial intelligence is the broad and often ambiguous definition of “AI” itself. This is demonstrated in the RFC’s framing:

This Request for Comment uses the terms AI, algorithmic, and automated decision systems without specifying any particular technical tool or process. It incorporates NIST’s definition of an “AI system,” as “an engineered or machine-based system that can, for a given set of objectives, generate outputs such as predictions, recommendations, or decisions influencing real or virtual environments.” This Request’s scope and use of the term “AI” also encompasses the broader set of technologies covered by the Blueprint: “automated systems” with “the potential to meaningfully impact the American public’s rights, opportunities, or access to critical resources or services.”[1]

As stated, the RFC’s scope could be read to cover virtually all software.[2] But it is essential to acknowledge that, for the purposes of considering potential regulation, we lack a definition of AI that is both sufficiently broad to cover all or even most areas of concern and sufficiently focused to serve as a useful lens for analysis. That is to say, what we think of as AI encompasses a significant diversity of discrete technologies that will be put to a huge number of potential uses.

One useful recent comparison is with the approach the Obama administration took in its deliberations over nanotechnology regulation in 2011.[3] Following years of consultation and debate, the administration opted for a parsimonious, context-specific approach precisely because “nanotechnology” is not really a single technology. In that proceeding, the administration ultimately recognized that it was not the general category of “nanotechnology” that was relevant, nor the fact that nanotechnologies are those that operate at very small scales, but rather the means by and degree to which certain tools grouped under the broad heading of “nanotechnology” could “alter the risks and benefits of a specific application.”[4] This calls to mind Judge Frank Easterbrook’s famous admonition that a “law of cyberspace” would be no more useful than a dedicated “law of the horse.”[5] Indeed, we believe Easterbrook’s observation applies equally to the creation of a circumscribed “law of AI.”

While there is nothing inherently wrong with creating a broad regulatory framework to address a collection of loosely related subjects, there is a danger that the very breadth of such a framework might over time serve to foreclose more fruitful and well-fitted forms of regulation.

A second concern in the matter immediately at hand is, as mentioned above, the potential for AI regulation to be formulated so broadly as to encompass essentially all software. Whether by design or accident, this runs a number of risks. First, since the scope of the regulation would cover a much broader subject, the narrow discussion of “AI” will miss many important aspects of broader software regulation and, as a consequence, create an ill-fitted legal regime. Second, sweeping a far wider range of tools into such a regulation than the drafters publicly acknowledge undermines the democratic legitimacy of the process.

A.      The Danger of Regulatory Overaggregation

The current hype surrounding AI has been driven by popular excitement, as well as incentives for media to capitalize on that excitement. While this is understandable, it arguably has led to oversimplification in public discussions about the underlying technologies. In reality, AI is an umbrella term that encompasses a diverse range of technologies, each with its own unique characteristics and applications.

For instance, relatively lower-level technologies like large language models (LLMs)[6] differ significantly from diffusion techniques.[7] At the level of applications, recommender systems can employ a wide variety of different machine-learning (or even more basic statistical) techniques.[8] All of these techniques, collectively called “AI,” also differ from the wide variety of algorithms employed by search engines, social media, consumer software, video games, streaming services, and so forth, although each also contains software “smarts,” so to speak, that could theoretically be grouped under the large umbrella of “AI.”

And none of the foregoing bear much resemblance at all to what the popular imagination conjures when we speak of AI—that is, artificial general intelligence (AGI), which some experts argue may not even be achievable.[9]

Attempting to create a single AI regulatory scheme commits what we refer to as “regulatory overaggregation”—sweeping together a disparate set of more-or-less related potential regulatory subjects under a single category in a manner that overfocuses on the abstract term and obscures differences among the subjects. The domains of “privacy rights” and “privacy regulation” are illustrative of the dangers inherent in this approach. There are, indeed, many potential harms (both online and offline) that implicate the concept of “privacy,” but the differences among them counsel close examination of the various contexts that attend each.

Individuals often invoke their expectation of “privacy,” for example, in contexts where they want to avoid the public revelation of personal or financial information. This sometimes manifests as the assertion of a right to control data as a form of quasi-property, or as a right to anti-publicity (that is, a right not to be embarrassed publicly). Indeed, writing in 1890 with his law partner Samuel D. Warren, future Supreme Court Justice Louis Brandeis posited a “right to privacy” as akin to a property right.[10] Warren & Brandeis argued that privacy is not merely a matter of seclusion, but extends to the individual’s control over their personal information.[11] This “right to be let alone” delineates a boundary against unwarranted intrusion, which can be seen as a form of intangible property right.[12]

This framing can be useful as an abstract description of a broad class of interests and concerns, but it fails to offer sufficient specificity to describe actionable areas of law. Brandeis & Warren were concerned primarily with publicity;[13] that is, with a property right to control one’s public identity as a public figure. This, in turn, implicates a wide range of concerns, from an individual’s interest in commercialization of their public image to their options for mitigating defamation, as well as technologies that range from photography to website logging to GPS positioning.

But there are clearly other significant public concerns that fall broadly under the heading of “privacy” that cannot be adequately captured by the notion of controlling a property right “to be let alone.” Consider, for example, the emerging issue of “revenge porn.” It is certainly a privacy harm in the Brandeisian sense that it implicates the property right not to have one’s private images distributed without consent. But that framing fails to capture the full extent of potential harms, such as emotional distress and reputational damage.[14] Similarly, cases in which an individual’s cellphone location data are sold to bounty hunters are not primarily about whether a property right has been violated, as they raise broader issues concerning potential abuses of power, stalking, and even physical safety.[15]

These examples highlight some of the ways that, in failing to take account of the distinct facts and contexts that can attend privacy harms, an overaggregated “law of privacy” may tend to produce regulations insufficiently tailored to address those diverse harms.

By contrast, the domain of intellectual property (IP) may serve as an instructive counterpoint to the overaggregated nature of privacy regulation. IP encompasses a vast array of distinct legal constructs, including copyright, patents, trade secrets, trademarks, and moral rights, among others. But in the United States—and indeed, in most jurisdictions around the world—there is no overarching “law of intellectual property” that gathers all of these distinct concerns under a singular regulatory umbrella. Instead, legislation is specific to each area, resulting in copyright-specific acts, patent-specific acts, and so forth. This approach acknowledges that, within IP law, each IP construct invokes unique rights, harms, and remedies that warrant a tailored legislative focus.

The similarity of some of these areas does lend itself to conceptual borrowing, which has tended to enrich the legislative landscape. For example, U.S. copyright law has imported doctrines from patent law.[16] Despite such cross-pollination, copyright law and patent law remain distinct. In this way, intellectual property demonstrates the advantages of focusing on specific harms and remedies. This could serve as a valuable model for AI, where the harms and remedies are equally diverse and context dependent.

If AI regulations are too broad, they may inadvertently encompass any algorithm used in commercially available software, effectively stifling innovation and hindering technological advancements. This is no less true of good-faith efforts to craft laws in any number of domains that nonetheless suffer from a host of unintended consequences.[17]

At the same time, for a regulatory regime covering such a broad array of varying technologies to be intelligible, it is likely inevitable that tradeoffs made to achieve administrative efficiency will cause at least some real harms to be missed. Indeed, NTIA acknowledges this in the RFC:

Commentators have raised concerns about the validity of certain accountability measures. Some audits and assessments, for example, may be scoped too narrowly, creating a “false sense” of assurance. Given this risk, it is imperative that those performing AI accountability tasks are sufficiently qualified to provide credible evidence that systems are trustworthy.[18]

To avoid these unintended consequences, it is crucial to develop a more precise understanding of AI and its various subdomains, and to focus any regulatory efforts toward addressing specific harms that would not otherwise be captured by existing laws. The RFC declares that its aim is “to provide assurance—that AI systems are legal, effective, ethical, safe, and otherwise trustworthy.”[19] As we discuss below, rather than promulgate a set of recommendations about the use of AI, NTIA should focus on cataloguing AI technologies and creating useful taxonomies that regulators and courts can use when they identify tangible harms.

II. AI Accountability and Cost-Benefit Analysis

The RFC states that:

The most useful audits and assessments of these systems, therefore, should extend beyond the technical to broader questions about governance and purpose. These might include whether the people affected by AI systems are meaningfully consulted in their design and whether the choice to use the technology in the first place was well-considered.[20]

It is unlikely that consulting all of the people potentially affected by a set of technological tools could fruitfully contribute to the design of any regulatory system other than one that simply bans those tools.[21] Any intelligible accountability framework must be dedicated to evaluating the technology’s real-world impacts, rather than positing thought experiments about speculative harms. Where tangible harms can be identified, such evaluations should encompass existing laws that focus on those harms and how various AI technologies might alter how existing law would apply. Only in cases where the impact of particular AI technologies represents a new kind of harm, or raises concerns that fall outside existing legal regimes, should new regulatory controls be contemplated.

AI technologies will have diverse applications and consequences, with the potential for both beneficial and harmful outcomes. Rather than focus on how to constrain either AI developers or the technology itself, the focus should be on how best to mitigate or eliminate any potential negative consequences to individuals or society.

NTIA asks:

AI accountability measures have been proposed in connection with many different goals, including those listed below. To what extent are there tradeoffs among these goals?[22]

This question acknowledges that, fundamentally, AI accountability comes down to cost-benefit analysis. In conducting such analysis, we urge NTIA and any other agencies to account not only for potential harms, but also for the massive benefits these technologies might provide.

A.      The Law Should Identify and Address Tangible Harms, Incorporating Incremental Changes

To illustrate the challenges inherent to tailoring regulation of a new technology like AI to address the ways that it might generally create harm, it could be useful to analogize to a different existing technology: photography. If camera technology were brand new, we might imagine a vast array of harms that could arise from its use. But it should be obvious that creating an overarching accountability framework for all camera technology is absurd. Instead, laws of general applicability should address harmful uses of cameras, such as the invasion of privacy rights posed by surreptitious filming. Even where a camera is used in the commission of a crime—e.g., surveilling a location in preparation to commit a burglary—it is not typically the technology itself that is the subject of legal concern; rather, it is the acts of surveillance and burglary.

Even where we can identify a tangible harm that a new technology facilitates, the analysis is not complete. Instead, we need to balance the likelihood of harmful uses of that technology with the likelihood of nonharmful (or beneficial) uses of that technology. Copyright law provides an apt example.

Sony,[23] often referred to as the “Betamax case,” was a landmark U.S. Supreme Court case in 1984 that centered on Sony’s Betamax VCR—the first consumer device that could record television shows for later viewing, a concept now referred to as time-shifting.[24] Plaintiffs alleged that, by manufacturing and selling the Betamax VCRs, Sony was secondarily liable for copyright infringement carried out by its customers when they recorded television shows.[25] In a 5-4 decision, the Supreme Court ruled in favor of Sony, holding that the use of the Betamax VCR to record television shows for later personal viewing constituted “fair use” under U.S. copyright law.[26]

Critical for our purposes here was that the Court found that Sony could not be held liable for contributory infringement because the Betamax VCR was capable of “substantial noninfringing uses.”[27] This is to say that, faced with a new technology (recording relatively high-quality copies of television shows and movies at home), the Court recognized that, while the Betamax might facilitate some infringement, it would be inappropriate to apply a presumption against its use.

Sony and related holdings did not declare that using VCRs to infringe copyright was acceptable. Indeed, copyright enforcement for illegal reproduction has continued apace, even when using new technologies capable of noninfringing uses.[28] At the same time, the government did not create a new regulatory and licensing regime to govern the technology, despite the fact that it was a known vector for some illicit activity.

Note that the Sony case is also important for its fair-use analysis, and is widely cited for the proposition that so-called “time shifting” is permissible. That holding is not central to our point here, particularly as no analogue to fair use has been proposed in the AI context. But even so, it illustrates how the law adapts by developing doctrines that excuse conduct that would otherwise be a violation. In the case of copyright, unauthorized reproduction is infringement, period.[29] Fair use is raised as an affirmative defense[30] to excuse some unauthorized reproduction because courts have long recognized that, when viewed case-by-case, the application of legal rules needs to be tailored to make room for unexpected fact patterns in which acts that would otherwise be considered violations yield some larger social benefit.

We are not suggesting the development of a fair-use doctrine for AI, but are instead insisting that AI accountability and regulation must be consistent with the case-by-case approach that has characterized the common law for centuries. Toward that end, it would be best for law relevant to AI to emerge through that same bottom-up, case-by-case process. To the extent that any new legislation is passed, it should be incremental and principles-based, thereby permitting the emergence of law that best fits particular circumstances and does not conflict with other principles of common law.

By contrast, there are instances where the law has recognized that certain technologies are more likely to be used for criminal purposes and should be strictly regulated. For example, many jurisdictions have made possession of certain kinds of weapons—e.g., nunchaku, shuriken “throwing stars,” and switchblade knives—per se illegal, despite possible legal uses (such as martial-arts training).[31] Similarly, although there is a strong Second Amendment protection for firearms in the United States, it is illegal for a felon to possess a firearm.[32] These prohibitions developed because possession of such devices was deemed, in most contexts, to have no use other than violating the law. But these sorts of technologies are the exception, not the rule. Many chemicals that can easily be used as poisons are nonetheless available as, e.g., cleaning agents or fertilizers.

1.        The EU AI Act: An overly broad attempt to regulate AI

Nonetheless, some advocate regulating AI by placing new technologies into various broad categories of risk, each with their own attendant rules. For example, as proposed by the European Commission, the EU’s AI Act would regulate the use of AI systems that ostensibly pose risks to health, safety, and fundamental rights.[33] The proposal defines AI systems broadly to include essentially any software, and sorts them into three risk levels: unacceptable, high, and limited risk.[34] Unacceptable-risk systems are prohibited outright, while high-risk systems are subject to strict requirements, including mandatory conformity assessments.[35] Limited-risk systems face certain requirements related to adequate documentation and transparency.[36]

The AI Act defines AI so broadly that it would apply even to ordinary general-purpose software, as well as software that uses machine learning but does not pose significant risks.[37] The plain terms of the AI Act could be read to encompass common office applications, spam filters, and recommendation engines, thus potentially imposing considerable compliance burdens on businesses for their use of software that provides benefits dramatically greater than any expected costs.[38] Recently proposed amendments would “ban the use of facial recognition in public spaces [and] predictive policing tools, and . . . impose transparency measures on generative AI applications [like] OpenAI’s ChatGPT.”[39]

This approach constitutes a hodge-podge of top-down tech policing and one-off regulations. The AI Act starts with the presumption that regulators can design an abstract, high-level set of categories that capture the risk from “AI” and then proceeds to force arbitrary definitions of particular “AI” implementations into those categories. This approach may get some things right and some things wrong, but whatever it gets right will not be the product of principled consistency. For example, it might be the case that “predictive policing” is a problem that merits per se prohibition, but is it really an AI problem? What happens if the police get exceptionally good at using publicly available data and spreadsheets to approximate 80% of what they are able to do with AI? Or even just 50%? Is it the use of AI that is the harm, or is it the practice itself?

Similarly, a requirement that firms expose the sources on which they train their algorithms might be good in some contexts, but useless or harmful in others.[40] Certainly, it can make sense when thinking about current publicly available generative tools that create images and video, and have no ability to point to a license or permission for their training data. Such cases have a high likelihood of copyright infringement. But should every firm be expected to do this? Surely there will be many cases where firms use their own internal data, or data not subject to property-rights protection at all, but where exposing those sources reveals sensitive internal information, like know-how or other trade secrets. In those cases, a transparency obligation could have a chilling effect.

By contrast, it seems hard to believe that every use of public facial recognition should be banned. For instance, what if local authorities had limited access to facial recognition to find lost children or victims of trafficking?

More broadly, a strict transparency requirement could essentially make advanced machine-learning techniques illegal. By their nature, machine-learning systems and applications that employ LLMs make inferences and predictions that are, very often, not replicable.[41] That is, they are not reviewable in a way that can be easily explained to a human in a transparency review. This means that strong transparency obligations could make it legally untenable to employ those techniques.

The broad risk-based approach taken by the AI Act faces difficult enforcement hurdles as well, as demonstrated by the EU’s proposal to essentially ban the open-source community from providing access to generative models.[42] In other words, not only do the proposed amendments seek to prohibit large companies such as OpenAI, Google, Anthropic, Amazon, Microsoft, and IBM from offering API access to generative AI models, but they would also prohibit open-source developers and distributors such as GitHub from doing the same.[43] Moreover, the prohibitions have extraterritorial effects; for example, the EU might seek to impose large fines on U.S. companies for permitting access to their models in the United States, on grounds that those models could be imported into the EU by third parties.[44] These provisions not only reflect an attempt to control the distribution of AI technology, but also carry the wider implication that such control would require steering worldwide innovation down a narrow, heavily regulated path.

2.        Focus on the harm and the wrongdoers, not the innovators

None of the foregoing is to suggest that it is impossible for AI to be misused. Where it is misused, there should be actionable legal consequences. For example, if a real-estate developer intentionally used AI tools to screen individuals out of purchasing homes on the basis of protected characteristics, that should be actionable. If a criminal found a novel way to use ChatGPT to commit fraud, that should be actionable. If generative AI is used to create “deep fakes” that further some criminal plot, that should be actionable. But in all those cases, it is not the AI itself that is the relevant unit of legal analysis, but the action of the criminal and the harm he causes.

Trying to build a regulatory framework that makes it impossible for bad actors to misuse AI will ultimately be fruitless. Bad actors will always find ways to misuse tools, and heavy-handed regulatory requirements (or even strong suggestions of such) might chill the development of useful tools that could generate an enormous amount of social welfare.

B.      Do Not Neglect the Benefits

A major complication in parsing the wisdom of potential AI regulation is that the technology remains largely in development. Indeed, this is the impetus for many of the calls to “do something” before it is “too late.”[45] The fear that some express is that, unless a wise regulator intervenes in the development process, the technology will inevitably develop in ways that yield more harm than good.[46]

But trying to regulate AI in accordance with the precautionary principle would almost certainly stifle development and dampen the tremendous, but unknowable, good that would emerge as these technologies mature and we find unique uses for them. Moreover, precautionary regulation, even in high-risk industries like nuclear power, can lead to net harms to social welfare.[47]

It is important here to distinguish two broad categories of concern about AI. First, there is the generalized concern about AGI, expressed as the fear that we are inadvertently creating a superintelligence with the power to snuff out human life at its whim. We reject this fear as a legitimate basis for new regulatory frameworks, although we concede that it is theoretically possible that this presumption may need to be revisited as AI technologies progress. None of the technologies currently under consideration are anywhere close to AGI. They are essentially just advanced prediction engines, whether the predictions concern text or pixels.[48] It seems highly unlikely that we will accidentally stumble onto AGI by plugging a few thousand prediction engines into one another.

There are more realistic concerns that these very impressive technologies will be misused to further discrimination and crime, or will have such a disruptive impact on areas like employment that they will quickly generate tremendous harms. When contemplating harms that could occur, however, it is necessary to recognize that many significant benefits could also be generated. Moreover, as with earlier technologies, economic disruptions will provide both challenges and opportunities. It is easy to see, for instance, the immediate threat that ChatGPT poses to the jobs of content writers, but less easy to measure the benefits that will be realized by firms that can deploy this technology to “in-source” tasks.

Firms often face what is called the “make-or-buy” decision. A firm that decides to purchase the services of an outside designer or copywriter has determined that doing so is more efficient than developing that talent in-house. But the fact that many firms employ a particular mix of outsourced and in-house talent to fulfill their business needs does not suggest a universally optimal solution to the make-or-buy problem. All we can do is describe how, under current conditions, firms solve this problem.

AI will surely alter how firms approach the make-or-buy decision. Pre-AI, it might have made sense to outsource a good deal of work that was not core to a firm’s mission. Post-AI, it might be the case that the firm can afford to hire additional workers who can utilize AI tools to more quickly and affordably manage the work that had been previously outsourced. Thus, the ability of AI tools to shift the make-or-buy decision, in itself, says nothing about the net welfare effects to society. Arguments could very well be made for either side. If history is any guide, however, it appears likely that AI tools will allow firms to do more with less, while also enabling more individuals to start new businesses with less upfront expense.

Moreover, by freeing capital from easily automated tasks, existing firms and new entrepreneurs could better focus on their core business missions. Excess investments previously made in supporting, for example, the creation of marketing content could be repurposed into R&D-intensive work. Simplistic static analyses of the substitution power of AI tools will almost surely mislead us, and make us neglect the larger social welfare that could be gained from organizations improving their efficiency with AI tools.

Economists have consistently found that dynamic competition—characterized by firms vying to deliver novel and enhanced products and services to consumers—contributes significantly more to economic growth than static competition, where technology is held constant, and firms essentially compete solely on price. As Joseph Schumpeter noted:

[I]t is not [price] competition which counts but the competition from the new commodity, the new technology, the new source of supply, the new type of organization…. This kind of competition is as much more effective than the other as a bombardment is in comparison with forcing a door, and so much more important that it becomes a matter of comparative indifference whether competition in the ordinary sense functions more or less promptly; the powerful lever that in the long run expands output and brings down prices is in any case made of other stuff.[49]

Technological advancements yield substantial welfare benefits for consumers, and there is a comprehensive body of scholarly work substantiating the contributions of technological innovation to economic growth and societal welfare.[50] There is also compelling evidence that technological progress engenders extensive spillovers not fully appropriated by the innovators.[51] Business-model innovations—such as advancements in organization, production, marketing, or distribution—can similarly result in extensive welfare gains.[52]

AI tools obviously are delivering a new kind of technological capability for firms and individuals. The disruptions they will bring will similarly spur business-model innovation as firms scramble to find innovative ways to capitalize on the technology. The potential economic dislocations can, in many cases, amount to reconstitution: a person who was a freelance content writer can be shifted to a different position that manages the output of generative AI and provides human edits to ensure that content makes sense and is based in fact. In many other cases, the dislocations will likely lead to increased opportunities for workers of all sorts.

With this in mind, policymakers need to consider how to identify those laws and regulations that are most likely to foster this innovation, while also enabling courts and regulators to adequately deal with potential harms. Although it is difficult to prescribe particular policies to boost innovation, there is strong evidence about what sorts of policies should be avoided. Most importantly, regulation of AI should avoid inadvertently destroying those technologies.[53] As Adam Thierer has argued, “if public policy is guided at every turn by the fear of hypothetical worst-case scenarios and the precautionary mindset, then innovation becomes less likely.”[54]

Thus, policymakers must be cautious to avoid unduly restricting the range of AI tools that compete for consumer acceptance. Key to fostering investment and innovation is not merely the endorsement of technological advancement, but advocacy for policies that empower innovators to execute and commercialize their technology.

By contrast, consider again the way that some EU lawmakers want to treat “high risk” algorithms under the AI Act. According to recently proposed amendments, if a “high risk” algorithm learns something beyond what its developers expect it to learn, the algorithm would need to undergo a conformity assessment.[55]

One of the prime strengths of AI tools is their capacity for unexpected discoveries, offering potential insights and solutions that might not have been anticipated by human developers. As the Royal Society has observed:

Machine learning is a branch of AI that enables computer systems to perform specific tasks intelligently. Traditional approaches to programming rely on hardcoded rules, which set out how to solve a problem, step-by-step. In contrast, machine learning systems are set a task, and given a large amount of data to use as examples (and non-examples) of how this task can be achieved, or from which to detect patterns. The system then learns how best to achieve the desired output.[56]

By labeling unexpected behavior as inherently risky and necessitating regulatory review, we risk stifling this serendipitous aspect of AI technologies, potentially curtailing their capacity for innovation. It could contribute to a climate of regulatory caution that hampers swift progress in discovering the full potential and utility of AI tools.

C.     AI Regulation Should Follow the Model of Common Law

In a recent hearing of the U.S. Senate Judiciary Committee, OpenAI CEO Sam Altman suggested that the United States needs a central “AI regulator.”[57] As a general matter, we expect this would be unnecessarily duplicative. As we have repeatedly emphasized, the right approach to regulating AI is not the establishment of an overarching regulatory framework, but a careful examination of how AI technologies will variously interact with different parts of the existing legal system. We are not alone in this; former Special Assistant to the President for Technology and Competition Policy Tim Wu recently opined that federal agencies would be well-advised to rely on existing law and enhance that law where necessary in order to catch unexpected situations that may arise from the use of AI tools.[58]

As Judge Easterbrook famously wrote in the context of what was then called “cyberspace,” we do not need a special law for AI any more than we need a “law of the horse.”[59]

1.        An AI regulator’s potential effects on competition

More broadly, there are risks to competition that attend creating a centralized regulator for a new technology like AI. As an established player in the AI market, OpenAI might favor a strong central regulator because of the potential that such an agency could act in ways that hinder the viability of new entrants.[60] In short, an incumbent often can gain by raising its rivals’ regulatory costs, or by manipulating the relationship between its industry’s average and marginal costs. This dynamic can create strong strategic incentives for industry incumbents to promote regulation.

Economists and courts have long studied actions that generate or amplify market dominance by placing competitors at a disadvantage, especially by raising rivals’ costs.[61] There exist numerous strategies to put competitors at a disadvantage or push them out of the market without needing to compete on price. While antitrust action focuses on private actors and their ability to raise rivals’ costs, it is well-accepted that “lobbying legislatures or regulatory agencies to create regulations that disadvantage rivals” has similar effects.[62]

Suppose a new regulation imposes $1 million in annual compliance costs. Only companies that are sufficiently large and profitable will be able to cover those costs, which keeps out newcomers and smaller competitors. This effect of keeping out smaller competitors by raising their costs may more than offset the regulatory burden on the incumbent. New entrants typically produce on a smaller scale, and therefore find it more difficult to spread increased costs over a large number of units. This makes it harder for them to compete with established firms like OpenAI, which can absorb these costs more easily due to their larger scale of production.
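To make the scale effect concrete, consider a purely illustrative calculation. The $1 million figure is the hypothetical compliance cost above; the output volumes are assumed only for illustration and do not describe any actual firm:

\[
\text{per-unit burden} = \frac{\text{fixed compliance cost}}{\text{units sold}}, \qquad
\frac{\$1{,}000{,}000}{10{,}000{,}000\ \text{units}} = \$0.10\ \text{per unit}, \qquad
\frac{\$1{,}000{,}000}{100{,}000\ \text{units}} = \$10\ \text{per unit}.
\]

An incumbent selling ten million units absorbs a dime per unit, while a smaller entrant selling one hundred thousand units must recover one hundred times as much on every sale.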

This type of cost increase can often look benign. In United Mine Workers v. Pennington,[63] a coal corporation was alleged to have conspired with the union representing its workforce to establish higher wage rates. How could higher wages be anticompetitive? This seemingly contradictory conclusion came from University of California at Berkeley economist Oliver Williamson, who interpreted the action as an effort to maximize profits by raising entry barriers.[64] Using a model with a dominant incumbent and a fringe of other competitors, he demonstrated that wage-rate increases could maximize the incumbent’s profits if they escalated the fringe’s costs more than the dominant firm’s. Intuitively, even though one firm is dominant, the market price is set by the marginal (fringe) producers, so the dominant company’s price is constrained by its competitors’ costs. If a regulation raises the competitors’ per-unit costs by $2, the dominant company will be able to raise its price by as much as $2 per unit. Even if the regulation hurts the dominant firm, so long as its price increase exceeds its additional cost, the dominant firm can profit from the regulation.
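Williamson’s logic can be sketched with simple notation of our own (the symbols below are illustrative assumptions, not taken from his paper): a competitive fringe with marginal cost \(c_f\) effectively sets the market price, while the dominant firm produces at marginal cost \(c_d\). Suppose a regulation raises the fringe’s per-unit cost by \(t_f\) and the dominant firm’s by \(t_d\):

\[
p = c_f \;\longrightarrow\; p' = c_f + t_f, \qquad
\Delta\pi_d \text{ per unit} = (p' - c_d - t_d) - (p - c_d) = t_f - t_d .
\]

Whenever the regulation burdens the fringe more heavily than the dominant firm (that is, \(t_f > t_d\)), the incumbent’s per-unit margin rises even though its own costs have increased; in the example above, \(t_f = \$2\), and any \(t_d < \$2\) leaves the dominant firm better off.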

As a result, while regulations might increase costs for OpenAI, they also serve to protect it from potential competition by raising the barriers to entry. In this sense, regulation can be seen as a strategic tool for incumbent firms to maintain or strengthen their market position. None of this analysis rests on OpenAI explicitly wanting to raise its rivals’ costs. That is just the competitive implication of such regulations. Thus, while there may be many benign reasons for a firm like OpenAI to call for regulation in good faith, the ultimate lesson presented by the economics of regulation should counsel caution when imposing strong centralized regulations on a nascent industry.

2.        A central licensing regulator for AI would be a mistake

NTIA asks:

Are there ways in which accountability mechanisms are unlikely to further, and might even frustrate, the development of trustworthy AI? Are there accountability mechanisms that unduly impact AI innovation and the competitiveness of U.S. developers?[65]

We are not alone in the belief that imposing a licensing regime would present just such a barrier to innovation.[66] In the recent Senate hearings, the idea of a central regulator was endorsed as a means to create and administer a licensing regime.[67] Perhaps in some narrow applications of particular AI technologies, there could be specific contexts in which licensing is appropriate (e.g., in providing military weapons), but broadly speaking, we believe this is inadvisable. Owing to the highly diverse nature of AI technologies, trying to license AI development is a fraught exercise, as NTIA itself acknowledges:

A developer training an AI tool on a customer’s data may not be able to tell how that data was collected or organized, making it difficult for the developer to assure the AI system. Alternatively, the customer may use the tool in ways the developer did not foresee or intend, creating risks for the developer wanting to manage downstream use of the tool. When responsibility along this chain of AI system development and deployment is fractured, auditors must decide whose data and which relevant models to analyze, whose decisions to examine, how nested actions fit together, and what is within the audit’s frame.[68]

Rather than design a single regulation to cover AI, ostensibly administered through a single licensing regime, NTIA should acknowledge the broad set of industries currently seeking to employ a diverse range of AI products that differ in fundamental ways. The implications of AI deployment in health care, for instance, vastly differ from those in transportation. A centralized AI regulator might struggle to comprehend the nuances and intricacies of each distinct industry, thus potentially leading to ineffective or inappropriate licensing requirements.

Analogies have been drawn between AI and sectors like railroads and nuclear power, which have dedicated regulators.[69] These sectors, however, are more homogenous and discrete than the AI industry (if such an industry even exists, apart from the software industry more generally). AI is much closer to a general-purpose tool, like chemicals or combustion engines. We do not enact central regulators to license every aspect of the development and use of chemicals, but instead allow different agencies to treat their use differently as is appropriate for the context. For example, the Occupational Safety and Health Administration (OSHA) will regulate employee exposure to dangerous substances encountered in the workplace, while various consumer-protection boards will regulate the adulteration of goods.

The notion of licensing implies that companies would need to obtain permission prior to commercializing a particular piece of code. This could introduce undesirable latency into the process of bringing AI technologies to market (or, indeed, even of correcting errors in already-deployed products). Given the expansive potential to integrate AI technologies into diverse products and services, this delay could significantly impede technological progress and innovation. Given the strong global interest in the subject, such delays threaten to leave the United States behind its more energetic competitors in the race for AI innovation.

As in other consumer-protection regimes, a better approach would be to eschew licensing and instead create product-centric and harm-centric frameworks that other sectoral regulators or competition authorities could incorporate into their tailored rules for goods and services.

For instance, safety standards for medical devices should be upheld, irrespective of whether AI is involved. This product-centric regulatory approach would ensure that the desired outcomes of safety, quality, and effectiveness are achieved without stymieing innovation. With their deep industry knowledge and experience, sectoral regulators will generally be better positioned to address the unique challenges and considerations posed by AI technology deployed within their spheres of influence.

NTIA alludes to one of the risks of an overaggregated regulator when it notes that:

For some trustworthy AI goals, it will be difficult to harmonize standards across jurisdictions or within a standard-setting body, particularly if the goal involves contested moral and ethical judgements. In some contexts, not deploying AI systems at all will be the means to achieve the stated goals.[70]

Indeed, the institutional incentives that drive bureaucratic decision making often converge on this solution of preventing unexpected behavior by regulated entities.[71] But at what cost? If a regulator is unable to imagine how to negotiate the complicated tradeoffs among interested parties across all AI-infused technologies, it will act to slow or prevent the technology from coming to market. This will make us all worse off, and will only strengthen the position of our competitors on the world stage.

D.      The Impossibility of Explaining Complexity

NTIA notes that:

According to NIST, ‘‘trustworthy AI’’ systems are, among other things, ‘‘valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy-enhanced, and fair with their harmful bias managed.’’[72]

And in the section titled “Accountability Inputs and Transparency,” NTIA asks a series of questions designed to probe what can be considered a realistic transparency obligation for developers and deployers of AI systems. We urge NTIA to resist the idea that AI systems must be “explainable,” for the reasons set forth below.

One of the significant challenges in AI accountability is making AI systems explainable to users. It is crucial to acknowledge that providing a clear explanation of how an AI model—such as an LLM or a diffusion model—arrives at a specific output is an inherently complex task, and may not be possible at all. As the UK Royal Society has noted in its paper on AI explainability:

Much of the recent excitement about advances in AI has come as a result of advances in statistical techniques. These approaches – including machine learning – often leverage vast amounts of data and complex algorithms to identify patterns and make predictions. This complexity, coupled with the statistical nature of the relationships between inputs that the system constructs, renders them difficult to understand, even for expert users, including the system developers.[73]

These models are designed with intricate architectures and often rely on vast troves of data to arrive at outputs, which can make it nearly impossible to reverse-engineer the process. Due to these complexities, it may be unfeasible to make AI fully explainable to users. Moreover, users themselves often do not value explainability, and may be largely content with a “black box” system when it consistently provides accurate results.[74]

Instead, to the extent that regulators demand visibility into AIs, the focus should be on the transparency of the AI-development process, system inputs, and the general guidelines for AI that developers use in preparing their models. Ultimately, we suspect that, even here, such measures will do little to resolve the inherent complexity in understanding how AI tools produce their outputs.

In a more limited sense, we should consider the utility in transparency of AI-infused technology for most products and consumers. NTIA asks:

Given the likely integration of generative AI tools such as large language models (e.g., ChatGPT) or other general-purpose AI or foundational models into downstream products, how can AI accountability mechanisms inform people about how such tools are operating and/or whether the tools comply with standards for trustworthy AI?[75]

As we note above, the proper level of analysis for AI technologies is the product into which they are incorporated. But even there, we need to ask whether it matters to an end user whether a product they are using relies on ChatGPT or a different algorithm for predictively generating text. If the product malfunctions, what matters is the malfunction and the accountability for the product. Most users do not really care whether a developer writes a program using C++ or Java, and neither should they explicitly care whether the developer incorporates a generative AI algorithm to predict text or uses some other method of statistical analysis. The presence of an AI component becomes analytically necessary when diagnosing how something went wrong, but ex ante, it is likely irrelevant from a consumer’s perspective.

Thus, it may be the case that a more fruitful avenue for NTIA to pursue would be to examine how a strict-liability or product-liability legal regime might be developed for AI. These sorts of legal frameworks put the onus on AI developers to ensure that their products behave appropriately. Such legal frameworks also provide consumers with reassurance that they have recourse if and when they are harmed by a product that contains AI technology. Indeed, it could very well be the case that overemphasizing “trust” in AI systems could end up misleading users in important contexts.[76] This would strengthen the case for a predictable liability regime.

1.        The deepfakes problem demonstrates that we do not need a new body of law

The phenomenon of generating false depictions of individuals using advanced AI techniques—commonly called “deepfakes”—is undeniably concerning, particularly when it can be used to create detrimental false public statements,[77] facilitate fraud,[78] or create nonconsensual pornography.[79] But while deepfakes use modern technological tools, they are merely the most recent iteration of the age-old problem of forgery. Importantly, existing law already equips us with the tools needed to address the challenges posed by deepfakes, rendering many recent legislative proposals at the state level both unnecessary and potentially counterproductive. Consider one of the leading proposals offered by New York State.[80]

Existing laws in New York and at the federal level provide remedies for individuals aggrieved by deepfakes, and they do so within a legal system that has already worked to incorporate the context of these harms, as well as the restrictions of the First Amendment and related defenses. For example, defamation laws can be applied where a deepfake falsely suggests an individual has posed for an explicit photograph or video.[81] New York law also acknowledges the tort of intentional infliction of emotional distress, which likely could be applied to the unauthorized use of a person’s likeness in explicit content.[82] In addition, the tort of unjust enrichment can be brought to bear where appropriate, as can the Lanham Act §43(a), which prohibits false advertising and implied false endorsements.[83] Furthermore, victims may hold copyright in the photograph or video used in a deepfake, presenting grounds for an infringement action.[84]

Thus, while advanced deepfakes are new, the harms they can cause and the law’s ability to address those harms are not novel. Legislation that attempts to carve out new categories of harms in these situations is, at best, reinventing the wheel and, at worst, risks creating confusing tensions in the existing legal system.

III.      The Role of NTIA in AI Accountability

NTIA asks if “the lack of a federal law focused on AI systems [is] a barrier to effective AI accountability?”[85] In short, no, this is not a barrier, so long as the legal system is allowed to evolve to incorporate the novel challenges raised by AI technologies.

As noted in the previous section, there is a need to develop standards, both legal and technical. As we are in the early days of AI technology, the exact contours of the various legal changes that might be needed to incorporate AI tools into existing law remain unclear. At this point, we would urge NTIA—to the extent that it wants to pursue regulatory, licensing, transparency, and other similar obligations—to develop a series of workshops through which leading technology and legal experts could confer on developing a vision for how such legal changes would work in practice.

By gathering stakeholders and fostering an ongoing dialogue, NTIA can help to create a collaborative environment in which organizations can share knowledge, experiences, and innovations to address AI accountability and its associated challenges. By promoting industry collaboration, NTIA could also help build a foundation of trust and cooperation among organizations involved in AI development and deployment. This, in turn, will facilitate the establishment of standards and best practices that address specific concerns, while mitigating the risk of overregulation that could stifle innovation and progress. In this capacity, NTIA should focus on encouraging the development of context-specific best practices that prioritize the containment of identifiable harms. By fostering a collaborative atmosphere, the agency can support a dynamic and adaptive AI ecosystem that is capable of addressing evolving challenges while safeguarding the societal benefits of AI advancements.

In addressing AI accountability, it is essential for NTIA to adopt a harm-focused framework that targets the negative impacts of AI systems rather than the technology itself. This approach would recognize that AI technology can have diverse applications, with consequences that will depend on the context in which they are used. By prioritizing the mitigation of specific harms, NTIA can ensure that regulations are tailored to address real-world outcomes and provide a more targeted and effective regulatory response.

A harm-focused framework also acknowledges that different AI technologies pose differing levels of risk and potential for misuse. NTIA can play a proactive role in guiding the creation of policies that reflect these nuances, striking a balance between encouraging innovation and ensuring the responsible development and use of AI. By centering the discussion on actual harms and their causes, NTIA can foster meaningful dialogue among stakeholders and facilitate the development of industry best practices designed to minimize negative consequences.

Moreover, this approach ensures that AI accountability policies are consistent with existing laws and regulations, as it emphasizes the need to assess AI-related harms within the context of the broader legal landscape. By aligning AI accountability measures with other established regulatory frameworks, the NTIA can provide clear guidance to AI developers and users, while avoiding redundancy and conflicting regulations. Ultimately, a harm-focused framework allows the NTIA to better address the unique challenges posed by AI technology and foster an assurance ecosystem that prioritizes safety, ethics, and legal compliance without stifling innovation.

IV.    Conclusion

Another risk of the current AI hysteria is that fatigue will set in, and the public will become numbed to potential harms. Overall, this may shrink the public’s appetite for the kinds of legal changes that will be needed to address those actual harms that do emerge. News headlines that push doomsday rhetoric and a community of experts all too eager to respond to the market incentives for apocalyptic projections only exacerbate the risk of that outcome. A recent one-line letter, signed by AI scientists and other notable figures, highlights the problem:

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.[86]

Novel harms absolutely will emerge from products that employ AI, as has been the case for every new technology. The introduction of automobiles created new risks of harm from high-speed auto-related deaths, for example. But rhetoric about AI being an existential risk on the level of a pandemic or nuclear war is irresponsible.

Perhaps one of the most important positions NTIA can assume, therefore, is that of a calm, collected expert agency that helps restrain the worst impulses to regulate AI out of existence due to blind fear.

In essence, the key challenge confronting policymakers lies in navigating the dichotomy of mitigating actual risks presented by AI, while simultaneously safeguarding the substantial benefits it offers. It is undeniable that the evolution of AI will bring about disruption and may provide a conduit for malevolent actors, just as technologies like the printing press and the internet have done in the past. This does not, however, merit taking an overly cautious stance that would suppress the potential benefits of AI.

As we formulate policy, it is crucial to eschew dystopian science-fiction narratives and instead ground our approach in realistic scenarios. The proposition that computer systems, even those as advanced as AI tools, could spell the end of humanity lacks substantial grounding.

The current state of affairs represents a geo-economic competition to harness the benefits of AI in myriad domains. Contrary to fears that AI poses an existential risk, the real danger may well lie in attempts to overly regulate and stifle the technology’s potential. The indiscriminate imposition of regulations could inadvertently thwart AI advancements, resulting in a loss of potential benefits that could be far more detrimental to social welfare.

[1] AI Accountability Policy Request for Comment, Docket No. 230407-0093, 88 FR 22433, National Telecommunications and Information Administration (Apr. 14, 2023) (“RFC”).

[2] Indeed, this approach appears to be the default position of many policymakers around the world. See, e.g., Mikolaj Barczentewicz, EU’s Compromise AI Legislation Remains Fundamentally Flawed, Truth on the Market (Feb. 8, 2022), https://truthonthemarket.com/2022/02/08/eus-compromise-ai-legislation-remains-fundamentally-flawed. The fundamental flaw of this approach is that, while AI techniques use statistics, “statistics also includes areas of study which are not concerned with creating algorithms that can learn from data to make predictions or decisions. While many core concepts in machine learning have their roots in data science and statistics, some of its advanced analytical capabilities do not naturally overlap with these disciplines.” See Explainable AI: The Basics, The Royal Society (2019) at 7, available at https://royalsociety.org/-/media/policy/projects/explainable-ai/AI-and-interpretability-policy-briefing.pdf (“Royal Society Briefing”).

[3] John P. Holdren, Cass R. Sunstein, & Islam A. Siddiqui, Memorandum for the Heads of Executive Departments and Agencies, Executive Office of the White House (Jun. 9, 2011), available at https://obamawhitehouse.archives.gov/sites/default/files/omb/inforeg/for-agencies/nanotechnology-regulation-and-oversight-principles.pdf.

[4] Id.

[5] Frank H. Easterbrook, Cyberspace and the Law of the Horse, 1996 U. Chi. L. Forum 207 (1996).

[6] LLMs are a type of artificial-intelligence model designed to parse and generate human language at a highly sophisticated level. The deployment of LLMs has driven progress in fields such as conversational AI, automated content creation, and improved language understanding across a multitude of applications, even suggesting that these models might represent an initial step toward the achievement of artificial general intelligence (AGI). See Alejandro Peña et al., Leveraging Large Language Models for Topic Classification in the Domain of Public Affairs, arXiv (Jun. 5, 2023), https://arxiv.org/abs/2306.02864v1.

[7] Diffusion models are a type of generative AI built from a hierarchy of denoising autoencoders, which can achieve state-of-the-art results in such tasks as class-conditional image synthesis, super-resolution, inpainting, colorization, and stroke-based synthesis. Unlike other generative models, these likelihood-based models do not exhibit mode collapse and training instabilities. By leveraging parameter sharing, they can model extraordinarily complex distributions of natural images without necessitating billions of parameters, as in autoregressive models. See Robin Rombach et al., High-Resolution Image Synthesis with Latent Diffusion Models, arXiv (Dec. 20, 2021), https://arxiv.org/abs/2112.10752.

[8] Recommender systems are advanced tools currently used across a wide array of applications, including web services, books, e-learning, tourism, movies, music, e-commerce, news, and television programs, where they provide personalized recommendations to users. Despite recent advancements, there is a pressing need for further improvements and research in order to offer more efficient recommendations that can be applied across a broader range of applications. See Deepjyoti Roy & Mala Dutta, A Systematic Review and Research Perspective on Recommender Systems, 9 J. Big Data 59 (2022), available at https://journalofbigdata.springeropen.com/counter/pdf/10.1186/s40537-022-00592-5.pdf.

[9] AGI refers to hypothetical future AI systems that possess the ability to understand or learn any intellectual task that a human being can do. While the realization of AGI remains uncertain, it is distinct from the more specialized AI systems currently in use. For a skeptical take on the possibility of AGI, see Roger Penrose, The Emperor’s New Mind (Oxford Univ. Press 1989).

[10] Samuel D. Warren & Louis D. Brandeis, The Right to Privacy, 4 Harv. L. Rev. 193 (1890).

[11] Id. at 200.

[12] Id. at 193.

[13] Id. at 196-97.

[14] Notably, courts do try to place a value on emotional distress and related harms. But because these sorts of violations are deeply personal, attempts to quantify such harms in monetary terms are rarely satisfactory to the parties involved.

[15] Martin Giles, Bounty Hunters Tracked People Secretly Using US Phone Giants’ Location Data, MIT Tech. Rev. (Feb. 7, 2019), https://www.technologyreview.com/2019/02/07/137550/bounty-hunters-tracked-people-secretly-using-us-phone-giants-location-data.

[16] See, e.g., Sony Corp. of Am. v. Universal City Studios, Inc., 464 U.S. 417, 439 (1984) (The Supreme Court imported the doctrine of “substantial noninfringing uses” into copyright law from patent law).

[17] A notable example is how the Patriot Act, written to combat terrorism, was ultimately used to take down a sitting governor in a prostitution scandal. See Noam Biale, Eliot Spitzer: From Steamroller to Steamrolled, ACLU, Oct. 29, 2007, https://www.aclu.org/news/national-security/eliot-spitzer-steamroller-steamrolled.

[18] RFC at 22437.

[19] Id. at 22433.

[20] Id. at 22436.

[21] Indeed, the RFC acknowledges that, even as some groups are developing techniques to evaluate AI systems for bias or disparate impact, “It should be recognized that for some features of trustworthy AI, consensus standards may be difficult or impossible to create.” RFC at 22437. Arguably, this problem is inherent to constructing an overaggregated regulator, particularly one that will be asked to consult a broad public on standards and rulemaking.

[22] Id. at 22439.

[23] Sony Corp. of Am. v. Universal City Studios, Inc., 464 U.S. at 417.

[24] Id.

[25] Id.

[26] Id. at 456.

[27] Id.

[28] See, e.g., Defendant Indicted for Camcording Films in Movie Theaters and for Distributing the Films on Computer Networks First Prosecution Under Newly-Enacted Family Entertainment Copyright Act, U.S. Dept of Justice (Aug. 4, 2005), available at https://www.justice.gov/archive/criminal/cybercrime/press-releases/2005/salisburyCharge.htm.

[29] 17 U.S.C. 106.

[30] See 17 U.S.C. 107; Campbell v. Acuff-Rose Music, Inc., 510 U.S. 569, 590 (1994) (“Since fair use is an affirmative defense, its proponent would have difficulty carrying the burden of demonstrating fair use without favorable evidence about relevant markets.”).

[31] See, e.g., N.Y. Penal Law § 265.01; Wash. Rev. Code Ann. § 9.41.250; Mass. Gen. Laws Ann. ch. 269, § 10(b).

[32] See, e.g., 18 U.S.C.A. § 922(g).

[33] Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts, COM/2021/206 final. The latest proposed text of the AI Act is available at https://www.europarl.europa.eu/doceo/document/A-9-2023-0188_EN.html.

[34] Id. at amendment 36 recital 14.

[35] Id.

[36] Id.

[37] See, e.g., Mikolaj Barczentewicz, supra note 2.

[38] Id.

[39] Foo Yun Chee, Martin Coulter & Supantha Mukherjee, EU Lawmakers’ Committees Agree Tougher Draft AI Rules, Reuters (May 11, 2023), https://www.reuters.com/technology/eu-lawmakers-committees-agree-tougher-draft-ai-rules-2023-05-11.

[40] See infra at notes 71-77 and accompanying text.

[41] Explainable AI: The Basics, supra note 2 at 8.

[42] See, e.g., Delos Prime, EU AI Act to Target US Open Source Software, Technomancers.ai (May 13, 2023), https://technomancers.ai/eu-ai-act-to-target-us-open-source-software.

[43] Id.

[44] To be clear, it is not certain how such an extraterritorial effect will be obtained, and this is just a proposed amendment to the law. Likely, there will need to be some form of jurisdictional hook, i.e., that this applies only to firms with an EU presence.

[45]  Eliezer Yudkowsky, Pausing AI Developments Isn’t Enough. We Need to Shut it All Down, Time (Mar. 29, 2023), https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough.

[46] See, e.g., Kiran Stacey, UK Should Play Leading Role on Global AI Guidelines, Sunak to Tell Biden, The Guardian (May 31, 2023), https://www.theguardian.com/technology/2023/may/31/uk-should-play-leading-role-in-developing-ai-global-guidelines-sunak-to-tell-biden.

[47] See, e.g., Matthew J. Neidell, Shinsuke Uchida & Marcella Veronesi, The Unintended Effects from Halting Nuclear Power Production: Evidence from Fukushima Daiichi Accident, NBER Working Paper 26395 (2022), https://www.nber.org/papers/w26395 (Japan abandoning nuclear energy in the wake of the Fukushima disaster led to decreased energy consumption, which in turn led to increased mortality).

[48] See, e.g., Will Knight, Some Glimpse AGI in ChatGPT. Others Call It a Mirage, Wired (Apr. 10, 2023), https://www.wired.com/story/chatgpt-agi-intelligence (“GPT-4, like its predecessors, had been fed massive amounts of text and code and trained to use the statistical patterns in that corpus to predict the words that should be generated in reply to a piece of text input.”)

[49] Joseph A. Schumpeter, Capitalism, Socialism And Democracy 74 (1976).

[50] See, e.g., Jerry Hausman, Valuation of New Goods Under Perfect and Imperfect Competition, in The Economics Of New Goods 209–67 (Bresnahan & Gordon eds., 1997).

[51] William D. Nordhaus, Schumpeterian Profits in the American Economy: Theory and Measurement, NBER Working Paper No. 10433 (Apr. 2004) at 1, http://www.nber.org/papers/w10433 (“We conclude that only a miniscule fraction of the social returns from technological advances over the 1948-2001 period was captured by producers, indicating that most of the benefits of technological change are passed on to consumers rather than captured by producers.”).

[52] See generally Oliver E. Williamson, Markets And Hierarchies, Analysis And Antitrust Implications: A Study In The Economics Of Internal Organization (1975).

[53] See, e.g., Nassim Nicholas Taleb, Antifragile: Things That Gain From Disorder (2012) (“In action, [via negativa] is a recipe for what to avoid, what not to do.”).

[54] Adam Thierer, Permissionless Innovation: The Continuing Case For Comprehensive Technological Freedom (2016).

[55] See, e.g., Artificial Intelligence Act, supra note 33, at amendment 112 recital 66.

[56] Explainable AI: The Basics, supra note 2 at 6.

[57] Cecilia Kang, OpenAI’s Sam Altman Urges A.I. Regulation in Senate Hearing, NY Times (May 16, 2023), https://www.nytimes.com/2023/05/16/technology/openai-altman-artificial-intelligence-regulation.html; see also Mike Solana & Nick Russo, Regulate Me, Daddy, Pirate Wires (May 23, 2023), https://www.piratewires.com/p/regulate-me-daddy.

[58] Cristiano Lima, Biden’s Former Tech Adviser on What Washington is Missing about AI, The Washington Post (May 30, 2023), https://www.washingtonpost.com/politics/2023/05/30/biden-former-tech-adviser-what-washington-is-missing-about-ai.

[59] Frank H. Easterbrook, supra note 5.

[60]  See Lima, supra note 58 (“I’m not in favor of an approach that would create heavy compliance costs for market entry and that would sort of regulate more abstract harms.”)

[61] Steven C. Salop & David T. Scheffman, Raising Rivals’ Costs, 73:2 Am. Econ. R. 267, 267–71 (1983), http://www.jstor.org/stable/1816853.

[62] Steven C. Salop & David T. Scheffman, Cost-Raising Strategies, 36:1 J. Indus. Econ. 19 (1987), https://doi.org/10.2307/2098594.

[63] United Mine Workers of Am. v. Pennington, 381 U.S. 657, 661 (1965).

[64] Oliver E. Williamson, Wage Rates as a Barrier to Entry: The Pennington Case in Perspective, 82:1 Q. J. Econ. 85 (1968), https://doi.org/10.2307/1882246.

[65] RFC at 22439.

[66] See, e.g., Lima, supra note 58 (“Licensing regimes are the death of competition in most places they operate”).

[67] Kang, supra note 57; Oversight of A.I.: Rules for Artificial Intelligence: Hearing Before the Subcomm. on Privacy, Technology, and the Law of the S. Comm. on the Judiciary, 118th Cong. (2023) (statement of Sam Altman, at 11), available at https://www.judiciary.senate.gov/download/2023-05-16-testimony-altman.

[68] RFC at 22437.

[69] See, e.g., Transcript: Senate Judiciary Subcommittee Hearing on Oversight of AI, Tech Policy Press (May 16, 2023), https://techpolicy.press/transcript-senate-judiciary-subcommittee-hearing-on-oversight-of-ai (“So what I’m trying to do is make sure that you just can’t go build a nuclear power plant. Hey Bob, what would you like to do today? Let’s go build a nuclear power plant. You have a nuclear regulatory commission that governs how you build a plant and is licensed.”)

[70] RFC at 22438.

[71] See, e.g., Raymond J. March, The FDA and the COVID-19: A Political Economy Perspective, 87(4) S. Econ. J. 1210, 1213-16 (2021), https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8012986 (discussing the political economy that drives incentives of bureaucratic agencies in the context of the FDA’s drug-approval process).

[72] RFC at 22434.

[73] Explainable AI: The Basics, supra, note 2 at 12.

[74] Id. at 20.

[75] RFC at 22439.

[76] Explainable AI: The Basics, supra note 2 at 22. (“Not only is the link between explanations and trust complex, but trust in a system may not always be a desirable outcome. There is a risk that, if a system produces convincing but misleading explanations, users might develop a false sense of confidence or understanding, mistakenly believing it is trustworthy as a result.”)

[77] Kate Conger, Hackers’ Fake Claims of Ukrainian Surrender Aren’t Fooling Anyone. So What’s Their Goal?, NY Times (Apr. 5, 2022), https://www.nytimes.com/2022/04/05/us/politics/ukraine-russia-hackers.html.

[78] Pranshu Verma, They Thought Loved Ones Were Calling for Help. It Was an AI Scam, The Washington Post (Mar. 5, 2023), https://www.washingtonpost.com/technology/2023/03/05/ai-voice-scam.

[79] Video: Deepfake Porn Booms in the Age of A.I., NBC News (Apr. 28, 2023), https://www.nbcnews.com/now/video/deepfake-porn-booms-in-the-age-of-a-i-171726917562.

[80] S5857B, NY State Senate (2018), https://www.nysenate.gov/legislation/bills/2017/s5857/amendment/b.

[81] See, e.g., Rejent v. Liberation Publications, Inc., 197 A.D.2d 240, 244–45 (1994); see also, Leser v. Penido, 62 A.D.3d 510, 510–11 (2009).

[82] See, e.g., Howell v. New York Post Co., 612 N.E.2d 699 (1993).

[83] See, e.g., Mandarin Trading Ltd. v. Wildenstein, 944 N.E.2d 1104 (2011); 15 U.S.C. §1125(a).

[84] 17 U.S.C. 106.

[85] RFC at 22440.

[86] Statement on AI Risk, Center for AI Safety, https://www.safe.ai/statement-on-ai-risk (last visited Jun. 7, 2023).

Continue reading
Innovation & the New Economy

ICLE Amicus Brief in Illumina & Grail v FTC

Amicus Brief IDENTITY AND INTEREST OF AMICUS CURIAE AND SOURCE OF AUTHORITY TO FILE BRIEF The International Center for Law & Economics (“ICLE”) is a nonprofit, nonpartisan, . . .

IDENTITY AND INTEREST OF AMICUS CURIAE AND SOURCE OF AUTHORITY TO FILE BRIEF

The International Center for Law & Economics (“ICLE”) is a nonprofit, nonpartisan, global research and policy center aimed at building the intellectual foundations for sensible, economically grounded policy. ICLE promotes the use of law and economics methodologies, and economic findings, to inform public policy, and has longstanding expertise in antitrust law.

Amici also include 28 scholars of antitrust, law, and economics at leading universities and research institutions across the United States. Their names, titles, and academic affiliations are listed in Appendix A. All amici have extensive expertise in antitrust law and economics, and several served in senior positions at the Federal Trade Commission or the Antitrust Division of the Department of Justice.

Amici have an interest in ensuring that courts and agencies correctly apply the standards for evaluating horizontal and vertical mergers, and take into account the benefits commonly associated with vertical mergers.        

Amici are authorized to file this brief by Fed. R. App. P. 29(a)(2) because all parties have consented to its filing.

RULE 29(a)(4)(E) STATEMENT

Amici hereby state that no party’s counsel authored this brief in whole or in part; that no party or party’s counsel contributed money that was intended to fund the preparation or submission of the brief; and that no person other than amicus or its counsel contributed money that was intended to fund the preparation or submission of the brief.

INTRODUCTION AND SUMMARY OF ARGUMENT

The FTC’s decision to require Illumina to divest Grail rests on at least two misguided premises. The first is that the same scrutiny applies to both horizontal and vertical mergers. The second is that benefits typically associated with vertical mergers do not apply here.

A horizontal merger combines firms that compete in the same relevant market, which necessarily reduces the number of firms engaged in head-to-head competition and may eliminate substitutes. That reduction inherently tends to increase prices, but the price effect may be trivial.  In addition, market responses (competitive repositioning or new entry) or other benefits of the merger (savings in transaction and other costs, enhanced investment incentives) may neutralize or offset the impetus to higher prices. But because those benefits are not automatic (and the reduction of direct competition is), they must be proven rather than assumed if the merger otherwise poses a significant risk of anticompetitive effects.

A vertical merger, in contrast, combines firms with an upstream-downstream (e.g., seller-buyer) relationship—that is, “firms or assets at different stages of the same supply chain.” Dep’t of Justice, Antitrust Division and FTC, Vertical Merger Guidelines 1 (2020). Examples include a manufacturer’s acquiring a distributor or a firm providing a manufacturing input.

The economic consequences of combining complements rather than substitutes are fundamentally different. Whereas the first-order effect of a horizontal merger is upward pricing pressure, the first-order effect of a vertical merger is downward pricing pressure. Vertical mergers typically entail the elimination of double marginalization (“EDM”), which is akin to downward pricing pressure (and often considered alongside efficiencies). David Reiffen & Michael Vita, Comment: Is There New Thinking on Vertical Mergers? 63 Antitrust L.J. 917, 920 (1995). Vertical integration also typically internalizes externalities in research and development, resulting in greater investment. Henry Ogden Armour & David J. Teece, Vertical Integration and Technological Innovation, 62 Rev. Econ. & Stat. 470 (1980). Like horizontal mergers, vertical mergers often confer other benefits such as operational and transactional efficiencies. Dennis W. Carlton, Transaction Costs and Competition Policy, 73 Int’l J. Indus. Org. 1 (2019); Oliver Williamson, The Economic Institutions of Capitalism 86 (1985).

Thus, while both types of mergers can create benefits from cost savings, their intrinsic effects move in opposite directions: higher prices and less investment with horizontal mergers, and lower prices and more investment with vertical mergers.

  1. The FTC’s conclusion that the same scrutiny applies to horizontal and vertical mergers, Opinion 75, conflicts with precedent (and long-standing economic research). Courts and economists alike recognize that vertical integration typically is procompetitive, and it is widely accepted that vertical mergers and horizontal mergers should be evaluated under different presumptions. As the leading antitrust treatise puts it, “[i]n the great majority of cases no anticompetitive consequences can be attached to [vertical integration], and injury to competition should never be inferred from the mere fact of vertical integration.” 3B Phillip Areeda & Herbert Hovenkamp, Antitrust Law ¶ 755a (4th ed. 2017). That vertical mergers can be anticompetitive—under specific facts and circumstances—does not establish that vertical integration is likely to be anticompetitive (it is not) or that there is no useful antitrust distinction between vertical and horizontal mergers (there is).

The Commission did not simply presume that this vertical merger would be anticompetitive, however. It also discounted both the likelihood of efficiencies in vertical mergers and specific evidence of efficiencies associated with the already-consummated merger. As a result, the Commission did not properly assess the likely competitive effects of the merger.

  2. The Commission also disregarded evidence of a current and operative constraint on any potential anticompetitive effects of the merger. Illumina’s Open Offer appears to be contractually binding, and addresses the risk of foreclosure that is the primary competitive concern here. Proper consideration of the Open Offer should have shifted the Commission further away from presuming harm. Instead, the Commission gave it no weight.
  3. The existing standards for vertical merger scrutiny are informed by, and consistent with, economic research regarding vertical mergers and other forms of vertical integration. That research shows that the Commission was wrong to hold that vertical and horizontal mergers should be analyzed identically, and wrong to disregard the well-established benefits of vertical integration. While economic theory indicates that vertical mergers can be anticompetitive, the weight of the empirical evidence overwhelmingly indicates that they tend to be procompetitive or competitively neutral. Indeed, the large majority of vertical mergers that have been studied have been found to be procompetitive or benign. That suggests that case-specific evidence is paramount in assessing both potential anticompetitive effects and countervailing pro-consumer efficiencies.

ARGUMENT

I.    Vertical and Horizontal Mergers Should Be Scrutinized Differently.

A.  Prima Facie Standards and the Government’s “Ultimate Burden” Differ in Horizontal and Vertical Merger Cases.

Courts have long recognized that horizontal and vertical mergers are categorically different. “As horizontal agreements are generally more suspect than vertical agreements,” courts are “cautious about importing relaxed standards of proof into vertical agreement cases.” Republic Tobacco v. North Atlantic Trading Co., 381 F.3d 717, 737 (7th Cir. 2004). Thus, for vertical mergers, “unlike horizontal mergers, the government cannot use a shortcut to establish a presumption of anticompetitive effect.…” United States v. AT&T, Inc., 916 F.3d 1029, 1032 (D.C. Cir. 2019) (“AT&T II”). In a vertical merger case, “the government must make a fact-specific showing that the proposed merger is likely to be anticompetitive,” and the “ultimate burden of persuasion… remains with the government at all times.” Id. (emphasis added; cleaned up).

As the ALJ’s Initial Decision (ID) recognized (at 132), the burden-shifting approach is not bound by any specific, sequential form. Then-Judge Thomas stressed in United States v. Baker Hughes, 908 F.2d 981, 984 (D.C. Cir. 1990), that “[t]he Supreme Court has adopted a totality-of-the-circumstances approach…, weighing a variety of factors to determine the effects of particular transactions on competition.” As the ALJ aptly put it, the Baker Hughes “‘burden-shifting language’” provides “‘a flexible framework rather than an air-tight rule’”; “in practice, evidence is often considered all at once and the burdens are often analyzed together.” ID 132 (quoting Chicago Bridge & Iron Co. v. FTC, 534 F.3d 410, 424-24 (5th Cir. 2008)).

The differential treatment of vertical and horizontal mergers parallels the Supreme Court’s vertical restraints jurisprudence. The potential anticompetitive effects of vertical restraints are similar to those posed by vertical mergers, as both obtain between firms at different levels of the supply chain.

Over time, the Supreme Court has eliminated per se condemnation for vertical restraints. In 1977, the Court rejected per se illegality for vertical non-price restraints, Cont’l T.V., Inc. v. GTE Sylvania Inc., 433 U.S. 36, 49, 52 n.19, 58 (1977) (overruling United States v. Arnold, Schwinn & Co., 388 U.S. 365 (1967)), later confirming that “a vertical restraint is not illegal per se unless it includes some agreement on price or price levels.” Bus. Elecs. Corp. v. Sharp Elecs. Corp., 485 U.S. 717, 735-36 (1988). Eventually, the Court repudiated the last vertical per se prohibition—of vertical minimum price restraints. Leegin Creative Leather Prods., Inc. v. PSKS, Inc., 551 U.S. 877 (2007) (overruling Dr. Miles Med. Co. v. John D. Park & Sons Co., 220 U.S. 373 (1911)). In these decisions, the Court emphasized that any departure from the evidence-specific rule of reason “must be based on demonstrable economic effect, rather than . . . upon formalistic line drawing.” Bus. Elecs., 485 U.S. at 724.

These decisions reflect a nearly categorical repudiation of presumptions of illegality in dealings involving entities at different levels of the supply chain. Here, however, the Commission took the opposite approach, presuming anticompetitive effect while rejecting the significance of rigorously established benefits in a way that approaches a per se standard. This Court should reject that departure from sound law and economics.

B.  The FTC Did Not Undertake the Necessary Fact-Specific Examination of the Merged Firm’s Incentives Given the Merger’s Efficiencies.

The FTC had to show that Illumina has a greater incentive to foreclose rivals following—and because of—the merger. Instead, the Commission adopted a standard of review that elides the requirement that, in a vertical merger case where there is “no presumption of harm in play,” “the government must make a fact-specific showing that the proposed merger is likely to be anticompetitive” both at the prima facie stage and in the final analysis. United States v. AT&T Inc., 310 F. Supp. 3d 161, 192 (D.D.C. 2018) (“AT&T I”), aff’d, 916 F.3d 1029 (D.C. Cir. 2019); see AT&T II, 916 F.3d at 1032.

To be sure, there is little recent case law regarding the standard of review for vertical mergers because the federal antitrust agencies have rarely challenged, let alone litigated, vertical acquisitions. The Department of Justice challenge to the AT&T/Time Warner merger marked “the first time in 40 years that a court has heard a fully-litigated challenge to a vertical merger.” Joshua D. Wright & Jan M. Rybnicek, US v. AT&T Time Warner: A Triumph of Economic Analysis, 6 J. Antitrust Enforcement 3 (2018).

Nevertheless, up to now, the agencies have considered likely structural benefits, transactional efficiencies, and potential remedies, along with potential harms, in toto and on net, in assessing a merger’s likely competitive impact. Hence, the Vertical Merger Guidelines—jointly adopted by the FTC and the Antitrust Division of the Department of Justice in June 2020 (although withdrawn by the FTC while this case was pending)—state that “[t]he Agencies do not challenge a merger if cognizable efficiencies are of a character and magnitude such that the merger is unlikely to be anticompetitive in any relevant market.” Vertical Merger Guidelines at 11. Even under the Horizontal Merger Guidelines, “[t]he Agency will not challenge a merger if cognizable efficiencies are of a character and magnitude such that the merger is not likely to be anticompetitive.” Dep’t of Justice, Antitrust Division & FTC, Horizontal Merger Guidelines § 10 (Aug. 19, 2010).

But here the Commission did not seriously account for likely efficiencies or other benefits that may be derived from practices inconsistent with foreclosure. Put simply, Illumina’s ability to profit from the merger without foreclosing rivals reduces its incentive to foreclose. Although the foreclosure incentive may remain on some margins, the question of the greater incentive cannot be resolved without assessing the incentives against foreclosure as well as those for it.

Illumina’s post-merger incentive to foreclose rivals may be constrained by:

  • its interest in revenue realized from a broader array of sequencing clients than the relatively few engaged in multi-cancer early detection (MCED) research;
  • the procompetitive—and pro-consumer—cost advantages it is likely to realize from integration with Grail;
  • the relatively low risk of entry by close substitutes for Galleri in the near, or even foreseeable, future;
  • the Open Offer;
  • reputational or transactional harms that may result from refusing to deal with firms in its industry; and
  • the litigation and regulatory risks attending attempted foreclosure.

But the FTC presumed away these and other factors that could mitigate the risk of harm.

1.   Presumptions Suitable to Horizontal Mergers Are Not Fit for the Analysis of Vertical Mergers

The Commission maintains that the same scrutiny applies to efficiencies claims in vertical and horizontal transactions. To justify this conclusion, the Commission declined to “simply take managers’ word for efficiencies without independent verification, because then the efficiency defense ‘might well swallow the whole of Section 7,’ as managers could present large unsubstantiated efficiencies claims and courts would be hard pressed to find otherwise.” Opinion 75-76 (quoting United States v. H&R Block, Inc., 833 F. Supp. 2d 36, 91 (D.D.C. 2011)). But this is a non sequitur, and wrong for three reasons.

First, the Commission need not (and the ID did not) “simply take managers’ word for efficiencies.” As the Initial Decision noted, courts and academic authorities both recognize procompetitive effects, including efficiencies, generally observed with vertical integration. ID 133-35, 196. See also ID 135 (noting case-specific evidence regarding research and development efficiencies, EDM, and the acceleration of access).

Second, to support its rejection of competitive benefits, the FTC continued to conflate the legal standards for horizontal and vertical mergers, relying only on serial string citations to horizontal merger cases. Opinion 75-76. Because a vertical merger puts direct, downward pressure on prices and upward pressure on complementary investments—the inverse of horizontal merger effects—the reliance on horizontal cases highlights the Commission’s failure to recognize the fundamental difference between horizontal and vertical integration. See AT&T II, 916 F.3d at 1032.

Third, ignoring that distinction, and the resulting need for a “fact-specific showing” of the likely anticompetitive effects of a vertical merger, id. at 1032, the Commission repeatedly relied upon H&R Block, which says nothing about standards of review for vertical mergers. H&R Block involved a horizontal merger that allegedly would have produced “an effective duopoly.” 833 F. Supp. 2d at 44.

The FTC’s Opinion also cites a horizontal merger decision, FTC v. H.J. Heinz Co., 246 F.3d 708, 713 (D.C. Cir. 2001), and AT&T II—a vertical merger case—for the proposition that the Baker Hughes framework applies to both horizontal and vertical mergers. That is misleading. Again, AT&T emphasizes that the distinction between horizontal and vertical mergers precludes similar presumptions of anticompetitive effects, and makes it easier to establish certain recognized efficiency and other benefits of vertical integration.  See AT&T II, 916 F.3d at 1032; Vertical Merger Guidelines at 5. Not incidentally, the Government lost its merger challenge in AT&T, both at trial and on appeal.

2.   The Commission Failed to Give Due Consideration to Evident Benefits

The Commission also discounted—or ignored—various efficiencies and other benefits on the ground that “efficiencies are ‘inherently difficult to verify and quantify.’” Opinion 75 (citing H&R Block, 833 F. Supp. 2d at 89). To justify this approach, the Commission cites five horizontal merger matters: H&R Block; H.J. Heinz; Otto Bock HealthCare N. Am., Inc., 168 F.T.C. 324 (2019); FTC v. Wilh. Wilhelmsen Holding ASA, 341 F. Supp. 3d 27 (D.D.C. 2018); and FTC v. Penn State Hershey Medical Center, 838 F.3d 327 (3d Cir. 2016).

Although some claimed efficiencies from horizontal mergers can be hard to verify, many efficiencies from vertical mergers are inherent. Specifically, if upstream and downstream margins are positive, basic economic theory predicts that the merger will mitigate double marginalization. Empirical research confirms this. See, e.g., Gregory S. Crawford, et al., The Welfare Effects of Vertical Integration in Multichannel Television Markets, 86 Econometrica 891 (2018). Similarly, when vertically related firms make complementary investments, theory predicts—and empirical research confirms—that vertical mergers will internalize investment spillovers in a way that tends to expand investment. See, e.g., Chenyu Yang, Vertical Structure and Innovation: A Study of the SoC and Smartphone Industries, 51 Rand J. Econ. 739 (2020). Meanwhile, operational and transactional efficiencies can be supported by both theoretical and empirical evidence, as well as case-specific evidence about the merging firms.

Here, the ALJ’s findings of fact detail ongoing innovation by Illumina, including improvements to its next generation sequencing (NGS) technologies ranging from the release of new reagents to software updates expected to result from the merger. ID 88-89. The Initial Decision also describes a complex process of integration between Illumina’s NGS technology and the requirements of different MCED testing programs. ID 89-91.

Given the Commission’s disregard of efficiencies, it is unclear when or how procompetitive benefits could ever offset the harm alleged to result if the consummated merger were left undisturbed—harm that the Commission did not quantify in either magnitude or likelihood.

3.   The Commission’s Speculative Prima Facie Case Fails to Account for the Likely Risk of Actual Harm

The efficiencies and competitive benefits here seem substantially easier to verify and quantify than the magnitude or likelihood of the supposed harm that the Commission neither quantified nor estimated. The Commission did not seriously try to quantify the effects of the merger on the timing and competitive significance of entry of complex clinical products, such as MCED tests, in early stages of development. Rather, the Commission simply asserted that “likely substantial harms to current, ongoing innovation competition in nascent markets are sufficiently probable and imminent to violate Section 7” of the Clayton Act, Opinion 60-61 (cleaned up). But the Commission identified no evidence to support this assertion, or to refute the ALJ’s determinations that MCED tests in development were not poised to enter into competition with Grail’s Galleri test, ID 143-144, that most of the research on possible MCED tests was relatively preliminary, ID 144-145, and that most of the tests being investigated appeared to be far from close substitutes for Galleri. ID 145-153; see also ID 27-28, 44-61.

Instead, the Commission disputed the legal relevance of those findings, stating that its analysis “rests on harm to current, ongoing R&D efforts, rather than the precise timing or nature of any firm’s commercialization of an MCED test.” Opinion 56 n. 38. But that harm, too, is assumed rather than observed, and is neither verified nor quantified.

Thus, the Commission’s prima facie case rests both on a peremptory dismissal of competitive benefits and efficiencies and an uncritical acceptance of speculative theories of harm. Pre-merger, Illumina maintained a substantial ownership interest in Grail of no less than 12%, ID 7-11, yet the Commission did not identify any attempts by Illumina or Grail to interfere with research and development of any MCED test that might enter to compete with Galleri. The only head-to-head R&D competition noted was between Grail and one firm with a pipeline MCED test (Exact/Thrive), on two dimensions: first, various “prelaunch” activities, such as “competing for mindshare with physicians, with health systems, with payers,” ID 34; second, competition for research scientists capable of contributing to the development of MCED tests, id. But there was neither allegation nor evidence that Illumina or Grail engaged in anticompetitive conduct in these areas, and no obvious way in which Illumina could exploit whatever market power it enjoys in NGS markets to foreclose access to “mindshare” or research scientists.

Given no past, present, or ongoing harm to third-party R&D efforts, there is no basis to ignore the likelihood of entry into the MCED test product market, the likely timing of entry, or the likely competitive significance of entry by particular MCED tests that might be relatively close or poor substitutes for Galleri.

Each of those factors is directly relevant to the present risk of potential harm to future competition. They determine whatever risk ongoing R&D into MCED tests would pose to Grail, and hence affect the merged firm’s foreclosure incentives. Equally relevant is the risk to Illumina’s core income stream from NGS sales and services should it prove unreliable or capricious in fulfilling its contracts. That core business includes diverse clinical testing well beyond the potential rivals at issue, ID 92-93, with clients including “leading genomic research centers, academic institutions, government laboratories and hospitals, as well as pharmaceutical, biotechnology, commercial molecular diagnostic laboratories, and consumer genomics companies.” ID 6.

4.   Evidence of Likely Procompetitive Effects Should Not Be Ignored at Any Stage of Analysis

Finally, the Commission contends that “[c]ourts have never held that efficiencies alone immunized an otherwise unlawful transaction.” Opinion 75. That puts the cart before the horse, as benefits from aligning incentives between producers of complements (what the Commission terms “efficiencies”) often determine whether a transaction—especially a vertical transaction—is “unlawful” in the first place.

Most important, the courts have never held that these benefits are irrelevant generally (as the FTC would have it), or to the question whether a transaction is unlawful in the first instance. To the contrary, analysis of a vertical merger must account for the procompetitive benefits and efficiencies it is likely to achieve. See AT&T I, 310 F. Supp. 3d at 198 (noting need “to ‘balance’ whether the Government’s asserted harms outweigh the merger’s conceded consumer benefits.”). Even in horizontal mergers, sufficiently large efficiency benefits may prevent a merger from being illegal. New York v. Deutsche Telekom AG, 439 F. Supp. 3d 179, 207 (S.D.N.Y. 2020).

Because AT&T II made clear that no presumption of illegality applies to vertical mergers, the Commission properly faces a rigorous burden to prove on case-specific evidence that the proposed merger is likely to cause substantial, actual harm to competition and consumers—not a possibility of some degree of harm to competition that in theory could harm consumers.  The Commission has not carried that burden.

II.          The Open Offer Undercuts the Commission’s Prima Facie Case and Its Disregard of Potential Remedies.

The Commission’s legal error went beyond its application of a misplaced presumption of illegality that is impervious to evidence of the benefits from combining complements. The Commission also failed to recognize key structural differences between horizontal and vertical mergers.

The primary source of potential anticompetitive harm from vertical integration is foreclosure. While foreclosure is not consistently defined, one passable definition is:

[A] dominant firm’s denial of proper access to an essential good it produces, with the intent of extending monopoly power from that segment of the market (the bottleneck segment) to an adjacent segment (the potentially competitive segment).

Patrick Rey & Jean Tirole, A Primer on Foreclosure, in 3 Handbook of Industrial Organization 2145, 2148 (Mark Armstrong & Robert H. Porter, eds.) (2007). Because denial of access is a crucial aspect of foreclosure, agreements (or remedies) granting access to essential goods or services can mitigate the risk of foreclosure.

Illumina’s “Open Offer” appears to grant such access, yet the Commission failed to give proper weight to its effect on the risk of anticompetitive conduct. In contrast, the ALJ examined the Open Offer in detail, see, e.g., ID 98-125, 182-189, finding that it “provides a comprehensive set of protections for Illumina’s customers for all aspects of conduct and competition.” ID 120. The Commission rejected those findings, relying in part on a mischaracterization of the Open Offer as only a proposed remedy, and in part on an overbroad repudiation of behavioral remedies.

First, the record indicates that the Open Offer is binding under New York law, at least with respect to several firms engaged in MCED research, and that it will remain so through August 2033. ID 103-04. Firms that have accepted (or will accept) the Open Offer can enforce it whether or not the merger is blocked; and they would have every incentive to do so if Illumina interfered with their R&D efforts. That is not just a proposed remedy, but a fully operative constraint.  If accepted, the Open Offer will become part of the institutional framework within which Illumina operates, further reducing or eliminating the firm’s incentives and ability to raise its rivals’ costs. ID 103-04, 179. Cf. United States v. General Dynamics Corp., 415 U.S. 486, 501-02 (1974) (noting importance of existing contracts in assessing competitive landscape).

That constraint seems especially significant given how few firms might someday enter to compete with Grail’s MCED test, and the difficulty inherent in trying to forecast R&D competition so far in advance.

Second, the Commission strains credulity in disregarding the Open Offer on the grounds that behavioral remedies can be hard to monitor and tend to be disfavored. If the Open Offer were incorporated into a consent order, the FTC would have to monitor only a very few agreements. The affected parties would assist in monitoring compliance, well-funded would-be entrants would have every incentive to report any difficulty gaining access to Illumina’s sequencing technology, and the FTC could modify the order as needed. Illumina, for its part, would face both the risk of damages imposed under state law and the risk of statutory penalties, among other remedies, for violations of FTC consent orders.

Under the flexible Baker Hughes approach, the Commission should have accorded substantial weight to the Open Offer in assessing whether the Illumina-Grail transaction is truly likely to cause harm. This behavioral remedy is neither cumbersome nor ineffective. Given the Open Offer, the Commission does not appear to have established that harm to R&D competition is likely or imminent.

III.       The Economics of Vertical Integration Support the Differential Treatment of Vertical and Horizontal Mergers.

Economic theory and empirical research confirm that the Commission was wrong to conclude that vertical and horizontal mergers should be analyzed identically. Horizontal mergers, by definition, remove a competitor from a relevant market; vertical mergers do not. As the economics literature makes clear, that structural distinction is central to antitrust analysis.

A.           In Theory, The Competitive Implications of Vertical Mergers Are Ambiguous.

The Supreme Court’s modern vertical restraints decisions underscore the importance of developments in the economic literature for assessing how to evaluate any type of integration under the antitrust laws. The Court removed per se prohibitions on vertical restraints in part because “economics literature is replete with procompetitive justifications for” them. Leegin, 551 U.S. at 889.

The economics literature is equally “replete with procompetitive justifications” for vertical integration. Vertical integration typically confers benefits, such as eliminating double marginalization, Reiffen & Vita, supra, 63 Antitrust L.J. 917; increasing R&D investment, Armour & Teece, supra, 62 Rev. Econ. & Stat. 470; and creating operational and transactional efficiencies, Carlton, supra, 73 Int’l J. Indus. Org. 1.

The logic behind EDM is simple: Vertical mergers can increase welfare, even if the upstream or downstream firm has market power. When firms mark up their products over their marginal cost of production, that reduces output and increases the (input or distribution) costs of their (downstream or upstream) rivals. In other words, independent upstream and downstream firms can exert negative externalities on each other that ultimately push prices upward. When firms have no incentive to consider the effect of their price (and output) determinations on downstream firms’ profits, see, e.g., Michael A. Salinger, Vertical Mergers and Market Foreclosure, 103 Q.J. Econ. 345 (1988), there is an additional markup over the downstream firm’s marginal cost of production, or “double marginalization.” Vertical mergers enable firms to coordinate their pricing behavior, eliminating this externality without the negative effects that coordination would entail in horizontal merger cases. See Reiffen & Vita, supra, 63 Antitrust L.J. at 920.

In a vertical merger, EDM is likely automatic. Id. That is “precisely opposite of the outcome that arises under the frequently used Cournot oligopoly model of horizontal competition with substitute products. Under Cournot oligopoly, joint pricing raises price; under Cournot complements [as in a vertical merger], it lowers price.” Daniel O’Brien, The Antitrust Treatment of Vertical Restraints: Beyond the Possibility Theorems, in Konkurrensverket, Swedish Competition Authority, Report: The Pros and Cons of Vertical Restraints 22, 36 (2008).[1]
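
The standard textbook algebra makes the point concrete. What follows is a minimal illustrative sketch of our own (not drawn from the brief or the cited sources), assuming a linear inverse demand P = a − bQ, a constant upstream marginal cost c (with a > c), and no other downstream costs:

\[
\begin{aligned}
\text{Integrated firm: } & \max_{Q}\,(a - bQ - c)\,Q \;\Rightarrow\; Q^{I} = \frac{a-c}{2b},\quad P^{I} = \frac{a+c}{2}.\\
\text{Separate firms: } & \text{the downstream firm treats the wholesale price } w \text{ as its cost, so } P(w) = \frac{a+w}{2};\\
& \text{the upstream firm then solves } \max_{w}\,(w - c)\,\frac{a-w}{2b} \;\Rightarrow\; w^{*} = \frac{a+c}{2},\\
& \text{yielding } P^{D} = \frac{3a+c}{4} > P^{I} \text{ and } Q^{D} = \frac{a-c}{4b} = \tfrac{1}{2}\,Q^{I}.
\end{aligned}
\]

In this sketch, the two successive markups raise the retail price and halve output relative to the integrated outcome; merging the firms removes the second markup, which is the downward pricing pressure (EDM) described above.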

To be clear, vertical mergers are not necessarily procompetitive. An integrated firm may have an incentive to exclude rivals, see Steven C. Salop & David T. Scheffman, Cost-Raising Strategies, 36 J. Indus. Econ. 19 (1985), and a vertical merger can have an anticompetitive effect if the upstream firm has market power and the ability, post-acquisition, to foreclose its competitors’ access to a key input. See Janusz A. Ordover, Garth Saloner & Steven C. Salop, Equilibrium Vertical Foreclosure, 80 Am. Econ. Rev. 127 (1990). In that regard, raising rivals’ costs can “represent[] a credible theory of economic harm” if other conditions of exclusionary conduct are met. Malcolm B. Coate & Andrew N. Kleit, Exclusion, Collusion, and Confusion: The Limits of Raising Rivals’ Costs, FTC Bureau of Economics Working Paper No. 179 (1990). But this is merely a possibility, not a likely conclusion without solid empirical evidence: “The circumstances… in which [raising rivals’ costs] can occur are usually so limited that [it] almost always represents a minimal threat to competition.” Id. at 3.

The implications of vertical mergers are thus theoretically ambiguous, not typically anticompetitive. But while the Commission now seeks to equate horizontal and vertical mergers,

[a] major difficulty in relying principally on theory to guide vertical enforcement policy is that the conditions necessary for vertical restraints to harm welfare generally are the same conditions under which the practices increase consumer welfare.

James C. Cooper, et al., Vertical Antitrust Policy as a Problem of Inference, 23 Int’l. J. Indus. Org. 639, 643 (2005).

This structural ambiguity weighs against any presumption against vertical mergers, and suggests the importance of empirical research in formulating standards to evaluate vertical transactions.

B.           Empirical Research Establishes that Vertical Mergers Tend to Be Procompetitive In Practice.

Empirical evidence supports the established legal distinctions between horizontal mergers and vertical mergers (as well as other forms of vertical integration), indicating that vertical integration tends to be procompetitive or benign.

A meta-analysis of more than seventy studies of vertical transactions grouped those studies according to the theories or models of vertical integration they tested and the effects of vertical integration they found. From that analysis

a fairly clear empirical picture emerges. The data appear to be telling us that efficiency considerations overwhelm anticompetitive motives in most contexts. Furthermore, even when we limit attention to natural monopolies or tight oligopolies, the evidence of anticompetitive harm is not strong.

Francine Lafontaine & Margaret Slade, Vertical Integration and Firm Boundaries: The Evidence, 45 J. Econ. Lit. 629, 677 (2007).

On the contrary, “under most circumstances, profit-maximizing vertical integration decisions are efficient, not just from the firms’ but also from the consumers’ points of view.” Id. And “[a]lthough there are isolated studies that contradict this claim, the vast majority support it….” Id. Lafontaine and Slade accordingly concluded that “faced with a vertical arrangement, the burden of evidence should be placed on competition authorities to demonstrate that that arrangement is harmful before the practice is attacked.” Id.

Another study of vertical restraints finds that, “[e]mpirically, vertical restraints appear to reduce price and/or increase output. Thus, absent a good natural experiment to evaluate a particular restraint’s effect, an optimal policy places a heavy burden on plaintiffs to show that a restraint is anticompetitive.” Cooper, et al., supra, 23 Int’l J. Indus. Org. at 639.

Subsequent research has reinforced these findings. Reviewing the more recent literature from 2009-18, John Yun concluded “the weight of the empirical evidence continues to support the proposition that vertical mergers are less likely to generate competitive concerns than horizontal ones.” John M. Yun, Vertical Mergers and Integration in Digital Markets, in The GAI Report on the Digital Economy (Joshua D. Wright & Douglas H. Ginsburg, eds., 2020) at 245.

Leading contributors to the empirical literature, reviewing both new studies and critiques of the established view of vertical mergers, maintain a consistent view. For example, testifying at a 2018 FTC hearing, Francine Lafontaine, a former Director of the FTC’s Bureau of Economics, acknowledged that some of the early empirical evidence is less than ideal, in terms of data and methods, but reinforced the overall conclusions of her earlier research “that the empirical literature reveals consistent evidence of efficiencies associated with the use of vertical restraints (when chosen by market participants) and, similarly, with vertical integration decisions.” Francine Lafontaine, Vertical Mergers (Presentation Slides), in FTC, Competition and Consumer Protection in the 21st Century; FTC Hearing #5: Vertical Merger Analysis and the Role of the Consumer Welfare Standard in U.S. Antitrust Law, Presentation Slides 93 (Nov. 1, 2018) (“FTC Hearing #5”), available at https://www.ftc.gov/system/files/documents/public_events/1415284/ftc_hearings_5_georgetown_slides.pdf. See also Francine Lafontaine & Margaret E. Slade, Presumptions in Vertical Mergers: The Role of Evidence, 59 Rev. Indus. Org. 255 (2021).

In short, empirical research confirms that the law properly does not presume that vertical mergers have anticompetitive effects, but requires specific evidence of both harms and efficiencies.

C.           New Research Does Not Undermine the Prevailing View of Vertical Mergers.

Critics of prevailing legal standards and agency practice have pointed to a few studies that might cast doubt on the ubiquity of benefits associated with vertical mergers. We briefly review several of those studies, including those discussed at the FTC’s 2018 “Competition and Consumer Protection in the 21st Century” hearings that purported to suggest that the “econometric evidence does not support a stronger procompetitive presumption [for vertical mergers].” Steven C. Salop, Revising the Vertical Merger Guidelines (Presentation Slides), in FTC Hearing #5, supra, Presentation Slides 25. In fact, these studies do not undermine the longstanding economic literature. See Geoffrey A. Manne, Kristian Stout & Eric Fruits, The Fatal Economic Flaws of the Contemporary Campaign Against Vertical Integration, 69 Kansas L. Rev. 923 (2020). “[T]he newer literature is no different than the old in finding widely procompetitive results overall, intermixed with relatively few seemingly harmful results.” Id. at 951.

One oft-cited study examined Coca-Cola and PepsiCo’s acquisitions of some of their downstream bottlers. Fernando Luco & Guillermo Marshall, The Competitive Impact of Vertical Integration by Multiproduct Firms, 110 Am. Econ. Rev. 2041 (2020). The authors presented their results as finding that “vertical integration in the US carbonated-beverage industry caused anticompetitive price increases in products for which double margins were not eliminated.” Id. at 2062. But the authors actually found that, while such acquisitions were associated with price increases for independent Dr Pepper Snapple Group products, they were associated with price decreases for both Coca-Cola and PepsiCo products bottled by vertically integrated bottlers. Because the products associated with increased prices accounted for such a small market share, “vertical integration did not have a significant effect on the price index when considering the full set of products.” Id. at 2056. Overall, the consumer impact was either an efficiency gain or no significant change. As Francine Lafontaine characterized the study, “in total, consumers were better off given who was consuming how much of what.” FTC Hearing #5, supra, Transcript 88 (statement of Francine Lafontaine), available at https://www.ftc.gov/system/files/documents/public_events/1415284/ftc_hearings_session_5_transcript_11-1-18.pdf.

In another study often cited by skeptics of vertical integration, Justine Hastings and Richard Gilbert examined wholesale price changes charged by a vertically integrated refiner/retailer using data from 1996-98. Justine S. Hastings & Richard J. Gilbert, Market Power, Vertical Integration, and the Wholesale Price of Gasoline, 53 J. Indus. Econ. 469 (2005). They observed that the firm charged higher wholesale prices in cities where its retail outlets competed more with independent gas stations, and concluded that their observations were consistent with the theory of raising rivals’ costs. Id. at 471.

In subsequent research, however, three FTC economists publishing in the American Economic Review examined retail gasoline prices following the 1997 acquisition of an independent gasoline retailer by a vertically integrated refiner/retailer. Their estimates suggested that the merger was associated with minuscule—and economically insignificant—price increases. Christopher T. Taylor, et al., Vertical Relationships and Competition in Retail Gasoline Markets: Empirical Evidence from Contract Changes in Southern California: Comment, 100 Am. Econ. Rev. 1269 (2010).

Hastings explains the discrepancy with Taylor et al. by noting the challenges of evaluating vertical mergers with incomplete data or, simply, different data sets, as seemingly similar data can yield very different results. Justine Hastings, Vertical Relationships and Competition in Retail Gasoline Markets: Empirical Evidence from Contract Changes in Southern California: Reply, 100 Am. Econ. Rev. 1227 (2010). But that observation does not undercut Taylor et al.’s findings. Rather, it suggests caution in drawing general conclusions from this line of research, even with regard to gasoline/refiner integration, much less to vertical integration generally.

Other commonly cited studies are no more persuasive. For example, one study examined vertical mergers between cable-programming distributors and regional sports networks using counterfactual simulations that enforced program access rules. Crawford, et al., supra, 86 Econometrica 891. While some have characterized their findings as “mixed” (FTC Hearing #5, supra, Transcript 54 (statement of Margaret Slade))—suggesting that vertical integration could have some negative as well as positive effects—their overall results indicated “that vertical integration leads to significant gains in both consumer and aggregate welfare.” Crawford, et al., supra, 86 Econometrica at 893-894.

Harvard economist Robin Lee, a co-author of the study, concluded that the findings demonstrated that the consumer benefits of efficiency gains outweighed any harms from foreclosure. As he testified at the FTC’s 2018 hearings,

our key findings are that, on average, across channels and simulations, there is a net consumer welfare gain from integration. Don’t get me wrong, there are significant foreclosure effects, and rival distributors are harmed, but these negative effects are oftentimes offset by sizeable efficiency gains. Of course, this is an average. It masks considerable heterogeneity.

FTC, Competition and Consumer Protection in the 21st Century: FTC Hearing #3: Multi-Sided Platforms, Labor Markets, and Potential Competition, Transcript 101 (Oct. 17, 2018), available at https://www.ftc.gov/system/files/documents/public_events/1413712/ftc_hearings_session_3_transcript_day_3_10-17-18_0.pdf.

While these studies indicate that vertical mergers can sometimes lead to harm, that point was never disputed. What is important is that the studies do not support any general presumption against vertical mergers or, indeed, any revision to either the legal distinction between horizontal and vertical mergers or to what was, up to now, established agency practice in merger review. The weight of the empirical evidence plainly indicates that vertical integration tends to be procompetitive; hence, no presumption of anticompetitive effects or of illegality should apply, and none should have been applied here.

CONCLUSION

There is much at stake here. The potential for harm from the merger seems speculative, but the benefits seem conspicuous and substantial, not only reducing the risk of net competitive harm but promising significant enhancement to consumer welfare. As the Commission observed, “better screening methods to detect more cancers at an earlier stage … have the potential to extend and improve many human lives.” Opinion 3. Those benefits should not be forestalled by speculation about possible harms that ignores the differences between vertical and horizontal mergers.

The FTC’s decision should be reversed.

[1] Many discussions of the competitive effects of vertical mergers, including the Vertical Merger Guidelines, conflate EDM, investment benefits, and transactional efficiencies.

Continue reading
Antitrust & Consumer Protection

How Much Information Do Markets Require?

TOTM One of the biggest names in economics, Daron Acemoglu, recently joined the mess that is Twitter. He wasted no time in throwing out big ideas for . . .

One of the biggest names in economics, Daron Acemoglu, recently joined the mess that is Twitter. He wasted no time in throwing out big ideas for discussion and immediately getting tons of, let us say, spirited replies.

One of Acemoglu’s threads involved a discussion of F.A. Hayek’s famous essay “The Use of Knowledge in Society,” wherein Hayek questions central planners’ ability to acquire and utilize the knowledge that is dispersed throughout society. Echoing many other commentators, Acemoglu asks: can supercomputers and artificial intelligence get around Hayek’s concerns?

Read the full piece here.

Continue reading
Antitrust & Consumer Protection

Brian Albrecht on Robert Lucas and Armen Alchian

Presentations & Interviews ICLE Chief Economist Brian Albrecht joined the Human Action Podcast to discuss the work of economists Armen Alchian and Robert Lucas. Video of the full . . .

ICLE Chief Economist Brian Albrecht joined the Human Action Podcast to discuss the work of economists Armen Alchian and Robert Lucas. Video of the full conversation is embedded below.

Continue reading
Antitrust & Consumer Protection

Network Effects and Interoperability

TL;DR Background: The European Union’s Digital Markets Act (DMA), which went into effect in November 2022,  requires online platforms deemed to be “gatekeepers” to make their . . .

Background: The European Union’s Digital Markets Act (DMA), which went into effect in November 2022,  requires online platforms deemed to be “gatekeepers” to make their services interoperable. Interoperability refers to the ability of different systems, devices, or applications to communicate and exchange information. Importantly, the DMA envisions horizontal interoperability for messaging services, as well as vertical interoperability obligations. These include the ability to install third-party app stores and to install applications through sideloading, along with ensuring access to operating systems’ critical functionalities and specific devices’ hardware capabilities.

However… While interoperability requirements can reduce switching costs between platforms and possibly help consumers avoid being “locked in” to inferior products, the net effects on new technology and greater competition are mostly speculative. Claims that mandatory interoperability is a “super tool” for platform competition rely on the premise that switching costs between platforms are so high that they effectively serve as a barrier to entry. The rise of new social networks like TikTok and messaging services like Discord suggests that network effects may be less pervasive than previously thought. Many consumers are perfectly comfortable with “multi-homing” and using multiple platforms.

Network Effects Are Everywhere; Network Harms Are More Specific

Consumers in any market—not exclusively or even predominantly digital markets—strike a balance between using multiple providers (multi-homing) and remaining loyal to just one. Network effects can give incumbents an advantage over challengers, but identifying that a given market has network effects does not, in itself, justify mandating interoperability. For any potential interoperability mandate, we must ask how costly it is for consumers to multi-home. 

For example, a consumer may find it low-cost to download multiple apps—such as Zelle, PayPal, or Venmo—that each allow one to send money to a friend. By contrast, it may be quite costly to gain followers on a new social-media platform. Interoperability mandates have tended to focus on markets that already have low switching costs, hence limiting potential gains.

Lock-In Can Increase Competition

We say a consumer is “locked-in” when high switching costs make it difficult for them to switch suppliers even when quality changes. But markets subject to lock-in may still see fierce competition for users. Companies compete upfront to attract such consumers through tactics like penetration pricing, introductory offers, and price wars. This “competition for the market” can effectively substitute for standard compatible competition and might even be more intense, as it reduces differentiation. It is not a simple linear relationship, where lower switching costs are always better for consumers.

Interoperability Isn’t Always Good

Interoperability proponents argue that it levels the playing field between tech giants and smaller competitors. The debate often imagines a low-quality incumbent using lock-in to keep a high-quality challenger at bay. But we don’t necessarily want everything to be interoperable. It would be a problem if, e.g., everyone’s door keys were interoperable. The analogous problem in tech is cybersecurity. More interconnected systems are more vulnerable to cyberattacks and data breaches. Mandating interoperability, such as between messaging services, can inadvertently expose users to greater security risks by creating additional points of access for bad actors.

Static Standards and Dynamic Markets

There are many examples of interoperability resulting from the voluntary adoption of standards. Credit-card companies manage vast, interoperable payment networks; screwdrivers work with screws made by various manufacturers; and U.S. colleges accept credits from other institutions. 

Interoperability also tends to evolve over time and regulators should not imagine the current system will last forever. Bluetooth was initially developed for wireless communication between devices like headsets and phones, but has evolved to also enable seamless connectivity among various speakers, keyboards, smartwatches, and so forth—all from different manufacturers. This standardization has greatly simplified wireless connections and improved user experience.

Calculate Costs in Addition to Benefits

While a literature review on switching costs and network effects by esteemed scholars Joseph Farrell and Paul Klemperer concluded that “firms probably seek incompatibility too often. We therefore favor thoughtfully pro-compatibility public policy,” they also recognize that competition to be the dominant platform “can adequately replace ordinary compatible competition, and can even be fiercer than compatible competition by weakening differentiation.”

Moreover, the theoretical papers they considered mostly ask whether increasing or decreasing switching costs increases consumer welfare. Mandates implemented through public policy tend to be more blunt and, after accounting for factors like increased security risks, are less likely to pass a cost-benefit test. Consumers often come across situations where interoperability might provide some benefits, but where the costs outweigh the gains. Policymakers should take the same approach.

For more on this issue, see “Antitrust Unchained: The EU’s Case Against Self-Preferencing” by Giuseppe Colangelo; “Privacy and Security Implications of Regulation of Digital Services in the EU and in the US” by Mikolaj Barczentewicz; and “Mandatory Interoperability Is Not a ‘Super Tool’ for Platform Competition” by Samuel Bowman.

Continue reading
Antitrust & Consumer Protection