
ICLE Comments to DOJ on Promoting Competition in Artificial Intelligence


Executive Summary

We thank the U.S. Department of Justice (DOJ) Antitrust Division for this invitation to comment (ITC) on “Promoting Competition in Artificial Intelligence.”[1] The International Center for Law & Economics (ICLE) is a nonprofit, nonpartisan global research and policy center founded with the goal of building the intellectual foundations for sensible, economically grounded policy. ICLE promotes the use of law & economics methodologies to inform public-policy debates and has longstanding expertise in the evaluation of competition law and policy. ICLE’s interest is to ensure that competition law remains grounded in clear rules, established precedent, a record of evidence, and sound economic analysis.

In these comments, we express the view that policymakers’ current concerns about competition in AI industries may be unwarranted. This is particularly true of the notions that data-network effects shield incumbents in AI markets from competition; that Web 2.0’s most successful platforms will be able to leverage their competitive positions to dominate generative-AI markets; that these same platforms may use strategic partnerships with AI firms to insulate themselves from competition; and that generative-AI services occupy narrow markets that leave firms with significant market power.

In fact, we are still far from understanding the boundaries of antitrust-relevant markets in AI. Three main considerations should be at the forefront of competition authorities’ minds when they think about market definition in AI products and services. First, understand that the “AI market” is not unitary, but is instead composed of many distinct goods and services. Second, and relatedly, look beyond the AI marketing hype to see how this extremely heterogeneous product landscape intersects with an equally variegated consumer-demand landscape.

In other words: AI products and services may, in many instances, be substitutable for non-AI products, which would mean that, for the purposes of antitrust law, AI and non-AI products compete in the same relevant market. Getting this relevant product-market definition right is important in antitrust because a wrong market definition could lead to wrong inferences about market power. While either an overly broad or an overly narrow market definition could lead to both over- and underenforcement, we believe overenforcement currently represents the bigger threat.

Third, overenforcement in the field of generative AI could paradoxically engender the very harms that policymakers are seeking to avert. As we explain in greater detail below, preventing so-called “big tech” firms from competing in AI markets (for example, by threatening competition intervention whenever they forge strategic relationships with AI startups, launch their own generative-AI services, or embed such services in their existing platforms) may thwart an important source of competition and continued innovation. In short, competition in AI markets is important,[2] but trying naïvely to hold incumbent (in adjacent markets) tech firms back, out of misguided fears they will come to dominate the AI space, is likely to do more harm than good. It is essential to acknowledge how little we know about these nascent markets and that the most important priority at the moment is simply to ask the right questions that will lead to sound competition policy.

The comments proceed as follows. Section I debunks the notion that incumbent tech platforms can use their allegedly superior datasets to overthrow competitors in markets for generative AI. Section II discusses how policymakers should approach strategic partnerships among tech incumbents and AI startups. Section III outlines some of the challenges to defining relevant product markets in AI, and suggests how enforcers could navigate the perils of market definition in the nascent, fast-moving world of AI.

I. Anticompetitive Leveraging in AI Markets

Antitrust enforcers have recently expressed concern that incumbent tech platforms may leverage their existing market positions and resources (particularly their vast datasets) to stifle competitive pressure from AI startups. As this section explains, however, these fears appear overblown, resting as they do on assumptions about data-network effects that are unlikely to play a meaningful role in generative AI. Instead, the competition interventions that policymakers are contemplating would, paradoxically, remove an important competitive threat for today’s most successful AI providers, thereby reducing overall competition in generative-AI markets.

Subsection A summarizes recent calls for competition intervention in generative-AI markets. Subsection B argues that many of these calls are underpinned by fears of data-related incumbency advantages (often referred to as “data-network effects”), including in the context of mergers. Subsection C explains why these effects are unlikely to play a meaningful role in generative-AI markets. Subsection D offers five key takeaways to help policymakers better weigh the tradeoffs inherent to competition-enforcement interventions in generative-AI markets.

A. Calls for Intervention in AI Markets

It was once (and frequently) said that Google’s “data monopoly” was unassailable: “If ‘big data’ is the oil of the information economy, Google has Standard Oil-like monopoly dominance—and uses that control to maintain its dominant position.”[3] Similar claims of data dominance have been attached to nearly all large online platforms, including Facebook (Meta), Amazon, and Uber.[4]

While some of these claims continue even today (for example, “big data” is a key component of the DOJ Google Search and adtech antitrust suits),[5] a shiny new data target has emerged in the form of generative artificial intelligence (AI). The launch of ChatGPT in November 2022, as well as the advent of AI image-generation services like Midjourney and Dall-E, have dramatically expanded the public’s conception of what is—and what might be—possible to achieve with generative-AI technologies built on massive datasets.

While these services remain both in the early stages of mainstream adoption and in the throes of rapid, unpredictable technological evolution, they nevertheless already appear to be on the radar of competition policymakers around the world. Several antitrust enforcers appear to believe that, by acting now, they can avoid the “mistakes” that purportedly were made during the formative years of Web 2.0.[6] These mistakes, critics assert, include failing to appreciate the centrality of data in online markets, as well as letting mergers go unchecked and allowing early movers to entrench their market positions.[7] As Federal Trade Commission (FTC) Chair Lina Khan has put it: “we are still reeling from the concentration that resulted from Web 2.0, and we don’t want to repeat the mis-steps of the past with AI.”[8]

This response from the competition-policy world is deeply troubling. Rather than engage in critical self-assessment and adopt an appropriately restrained stance, the enforcement community appears to be champing at the bit. Instead of reassessing their prior assumptions in light of the current technological moment, enforcers’ top priority appears to be figuring out how to rapidly and almost reflexively deploy existing competition tools to address the presumed competitive failures presented by generative AI.[9]

It is increasingly common for competition enforcers to argue that so-called “data-network effects” serve not only to entrench incumbents in those markets where the data is collected, but also to confer similar, self-reinforcing benefits in adjacent markets. Several enforcers have, for example, prevented large online platforms from acquiring smaller firms in adjacent markets, citing the risk that they could use their vast access to data to extend their dominance into these new markets.[10]

They have also launched consultations to ascertain the role that data plays in AI competition. For instance, in a recent consultation, the European Commission asked: “What is the role of data and what are its relevant characteristics for the provision of generative AI systems and/or components, including AI models?”[11] Unsurprisingly, the FTC has likewise been hypervigilant about the risks ostensibly posed by incumbents’ access to data. In comments submitted to the U.S. Copyright Office, for example, the FTC argued that:

The rapid development and deployment of AI also poses potential risks to competition. The rising importance of AI to the economy may further lock in the market dominance of large incumbent technology firms. These powerful, vertically integrated incumbents control many of the inputs necessary for the effective development and deployment of AI tools, including cloud-based or local computing power and access to large stores of training data. These dominant technology companies may have the incentive to use their control over these inputs to unlawfully entrench their market positions in AI and related markets, including digital content markets.[12]

Recently, at the conference that prompted these comments, Jonathan Kanter, the U.S. assistant attorney general for antitrust, claimed that:

We also see structures and trends in AI that should give us pause. AI relies on massive amounts of data and computing power, which can give already dominant firms a substantial advantage. Powerful networks and feedback effects may enable dominant firms to control these new markets, and existing power in the digital economy may create a powerful incentive to control emerging innovations that will not only impact our economy, but the health and well-being of our society and free expression itself.[13]

On an even more hyperbolic note, Andreas Mundt, the head of Germany’s Federal Cartel Office, called AI a “first-class fire accelerator” for anticompetitive behavior and argued it “will make all the problems only worse.”[14] He further argued that “there’s a great danger that we will get an even deeper concentration of digital markets and power increase at various levels, from chips to the front end.”[15] In short, Mundt is one of many policymakers who believe that AI markets will enable incumbent tech firms to further entrench their market positions.

Certainly, it makes sense that the largest online platforms—including Alphabet, Meta, Apple, and Amazon—should have a meaningful advantage in the burgeoning markets for generative-AI services. After all, it is widely recognized that data is an essential input for generative AI.[16] This competitive advantage should be all the more significant, given that these firms have been at the forefront of AI technology for more than a decade. Over this period, Google’s DeepMind and AlphaGo and Meta’s NLLB-200 have routinely made headlines.[17] Apple and Amazon also have vast experience with AI assistants, and all of these firms deploy AI technologies throughout their platforms.[18]

Contrary to what one might expect, however, the tech giants have, to date, been largely unable to leverage their vast troves of data to outcompete startups like OpenAI and Midjourney. At the time of writing, OpenAI’s ChatGPT appears to be, by far, the most successful chatbot,[19] despite the large tech platforms’ apparent access to far more (and more up-to-date) data.

Moreover, it is important not to neglect the role that open-source models currently play in fostering innovation and competition. As former DOJ Chief Antitrust Economist Susan Athey pointed out in a recent interview, “[the AI industry] may be very concentrated, but if you have two or three high quality — and we have to find out what that means, but high enough quality — open models, then that could be enough to constrain the for-profit LLMs.”[20] Open-source models are important because they allow innovative startups to build upon models already trained on large datasets—thereby entering the market without incurring that initial cost. Nor is there any apparent shortage of open-source models: companies like xAI, Meta, and Google offer their AI models for free.[21]

There are important lessons to glean from these developments, if only enforcers would stop to reflect. The meteoric rise of consumer-facing AI services should offer competition enforcers and policymakers an opportunity for introspection. As we explain, the rapid emergence of generative-AI technology may undercut many core assumptions of today’s competition-policy debates, which have focused largely on the rueful after-effects of the purported failure of 20th-century antitrust to address the allegedly manifest harms of 21st-century technology. These include the notions that data advantages constitute barriers to entry and can be leveraged to project dominance into adjacent markets; that scale itself is a market failure to be addressed by enforcers; and that the use of consumer data is inherently harmful to those consumers.

B. Data-Network Effects Theory and Enforcement

Proponents of more extensive intervention by competition enforcers into digital markets often cite data-network effects as a source of competitive advantage and barrier to entry (though terms like “economies of scale and scope” may offer more precision).[22] The crux of the argument is that “the collection and use of data creates a feedback loop of more data, which ultimately insulates incumbent platforms from entrants who, but for their data disadvantage, might offer a better product.”[23] This self-reinforcing cycle purportedly leads to market domination by a single firm. Thus, it is argued, e.g., that Google’s “ever-expanding control of user personal data, and that data’s critical value to online advertisers, creates an insurmountable barrier to entry for new competition.”[24]
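To see precisely what is being claimed, it may help to make the purported feedback loop concrete. The following sketch is our own illustration, not a model drawn from the enforcement literature: the adoption rule, the quality function, and the exponent alpha are all invented. What it captures is that whether such a loop actually “insulates” an incumbent turns entirely on the curvature of the returns to data—which, as we discuss below, is precisely the empirical question that remains unresolved.

```python
# Stylized sketch of the claimed data feedback loop (illustrative only):
# more users -> more data -> higher quality -> more users. The functional
# form and the exponent alpha are hypothetical assumptions, not estimates.

def simulate_loop(alpha: float, periods: int = 8) -> list[float]:
    """Return per-period adoption gains under quality = data ** alpha."""
    users, data, gains = 1.0, 1.0, []
    for _ in range(periods):
        data += users                  # each user contributes a unit of data
        new_users = data ** alpha      # quality (a proxy) drives adoption
        gains.append(round(new_users - users, 2))
        users = new_users
    return gains

# Concave returns (alpha < 1): adoption grows only steadily, with no runaway
# dynamics -- rivals with "enough" data can keep pace.
print(simulate_loop(alpha=0.5))   # [0.41, 0.43, 0.45, 0.46, 0.46, ...]

# Convex returns (alpha > 1): the loop compounds and the incumbent pulls
# away -- the entrenchment story enforcers have in mind.
print(simulate_loop(alpha=1.1))   # gains roughly triple each period
```

As Subsection C explains, the available empirical evidence points toward the concave case.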

But it is important to note the conceptual problems these claims face. Because data can be used to improve products’ quality and/or to subsidize their use, if possessing data constitutes an entry barrier, then any product improvement or price reduction made by an incumbent could be problematic. This is tantamount to an argument that competition itself is a cognizable barrier to entry. Of course, it would be a curious approach to antitrust if competition were treated as a problem, as it would imply that firms should under-compete—i.e., should forgo consumer-welfare enhancements—in order to foster a greater number of firms in a given market, simply for its own sake.[25]

Meanwhile, actual economic studies of data-network effects have been few and far between, with scant empirical evidence to support the theory.[26] Andrei Hagiu and Julian Wright’s theoretical paper offers perhaps the most comprehensive treatment of the topic to date.[27] The authors ultimately conclude that data-network effects can be of differing magnitudes and have varying effects on firms’ incumbency advantage.[28] They cite Grammarly (an AI writing-assistance tool) as a potential example: “As users make corrections to the suggestions offered by Grammarly, its language experts and artificial intelligence can use this feedback to continue to improve its future recommendations for all users.”[29]

This is echoed by economists who contend that “[t]he algorithmic analysis of user data and information might increase incumbency advantages, creating lock-in effects among users and making them more reluctant to join an entrant platform.”[30] Crucially, some scholars take this logic a step further, arguing that platforms may use data from their “origin markets” in order to enter and dominate adjacent ones:

First, as we already mentioned, data collected in the origin market can be used, once the enveloper has entered the target market, to provide products more efficiently in the target market. Second, data collected in the origin market can be used to reduce the asymmetric information to which an entrant is typically subject when deciding to invest (for example, in R&D) to enter a new market. For instance, a search engine could be able to predict new trends from consumer searches and therefore face less uncertainty in product design.[31]

This possibility is also implicit in Hagiu and Wright’s paper.[32] Indeed, the authors’ theoretical model rests on an important distinction between “within-user” data advantages (that is, having access to more data about a given user) and “across-user” data advantages (information gleaned from having access to a wider user base). In both cases, there is an implicit assumption that platforms may use data from one service to gain an advantage in another market (because what matters is information about aggregate or individual user preferences, regardless of its origin).

Our review of the economic evidence suggests that several scholars have, with varying degrees of certainty, raised the possibility that incumbents may leverage data advantages to stifle competitors in their primary market or in adjacent ones (be it via merger or organic growth). As we explain below, however, there is ultimately little evidence to support such claims. Policymakers have nonetheless been keenly receptive to these limited theoretical findings, basing multiple decisions on these theories, often with little consideration given to the caveats that accompany them.[33]

Indeed, it is remarkable that, in its section on “[t]he data advantage for incumbents,” the “Furman Report” prepared for the UK government cited only two empirical economic studies, and they offer directly contradictory conclusions with respect to the strength of data advantages.[34] The report nevertheless concluded that data “may confer a form of unmatchable advantage on the incumbent business, making successful rivalry less likely,”[35] and it adopted without reservation what it deemed “convincing” evidence from non-economists that has no apparent empirical basis.[36]

In the Google/Fitbit merger proceedings, the European Commission found that the combination of data from Google services with that of Fitbit devices would reduce competition in advertising markets:

Giving [sic] the large amount of data already used for advertising purposes that Google holds, the increase in Google’s data collection capabilities, which goes beyond the mere number of active users for which Fitbit has been collecting data so far, the Transaction is likely to have a negative impact on the development of an unfettered competition in the markets for online advertising.[37]

As a result, the Commission cleared the merger only on the condition that Google refrain from using data from Fitbit devices for its advertising platform.[38] The Commission also appears likely to focus on similar issues in its ongoing investigation of Microsoft’s investment in OpenAI.[39]

Along similar lines, in its complaint to enjoin Meta’s purchase of Within Unlimited—makers of the virtual-reality (VR) fitness app Supernatural—the FTC relied on, among other things, the fact that Meta could leverage its data about VR-user behavior to inform its decisions and potentially outcompete rival VR-fitness apps: “Meta’s control over the Quest platform also gives it unique access to VR user data, which it uses to inform strategic decisions.”[40]

The DOJ’s twin cases against Google also implicate data leveraging and data barriers to entry. The agency’s adtech complaint charges that “Google intentionally exploited its massive trove of user data to further entrench its monopoly across the digital advertising industry.”[41] Similarly, in its Google Search complaint, the agency argued that:

Google’s anticompetitive practices are especially pernicious because they deny rivals scale to compete effectively. General search services, search advertising, and general search text advertising require complex algorithms that are constantly learning which organic results and ads best respond to user queries; the volume, variety, and velocity of data accelerates the automated learning of search and search advertising algorithms.[42]

Finally, updated merger guidelines published in recent years by several competition enforcers cite the acquisition of data as a potential source of competition concerns. For instance, the FTC and DOJ’s 2023 guidelines state that “acquiring data that helps facilitate matching, sorting, or prediction services may enable the platform to weaken rival platforms by denying them that data.”[43] Likewise, the UK Competition and Markets Authority warned against incumbents acquiring firms in order to obtain their data and foreclose other rivals:

Incentive to foreclose rivals…

7.19(e) Particularly in complex and dynamic markets, firms may not focus on short term margins but may pursue other objectives to maximise their long-run profitability, which the CMA may consider. This may include… obtaining access to customer data….[44]

In short, competition authorities around the globe have taken an increasingly aggressive stance on data-network effects. Among the ways this has manifested is in enforcement decisions based on fears that data collected by one platform might confer decisive competitive advantages in adjacent markets. Unfortunately, these concerns rest on little to no empirical evidence, either in the economic literature or the underlying case records.

C. Data-Incumbency Advantages in Generative AI

Given the assertions detailed in the previous section, it would be reasonable to assume that firms such as Google, Meta, and Amazon should be in pole position to meet the burgeoning demand for generative AI. After all, these firms have not only been at the forefront of the field for the better part of a decade, but they also have access to vast troves of data of which their rivals could only dream when they launched their own services. Thus, the authors of the Furman Report caution that “to the degree that the next technological revolution centres around artificial intelligence and machine learning, then the companies most able to take advantage of it may well be the existing large companies because of the importance of data for the successful use of these tools.”[45]

To date, however, this is not how things have unfolded (although it bears noting that these technologies remain in flux and the competitive landscape is susceptible to change). The first significantly successful generative-AI service came from neither Meta—which had been working on chatbots for years and had access to what is arguably the world’s largest database of actual chats—nor Google. Instead, the breakthrough came from a previously unknown firm called OpenAI.

OpenAI’s ChatGPT service currently accounts for an estimated 60% of visits to online AI tools (though reliable numbers are somewhat elusive).[46] It broke the record for the fastest online service to reach 100 million users (in only a couple of months), more than four times faster than TikTok, the previous record holder.[47] Based on Google Trends data, ChatGPT is nine times more popular worldwide than Google’s own Bard service, and 14 times more popular in the United States.[48] In April 2023, ChatGPT reportedly registered 206.7 million unique visitors, compared to 19.5 million for Google’s Bard.[49] In short, at the time of writing, ChatGPT appears to be the most popular chatbot, and the entry of large players such as Google Bard or Meta AI appears to have had little effect thus far on its leading position.[50]

The picture is similar in the field of AI-image generation. As of August 2023, Midjourney, Dall-E, and Stable Diffusion appear to be the three market leaders in terms of user visits.[51] This is despite competition from the likes of Google and Meta, who arguably have access to unparalleled image and video databases by virtue of their primary platform activities.[52]

This raises several crucial questions: how have these AI upstarts managed to be so successful, and is their success just a flash in the pan before Web 2.0 giants catch up and overthrow them? While we cannot answer either of these questions dispositively, we offer what we believe to be some relevant observations concerning the role and value of data in digital markets.

A first important observation is that empirical studies suggest that data exhibits diminishing marginal returns. In other words, past a certain point, acquiring more data does not confer a meaningful edge to the acquiring firm. As Catherine Tucker put it, following a review of the literature: “Empirically there is little evidence of economies of scale and scope in digital data in the instances where one would expect to find them.”[53]
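The pattern Tucker describes can be illustrated with the power-law “learning curves” commonly reported in the machine-learning literature, under which model error falls with dataset size n roughly as error(n) = a·n^(−b) + c. The parameters below are invented for illustration; the qualitative point is that each additional order of magnitude of data buys less improvement, and the irreducible term c means that, past a point, more data buys almost nothing:

```python
# Hypothetical power-law learning curve: error(n) = a * n**(-b) + c.
# Parameters are invented for illustration; 'c' is irreducible error that
# no amount of additional data can remove.

def error(n: float, a: float = 1.0, b: float = 0.3, c: float = 0.05) -> float:
    return a * n ** (-b) + c

for n in [1e3, 1e4, 1e5, 1e6]:
    gain = error(n) - error(10 * n)   # improvement from 10x more data
    print(f"n = {n:>12,.0f}  error = {error(n):.4f}  gain from 10x data = {gain:.4f}")

# n =        1,000  error = 0.1759  gain from 10x data = 0.0628
# n =       10,000  error = 0.1131  gain from 10x data = 0.0315
# n =      100,000  error = 0.0816  gain from 10x data = 0.0158
# n =    1,000,000  error = 0.0658  gain from 10x data = 0.0079
```

Under this (assumed) curve, each tenfold expansion of the dataset yields half the improvement of the previous one, which is the sense in which an incumbent’s data surplus confers sharply diminishing advantages over a rival with merely adequate data.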

Likewise, following a survey of the empirical literature on this topic, Geoffrey Manne and Dirk Auer conclude that:

Available evidence suggests that claims of “extreme” returns to scale in the tech sector are greatly overblown. Not only are the largest expenditures of digital platforms unlikely to become proportionally less important as output increases, but empirical research strongly suggests that even data does not give rise to increasing returns to scale, despite routinely being cited as the source of this effect.[54]

In other words, being the firm with the most data appears to be far less important than having enough data. Moreover, this lower bar may be accessible to far more firms than one might initially think possible, and obtaining sufficient data could become easier still—that is, the volume of required data could become even smaller—with technological progress. For instance, synthetic data may provide an adequate substitute to real-world data,[55] or may even outperform real-world data.[56] As Thibault Schrepel and Alex Pentland surmise:

[A]dvances in computer science and analytics are making the amount of data less relevant every day. In recent months, important technological advances have allowed companies with small data sets to compete with larger ones.[57]

Indeed, past a certain threshold, acquiring more data might not meaningfully improve a service, where other improvements (such as better training methods or data curation) could have a large impact. In fact, there is some evidence that excessive data impedes a service’s ability to generate results appropriate for a given query: “[S]uperior model performance can often be achieved with smaller, high-quality datasets than massive, uncurated ones. Data curation ensures that training datasets are devoid of noise, irrelevant instances, and duplications, thus maximizing the efficiency of every training iteration.”[58]
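A minimal sketch of the kind of curation step the quoted passage describes—deduplication and noise filtering—may make the intuition concrete. The quality heuristics here are invented placeholders; production pipelines use far more sophisticated filters:

```python
# Minimal illustration of dataset curation: drop duplicates and obviously
# noisy records so a smaller, cleaner corpus trains more efficiently.
# The length threshold is an invented stand-in for real quality filters.

def curate(records: list[str], min_len: int = 20) -> list[str]:
    seen: set[str] = set()
    curated = []
    for text in records:
        key = " ".join(text.lower().split())    # normalize whitespace/case
        if key in seen or len(text.strip()) < min_len:
            continue                            # skip duplicates and noise
        seen.add(key)
        curated.append(text)
    return curated

raw = [
    "The cat sat on the mat beside the window.",
    "The cat  sat on the mat beside the window. ",   # near-duplicate
    "asdf",                                          # noise
    "A basketball is an inflated sphere used in a court game.",
]
print(curate(raw))   # two of the four records survive
```

The point is that a firm’s advantage lies in the filtering and normalization choices, not in the raw record count: half of this toy corpus adds nothing to training.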

Consider, for instance, a user who wants to generate an image of a basketball. Using a model trained on an indiscriminate range and number of public photos in which a basketball appears surrounded by copious other image data, the user may end up with an inordinately noisy result. By contrast, a model trained with a better method on fewer, more carefully selected images could readily yield far superior results.[59] In one important example:

The model’s performance is particularly remarkable, given its small size. “This is not a large language model trained on the whole Internet; this is a relatively small transformer trained for these tasks,” says Armando Solar-Lezama, a computer scientist at the Massachusetts Institute of Technology, who was not involved in the new study…. The finding implies that instead of just shoving ever more training data into machine-learning models, a complementary strategy might be to offer AI algorithms the equivalent of a focused linguistics or algebra class.[60]

Platforms’ current efforts are thus focused on improving the mathematical and logical reasoning of large language models (LLMs), rather than maximizing training datasets.[61] Two points stand out. The first is that firms like OpenAI rely largely on publicly available datasets—such as GSM8K—to train their LLMs.[62] Second, the real challenge to creating innovative AI lies not so much in collecting data, but in creating innovative AI-training processes and architectures:

[B]uilding a truly general reasoning engine will require a more fundamental architectural innovation. What’s needed is a way for language models to learn new abstractions that go beyond their training data and have these evolving abstractions influence the model’s choices as it explores the space of possible solutions.

We know this is possible because the human brain does it. But it might be a while before OpenAI, DeepMind, or anyone else figures out how to do it in silicon.[63]

Furthermore, it is worth noting that the data most relevant to startups in a given market may not be those held by large incumbent platforms in other markets. They might instead be data specific to the market in which the startup is active or, even better, to the given problem it is attempting to solve:

As Andres Lerner has argued, if you wanted to start a travel business, the data from Kayak or Priceline would be far more relevant. Or if you wanted to start a ride-sharing business, data from cab companies would be more useful than the broad, market-cross-cutting profiles Google and Facebook have. Consider companies like Uber, Lyft and Sidecar that had no customer data when they began to challenge established cab companies that did possess such data. If data were really so significant, they could never have competed successfully. But Uber, Lyft and Sidecar have been able to effectively compete because they built products that users wanted to use—they came up with an idea for a better mousetrap. The data they have accrued came after they innovated, entered the market and mounted their successful challenges—not before.[64]

The bottom line is that data is not the be-all and end-all that many in competition circles make it out to be. While data may often confer marginal benefits, there is little evidence that these benefits are ultimately decisive.[65] As a result, incumbent platforms’ access to vast numbers of users and troves of data in their primary markets might only marginally affect their competitiveness in AI markets.

A related observation is that firms’ capabilities and other features of their products arguably play a more important role than the data they own.[66] Examples of this abound in digital markets. Google overthrew Yahoo in search, despite initially having access to far fewer users and far less data. Google and Apple overcame Microsoft in the smartphone operating-system market, despite having comparatively tiny ecosystems (at the time) to leverage. TikTok rose to prominence despite intense competition from incumbents like Instagram, which had much larger userbases. In each of these cases, important product-design decisions (such as the PageRank algorithm, recognizing the specific needs of mobile users,[67] and TikTok’s clever algorithm) appear to have played far more significant roles than the firms’ initial user and data endowments (or lack thereof).

All of this suggests that the early success of OpenAI likely has more to do with its engineering decisions than with what data it did or did not possess. Going forward, OpenAI and its rivals’ relative abilities to offer and monetize compelling use cases by offering custom versions of their generative-AI technologies will arguably play a much larger role than (and contribute to) their ownership of data.[68] In other words, the ultimate challenge is arguably to create a valuable platform, of which data ownership is a consequence, not a cause.

It is also important to note that, in those instances where it is valuable, data does not just fall from the sky. Instead, it is through smart business and engineering decisions that firms can generate valuable information (which does not necessarily correlate with owning more data). For instance, OpenAI’s success with ChatGPT is often attributed to its more efficient algorithms and training models, which arguably have enabled the service to improve more rapidly than its rivals.[69] Likewise, the ability of firms like Meta and Google to generate valuable data for advertising arguably depends more on design decisions that elicit the right data from users, rather than the raw number of users in their networks.

Put differently, setting up a business so as to gather and organize the right information is more important than simply owning vast troves of data.[70] Even in those instances where high-quality data is an essential parameter of competition, it does not follow that having vaster databases or more users on a platform necessarily leads to better information for the platform. Indeed, if data ownership consistently conferred a significant competitive advantage, these new AI firms would not be where they are today.

This does not, of course, mean that data is worthless. Rather, it means that competition authorities should not assume that the mere possession of data is a dispositive competitive advantage, absent compelling empirical evidence to support such a finding. In this light, the current wave of decisions and competition-policy pronouncements that rely on data-related theories of harm are premature.

D. Five Key Takeaways: Reconceptualizing the Role of Data in Generative-AI Competition

As we explain above, data-network effects are not the source of barriers to entry that they are sometimes made out to be. The picture is far more nuanced. Indeed, as economist Andres Lerner demonstrated almost a decade ago (and the assessment is only truer today):

Although the collection of user data is generally valuable for online providers, the conclusion that such benefits of user data lead to significant returns to scale and to the entrenchment of dominant online platforms is based on unsupported assumptions. Although, in theory, control of an “essential” input can lead to the exclusion of rivals, a careful analysis of real-world evidence indicates that such concerns are unwarranted for many online businesses that have been the focus of the “big data” debate.[71]

While data can be an important part of the competitive landscape, incumbents’ data advantages are far less pronounced than today’s policymakers commonly assume. In that respect, five primary lessons emerge:

  1. Data can be (very) valuable, but beyond a certain threshold, those benefits tend to diminish. In other words, having the most data is less important than having enough;
  2. The ability to generate valuable information does not depend on the number of users or the amount of data a platform has previously acquired;
  3. The most important datasets are not always proprietary;
  4. Technological advances and platforms’ engineering decisions affect their ability to generate valuable information, and this effect swamps those that stem from the amount of data they own; and
  5. How platforms use data is arguably more important than what data or how much data they own.

These lessons have important ramifications for policy debates over the competitive implications of data in technologically evolving areas.

First, it is not surprising that startups, rather than incumbents, have taken an early lead in generative AI (and in Web 2.0 before it). After all, if data-incumbency advantages are small or even nonexistent, then smaller and more nimble players may have an edge over established tech platforms. This is all the more likely given that, despite significant efforts, the biggest tech platforms were unable to offer compelling generative-AI chatbots and image-generation services before the emergence of ChatGPT, Dall-E, Midjourney, etc.

This suggests that, in a process akin to Clayton Christensen’s “innovator’s dilemma,”[72] something about the incumbent platforms’ existing services and capabilities might have been holding them back in this emerging industry. Of course, this does not necessarily mean that those same services or capabilities could not become an advantage when the generative-AI industry starts addressing issues of monetization and scale.[73] But it does mean that assumptions about a firm’s market power based primarily on its possession of data are likely to be off the mark.

Another important implication is that, paradoxically, policymakers’ efforts to prevent Web 2.0 platforms from competing freely in generative-AI markets may ultimately backfire and lead to less, not more, competition. Indeed, OpenAI is currently acquiring a sizeable lead in generative AI. While competition authorities might like to think that other startups will emerge and thrive in this space, it is important not to confuse those desires with reality. While there currently exists a vibrant AI-startup ecosystem, there is at least a case to be made that significant competition for today’s AI leaders will come from incumbent Web 2.0 platforms—although nothing is certain at this stage.

Policymakers should take care not to stifle that competition on the misguided assumption that competitive pressure from large incumbents is somehow less valuable to consumers than that which originates from smaller firms. This is particularly relevant in the context of merger control. An acquisition (or an “acqui-hire”) by a “big tech” company not only entails, in principle, little risk of harming competition (it is not a horizontal merger),[74] but could also create a stronger competitor to the current market leaders.

Finally, even if there were a competition-related market failure to be addressed in the field of generative AI (which is anything but clear), the remedies under contemplation may do more harm than good. Some of the solutions that have been put forward have highly ambiguous effects on consumer welfare. Scholars have shown that, e.g., mandated data sharing—a solution championed by EU policymakers, among others—may sometimes dampen competition in generative AI.[75] This is also true of legislation like the General Data Protection Regulation (GDPR), which makes it harder for firms to acquire more data about consumers—assuming such data is, indeed, useful to generative-AI services.[76]

In sum, it is a flawed understanding of the economics and practical consequences of large agglomerations of data that has led competition authorities to believe data-incumbency advantages are likely to harm competition in generative AI—or even in the data-intensive Web 2.0 markets that preceded it. Indeed, competition or regulatory intervention to “correct” data barriers and data network and scale effects is liable to do more harm than good.

II. Merger Policy and AI

Policymakers have expressed particular concern about the anticompetitive potential of deals wherein AI startups obtain funding from incumbent tech firms, even in cases where these strategic partnerships cannot be considered mergers in the antitrust sense (because there is no control exercised by one firm over the other). To date, there is no evidence to support differentiated scrutiny for mergers involving AI firms or, in general, firms working with information technology. The view that so-called “killer acquisitions,” for instance, pose a significant competition risk in AI markets is not supported by solid evidence.[77] To the contrary, there is reason to believe these acquisitions bolster competition by allowing larger firms to acquire capabilities relevant to innovation, and by increasing incentives to invest for startup founders.[78]

Companies with “deep pockets” that invest in AI startups may provide those firms the resources to compete with prevailing market leaders. Firms like Amazon, Google, Meta, and Microsoft, for instance, have been investing to create their own microchips capable of building AI systems, aiming to be less dependent on Nvidia.[79] This flow of funds could serve to enhance competition at all levels of the AI industry.[80]

A. Existing AI Partnerships Are Unlikely to Be Anticompetitive

Some jurisdictions have also raised concerns regarding recent partnerships among big tech firms and AI “unicorns,”[81] in particular, Amazon’s partnership with Anthropic; Microsoft’s partnership with Mistral AI; and Microsoft’s hiring of former Inflection AI employees (including, notably, founder Mustafa Suleyman) and related arrangements with the company. Publicly available information, however, suggests that these transactions may not warrant merger-control investigation, let alone the heightened scrutiny that comes with potential Phase II proceedings. At the very least, given the AI industry’s competitive landscape, there is little to suggest these transactions merit closer scrutiny than similar deals in other sectors.

Overenforcement in the field of generative AI could paradoxically engender the very harms that policymakers are seeking to avert. Preventing big tech firms from competing in these markets (for example, by threatening competition intervention as soon as they build strategic relationships with AI startups) may thwart an important source of competition needed to keep today’s leading generative-AI firms in check. In short, while competition in AI markets is important,[82] trying naïvely to hold incumbent (in adjacent markets) tech firms back, out of misguided fears they will come to dominate this space, is likely to do more harm than good.

At a more granular level, there are important reasons to believe these kinds of agreements will have no negative impact on competition and may, in fact, benefit consumers—e.g., by enabling those startups to raise capital and deploy their services at an even larger scale. In other words, they do not bear any of the prima facie traits of “killer acquisitions,” or even of the acquisition of “nascent potential competitors.”[83]

Most importantly, these partnerships all involve the acquisition of minority stakes and do not entail any change of control over the target companies. Amazon, for instance, will not have “ownership control” of Anthropic. The precise number of shares acquired has not been made public, but a reported investment of $4 billion in a company valued at $18.4 billion does not give Amazon a majority stake or sufficient voting rights to control the company or its competitive strategy.[84] It has also been reported that the deal will not give Amazon any seats on the Anthropic board or special voting rights (such as the power to veto some decisions).[85] There is thus little reason to believe Amazon has acquired indirect or de facto control over Anthropic.

Microsoft’s investment in Mistral AI is even smaller, in both absolute and relative terms. Microsoft is reportedly investing just $16 million in a company valued at $2.1 billion.[86] This represents less than 1% of Mistral’s equity, making it all but impossible for Microsoft to exert any significant control or influence over Mistral AI’s competitive strategy. There have similarly been no reports of Microsoft acquiring seats on Mistral AI’s board or any special voting rights. We can therefore be confident that the deal will not affect competition in AI markets.
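For a rough sense of magnitude, the ownership shares implied by the publicly reported figures can be computed directly. These are crude approximations—actual stakes turn on pre- versus post-money valuations, share classes, and other undisclosed deal terms—but they suffice to show how far these positions are from control:

```python
# Back-of-the-envelope implied stakes from publicly reported figures.
# Rough approximations only: actual ownership depends on deal structure,
# pre- vs. post-money valuation, and share classes not publicly disclosed.

deals = {
    "Amazon / Anthropic":     (4_000_000_000, 18_400_000_000),
    "Microsoft / Mistral AI": (16_000_000,     2_100_000_000),
}

for name, (investment, valuation) in deals.items():
    print(f"{name}: ~{investment / valuation:.1%} implied stake")

# Amazon / Anthropic: ~21.7% implied stake    (well short of majority control)
# Microsoft / Mistral AI: ~0.8% implied stake (negligible voting influence)
```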

Much the same applies to Microsoft’s dealings with Inflection AI. Microsoft hired two of the company’s three founders (which currently does not fall under the scope of merger laws), and also paid $620 million for nonexclusive rights to sell access to the Inflection AI model through its Azure Cloud.[87] Admittedly, the latter could entail (depending on the deal’s specifics) some limited control over Inflection AI’s competitive strategy, but there is currently no evidence to suggest this will be the case.

Finally, none of these deals entail any competitively significant behavioral commitments from the target companies. There are no reports of exclusivity agreements or other commitments that would restrict third parties’ access to these firms’ underlying AI models. Again, this means the deals are extremely unlikely to negatively impact the competitive landscape in these markets.

B. AI Partnerships Increase Competition

As discussed in the previous section, the AI partnerships that have recently grabbed antitrust headlines are unlikely to harm competition. They do, however, have significant potential to bolster competition in generative-AI markets by enabling new players to scale up rapidly and to challenge more established players by leveraging the resources of incumbent tech platforms.

The fact that AI startups willingly agree to the aforementioned AI partnerships suggests this source of funding presents unique advantages for them; otherwise, they would have pursued capital through other avenues. The question for antitrust policymakers is whether this advantage is merely an anticompetitive premium, paid by big tech platforms to secure monopoly rents, or whether the investing firms are bringing something else to the table. As we discussed in the previous section, there is little reason to believe these partnerships are driven by anticompetitive motives. More importantly, however, these deals may present important advantages for AI startups that, in turn, are likely to boost competition in these burgeoning markets.

To start, partnerships with so-called big tech firms are likely a way for AI startups to rapidly obtain equity financing. While this lies beyond our area of expertise, there is ample economic literature to suggest that debt and equity financing are not equivalent for firms.[88] Interestingly for competition policy, there is evidence to suggest firms tend to favor equity over debt financing when they operate in highly competitive product markets.[89]

Furthermore, there may be reasons for AI startups to turn to incumbent big tech platforms for financing, rather than to other partners (though there is evidence these firms are also raising significant amounts of money from other sources).[90] In short, big tech platforms have a longstanding reputation for deep pockets, as well as a healthy appetite for risk. Because of the relatively small amounts at stake—at least, relative to the platforms’ market capitalizations—these firms may be able to move faster than rivals, for whom investments of this sort may present more significant risks. This may be a key advantage in the fast-paced world of generative AI, where obtaining funding and scaling rapidly could determine whether a firm becomes the next GAFAM or an also-ran.

Partnerships with incumbent tech platforms may also create valuable synergies that enable startups to extract better terms than would otherwise be the case (because the deal creates more surplus for parties to distribute among themselves). Potential synergies include better integrating generative-AI services into existing platforms; several big tech platforms appear to see the inevitable integration of AI into their services as a challenge similar to the shift from desktop to mobile internet, which saw several firms thrive, while others fell by the wayside.[91]

Conversely, incumbent tech platforms may have existing infrastructure that AI startups can use to scale up faster and more cheaply than would otherwise be the case. Running startups’ generative-AI services on top of this infrastructure may enable much faster deployment of generative-AI technology.[92] Importantly, if these joint strategies entail relationship-specific investments on the part of one or both partners, then big tech platforms taking equity positions in AI startups may be an important facilitator to prevent holdup.[93] Both of these possibilities are perfectly summed up by Swami Sivasubramanian, Amazon’s vice president of Data and AI, when commenting on Amazon’s partnership with Anthropic:

Anthropic’s visionary work with generative AI, most recently the introduction of its state-of-the-art Claude 3 family of models, combined with Amazon’s best-in-class infrastructure like AWS Trainium and managed services like Amazon Bedrock further unlocks exciting opportunities for customers to quickly, securely, and responsibly innovate with generative AI. Generative AI is poised to be the most transformational technology of our time, and we believe our strategic collaboration with Anthropic will further improve our customers’ experiences, and look forward to what’s next.[94]

All of this can be expected to have a knock-on effect on innovation and competition in generative-AI markets. To put it simply, a leading firm like OpenAI might welcome the prospect of competition authorities blocking the potential funding of one of its rivals. It may also stand to benefit if incumbent tech firms are prevented from rapidly upping their generative-AI game via partnerships with other AI startups. In short, preventing AI startups from obtaining funding from big tech platforms could not only arrest those startups’ growth, but also harm long-term competition in the burgeoning AI industry.

III. Market Definition in AI

The question of market definition, long a cornerstone of antitrust analysis, is of particular importance and complexity in the context of AI. The difficulty in defining relevant markets accurately stems not only from the novelty of AI technologies, but from their inherent heterogeneity and the myriad ways they intersect with existing markets and business models. In short, it is not yet clear how to determine the boundaries of markets for AI-powered products. Traditional approaches to market definition will ultimately provide the correct tools to accomplish this task, but, as we discuss below, we do not yet know the right questions to ask.

Regulators and policymakers must develop a nuanced understanding of AI markets, one that moves beyond broad generalizations and marketing hyperbole to examine the specific characteristics of these emerging technologies and their impacts on various product and service markets.

There are three main things that need to be at the forefront of competition authorities’ minds when they think about market definition in AI products and services. First, they must understand that AI is not a single thing, but rather a category comprising many distinct goods and services. Second, and relatedly, they must look beyond the AI marketing hype to recognize how the extremely heterogeneous product landscape of “AI” intersects with an equally variegated consumer-demand landscape. Finally, they must acknowledge how little we know about these nascent markets, and that the most important priority at the moment is simply to ask the right questions that will lead to sound competition policy.

A. AI Is Difficult to Define and Not Monolithic

The task of defining AI for the purposes of antitrust analysis is fraught with complexity, stemming from the multifaceted nature of AI technologies and their diverse applications across industries. It is imperative to recognize that AI does not constitute a monolithic entity or a singular market, but rather encompasses a heterogeneous array of technologies, techniques, and applications that defy simplistic categorization.[95]

At its core, the “AI Stack” comprises multiple layers of interrelated yet distinct technological components. At the foundational level, we find specialized hardware such as semiconductors, graphics processing units (GPUs), and tensor processing units (TPUs), as well as other specialized chipsets designed to accelerate the computationally intensive tasks associated with AI. These hardware components, while critical to AI functionality, also serve broader markets beyond AI applications (e.g., crypto and gaming), complicating efforts to delineate clear market boundaries.

The data layer presents another dimension of complexity. AI systems rely on vast quantities of both structured and unstructured data for training and operation.[96] The sourcing, curation, and preparation of this data constitute distinct markets within the AI ecosystem, each with its own competitive dynamics and potential barriers to entry.

Moving up the stack, we encounter the algorithmic layer, where a diverse array of machine-learning techniques—including, but not limited to, supervised learning, unsupervised learning, and reinforcement learning[97]—are employed. These algorithmic approaches, while fundamental to AI functionality, are not uniform in their application or market impact. Different AI applications may utilize distinct combinations of these techniques,[98] potentially serving disparate markets and consumer needs.

At the application level, the heterogeneity of AI becomes most apparent. From natural-language processing and computer vision to predictive analytics and autonomous vehicles, AI technologies manifest in a multitude of forms, each potentially constituting a distinct relevant market for antitrust purposes. Moreover, these AI applications can intersect with and compete against non-AI solutions, further blurring the boundaries of what might be considered an “AI market.”

The deployment models for AI technologies add yet another layer of complexity to the task of defining antitrust-relevant markets. Cloud-based AI services, edge-computing solutions, and on-premises AI deployments may each serve different market segments and face distinct competitive pressures. The ability of firms to make “build or buy” decisions regarding AI capabilities further complicates the delineation of clear market boundaries.[99]
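To summarize the foregoing, the stack can be sketched layer by layer. The entries below simply restate examples from the discussion above in schematic form; they are illustrative, not an exhaustive taxonomy, and several layers face demand from outside AI altogether:

```python
# Schematic summary of the "AI stack" described above (illustrative only).
# Each layer is plausibly its own antitrust-relevant market -- or several --
# and some layers face competition from non-AI substitutes.

AI_STACK = [
    ("hardware",     ["GPUs", "TPUs", "specialized chipsets"], ["gaming", "crypto"]),
    ("data",         ["sourcing", "curation", "preparation"],  ["traditional analytics"]),
    ("algorithms",   ["supervised", "unsupervised", "reinforcement learning"], []),
    ("applications", ["NLP", "computer vision", "predictive analytics"],
                     ["human translators", "statistical methods"]),
    ("deployment",   ["cloud services", "edge computing", "on-premises"],
                     ["conventional software"]),
]

for layer, components, non_ai_rivals in AI_STACK:
    line = f"{layer:>12}: {', '.join(components)}"
    if non_ai_rivals:
        line += f"  | non-AI substitutes: {', '.join(non_ai_rivals)}"
    print(line)
```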

B. Look Beyond the Marketing Hype

The application of antitrust principles to AI markets necessitates a rigorous analytical approach that transcends superficial categorizations and marketing rhetoric. It is imperative for enforcement authorities to eschew preconceived notions and popular narratives surrounding AI, and to focus instead on empirical evidence and careful economic analysis, in order to accurately assess competitive dynamics in AI-adjacent markets.

The allure of AI as a revolutionary technology has led to a proliferation of marketing claims and industry hype[100] that may often obscure the true nature and capabilities of AI systems. This obfuscation presents a significant challenge for antitrust authorities, who must disentangle factual competitive realities from speculative or exaggerated assertions about AI’s market impact. This task is further complicated by the rapid pace of technological advancement in the field, which can render even recent market analyses obsolete.

A particularly pernicious misconception that must be addressed is the notion that AI technologies operate in a competitive vacuum, distinct from and impervious to competition from non-AI alternatives. This perspective risks leading antitrust authorities to define markets too narrowly, potentially overlooking significant competitive constraints from traditional technologies or human-driven services.

Consider, for instance, the domain of natural-language processing. While AI-powered language models have made significant strides in recent years, they often compete directly with human translators, content creators, and customer-service representatives. Similarly, in the realm of data analysis, AI systems may vie for market share not only with other AI solutions, but also with traditional statistical methods and human analysts. Failing to account for these non-AI competitors in market-definition exercises could result in a distorted view of market power and competitive dynamics.
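The standard hypothetical-monopolist (SSNIP) framework offers one way to discipline this inquiry. A minimal breakeven critical-loss sketch—with an invented margin, purely for illustration—shows how substitution toward human translators would bear on whether “AI translation” is a market unto itself:

```python
# Breakeven critical-loss test for a candidate "AI translation" market.
# critical_loss = S / (S + M), where S is the SSNIP (e.g., a 5% price rise)
# and M is the gross margin. The 60% margin here is an invented assumption.

def critical_loss(ssnip: float, margin: float) -> float:
    """Share of sales the hypothetical monopolist can lose before the
    price increase becomes unprofitable (all inputs as fractions)."""
    return ssnip / (ssnip + margin)

ssnip, margin = 0.05, 0.60
print(f"Critical loss: {critical_loss(ssnip, margin):.1%}")   # ~7.7%

# If more than ~7.7% of demand would divert to human translators or other
# non-AI tools after a 5% price rise, the candidate market is too narrow
# and must be drawn to include those alternatives.
```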

Moreover, the tendency to treat AI as a monolithic entity obscures the reality that many AI-powered products and services are, in fact, hybrid solutions that combine AI components with traditional software and human oversight.[101] This hybridization further complicates market-definition efforts, as it becomes necessary to assess the degree to which the AI element of a product or service contributes to its market position and substitutability.

C. Current Lack of Knowledge About Relevant Markets

It is crucial to acknowledge at this juncture the profound limitations in our current understanding of how AI technologies will ultimately shape competitive landscapes across various industries. This recognition of our informational constraints should inform a cautious and empirically grounded approach to market definition in the context of AI.

The dynamic nature of AI development renders many traditional metrics for market definition potentially unreliable or prematurely restrictive. Market share, often a cornerstone of antitrust analysis, may prove particularly volatile in AI markets, where technological breakthroughs can rapidly alter competitive positions. Moreover, the boundaries between distinct AI applications and markets remain fluid, with innovations in one domain frequently finding unexpected applications in others, and thereby further complicating efforts to delineate stable market boundaries.

In this context, Jonathan Barnett’s observations regarding the dangers of preemptive antitrust approaches in nascent markets are particularly salient.[102] Barnett argues persuasively that, at the early stages of a market’s development, uncertainty concerning the competitive effects of certain business practices is likely to be especially high.[103] This uncertainty engenders a significant risk of false-positive error costs, whereby preemptive intervention may inadvertently suppress practices that are either competitively neutral or potentially procompetitive.[104]

The risk of regulatory overreach is particularly acute in the realm of AI, where the full spectrum of potential applications and competitive dynamics remains largely speculative. Premature market definition and subsequent enforcement actions based on such definitions could stifle innovation and impede the natural evolution of AI technologies and business models.

Further complicating matters is the fact that what constitutes a relevant product in AI markets is often ambiguous and subject to rapid change. The modular nature of many AI systems, where components can be combined and reconfigured to serve diverse functions, challenges traditional notions of product markets. For instance, a foundational language model might serve as a critical input for a wide array of downstream applications, from chatbots to content-generation tools, each potentially constituting a distinct product market. The boundaries between these markets, and the extent to which they overlap or remain distinct, are likely to remain in flux in the near future.

Given these uncertainties, antitrust authorities must adopt a posture of epistemic humility when approaching market definition in the context of AI. This approach of acknowledged uncertainty and adaptive analysis does not imply regulatory paralysis. Rather, it calls for a more nuanced and dynamic form of antitrust oversight, one that remains vigilant to potential competitive harms while avoiding premature or overly rigid market definitions that could impede innovation.

Market definition should reflect our best understanding of both AI and AI markets. Since this understanding is still very much in an incipient phase, antitrust authorities should view their current efforts not as definitive pronouncements on the structure of AI markets, but as iterative steps in an ongoing process of learning and adaptation. By maintaining this perspective, regulators can hope to strike a balance between addressing legitimate competitive concerns and fostering an environment conducive to continued innovation and dynamic competition in the AI sector.

D. Key Questions to Ask

Finally, the most important role for enforcement authorities at this moment is to ask the right questions, the kind that will help develop a sound analytical framework for defining relevant markets in subsequent competition analyses. This framework should be predicated on a series of inquiries designed to elucidate the true nature of competitive dynamics in AI-adjacent markets. While the specific contours of relevant markets may remain elusive, the process of rigorous questioning can provide valuable insights and guide enforcement decisions.

Two fundamental questions emerge as critical starting points for any attempt to define relevant markets in AI contexts.

First, “Who are the consumers, and what is the product or service?” This seemingly straightforward inquiry belies a complex web of considerations in AI markets. The consumers of AI technologies and services are often not end-users, but rather, intermediaries that participate in complex value chains. For instance, the market for AI chips encompasses not only direct purchasers like cloud-service providers, but also downstream consumers of AI-powered applications. Similarly, the product or service in question may not be a discrete AI technology, but rather a bundle of AI and non-AI components, or even a service powered by AI but indistinguishable to the end user from non-AI alternatives.

The heterogeneity of AI consumers and products necessitates a granular approach to market definition. Antitrust authorities must carefully delineate between different levels of the AI value chain, considering the distinct competitive dynamics at each level. This may involve separate analyses for markets in AI inputs (such as specialized hardware or training data), AI development tools, and AI-powered end-user applications.

Second, and perhaps more crucially, “Does AI fundamentally transform the product or service in a way that creates a distinct market?” This question is at the heart of the challenge in defining AI markets. It requires a nuanced assessment of the degree to which AI capabilities alter the nature of a product or service from the perspective of consumers.

In some cases, AI’s integration into products or services may represent merely an incremental improvement, not warranting the delineation of a separate market. For example, AI-enhanced spell-checking in word-processing software might not constitute a distinct market from traditional spell-checkers if consumers do not perceive a significant functional difference.

Conversely, in other cases, AI may enable entirely new functionalities or levels of performance that create distinct markets. Large language models capable of generating human-like text, for instance, might be considered to operate in a market separate from traditional writing aids or information-retrieval tools (or not, depending on how consumers weigh the costs and benefits of each option).

The analysis must also consider the potential for AI to blur the boundaries between previously distinct markets. As AI systems become more versatile, they may compete across multiple traditional product categories, challenging conventional market definitions.

In addressing these questions, antitrust authorities should consider several additional factors:

  1. The degree of substitutability between AI and non-AI solutions, from the perspective of both direct purchasers and end-users (one conventional way to operationalize this inquiry is sketched after this list).
  2. The extent to which AI capabilities are perceived as essential or differentiating factors by consumers in the relevant market.
  3. The potential for rapid evolution in AI capabilities and consumer preferences, which may necessitate dynamic market definitions.
  4. The presence of switching costs or lock-in effects, which could influence market boundaries.
  5. The geographic scope of AI markets, which may transcend traditional national or regional boundaries.
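On the first of these factors, the hypothetical-monopolist (SSNIP) test and its critical-loss arithmetic offer one conventional way to discipline the substitutability inquiry. The sketch below is a minimal illustration of that standard apparatus; the candidate market, the margin, and the switching figures are hypothetical assumptions chosen for illustration only, not estimates for any actual AI market.

```python
def critical_loss(ssnip: float, margin: float) -> float:
    """Share of sales a hypothetical monopolist can afford to lose before
    a small price increase (the SSNIP) becomes unprofitable (linear form)."""
    return ssnip / (ssnip + margin)

# Candidate market: "AI-powered translation services" (hypothetical example).
# Assume a 5% SSNIP and a 40% gross margin -- illustrative numbers only.
cl = critical_loss(ssnip=0.05, margin=0.40)
print(f"Critical loss: {cl:.1%}")  # ~11.1%

# If more than ~11% of customers would switch to human translators or other
# non-AI alternatives in response to a 5% price increase, the candidate
# market is drawn too narrowly and should be broadened to include them.
```

The point of the exercise is the one made throughout this section: whether non-AI alternatives belong in the relevant market is an empirical question about actual patterns of substitution, not a question of how a product is labeled.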

It is crucial to note that these questions do not yield simple or static answers. Rather, they serve as analytical tools to guide ongoing assessment of AI markets. Antitrust authorities must be prepared to revisit and refine their market definitions as technological capabilities evolve and market dynamics shift.

Moreover, the process of defining relevant markets in the context of AI should not be viewed as an end in itself, but as a means to understand competitive dynamics and to inform enforcement decisions. In some cases, traditional market-definition exercises may prove insufficient, necessitating alternative analytical approaches that focus on competitive effects or innovation harms.

By embracing this questioning approach, antitrust authorities can develop a more nuanced and adaptable framework for market definition in AI contexts. This approach would acknowledge the complexities and uncertainties inherent in AI markets, while providing a structured methodology to assess competitive dynamics. As our understanding of AI markets deepens, this framework will need to evolve further, ensuring that antitrust enforcement remains responsive to the unique challenges posed by artificial-intelligence technologies.

[1] Press Release, Justice Department and Stanford University to Cohost Workshop “Promoting Competition in Artificial Intelligence”, U.S. Justice Department (May 21, 2024), https://www.justice.gov/opa/pr/justice-department-and-stanford-university-cohost-workshop-promoting-competition-artificial.

[2] Artificial intelligence is, of course, not a market (at least not a relevant antitrust market). Within the realm of what is called “AI,” companies offer myriad products and services, and specific relevant markets would need to be defined before assessing harm to competition in specific cases.

[3] Nathan Newman, Taking on Google’s Monopoly Means Regulating Its Control of User Data, Huffington Post (Sep. 24, 2013), http://www.huffingtonpost.com/nathan-newman/taking-on-googlesmonopol_b_3980799.html.

[4] See, e.g., Lina Khan & K. Sabeel Rahman, Restoring Competition in the U.S. Economy, in Untamed: How to Check Corporate, Financial, and Monopoly Power (Nell Abernathy, Mike Konczal, & Kathryn Milani, eds., 2016), at 23 (“From Amazon to Google to Uber, there is a new form of economic power on display, distinct from conventional monopolies and oligopolies…, leverag[ing] data, algorithms, and internet-based technologies… in ways that could operate invisibly and anticompetitively.”); Mark Weinstein, I Changed My Mind—Facebook Is a Monopoly, Wall St. J. (Oct. 1, 2021), https://www.wsj.com/articles/facebook-is-monopoly-metaverse-users-advertising-platforms-competition-mewe-big-tech-11633104247 (“[T]he glue that holds it all together is Facebook’s monopoly over data…. Facebook’s data troves give it unrivaled knowledge about people, governments—and its competitors.”).

[5] See, generally, Abigail Slater, Why “Big Data” Is a Big Deal, The Reg. Rev. (Nov. 6, 2023), https://www.theregreview.org/2023/11/06/slater-why-big-data-is-a-big-deal; Amended Complaint at ¶36, United States v. Google, 1:20-cv-03010- (D.D.C. 2020); Complaint at ¶37, United States v. Google, 1:23-cv-00108 (E.D. Va. 2023), https://www.justice.gov/opa/pr/justice-department-sues-google-monopolizing-digital-advertising-technologies (“Google intentionally exploited its massive trove of user data to further entrench its monopoly across the digital advertising industry.”).

[6] See, e.g., Press Release, Commission Launches Calls for Contributions on Competition in Virtual Worlds and Generative AI, European Commission (Jan. 9, 2024), https://ec.europa.eu/commission/presscorner/detail/en/IP_24_85; Krysten Crawford, FTC’s Lina Khan Warns Big Tech over AI, SIEPR (Nov. 3, 2023), https://siepr.stanford.edu/news/ftcs-lina-khan-warns-big-tech-over-ai (“Federal Trade Commission Chair Lina Khan delivered a sharp warning to the technology industry in a speech at Stanford on Thursday: Antitrust enforcers are watching what you do in the race to profit from artificial intelligence.”) (emphasis added).

[7] See, e.g., John M. Newman, Antitrust in Digital Markets, 72 Vand. L. Rev. 1497, 1501 (2019) (“[T]he status quo has frequently failed in this vital area, and it continues to do so with alarming regularity. The laissez-faire approach advocated for by scholars and adopted by courts and enforcers has allowed potentially massive harms to go unchecked.”); Bertin Martins, Are New EU Data Market Regulations Coherent and Efficient?, Bruegel Working Paper 21/23 (2023), https://www.bruegel.org/working-paper/are-new-eu-data-market-regulations-coherent-and-efficient (“Technical restrictions on access to and re-use of data may result in failures in data markets and data-driven services markets.”); Valéria Faure-Muntian, Competitive Dysfunction: Why Competition Law Is Failing in a Digital World, The Forum Network (Feb. 24, 2021), https://www.oecd-forum.org/posts/competitive-dysfunction-why-competition-law-is-failing-in-a-digital-world.

[8] See Rana Foroohar, The Great US-Europe Antitrust Divide, Financial Times (Feb. 5, 2024), https://www.ft.com/content/065a2f93-dc1e-410c-ba9d-73c930cedc14.

[9] See, e.g., Press Release, European Commission, supra note 6.

[10] See infra Section I.B. Commentators have also made similar claims; see, e.g., Ganesh Sitaraman & Tejas N. Narechania, It’s Time for the Government to Regulate AI. Here’s How, Politico (Jan. 15, 2024) (“All that cloud computing power is used to train foundation models by having them “learn” from incomprehensibly huge quantities of data. Unsurprisingly, the entities that own these massive computing resources are also the companies that dominate model development. Google has Bard, Meta has LLaMa. Amazon recently invested $4 billion into one of OpenAI’s leading competitors, Anthropic. And Microsoft has a 49 percent ownership stake in OpenAI — giving it extraordinary influence, as the recent board struggles over Sam Altman’s role as CEO showed.”).

[11] Press Release, European Commission, supra note 6.

[12] Comment of U.S. Federal Trade Commission to the U.S. Copyright Office, Artificial Intelligence and Copyright, Docket No. 2023-6 (Oct. 30, 2023), at 4, https://www.ftc.gov/legal-library/browse/advocacy-filings/comment-federal-trade-commission-artificial-intelligence-copyright (emphasis added).

[13] Jonathan Kanter, Remarks at the Promoting Competition in AI Conference (May 30, 2024), https://youtu.be/yh--1AGf3aU?t=424.

[14] Karin Matussek, AI Will Fuel Antitrust Fires, Big Tech’s German Nemesis Warns, Bloomberg (Jun. 26, 2024), https://www.bloomberg.com/news/articles/2024-06-26/ai-will-fuel-antitrust-fires-big-tech-s-german-nemesis-warns?srnd=technology-vp.

[15] Id.

[16] See, e.g., Joe Caserta, Holger Harreis, Kayvaun Rowshankish, Nikhil Srinidhi, & Asin Tavakoli, The Data Dividend: Fueling Generative AI, McKinsey Digital (Sep. 15, 2023), https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-data-dividend-fueling-generative-ai (“Your data and its underlying foundations are the determining factors to what’s possible with generative AI.”).

[17] See, e.g., Tim Keary, Google DeepMind’s Achievements and Breakthroughs in AI Research, Techopedia (Aug. 11, 2023), https://www.techopedia.com/google-deepminds-achievements-and-breakthroughs-in-ai-research; see, e.g., Will Douglas Heaven, Google DeepMind Used a Large Language Model to Solve an Unsolved Math Problem, MIT Technology Review (Dec. 14, 2023), https://www.technologyreview.com/2023/12/14/1085318/google-deepmind-large-language-model-solve-unsolvable-math-problem-cap-set; see also, A Decade of Advancing the State-of-the-Art in AI Through Open Research, Meta (Nov. 30, 2023), https://about.fb.com/news/2023/11/decade-of-advancing-ai-through-open-research; see also, 200 Languages Within a Single AI Model: A Breakthrough in High-Quality Machine Translation, Meta, https://ai.meta.com/blog/nllb-200-high-quality-machine-translation (last visited Jan. 18, 2024).

[18] See, e.g., Jennifer Allen, 10 Years of Siri: The History of Apple’s Voice Assistant, Tech Radar (Oct. 4, 2021), https://www.techradar.com/news/siri-10-year-anniversary; see also Evan Selleck, How Apple Is Already Using Machine Learning and AI in iOS, Apple Insider (Nov. 20, 2023), https://appleinsider.com/articles/23/09/02/how-apple-is-already-using-machine-learning-and-ai-in-ios; see also, Kathleen Walch, The Twenty Year History Of AI At Amazon, Forbes (Jul. 19, 2019), https://www.forbes.com/sites/cognitiveworld/2019/07/19/the-twenty-year-history-of-ai-at-amazon.

[19] See infra Section I.C.

[20] Josh Sisco, POLITICO PRO Q&A: Exit interview with DOJ Chief Antitrust Economist Susan Athey, Politico Pro (Jul. 2, 2024), https://subscriber.politicopro.com/article/2024/07/politico-pro-q-a-exit-interview-with-doj-chief-antitrust-economist-susan-athey-00166281.

[21] Belle Lin, Open-Source Companies Are Sharing Their AI Free. Can They Crack OpenAI’s Dominance?, Wall St. J. (Mar. 21, 2024), https://www.wsj.com/articles/open-source-companies-are-sharing-their-ai-free-can-they-crack-openais-dominance-26149e9c.

[22] See, e.g., Cédric Argenton & Jens Prüfer, Search Engine Competition with Network Externalities, 8 J. Comp. L. & Econ. 73, 74 (2012).

[23] John M. Yun, The Role of Big Data in Antitrust, in The Global Antitrust Institute Report on the Digital Economy (Joshua D. Wright & Douglas H. Ginsburg, eds., Nov. 11, 2020) at 233, https://gaidigitalreport.com/2020/08/25/big-data-and-barriers-to-entry/#_ftnref50; see also, e.g., Robert Wayne Gregory, Ola Henfridsson, Evgeny Kaganer, & Harris Kyriakou, The Role of Artificial Intelligence and Data Network Effects for Creating User Value, 46 Acad. of Mgmt. Rev. 534 (2020), final pre-print version at 4, http://wrap.warwick.ac.uk/134220 (“A platform exhibits data network effects if, the more that the platform learns from the data it collects on users, the more valuable the platform becomes to each user.”); see also, Karl Schmedders, José Parra-Moyano, & Michael Wade, Why Data Aggregation Laws Could be the Answer to Big Tech Dominance, Silicon Republic (Feb. 6, 2024), https://www.siliconrepublic.com/enterprise/data-ai-aggregation-laws-regulation-big-tech-dominance-competition-antitrust-imd.

[24] Nathan Newman, Search, Antitrust, and the Economics of the Control of User Data, 31 Yale J. Reg. 401, 409 (2014) (emphasis added); see also id. at 420 & 423 (“While there are a number of network effects that come into play with Google, [“its intimate knowledge of its users contained in its vast databases of user personal data”] is likely the most important one in terms of entrenching the company’s monopoly in search advertising…. Google’s overwhelming control of user data… might make its dominance nearly unchallengeable.”).

[25] See also Yun, supra note 23 at 229 (“[I]nvestments in big data can create competitive distance between a firm and its rivals, including potential entrants, but this distance is the result of a competitive desire to improve one’s product.”).

[26] For a review of the literature on increasing returns to scale in data (this topic is broader than data-network effects), see Geoffrey Manne & Dirk Auer, Antitrust Dystopia and Antitrust Nostalgia: Alarmist Theories of Harm in Digital Markets and Their Origins, 28 Geo. Mason L. Rev. 1281, 1344 (2021).

[27] Andrei Hagiu & Julian Wright, Data-Enabled Learning, Network Effects, and Competitive Advantage, 54 RAND J. Econ. 638 (2023).

[28] Id. at 639. The authors conclude that “Data-enabled learning would seem to give incumbent firms a competitive advantage. But how strong is this advantage and how does it differ from that obtained from more traditional mechanisms… .”

[29] Id.

[30] Bruno Jullien & Wilfried Sand-Zantman, The Economics of Platforms: A Theory Guide for Competition Policy, 54 Info. Econ. & Pol’y 100880 (2021).

[31] Daniele Condorelli & Jorge Padilla, Harnessing Platform Envelopment in the Digital World, 16 J. Comp. L. & Econ. 143, 167 (2020).

[32] See Hagiu & Wright, supra note 27.

[33] For a summary of these limitations, see generally Catherine Tucker, Network Effects and Market Power: What Have We Learned in the Last Decade?, Antitrust (2018) at 72, available at https://sites.bu.edu/tpri/files/2018/07/tucker-network-effects-antitrust2018.pdf; see also Manne & Auer, supra note 26, at 1330.

[34] See Jason Furman, Diane Coyle, Amelia Fletcher, Derek McAuley, & Philip Marsden (Dig. Competition Expert Panel), Unlocking Digital Competition (2019) at 32-35 (“Furman Report”), available at https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/785547/unlocking_digital_competition_furman_review_web.pdf.

[35] Id. at 34.

[36] Id. at 35. To its credit, it should be noted, the Furman Report does counsel caution before mandating access to data as a remedy to promote competition. See id. at 75. That said, the Furman Report maintains that such a remedy should remain on the table because “the evidence suggests that large data holdings are at the heart of the potential for some platform markets to be dominated by single players and for that dominance to be entrenched in a way that lessens the potential for competition for the market.” Id. The evidence, however, does not show this.

[37] Case COMP/M.9660 — Google/Fitbit, Commission Decision (Dec. 17, 2020) (Summary at O.J. (C 194) 7), available at https://ec.europa.eu/competition/mergers/cases1/202120/m9660_3314_3.pdf, at 455.

[38] Id. at 896.

[39] See Natasha Lomas, EU Checking if Microsoft’s OpenAI Investment Falls Under Merger Rules, TechCrunch (Jan. 9, 2024), https://techcrunch.com/2024/01/09/openai-microsoft-eu-merger-rules.

[40] Amended Complaint at 11, Meta/Zuckerberg/Within, Fed. Trade Comm’n. (2022) (No. 605837), available at https://www.ftc.gov/system/files/ftc_gov/pdf/D09411%20-%20AMENDED%20COMPLAINT%20FILED%20BY%20COUNSEL%20SUPPORTING%20THE%20COMPLAINT%20-%20PUBLIC%20%281%29_0.pdf.

[41] Amended Complaint (D.D.C), supra note 5 at ¶37.

[42] Amended Complaint (E.D. Va), supra note 5 at ¶8.

[43] Merger Guidelines, US Dep’t of Justice & Fed. Trade Comm’n (2023) at 25, available at https://www.ftc.gov/system/files/ftc_gov/pdf/2023_merger_guidelines_final_12.18.2023.pdf.

[44] Merger Assessment Guidelines, Competition and Mkts. Auth. (2021) at ¶7.19(e), available at https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1051823/MAGs_for_publication_2021_–_.pdf.

[45] Furman Report, supra note 34, at ¶4.

[46] See, e.g., Chris Westfall, New Research Shows ChatGPT Reigns Supreme in AI Tool Sector, Forbes (Nov. 16, 2023), https://www.forbes.com/sites/chriswestfall/2023/11/16/new-research-shows-chatgpt-reigns-supreme-in-ai-tool-sector/?sh=7de5de250e9c; Sujan Sarkar, AI Industry Analysis: 50 Most Visited AI Tools and Their 24B+ Traffic Behavior, Writerbuddy (last visited Jul. 15, 2024), https://writerbuddy.ai/blog/ai-industry-analysis.

[47] See Krystal Hu, ChatGPT Sets Record for Fastest-Growing User Base, Reuters (Feb. 2, 2023), https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01; Google: The AI Race Is On, App Economy Insights (Feb. 7, 2023), https://www.appeconomyinsights.com/p/google-the-ai-race-is-on.

[48] See Google Trends, https://trends.google.com/trends/explore?date=today%205-y&q=%2Fg%2F11khcfz0y2,%2Fg%2F11ts49p01g&hl=en (last visited Jan. 12, 2024) and https://trends.google.com/trends/explore?date=today%205-y&geo=US&q=%2Fg%2F11khcfz0y2,%2Fg%2F11ts49p01g&hl=en (last visited Jan. 12, 2024).

[49] See David F. Carr, As ChatGPT Growth Flattened in May, Google Bard Rose 187%, Similarweb Blog (Jun. 5, 2023), https://www.similarweb.com/blog/insights/ai-news/chatgpt-bard.

[50] See Press Release, Introducing New AI Experiences Across Our Family of Apps and Devices, Meta (Sep. 27, 2023), https://about.fb.com/news/2023/09/introducing-ai-powered-assistants-characters-and-creative-tools; Sundar Pichai, An Important Next Step on Our AI Journey, Google Keyword Blog (Feb. 6, 2023), https://blog.google/technology/ai/bard-google-ai-search-updates.

[51] See Ion Prodan, 14 Million Users: Midjourney’s Statistical Success, Yon (Aug. 19, 2023), https://yon.fun/midjourney-statistics; see also Andrew Wilson, Midjourney Statistics: Users, Polls, & Growth [Oct 2023], ApproachableAI (Oct. 13, 2023), https://approachableai.com/midjourney-statistics.

[52] See Hema Budaraju, New Ways to Get Inspired with Generative AI in Search, Google Keyword Blog (Oct. 12, 2023), https://blog.google/products/search/google-search-generative-ai-october-update; Imagine with Meta AI, Meta (last visited Jan. 12, 2024), https://imagine.meta.com.

[53] Catherine Tucker, Digital Data, Platforms and the Usual [Antitrust] Suspects: Network Effects, Switching Costs, Essential Facility, 54 Rev. Indus. Org. 683, 686 (2019).

[54] Manne & Auer, supra note 26, at 1345.

[55] See, e.g., Stefanie Koperniak, Artificial Data Give the Same Results as Real Data—Without Compromising Privacy, MIT News (Mar. 3, 2017), https://news.mit.edu/2017/artificial-data-give-same-results-as-real-data-0303 (“[Authors] describe a machine learning system that automatically creates synthetic data—with the goal of enabling data science efforts that, due to a lack of access to real data, may have otherwise not left the ground. While the use of authentic data can cause significant privacy concerns, this synthetic data is completely different from that produced by real users—but can still be used to develop and test data science algorithms and models.”).

[56] See, e.g., Rachel Gordon, Synthetic Imagery Sets New Bar in AI Training Efficiency, MIT News (Nov. 20, 2023), https://news.mit.edu/2023/synthetic-imagery-sets-new-bar-ai-training-efficiency-1120 (“By using synthetic images to train machine learning models, a team of scientists recently surpassed results obtained from traditional ‘real-image’ training methods.”).

[57] Thibault Schrepel & Alex ‘Sandy’ Pentland, Competition Between AI Foundation Models: Dynamics and Policy Recommendations, MIT Connection Science Working Paper (Jun. 2023), at 8.

[58] Igor Susmelj, Optimizing Generative AI: The Role of Data Curation, Lightly (last visited Jan. 15, 2024), https://www.lightly.ai/post/optimizing-generative-ai-the-role-of-data-curation.

[59] See, e.g., Xiaoliang Dai, et al., Emu: Enhancing Image Generation Models Using Photogenic Needles in a Haystack, ArXiv (Sep. 27, 2023) at 1, https://ar5iv.labs.arxiv.org/html/2309.15807 (“[S]upervised fine-tuning with a set of surprisingly small but extremely visually appealing images can significantly improve the generation quality.”); see also, Hu Xu, et al., Demystifying CLIP Data, ArXiv (Sep. 28, 2023), https://arxiv.org/abs/2309.16671.

[60] Lauren Leffer, New Training Method Helps AI Generalize like People Do, Sci. Am. (Oct. 26, 2023), https://www.scientificamerican.com/article/new-training-method-helps-ai-generalize-like-people-do (discussing Brendan M. Lake & Marco Baroni, Human-Like Systematic Generalization Through a Meta-Learning Neural Network, 623 Nature 115 (2023)).

[61] Timothy B. Lee, The Real Research Behind the Wild Rumors about OpenAI’s Q* Project, Ars Technica (Dec. 8, 2023), https://arstechnica.com/ai/2023/12/the-real-research-behind-the-wild-rumors-about-openais-q-project.

[62] Id.; see also GSM8K, Papers with Code (last visited Jan. 18, 2024), https://paperswithcode.com/dataset/gsm8k; MATH Dataset, GitHub (last visited Jan. 18, 2024), https://github.com/hendrycks/math.

[63] Lee, supra note 61.

[64] Geoffrey Manne & Ben Sperry, Debunking the Myth of a Data Barrier to Entry for Online Services, Truth on the Market (Mar. 26, 2015), https://truthonthemarket.com/2015/03/26/debunking-the-myth-of-a-data-barrier-to-entry-for-online-services (citing Andres V. Lerner, The Role of ‘Big Data’ in Online Platform Competition (Aug. 26, 2014), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2482780).

[65] See Catherine Tucker, Digital Data as an Essential Facility: Control, CPI Antitrust Chron. (Feb. 2020), at 11 (“[U]ltimately the value of data is not the raw manifestation of the data itself, but the ability of a firm to use this data as an input to insight.”).

[66] Or, as John Yun put it, data is only a small component of digital firms’ production function. See Yun, supra note 23, at 235 (“Second, while no one would seriously dispute that having more data is better than having less, the idea of a data-driven network effect is focused too narrowly on a single factor improving quality. As mentioned in supra Section I.A, there are a variety of factors that enter a firm’s production function to improve quality.”).

[67] Luxia Le, The Real Reason Windows Phone Failed Spectacularly, History-Computer (Aug. 8, 2023), https://history-computer.com/the-real-reason-windows-phone-failed-spectacularly.

[68] Introducing the GPT Store, OpenAI (Jan. 10, 2024), https://openai.com/blog/introducing-the-gpt-store.

[69] See Michael Schade, How ChatGPT and Our Language Models Are Developed, OpenAI, https://help.openai.com/en/articles/7842364-how-chatgpt-and-our-language-models-are-developed; Sreejani Bhattacharyya, Interesting Innovations from OpenAI in 2021, AIM (Jan. 1, 2022), https://analyticsindiamag.com/interesting-innovations-from-openai-in-2021; Danny Hernadez & Tom B. Brown, Measuring the Algorithmic Efficiency of Neural Networks, ArXiv (May 8, 2020), https://arxiv.org/abs/2005.04305.

[70] See Yun, supra note 23 at 235 (“Even if data is primarily responsible for a platform’s quality improvements, these improvements do not simply materialize with the presence of more data—which differentiates the idea of data-driven network effects from direct network effects. A firm needs to intentionally transform raw, collected data into something that provides analytical insights. This transformation involves costs including those associated with data storage, organization, and analytics, which moves the idea of collecting more data away from a strict network effect to more of a ‘data opportunity.’”).

[71] Lerner, supra note 64, at 4-5 (emphasis added).

[72] See Clayton M. Christensen, The Innovator’s Dilemma: When New Technologies Cause Great Firms to Fail (2013).

[73] See David J. Teece, Dynamic Capabilities and Strategic Management: Organizing for Innovation and Growth (2009).

[74] Antitrust merger enforcement has long assumed that horizontal mergers are more likely to cause problems for consumers than vertical mergers. See Geoffrey A. Manne, Dirk Auer, Brian Albrecht, Eric Fruits, Daniel J. Gilman, & Lazar Radic, Comments of the International Center for Law and Economics on the FTC & DOJ Draft Merger Guidelines (Sep. 18, 2023), https://laweconcenter.org/resources/comments-of-the-international-center-for-law-and-economics-on-the-ftc-doj-draft-merger-guidelines.

[75] See Hagiu & Wright, supra note 27, at 27 (“We use our dynamic framework to explore how data sharing works: we find that it increases consumer surplus when one firm is sufficiently far ahead of the other by making the laggard more competitive, but it decreases consumer surplus when the firms are sufficiently evenly matched by making firms compete less aggressively, which in our model means subsidizing consumers less.”); see also Lerner, supra note 64.

[76] See, e.g., Hagiu & Wright, id. (“We also use our model to highlight an unintended consequence of privacy policies. If such policies reduce the rate at which firms can extract useful data from consumers, they will tend to increase the incumbent’s competitive advantage, reflecting that the entrant has more scope for new learning and so is affected more by such a policy.”); Jian Jia, Ginger Zhe Jin, & Liad Wagman, The Short-Run Effects of the General Data Protection Regulation on Technology Venture Investment, 40 Marketing Sci. 593 (2021) (finding GDPR reduced investment in new and emerging technology firms, particularly in data-related ventures); James Campbell, Avi Goldfarb, & Catherine Tucker, Privacy Regulation and Market Structure, 24 J. Econ. & Mgmt. Strat. 47 (2015) (“Consequently, rather than increasing competition, the nature of transaction costs implied by privacy regulation suggests that privacy regulation may be anti-competitive.”).

[77] See Jonathan M. Barnett, “Killer Acquisitions” Reexamined: Economic Hyperbole in the Age of Populist Antitrust, 3 U. Chi. Bus. L. Rev. 39 (2023).

[78] Id. at 85 (“At the same time, these transactions enhance competitive conditions by supporting the profit expectations that elicit VC investment in the startups that deliver the most transformative types of innovation to the biopharmaceutical ecosystem (and, in some cases, mature into larger firms that can challenge incumbents).”).

[79] Cade Metz, Karen Weise, & Mike Isaac, Nvidia’s Big Tech Rivals Put Their Own A.I. Chips on the Table, N.Y. Times (Jan. 29, 2024), https://www.nytimes.com/2024/01/29/technology/ai-chips-nvidia-amazon-google-microsoft-meta.html.

[80] See, e.g., Chris Metinko, Nvidia’s Big Tech Rivals Put Their Own A.I. Chips on the Table, CrunchBase (Jun. 12, 2024), https://news.crunchbase.com/ai/msft-nvda-lead-big-tech-startup-investment.

[81] CMA Seeks Views on AI Partnerships and Other Arrangements, Competition and Mkts. Auth. (Apr. 24, 2024), https://www.gov.uk/government/news/cma-seeks-views-on-ai-partnerships-and-other-arrangements.

[82] As noted supra, companies offer myriad “AI” products and services, and specific relevant markets would need to be defined before assessing harm to competition in specific cases.

[83] Start-ups, Killer Acquisitions and Merger Control, OECD (2020), available at https://web-archive.oecd.org/2020-10-16/566931-start-ups-killer-acquisitions-and-merger-control-2020.pdf.

[84] Kate Rooney & Hayden Field, Amazon Spends $2.75 Billion on AI Startup Anthropic in Its Largest Venture Investment Yet, CNBC (Mar. 27, 2024), https://www.cnbc.com/2024/03/27/amazon-spends-2point7b-on-startup-anthropic-in-largest-venture-investment.html.

[85] Id.

[86] Tom Warren, Microsoft Partners with Mistral in Second AI Deal Beyond OpenAI, The Verge (Feb. 26, 2024), https://www.theverge.com/2024/2/26/24083510/microsoft-mistral-partnership-deal-azure-ai.

[87] Mark Sullivan, Microsoft’s Inflection AI Grab Likely Cost More Than $1 Billion, Says An Insider (Exclusive), Fast Company (Mar. 26, 2024), https://www.fastcompany.com/91069182/microsoft-inflection-ai-exclusive; see also, Mustafa Suleyman, DeepMind and Inflection Co-Founder, Joins Microsoft to Lead Copilot, Microsoft Corporate Blogs (Mar. 19, 2024), https://blogs.microsoft.com/blog/2024/03/19/mustafa-suleyman-deepmind-and-inflection-co-founder-joins-microsoft-to-lead-copilot; Krystal Hu & Harshita Mary Varghese, Microsoft Pays Inflection $650 Mln in Licensing Deal While Poaching Top Talent, Source Says, Reuters (Mar. 21, 2024), https://www.reuters.com/technology/microsoft-agreed-pay-inflection-650-mln-while-hiring-its-staff-information-2024-03-21; The New Inflection: An Important Change to How We’ll Work, Inflection (Mar. 19, 2024), https://inflection.ai/the-new-inflection; Julie Bort, Here’s How Microsoft Is Providing a ‘Good Outcome’ for Inflection AI VCs, as Reid Hoffman Promised, TechCrunch (Mar. 21, 2024), https://techcrunch.com/2024/03/21/microsoft-inflection-ai-investors-reid-hoffman-bill-gates.

[88] See, e.g., Paul Marsh, The Choice Between Equity and Debt: An Empirical Study, 37 The J. of Finance 121, 142 (1982) (“First, it demonstrates that companies are heavily influenced by market conditions and the past history of security prices in choosing between equity and debt. Indeed, these factors appeared to be far more significant in our model than, for example, other variables such as the company’s existing financial structure. Second, this study provides evidence that companies do appear to make their choice of financing instrument as though they had target levels in mind for both the long term debt ratio, and the ratio of short term to total debt. Finally, the results are consistent with the notion that these target levels are themselves functions of company size, bankruptcy risk, and asset composition.”); see also, Armen Hovakimian, Tim Opler, & Sheridan Titman, The Debt-Equity Choice, 36 J. of Financial and Quantitative Analysis 1, 3 (2001) (“Our results suggest that, although pecking order considerations affect corporate debt ratios in the short-run, firms tend to make financing choices that move them toward target debt ratios that are consistent with tradeoff models of capital structure choice. For example, our findings confirm that more profitable firms have, on average, lower leverage ratios. But we also find that more profitable firms are more likely to issue debt rather than equity and are more likely to repurchase equity rather than retire debt. Such behavior is consistent with our conjecture that the most profitable firms become under-levered and that firms’ financing choices tend to offset these earnings-driven changes in their capital structures.”); see also, Sabri Boubaker, Wael Rouatbi, & Walid Saffar, The Role of Multiple Large Shareholders in the Choice of Debt Source, 46 Financial Management 241, 267 (2017) (“Our analysis shows that firms controlled by more than one large shareholder tend to rely more heavily on bank debt financing. Moreover, we find that the proportion of bank debt in total debt is significantly higher for firms with higher contestability of the largest controlling owner’s power.”).

[89] Sabri Boubaker, Walid Saffar, & Syrine Sassi, Product Market Competition and Debt Choice, 49 J. of Corp. Finance 204, 208 (2018) (“Our findings that firms substitute away from bank debt when faced with intense market pressure echo the intuition in previous studies that the disciplinary force of competition substitutes for the need to discipline firms through other forms of governance.”).

[90] See, e.g., George Hammond, Andreessen Horowitz Raises $7.2bn and Sets Sights on AI Start-ups, Financial Times (Apr. 16, 2024), https://www.ft.com/content/fdef2f53-f8f7-4553-866b-1c9bfdbeea42; Elon Musk’s xAI Says It Raised $6 Billion to Develop Artificial Intelligence, Moneywatch (May 27, 2024), https://www.cbsnews.com/news/elon-musk-xai-6-billion; Krystal Hu, AI Search Startup Genspark Raises $60 Million in Seed Round to Challenge Google, Reuters (Jun. 18, 2024), https://www.reuters.com/technology/artificial-intelligence/ai-search-startup-genspark-raises-60-million-seed-round-challenge-google-2024-06-18; Visa to Invest $100 Million in Generative AI for Commerce and Payments, PYMNTS (Oct. 2, 2023), https://www.pymnts.com/artificial-intelligence-2/2023/visa-to-invest-100-million-in-generative-ai-for-commerce-and-payments.

[91] See, e.g., Eze Vidra, Is Generative AI the Biggest Platform Shift Since Cloud and Mobile?, VC Cafe (Mar. 6, 2023), https://www.vccafe.com/2023/03/06/is-generative-ai-the-biggest-platform-shift-since-cloud-and-mobile. See also, OpenAI and Apple Announce Partnership to Integrate ChatGPT into Apple Experiences, OpenAI (Jun. 10, 2024), https://openai.com/index/openai-and-apple-announce-partnership (“Apple is integrating ChatGPT into experiences within iOS, iPadOS, and macOS, allowing users to access ChatGPT’s capabilities—including image and document understanding—without needing to jump between tools.”). See also, Yusuf Mehdi, Reinventing Search With a new AI-powered Microsoft Bing and Edge, Your Copilot for the Web, Microsoft Official Blog (Feb. 7, 2023), https://blogs.microsoft.com/blog/2023/02/07/reinventing-search-with-a-new-ai-powered-microsoft-bing-and-edge-your-copilot-for-the-web (“‘AI will fundamentally change every software category, starting with the largest category of all – search,’ said Satya Nadella, Chairman and CEO, Microsoft. ‘Today, we’re launching Bing and Edge powered by AI copilot and chat, to help people get more from search and the web.’”).

[92] See, e.g., Amazon and Anthropic Deepen Their Shared Commitment to Advancing Generative AI, Amazon (Mar. 27, 2024), https://www.aboutamazon.com/news/company-news/amazon-anthropic-ai-investment (“Global organizations of all sizes, across virtually every industry, are already using Amazon Bedrock to build their generative AI applications with Anthropic’s Claude AI. They include ADP, Amdocs, Bridgewater Associates, Broadridge, CelcomDigi, Clariant, Cloudera, Dana-Farber Cancer Institute, Degas Ltd., Delta Air Lines, Druva, Enverus, Genesys, Genomics England, GoDaddy, HappyFox, Intuit, KT, LivTech, Lonely Planet, LexisNexis Legal & Professional, M1 Finance, Netsmart, Nexxiot, Parsyl, Perplexity AI, Pfizer, the PGA TOUR, Proto Hologram, Ricoh USA, Rocket Companies, and Siemens.”).

[93] Ownership of another firm’s assets is widely seen as a solution to contractual incompleteness. See, e.g., Sanford J. Grossman & Oliver D. Hart, The Costs and Benefits of Ownership: A Theory of Vertical and Lateral Integration, 94 J. Polit. Econ. 691, 716 (1986) (“When it is too costly for one party to specify a long list of the particular rights it desires over another party’s assets, then it may be optimal for the first party to purchase all rights except those specifically mentioned in the contract. Ownership is the purchase of these residual rights of control.”).

[94] See Amazon Staff, supra note 92.

[95] As the National Security Commission on Artificial Intelligence has observed: “AI is not a single technology breakthrough… The race for AI supremacy is not like the space race to the moon. AI is not even comparable to a general-purpose technology like electricity. However, what Thomas Edison said of electricity encapsulates the AI future: ‘It is a field of fields … it holds the secrets which will reorganize the life of the world.’ Edison’s astounding assessment came from humility. All that he discovered was ‘very little in comparison with the possibilities that appear.’” National Security Commission on Artificial Intelligence, Final Report, 7 (2021), available at https://www.dwt.com/-/media/files/blogs/artificial-intelligence-law-advisor/2021/03/nscai-final-report–2021.pdf.

[96] See, e.g., Structured vs Unstructured Data, IBM Cloud Education (Jun. 29, 2021), https://www.ibm.com/think/topics/structured-vs-unstructured-data; Dongdong Zhang, et al., Combining Structured and Unstructured Data for Predictive Models: A Deep Learning Approach, BMC Medical Informatics and Decision Making (Oct. 29, 2020), https://link.springer.com/article/10.1186/s12911-020-01297-6 (describing generally the use of both structured and unstructured data in predictive models for health care).

[97] For a somewhat technical discussion of all three methods, see generally Eric Benhamou, Similarities Between Policy Gradient Methods (PGM) in Reinforcement Learning (RL) and Supervised Learning (SL), SSRN (2019), https://ssrn.com/abstract=3391216.

[98] Id.

[99] For a discussion of the “buy vs build” decisions firms employing AI undertake, see Jonathan M. Barnett, The Case Against Preemptive Antitrust in the Generative Artificial Intelligence Ecosystem, in Artificial Intelligence and Competition Policy (Alden Abbott and Thibault Schrepel eds., 2024), at 3-6.

[100] See, e.g., Melissa Heikkilä & Will Douglas Heaven, What’s Next for AI in 2024, MIT Tech. Rev. (Jan. 4, 2024), https://www.technologyreview.com/2024/01/04/1086046/whats-next-for-ai-in-2024 (Runway hyping Gen-2 as a major film-production tool that, to date, still demonstrates serious limitations). LLMs, impressive as they are, have been touted as impending replacements for humans across many job categories, but still demonstrate many serious limitations that may ultimately limit their use cases. See, e.g., Melissa Malec, Large Language Models: Capabilities, Advancements, And Limitations, HatchWorksAI (Jun. 14, 2024), https://hatchworks.com/blog/gen-ai/large-language-models-guide.

[101] See, e.g., Hybrid AI: A Comprehensive Guide to Applications and Use Cases, SoluLab, https://www.solulab.com/hybrid-ai (last visited Jul. 12, 2024); Why Hybrid Intelligence Is the Future of Artificial Intelligence at McKinsey, McKinsey & Co. (Apr. 29, 2022), https://www.mckinsey.com/about-us/new-at-mckinsey-blog/hybrid-intelligence-the-future-of-artificial-intelligence; Vahe Andonians, Harnessing Hybrid Intelligence: Balancing AI Models and Human Expertise for Optimal Performance, Cognaize (Apr. 11, 2023), https://blog.cognaize.com/harnessing-hybrid-intelligence-balancing-ai-models-and-human-expertise-for-optimal-performance; Salesforce Artificial Intelligence, Salesforce, https://www.salesforce.com/artificial-intelligence (last visited Jul. 12, 2024) (combines traditional CRM and algorithms with AI modules); AI Overview, Adobe, https://www.adobe.com/ai/overview.html (last visited Jul. 12, 2024) (Adobe packages generative AI tools into its general graphic-design tools).

[102] Barnett, supra note 99.

[103] Id. at 7-8.

[104] Id.

Antitrust & Consumer Protection

The WGA’s Misguided Fears: Unpacking the Myths of Media Consolidation in the Streaming Era

TOTM While last year’s labor disputes between the Writers Guild of America (WGA) and the Screen Actors Guild (SAG-AFTRA), on the one hand, and Hollywood’s major . . .

While last year’s labor disputes between the Writers Guild of America (WGA) and the Screen Actors Guild (SAG-AFTRA), on the one hand, and Hollywood’s major movie studios, on the other, have been settled for months now, lingering questions remain about competitive conditions in the industry.

Read the full piece here.

Antitrust & Consumer Protection

Coalition Letter Opposing California SB 1047

Regulatory Comments We, the undersigned organizations and individuals, are writing to express our serious concerns about SB 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence . . .

We, the undersigned organizations and individuals, are writing to express our serious concerns about SB 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Systems Act. We believe that the bill, as currently written, would have severe unintended consequences that could stifle innovation, harm California’s economy, and undermine America’s global leadership in AI.

Our main concerns with SB 1047 are as follows:

  1. The application of the precautionary principle, codified as a “limited duty exemption,” would require developers to guarantee that their models cannot be misused for various harmful purposes, even before training begins. Given the general-purpose nature of AI technology, this is an unreasonable and impractical standard that could expose developers to criminal and civil liability for actions beyond their control.
  2. The bill’s compliance requirements, including implementing safety guidance from multiple sources and paying fees to fund the Frontier Model Division, would be expensive and time-consuming for many AI companies. This could drive businesses out of California and discourage new startups from forming. Given California’s current budget deficit and the state’s reliance upon capital gains taxation, even a marginal shift of AI startups to other states could be deleterious to the state government’s fiscal position.
  3. The bill’s definition of a “covered model” (a model trained with more than 10^26 floating-point operations, at a cost of more than $100 million) will create confusion, encourage an adversarial relationship between the Frontier Model Division and AI developers, and interfere with industry dynamics in unpredictable ways. First, it is not always straightforward to say what a training run for a model costs. Second, the Frontier Model Division will have an incentive to investigate AI companies’ finances and other records to ensure they are not training covered models, which will create another burden for developers. Finally, it penalizes companies based on the size of their investment in AI: if a company trains a model above the threshold, it will be regulated in perpetuity. Yet because compute costs fall rapidly, a competitor could train a comparable model six months later and be subject to no regulation at all (see the illustrative sketch following this list). This is nonsensical.
  4. The bill’s combination of the precautionary principle and liability (both criminal and civil) is incompatible with the way open-source software has been developed and distributed for decades. While this bill would not ban any existing open-source model, it constitutes a gradual legislated phasing out of open-source AI near today’s frontier.
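A back-of-the-envelope sketch illustrates the third point. Suppose a training run sits exactly at the bill’s 10^26-operation line; whether it also crosses the $100 million cost prong then turns entirely on prevailing compute prices. Every hardware figure below (per-accelerator throughput, utilization, hourly rental price) is a hypothetical assumption chosen for illustration; only the two thresholds come from the bill.

```python
# SB 1047's two prongs, as stated in the bill's "covered model" definition.
THRESHOLD_FLOPS = 1e26
THRESHOLD_COST_USD = 100e6

def training_cost_usd(total_flops: float,
                      flops_per_gpu_sec: float = 1e15,  # hypothetical accelerator
                      utilization: float = 0.4,         # hypothetical efficiency
                      usd_per_gpu_hour: float = 2.0) -> float:  # hypothetical price
    """Rough rental cost of a training run totaling `total_flops`."""
    gpu_hours = total_flops / (flops_per_gpu_sec * utilization * 3600)
    return gpu_hours * usd_per_gpu_hour

def cost_prong(cost: float) -> str:
    return "above" if cost > THRESHOLD_COST_USD else "below"

cost_now = training_cost_usd(THRESHOLD_FLOPS)                          # ~$139M
cost_later = training_cost_usd(THRESHOLD_FLOPS, usd_per_gpu_hour=1.0)  # ~$69M

print(f"Run at 1e26 operations today: ${cost_now / 1e6:.0f}M ({cost_prong(cost_now)} the $100M prong)")
print(f"Same run after prices halve:  ${cost_later / 1e6:.0f}M ({cost_prong(cost_later)} the $100M prong)")
```

On these assumptions, an identical training run moves from covered to uncovered solely because hardware prices fell, which is precisely the arbitrary result described above.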

The restrictions on open-source AI models described in our fourth point would undermine a key driver of innovation and collaboration in the field. The vast majority of stakeholders, including large tech companies, startups, the broader business community, academia, and civil society organizations like the Center for American Progress, have voiced support for open-source AI development. Open-source AI has also thus far played an essential role in interpretability and safety research; by limiting access to future open-source models, this bill could undermine progress in those vital fields.

We believe that SB 1047, if enacted in its current form, would have a chilling effect on AI research and development in California and potentially across the United States. It could slow down progress in a field that holds immense promise for advancing scientific understanding, improving medicine, and driving economic growth.

While we share the goal of ensuring that AI is developed and deployed responsibly, we urge you to reconsider the approach taken in SB 1047. The bill is also broadly inconsistent with the legislative direction suggested by the United States Senate’s Bipartisan Working Group on AI; if SB 1047 passes, California likely would be an unfortunate outlier in the broader context of American policy stances toward AI. In conclusion, we respectfully request that you either make substantial changes to SB 1047 to address the concerns outlined above or withdraw the bill entirely. We stand ready to work with you to find a path forward that promotes innovation while also ensuring the safe and responsible development of AI technology.

Sincerely,

Neil Chilson, Head of AI Policy, Abundance Institute

Kristian Stout, Director of Innovation Policy, International Center for Law & Economics

Lisa B. Nelson, CEO, ALEC Action

Logan Kolas, Director of Technology Policy, American Consumer Institute

Daniel Castro, Director, Center for Data Innovation

Taylor Barkley, Director of Public Policy, Abundance Institute

Adam Thierer, Resident Senior Fellow, Technology & Innovation, R Street Institute

Vance Ginn, Ph.D., Former Chief Economist, White House Office of Management and Budget

Jessica Melugin, Director, Center for Technology and Innovation, Competitive Enterprise Institute

Nathan Leamer, Executive Director, Digital First Project

Innovation & the New Economy

Dynamic Competition in Broadband Markets: A 2024 Update

ICLE White Paper I. Introduction In mid-2021, the International Center for Law & Economics (ICLE) published a white paper on the state of broadband competition in the United . . .

I. Introduction

In mid-2021, the International Center for Law & Economics (ICLE) published a white paper on the state of broadband competition in the United States,[1] which concluded that:

  • The U.S. broadband market was generally healthy and competitive, with 95.6% of the population having access to high-speed broadband;
  • Concentration metrics are poor predictors of competitiveness—broadband markets can be dynamic and competitive even with only a few providers. Indeed, in some cases, increased concentration can result from efficiency gains and innovation, benefiting consumers through better services; and
  • Municipal broadband often requires significant taxpayer subsidies or cross-subsidies from other municipal enterprises, and is thus an example of “predatory entry,” rather than market competition.[2]

Rather than repeat the analysis conducted in the 2021 white paper, this report investigates the extent to which broadband competition has evolved over the past three years. We find that the market has evolved rapidly:

  • More households are connected to the internet;
  • Broadband speeds have increased, while prices have fallen;
  • More households are served by multiple providers; and
  • New technologies like satellite and 5G have expanded internet access and intermodal competition among providers.

When the 2021 ICLE white paper was published, the worst of the COVID-19 pandemic appeared to be over, but the virus’ Delta variant was surging.[3] With pandemic precautions keeping people at home to work, go to school, visit health-care providers, or be entertained, broadband access and use were seen by many as a necessity, rather than a luxury. At the time, Congress considered whether to devote significant federal resources toward promoting broadband access in underserved communities. Toward this end, in November 2021, Congress passed the Infrastructure Investment and Jobs Act (IIJA), which includes three key provisions to foster greater broadband access:[4]

  1. The COVID-era Emergency Broadband Benefit’s temporary subsidy was extended indefinitely and renamed the Affordable Connectivity Program (ACP). The IIJA allocated an additional $14 billion to provide subsidies of $30 a month to eligible households;
  2. The IIJA also created and funded the Broadband Equity, Access, and Deployment Program (BEAD), which provides $42 billion to expand high-speed internet access to “unserved” and “underserved” locations; and
  3. The law required the Federal Communications Commission (FCC) to adopt final rules to prevent “digital discrimination” in broadband access based on income level, race, ethnicity, color, religion, or national origin, while also instructing the commission to consider issues of technical and economic feasibility.

These three policies were intended to intertwine in order to foster greater broadband competition. ACP subsidies are intended to boost consumer demand for broadband and generate revenue to support providers’ profitable deployment of broadband investments.[5] BEAD investments are intended to reduce the costs of broadband deployment.[6] The law’s digital-discrimination provisions were intended to prevent discrimination by broadband providers that serves to deny or limit consumers’ access to broadband internet.[7]

Alas, today, we find that each of these provisions faces headwinds. With Congress failing to extend appropriations beyond a May 31 deadline, the ACP has run out of funding.[8] States attempting to implement the BEAD program have complained of tight timelines, restrictive rules, limited coordination, and administrative burdens that may undermine effectiveness.[9] Providers and local jurisdictions report that BEAD’s Buy America rules are particularly onerous.[10] Smaller internet service providers say BEAD’s financial requirements exclude them from projects they would otherwise be able to complete successfully.[11] Complying with Buy America rules regarding attaching equipment to utility poles and railroad crossings also threatens deployment timelines.[12] And, in November 2023, the FCC approved rules to apply a disparate-impact approach toward the IIJA’s digital-discrimination mandate, which could invite legal challenges under the major questions doctrine.[13]

In addition to these programs, the FCC appears dead set on regulating much of the broadband-internet industry more stringently. First, the agency’s sweeping digital-discrimination rules cover nearly every aspect of the deployment and delivery of internet services and nearly every entity associated—even tangentially—with deployment and delivery.[14] Next, the agency approved Title II common-carrier regulation with its recently adopted Safeguarding and Securing the Open Internet Order.[15],[16]

The current state of broadband competition policy appears to be one of confusion. Some policies foster competition, while others hinder it. Programs such as the ACP and BEAD could do much to encourage competition by simultaneously generating demand for broadband and helping to build out supply. At the same time, these programs—especially BEAD—attempt to micromanage competition with stifling conditions and de facto rate regulation. Similarly, the FCC’s digital-discrimination rules explicitly subject broadband pricing to ex post scrutiny and enforcement. The FCC’s reclassification of broadband internet-access services under Title II of the Communications Act raises the specter of common-carrier rate regulation that will hang over the industry unless it is either vacated by the courts or reversed once again by a future administration.

Put simply, broadband competition in the United States is currently robust, innovative, and successful. But this state of vibrant competition is at risk from recent and forthcoming regulations. Without a course correction, we are likely to see slowing or shrinking broadband investment, reduced innovation, and the exit of small and rural providers.

II. The Broadband Market Is Competitive and Dynamic

By all relevant measures, U.S. broadband competition is vibrant and has increased dramatically since the COVID-19 pandemic. Since 2021, more households are connected to the internet, broadband speeds have increased, prices have fallen, more households are served by more than a single provider, and new technologies like satellite and 5G have expanded internet access and intermodal competition among providers.

A. Access and Adoption

By any reasonable measure, today’s U.S. broadband market is an incredible success. Nearly the entire country has access to at-home internet, a vast majority has access to high-speed internet, and much of the country has access to these speeds from three or more providers. Nevertheless, criticisms of the current state of broadband deployment claim that too few Americans have affordable access to adequate broadband speed and capacity and that this, in turn, is the result of insufficient competition among broadband providers.[17] For example, in her speech announcing the FCC’s most recent process to regulate internet services under Title II, Chair Rosenworcel claimed that 80% of the country faces a monopoly or duopoly for download speeds of 100 Mbps or greater.[18] These claims are belied by widespread broadband adoption and competitive markets.

FIGURE 1: US At-Home Internet Access and Adoption, 2021

SOURCE: U.S. Census Bureau, American Community Survey

The U.S. Census Bureau’s American Community Survey reports that 97.6% of households have access to at-home internet and 92.6% use the internet at home (Figure 1).[19] While a large majority of households with at-home internet get it through a broadband subscription, a substantial minority access the internet from their mobile wireless providers. A small number (2.3%) claim they can access the internet at home without paying for a subscription. This likely includes multi-family units, as well as student and senior housing in which broadband access is included in the rent. Among the 7.4% who do not use an at-home internet connection, two-thirds indicate that internet access is available, but they have chosen not to adopt it.[20]

In 2021, approximately 97 percent of 3- to 18-year-olds had home internet access, according to the National Center for Education Statistics. This represents a five-percentage-point increase since 2016.[21]

Until March 2024, the FCC defined high-speed broadband as internet service that offered speeds of at least 25/3 Mbps.[22] The IIJA defines a location as “unserved” if it has no internet connection available or only has a connection offering speeds of less than 25/3 Mbps.[23] A location is considered “underserved” if the only options available offer speeds of less than 100/20 Mbps.[24]
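
The statutory thresholds lend themselves to a mechanical test. A minimal sketch in Python (the function and its inputs are our own illustration, not any agency’s tool):

    # IIJA service tiers, using the 25/3 and 100/20 Mbps thresholds above.
    # Inputs are the download/upload speeds (Mbps) of the best option
    # available at a location; no available service can be passed as (0, 0).
    def classify_location(best_download_mbps: float, best_upload_mbps: float) -> str:
        if best_download_mbps < 25 or best_upload_mbps < 3:
            return "unserved"      # nothing at 25/3 Mbps or better
        if best_download_mbps < 100 or best_upload_mbps < 20:
            return "underserved"   # nothing at 100/20 Mbps or better
        return "served"

    print(classify_location(0, 0))     # unserved
    print(classify_location(50, 10))   # underserved
    print(classify_location(300, 30))  # served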

As shown in Figure 2, smaller households with relatively simple needs can generally access the internet productively with download speeds of less than 100 Mbps, or even 25 Mbps. The third iteration of the National Broadband Map, released in November 2023, indicated:[25]

  • 93.8% of locations have access to connections of 25/3 Mbps or greater;
  • 5% of locations have access to speeds of 200/25 Mbps or greater;
  • 5% of locations have access to 1000/100 Mbps speeds; and
  • Only 6.2% of locations are unserved, and 2.6% are “underserved” with connections of less than 100/20 Mbps, as those terms are defined in the IIJA.

FIGURE 2: FCC Recommended Internet Speeds and US Household Access, 2021

SOURCE: Allconnect, ‘Everything You Need to Know;’ FCC, ‘Fixed Broadband Deployment’

FIGURE 3: Typical Maximum Download Speed by Connection Type, 2021 (Mbps)

SOURCE: HighSpeedInternet.com, ‘What Type of Internet Do You Have?’

The FCC reports that more than 90% of U.S. households have access to speeds of 100 Mbps or greater, and nearly 90% have access to 1 Gbps or greater (Table 1).[26] Fewer than 4% of U.S. households lack access to at least 30 Mbps download speeds via fixed broadband.

TABLE 1: US Household Internet Access by Download Speed, 2021

SOURCE: FCC, ‘International Broadband Data Report’[27]

Some note that, while high-speed connections are available across nearly the entire country, in many cases, only a single provider offers such speeds. This, such critics assert, suggests insufficient competition among providers of high-speed internet. For example, regarding 100 Mbps service, FCC Chair Rosenworcel claimed that “only half of us can get it from more than a single provider. Only one-fifth of the country has more than two choices at this speed.”[28]

This provides a misleading sense of the rate of high-speed broadband deployment and the scope of availability. The most recent information from the FCC on broadband deployment across the United States suggests that 90% of the population in 2021 was served by one or more providers offering 250/25 Mbps or higher speeds (Table 2).[29] That is more than double the population share five years earlier, when only 44% of the population had access to such speeds.[30] In 2019, the FCC did not report the share of population with access to 1,000/100 Mbps speeds or greater. By 2021, 28% of the population had access to gigabit download speeds.[31]

Moreover, Table 2 shows that, in 2021, more than 85% of the population was covered by two or more fixed-broadband providers offering 25/3 Mbps or greater speeds, and more than 60% of the country was covered by three or more providers offering such speeds. Further, if satellite and 5G providers are included, close to 100% of the country is served by two or more high-speed providers.

TABLE 2: US Population Fixed-Broadband Access by Number of Providers, 2021

SOURCE: FCC, ‘Fixed Broadband Deployment’

At the same time, the evidence indicates that broadband competition has increased over time, as measured by the number of competing high-speed providers (Figure 4).[32]

  • In 2018, 73.0% of households had access to 25/3 Mbps speeds from only one or two fixed-broadband providers, and only 21.6% had access from three or more providers. In 2021, only 29.1% of households had access from one or two providers while 69.3% were served by three or more providers. Thus, the share of households served by three or more providers increased by 47.7 percentage points from 2018 through 2021.
  • In 2018, 11.6% of households had no access to 100/20 Mbps speeds and 14.8% had access from three or more fixed broadband providers. In 2021, 5.4% of households had no access, while 21.3% were served by three or more providers. Thus, the share of households served by three or more providers increased by 6.5 percentage points from 2018 through 2021. (The short sketch following Figure 4 reproduces this arithmetic.)

FIGURE 4: Percentage of US Households Living in Census Blocks with Multiple Provider Options for Fixed-Terrestrial Services (2018 vs 2021)

SOURCE: FCC, ‘2022 Communications Marketplace Report’
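
A minimal sketch of the percentage-point arithmetic behind the comparisons above, using the FCC shares just cited:

    # Share of U.S. households with three or more fixed-broadband
    # providers, from the FCC figures cited above (percent).
    three_plus_share = {
        "25/3 Mbps":   {2018: 21.6, 2021: 69.3},
        "100/20 Mbps": {2018: 14.8, 2021: 21.3},
    }

    for tier, by_year in three_plus_share.items():
        change = round(by_year[2021] - by_year[2018], 1)
        print(f"{tier}: +{change} percentage points, 2018-2021")
    # 25/3 Mbps: +47.7 percentage points; 100/20 Mbps: +6.5 points.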

Additionally, intermodal competition among providers continues to improve. Starlink satellite service has been made available to all locations in the United States.[33] Starlink’s reported speeds are between 25/5 Mbps and 220/25 Mbps.[34] And Project Kuiper has successfully launched its first test satellites,[35] with commercial service expected to begin in the second half of 2024.[36]

B. Broadband Prices Continued to Fall, Even as Speeds Increased and Demand Grew During the Pandemic

After accounting for speed and data usage, the United States has some of the lowest broadband prices in the world. Even so, critics of the current state of U.S. broadband competition claim that U.S. prices are among the highest in the developed world because, they argue, the U.S. market is less competitive than other jurisdictions. For example, the Community Tech Network asks rhetorically, “[s]o why does the internet cost so much more in the U.S. than in other countries? One possible answer is the lack of competition.”[37] Their article included a graphic in which U.S. internet service is described as “expensive and slow” while Australia is categorized as “fast and cheap.” Yet none of these claims holds up under scrutiny once prices are adjusted for data consumption and download speeds.

It’s true the United States has the third-highest average monthly broadband costs among OECD countries, according to Cable.co.uk (Figure 5). Australia, however, has the seventh-highest.[38] On a cost-per-megabit basis, Australia has the second-highest costs in the OECD, while the United States is in the bottom third of the distribution (Figure 6).[39] Speedtest’s Global Index of median speeds reports that the United States has the second-fastest median speed, and Australia the third-slowest median speed, among OECD countries (Figure 7).[40]

FIGURE 5: Average Monthly Cost of Broadband (OECD, in $US)

SOURCE: Cable.co.uk, ‘Global Broadband Pricing League Table 2023’

FIGURE 6: Average Monthly Cost of Broadband (OECD, Per Megabit $US)

SOURCE: Cable.co.uk, ‘Global Broadband Pricing League Table 2023’

FIGURE 7: Median Download Speed (OECD, Mbps)

SOURCE: Speedtest, Global Index

Cross-country comparisons of broadband pricing are especially fraught, due to country-by-country variations in factors that drive the costs of delivering broadband and the prices paid by consumers.[41] Deployment costs are driven largely by population density and terrain, as well as each country’s unique regulatory and tax policies.[42] Consumer choices often drive the prices paid by subscribers. These include choices regarding the mix of fixed broadband and mobile, speed preferences, and data consumption.[43]

For example, Figure 8 demonstrates a clear relationship between the average monthly cost for broadband and the monthly cost per megabit; a higher monthly cost tends to be associated with a higher cost per megabit. But there are outliers. The United States is well below the trendline, but Canada is well above it. While the average monthly cost in the two countries is similar, the information provided by Cable.co.uk suggests that U.S. consumers use 9-10 times more megabits per month than Canadian consumers. In addition, as shown in Figure 7, the median U.S. download speed is about 35% faster than the median in Canada.

FIGURE 8: Relationship Between Average Monthly Cost of Broadband and Cost Per-Megabit Per-Month (OECD, in $US)

SOURCE: Cable.co.uk, ‘Global Broadband Pricing League Table 2023’

FIGURE 9: Relationship Between Average Monthly Cost of Broadband and Median Download Speed

SOURCE: Cable.co.uk, ‘Global Broadband Pricing League Table 2023’; Speedtest, Global Index
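
The implied-consumption arithmetic behind the U.S.-Canada comparison is straightforward. A minimal sketch, using illustrative placeholder prices rather than Cable.co.uk’s actual figures:

    # Implied monthly data use = average monthly cost / cost per megabit.
    # Dollar figures are illustrative placeholders chosen to mirror the
    # pattern in Figure 8, not Cable.co.uk's actual numbers.
    markets = {
        "United States": {"monthly_cost": 60.00, "cost_per_megabit": 0.10},
        "Canada":        {"monthly_cost": 60.00, "cost_per_megabit": 1.00},
    }

    for country, m in markets.items():
        implied_megabits = m["monthly_cost"] / m["cost_per_megabit"]
        print(f"{country}: ~{implied_megabits:,.0f} megabits used per month")
    # Similar monthly bills with a tenfold-lower cost per megabit imply
    # roughly tenfold-higher usage -- the pattern described above.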

A broadband-pricing index published annually by USTelecom reports that inflation-adjusted broadband prices for the most popular speed tiers fell by 54.7% from 2015 to 2023, or 5.6% annually.[44] Prices for the highest speed tiers fell by 55.8% over the same period. The Producer Price Index for residential internet-access services fell by 11.2% from 2015 through July 2023.[45] The median fixed-broadband connection in the United States delivers download speeds of more than 207 Mbps, an 80% increase over pre-pandemic median speeds (Figure 10).[46]

FIGURE 10: Median Download Speed in the US (Mbps)

SOURCE: Speedtest, Global Index (July of each year)

Evidence from large surveys suggests that price is not a dominant factor driving adoption for the currently unconnected. For example, among the 7% of households who do not use the internet at home, more than half of Current Population Survey respondents indicated that they “don’t need it or [are] not interested.”[47] About one-third of respondents indicated that price is a factor, with responses such as “can’t afford it” or “not worth the cost.”[48]

Of course, cost and interest are not mutually exclusive factors.[49] A common response to CPS surveys among those who do not subscribe to internet service is that it is “not worth the cost.” This response offers policymakers little guidance, because it does not reveal whether the cost is “too high,” the value is “too low,” or some combination of both. Another common response is “not interested.” This, too, is unhelpful, as it does not identify the price at which a potential consumer might become interested, if such a price exists. For example, surveys suggest that many nonadopters would become interested in subscribing, or would find it worth the cost, only at a price at or near zero.

  • A National Telecommunications and Information Administration (NTIA) survey of internet use reported the average monthly price that offline households wanted to pay for internet access was approximately $10 per month; roughly 75% of households gave $0 or “none” as their answer.[50]
  • Another NTIA publication reports that households with “no need/interest” in home internet are willing to pay about $6 a month, while those who indicate it is “too expensive” are willing to pay approximately $16 a month.[51]

In addition, as shown in Figure 1, about a quarter of households without a broadband or smartphone subscription claim that they can access the internet at home without paying for a subscription.

Jamie Greig & Hannah Nelson note that low-income households are more likely to use smartphones than computers for internet access.[52] According to Pew Research, 19% of adults who do not have at-home broadband report that their smartphone does everything they need to do online.[53] Colin Rhinesmith et al. summarize the response of a Detroit focus-group participant: “[I]f he had to choose between home access and mobile access, the latter is more desirable as it allows him to be reachable and flexible for job interviews and the like.”[54]

C. Investment by Broadband Providers Has Remained High

When the FCC issued the Open Internet Order (OIO) in 2015 to reclassify broadband internet-access service under Title II, opponents claimed the policy would diminish broadband investment. Similarly, when the FCC repealed the reclassification in 2018, opponents claimed the repeal would diminish broadband investment. While U.S. broadband capital expenditures have been relatively stable for the past two decades, there was a noticeable drop in the wake of the 2015 OIO (Figure 11).[55]

FIGURE 11: US Broadband Provider Capital Expenditures ($B)

SOURCE: USTelecom

Recent peer-reviewed econometric research from economist Wolfgang Briglauer and his coauthors indicates that net-neutrality rules do, in fact, slow broadband investment, as measured by the number of fiber connections deployed.[56] The study analyzed 2000-2021 data across OECD countries. Thus, it includes both 2015’s imposition of Title II regulations in the United States and the 2017 repeal. It found that introducing net-neutrality rules was associated with a 22-25% decrease in fiber investments.

Briglauer’s study isolated the effects of net neutrality from other factors that might have affected investment, such as general economic conditions. It focused on new fiber connections as representing growth in network capacity, rather than short-term fluctuations in spending. Even controlling for other variables, net neutrality had an independent negative relationship with fiber deployments.

ICLE’s 2021 white paper argued that broadband markets are dynamic and characterized by ongoing innovation in technologies and business models. Investment and innovation do not solely come from new entrants, as incumbents often are important sources of innovation while they try to stay competitive and avoid disruption. In this way, providers compete through new product introductions and disruption, not just on price. Because of these dynamics, mergers and increased concentration can sometimes be associated with increased investment, in that they may allow firms to achieve greater economies of scale and scope.[57] In addition, firms make long-term investments to upgrade networks and deploy new technologies even amid just a few competitors.[58]

Since ICLE’s white paper, Kenneth Flamm & Pablo Varas published research examining the relationship between the change in a territory’s number of providers and changes in service-plan quality (e.g., upload and download speeds).[59] They examine Census blocks that were served by only two “legacy” broadband providers in 2014, which they define as cable and digital subscriber line (DSL) providers. Their study tracked entry and exit of providers in these blocks through 2018, and evaluated the change in maximum download speeds available in those blocks over time. They find that blocks with no entry or exit (what they call “unchanged duopoly”) experienced an increase of 750 Mbps in maximum download speeds (Figure 12). Blocks that transitioned from duopoly to monopoly experienced a relatively modest 430 Mbps increase, while blocks that transitioned from two to three providers experienced an 810 Mbps increase. Blocks that transitioned from three to four providers experienced an 854 Mbps increase.

They also noted that internet providers may be highly motivated to introduce new, higher-quality speed tiers as technology improves. These results comport with research summarized in the 2021 ICLE white paper, which found the most significant incremental benefits in broadband quality came from adding a second service provider (relative to monopoly), with some marginal benefit from adding a third provider, and a much smaller benefit from adding a fourth.

FIGURE 12: Increase in Maximum Download Speed Associated with Cable or Digital Subscriber Line Provider Entry or Exit, 2014-2018 (Mbps)

SOURCE: Flamm & Varas (2022)
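
The marginal pattern in these results is easier to see when each transition is differenced against the unchanged-duopoly baseline. A minimal sketch using the Flamm & Varas figures reported above:

    # Increase in maximum download speed (Mbps), 2014-2018, by market
    # transition, as reported by Flamm & Varas above.
    speed_gain = {
        "duopoly -> monopoly (exit)": 430,
        "unchanged duopoly":          750,
        "two -> three providers":     810,
        "three -> four providers":    854,
    }

    baseline = speed_gain["unchanged duopoly"]
    for transition, gain in speed_gain.items():
        print(f"{transition}: {gain} Mbps ({gain - baseline:+d} vs. baseline)")
    # Exit is associated with a 320 Mbps smaller gain than the baseline,
    # while additional entry adds a comparatively modest 60-104 Mbps.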

Another recent study is Andrew Kearns’ analysis of the Seattle market.[60] In contrast to Flamm & Varas, Kearns concluded that competition among broadband providers might weaken the incentive to increase quality, which he measured as a provider upgrading a Census block to fiber. He argued that improvements in quality often require significant investment, and the returns on this investment may be uncertain in a competitive market. Thus, in a competitive market, providers may prioritize attracting customers with lower prices and a wider range of product options, rather than investing in improvements to the quality of their service. Even so, Kearns concluded that increased competition offers substantial benefits to consumers related to increased product choice and lower prices.

The latest published research supports ICLE’s earlier observation that whether adding or removing a competitor is associated with more or less investment depends greatly on various factors, including the market’s initial conditions.[61] Thus, a case can be made that competition (as judged by counting the number of competitors in a market) may be, in and of itself, of lesser importance than other factors that guide investment decisions, such as population density, terrain, and demand, as well as the local regulatory and tax environment.[62]

III. Current and Anticipated Policies Affecting Broadband Competition

Broadband internet has become a service that many Americans—and U.S. policymakers—consider essential. But new and forthcoming regulations imposed in an effort to promote equal access to broadband may actually risk dampening innovation and investment in this critical sector. In this section, we discuss the Affordable Connectivity Program and Broadband Equity, Access, and Deployment subsidy programs, which could foster broadband competition by stimulating both demand and supply. Even so, the administration of both of these programs has erected significant hurdles that may damage their effectiveness if not remedied by Congress or the regulatory agencies.

We also discuss other programs that are likely to reduce broadband competition by diminishing the incentives to invest and innovate. Though motivated by a desire to prevent discriminatory access, rigid rules to correct “disparate impact” in broadband-deployment decisions fail to account for the dynamic efficiencies of differentiated service models calibrated to consumer demand. At the same time, attempts to impose common-carrier obligations on broadband providers ignore the truly competitive nature of modern broadband markets, which are thriving under light-touch regulation.

Going forward, policymakers should resist the temptation to micromanage a sector as dynamic as U.S. broadband internet. Instead, they should focus their attention on interventions to address genuinely unfair or anticompetitive conduct, while trusting that innovation and investment will be maximized when companies retain the flexibility to respond to consumer demand, while constrained by economic and technical realities.

A. ACP More Effective at Reducing Broadband Costs Than Connecting the Unconnected

The ACP is a federal subsidy program that provides eligible low-income households with monthly broadband-service discounts of up to $30, or up to $75 for households on tribal lands.[63] It also provides a one-time $100 discount for the purchase of a computer or tablet. ICLE has argued that well-designed subsidies targeted to underserved consumers can be an effective way to increase broadband deployment and adoption.[64] Subsidies help make providing service in high-cost, low-density areas more financially viable for providers. They also make broadband more affordable for lower-income consumers, stimulating demand.[65]

Proponents of the ACP identify two main goals for the program:

  1. to increase at-home internet adoption by unconnected households; and
  2. to maintain internet connections for low-income households at risk of “unadoption” due to unaffordability.[66]

Through the ACP, the federal government absorbs part of the cost of providing broadband service to these households, making them more financially attractive customers for broadband providers. The program also creates an incentive for providers to expand their networks to reach eligible households, as they can now potentially recover more revenue from serving those users.[67] For example, if ACP subsidies stimulate consumer demand, providers may find it profitable to deploy broadband to areas that would not otherwise generate a sufficient return on investment to justify deployment. In some cases, a new provider might be able to offer services to a market currently served by a single incumbent firm.
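
A stylized sketch of that deployment logic follows; every figure in it is a hypothetical illustration, not an estimate from any cited source:

    # Stylized build-or-not decision for a high-cost area. Every number
    # here is a hypothetical illustration, not data from any cited source.
    def build_pays_back(take_rate, locations=1000, capex_per_location=2500.0,
                        arpu=65.0, cost_per_subscriber=15.0, horizon_months=120):
        """Rough payback test: does subscriber margin over the horizon
        cover the up-front construction cost?"""
        subscribers = take_rate * locations
        margin = subscribers * (arpu - cost_per_subscriber) * horizon_months
        return margin >= capex_per_location * locations

    # Suppose a $30 ACP-style benefit lifts expected take-up from 35% to 55%:
    print(build_pays_back(0.35))  # False -- the build doesn't pencil out
    print(build_pays_back(0.55))  # True -- subsidy-boosted demand covers it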

To date, however, the ACP and its predecessors do not appear to have been as successful in increasing at-home internet adoption by unconnected households as was hoped when such programs were created. Due to what appears to be inelastic demand, ACP has faced difficulties in stimulating sufficient interest among the 5% of unconnected households who could access the internet, but fail to take up service.[68] These households may not be aware of the program or may lack digital literacy; may be able to access the internet without a subscription; or may have no interest in subscribing to an internet service at any price.

On the other hand, the ACP’s subsidies appear to have successfully enabled already-subscribed households to maintain at-home internet service through the COVID-19 pandemic, thereby proving effective in enabling economically vulnerable inframarginal consumers to remain connected. More than 23 million U.S. households (about 17%) were enrolled in the ACP before the program lapsed at the end of May 2024.[69] It is currently unknown how many of these households will unsubscribe now that ACP subsidies are unavailable. In turn, it’s also unknown how providers will respond should large numbers of households unsubscribe from their internet services.

In March 2024, the FCC announced that April 2024 would be the program’s last fully funded month, with partial subsidies through May 2024.[70] Without ACP subsidies, one expects some households will unsubscribe from internet service, and the decreased demand may even lead to consolidation in some markets through exits or mergers. Moreover, Congress’ failure to renew the ACP risks other long-term policy responses that could waste already-invested funds.

In the face of another economic downturn, the inframarginal households that unadopt internet service will likely spur future rounds of congressional appropriations to bring these households back online. This turmoil, meanwhile, stands to erode providers’ investment incentives, due to lack of demand. This threatens to create a vicious cycle that requires periodic reinvestment from Congress just to stand these programs back up. Over the long term, it would almost certainly be more efficient to extend and focus the ACP program to ensure that truly needy households receive the subsidy (including those that would otherwise unadopt), rather than construing the program as strictly focused on convincing the last 5% of households with inelastic demand to adopt.

B. Red Tape and Regulation May Stymie BEAD’s Efforts to Expand Broadband Access

In 2023, the NTIA awarded more than $42 billion in grants to state governments under the Broadband Equity, Access, and Deployment (BEAD) program,[71] whose primary purpose is to expand high-speed internet access in areas that currently lack it.[72] Congress focused the BEAD program on connecting “unserved” and “underserved” territories. The law requires that those areas lacking connections with speeds of at least 100/20 Mbps must be helped first before addressing other priorities, such as upgrades, adoption programs, and middle-mile infrastructure.[73] Funding is distributed directly to states, which are required to develop plans tailored to connect their unserved and underserved locations.[74]

But much of that congressional intent got muddled in the NTIA’s implementation of BEAD funding. The NTIA’s notice of funding opportunity (“NOFO”) introduced conflicting priorities beyond connecting the unserved. These additional priorities include “middle-class affordability” requirements, the provision of “low cost” plans, and a ban on data caps.[75] The NOFO also gave clear preference to fiber networks over wireless and satellite providers, and to governmental and municipal providers over private companies.[76]

The NTIA’s NOFO required each participating U.S. state or territory to include a “middle-class affordability plan to ensure that all consumers have access to affordable high-speed internet” (emphasis in original).[77] The notice provided several examples of how this could be achieved, including:

  1. Requiring providers to offer low-cost, high-speed plans to all middle-class households using the BEAD-funded network; and
  2. Providing consumer subsidies to defray subscription costs for households ineligible for the Affordable Connectivity Benefit or other federal subsidies.

Despite the IIJA’s explicit prohibition of price regulation, the NTIA’s approval process appears to envision exactly this. The first example provided above is clear rate regulation: it specifies a price (“low-cost”), a quantity (“all middle-class households”), and a quality mandate (“high-speed”). Toward these ends, the notice provides an example of a “low-cost” plan that would be acceptable to NTIA (the sketch following the list encodes these criteria as a simple check):

  • Costs $30 per month or less, inclusive of all taxes, fees, and charges, with no additional non-recurring costs or fees to the consumer;
  • Allows the end user to apply the Affordable Connectivity Benefit subsidy to the service price;
  • Provides download speeds of at least 100 Mbps and upload speeds of at least 20 Mbps, or the fastest speeds the infrastructure is capable of if less than 100 Mbps/20 Mbps;
  • Provides typical latency measurements of no more than 100 milliseconds; and
  • Is not subject to data caps, surcharges, or usage-based throttling.[78]
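
Taken together, these conditions amount to a product specification. The following sketch encodes them as a simple compliance check (the field names are our own; the thresholds come from the notice):

    # The NOFO's example "low-cost" plan criteria as a simple check.
    # Dictionary keys are our own labels; thresholds come from the notice.
    def meets_nofo_low_cost_example(plan: dict) -> bool:
        speed_ok = (
            (plan["download_mbps"] >= 100 and plan["upload_mbps"] >= 20)
            or plan["at_infrastructure_maximum"]  # fastest the network supports
        )
        return (
            plan["monthly_price_all_in"] <= 30.00   # incl. taxes, fees, charges
            and plan["acp_benefit_applicable"]      # ACP subsidy can be applied
            and speed_ok
            and plan["typical_latency_ms"] <= 100
            and not plan["data_caps_or_usage_throttling"]
        )

    plan = {"monthly_price_all_in": 30.00, "acp_benefit_applicable": True,
            "download_mbps": 100, "upload_mbps": 20,
            "at_infrastructure_maximum": False, "typical_latency_ms": 100,
            "data_caps_or_usage_throttling": False}
    print(meets_nofo_low_cost_example(plan))  # True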

A policy bulletin published by the Phoenix Center for Advanced Legal & Economic Public Policy Studies notes that the NTIA did not conclude that broadband was unaffordable for middle-class households.[79] George Ford, the bulletin’s author, collected data on broadband adoption by income level. The data indicate that, in general, internet-adoption rates increase with income (Figure 13). Higher-income households have higher adoption rates (97.3%) than middle-income households (92.9%), which in turn have higher adoption rates than lower-income households (78.1%).

FIGURE 13: Internet Adoption and Income

SOURCE: Adapted from Ford (2022), Table 2 and Figure 2.

For each of the 50 states and the District of Columbia, the Phoenix bulletin finds that middle-income internet-adoption rates are, to a statistically significant degree, higher than lower-income adoption rates. Thus, the bulletin concludes that broadband currently is “affordable” to middle-class households and that “no direct intervention is required” to ensure affordability to the middle class.[80]

John Mayo, Greg Rosston, & Scott Wallsten point out that BEAD’s key purpose of providing high-speed internet access to locations that lack it (presumably because it’s too expensive to deploy to these areas without investment subsidies) conflicts with NTIA’s focus on affordability:

A substantial portion of the unserved and underserved areas of the country that are the likely targets of the BEAD program, however, are rural, low-population density areas where deployment costs will be high. These high deployment costs may seem to indicate that even “cost-based” rates—normally seen as an attractive competitive benchmark—may be high, violating the IIJA’s “affordability” standard.[81]

The only effective way to simultaneously reduce broadband prices, increase access, and improve quality is to increase supply. But the NTIA’s attempts at rate regulation work at cross-purposes with BEAD’s objective to increase supply. Therefore, attempts to use BEAD funding to impose price controls may act to reduce broadband competition, rather than preserve or increase it.

The potential harm to competition is worsened by NTIA’s preference for government or municipal providers over private providers, which we discuss in more detail in Section III.G. The NTIA’s funding notice required states to ensure the participation of “non-traditional broadband providers,” such as municipalities and cooperatives. Municipal broadband networks might make sense in some rare cases where private providers are unable to deploy, but such systems have generally mired taxpayers in expensive projects that failed to deliver on promises.

In addition to these challenges, BEAD applications must come with a letter of credit issued by a qualified bank for 25% of the grant amount.[82] This is a guarantee to the grant administrator (e.g., a state broadband office) that there is liquid cash in an account on which it can draw should the applicant not deliver on its grant requirements. To receive a letter of credit, applicants will be required by the issuing bank to provide collateral—which could be cash or cash equivalents equal to the full value of the letter of credit. The letter-of-credit requirement is separate from, and in addition to, BEAD’s match requirement, which demands that applicants contribute a minimum of 25% of the total build cost. Together, the letter-of-credit and matching requirements may hinder competition by favoring large and well-capitalized providers over smaller internet-service providers (ISPs) that may be better positioned to serve rural areas.
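
A back-of-the-envelope sketch of the capital these baseline rules require an applicant to assemble, assuming the applicant contributes the minimum 25% match, the grant covers the remainder, and the issuing bank demands full cash collateral:

    # Capital an applicant must assemble under the 25% letter-of-credit
    # and 25% match rules described above, assuming full cash collateral.
    def bead_capital_required(total_build_cost: float) -> dict:
        match = 0.25 * total_build_cost        # applicant's own contribution
        grant = total_build_cost - match       # portion covered by BEAD
        loc_collateral = 0.25 * grant          # cash backing the letter of credit
        return {"grant": grant, "match": match,
                "loc_collateral": loc_collateral,
                "cash_tied_up": match + loc_collateral}

    # For a hypothetical $10 million rural build:
    for item, amount in bead_capital_required(10_000_000).items():
        print(f"{item}: ${amount:,.0f}")
    # A smaller ISP must tie up $4,375,000 -- nearly half the build
    # cost -- before construction begins.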

In November 2023, NTIA released a waiver for the letter-of-credit requirement because of industry concerns about how the rule may prevent smaller ISPs from participating in the BEAD program.[83] The “programmatic waiver” describes several alternatives to the letter of credit. For example, subgrantees can obtain the letter of credit from a credit union instead of a bank. The expectation is that credit unions would offer lower interest rates for loans and lower fees. Alternatively, applicants can provide a performance bond “equal to 100% of the BEAD subaward amount.” In addition, the NTIA is allowing states and territories to reduce the percentage requirement of the performance bond or letter of credit over time, as service providers meet certain project milestones.

Congress set an ambitious goal with BEAD: to expand high-speed internet access in areas that currently lack it. The $42 billion appropriated for the program could have been used to deploy broadband to underserved areas and to foster broadband adoption. However, NTIA’s implementation of the program appears designed to dampen private investment and stifle competition among broadband, wireless, and satellite providers.

C. Digital-Discrimination Rules

One of the most problematic new regulations to hit the broadband sector is the FCC’s digital-discrimination rules. While well-intentioned, these rules are virtually certain to curtail broadband investment and adoption. In late 2023, the FCC adopted final rules facilitating equal access to broadband internet under Section 60506 of the IIJA.[84] The statutory text directs the FCC to prevent discrimination in broadband access based on income level, race, ethnicity, color, religion, or national origin, while also directing the commission to consider issues of technical and economic feasibility.

The rules prohibit digital discrimination of access, which is defined as policies or practices that differentially affect or are intended to differentially affect consumers’ access to broadband internet-access service based on their income level, race, ethnicity, color, religion or national origin, unless justified by genuine issues of technical or economic feasibility.[85] There are two key provisions that will disrupt broadband competition:

  1. Adopting a disparate-impact standard to define “digital discrimination of access;” and
  2. Subjecting a “broad range” of service characteristics to digital-discrimination rules, including pricing, promotional conditions, terms of service, and quality of service.

The rules apply to entities that provide, facilitate, and/or affect consumer access to broadband internet-access service. This includes typical broadband providers, as well as entities that “affect consumer access to broadband internet access service.”[86] Under this broad definition, local governments, nonprofits, and even apartment-building owners all may be subject to the FCC’s digital-discrimination rules.

The rules also revise the commission’s informal consumer-complaint process to accept complaints of digital discrimination of access, and to authorize the commission to initiate investigations and impose penalties and remedies for violations of the rules.[87]

The FCC also proposed additional rules that would require providers to submit annual reports on their major deployment, upgrade, and maintenance projects, and to establish and maintain internal compliance programs to assess whether their policies and practices advance or impede equal access to broadband internet-access service within their service areas.[88] In essence, these proposed rules would require providers to prepare their own disparate-impact analysis every year.

Because of the expansive definition of covered entities and services subject to the digital-discrimination rules, providers will face legal uncertainty and litigation risks.[89] The most obvious of these involves the likelihood of complaints or investigations based on allegations of disparate impact, which may be difficult to disprove. Comments to the FCC from the U.S. Chamber of Commerce highlight these concerns:[90]

These policies would render it impossible for businesses and the marketplace to make rational investment decisions. The scope of the services that the Draft covers is so broad that it does not provide meaningful guidance for how to comply. And because the Draft fails to grant sufficient guidance, it does not give fair notice of how to avoid liability. Consequently, investment in broadband innovation would disappear and consumers would have to pay higher costs for less efficient services.

The digital-discrimination rules also may discourage innovation and differentiation in broadband service offerings, as providers could avoid service offerings that may be perceived as discriminatory or having a differential impact on certain consumers or communities. Providers could also be reluctant to invest in new technologies or platforms that, while improving broadband service quality or availability, might also create disparities in service characteristics among consumers or areas. As FCC Commissioner Brendan Carr has noted:[91]

Another telling last minute addition is a new advisory opinion process. This is the very definition of swapping out permissionless innovation for a mother-may-I pre-approval process. What’s more? The FCC undermines whatever value that type of process could provide because, to the extent the FCC does—at some point in the future—authorize your conduct, the Order says that the agency reserves the right to rescind an advisory opinion at any time and on a moment’s notice. At that time, the covered provider “must promptly discontinue” the practice or policy. That does not provide the confidence necessary to invest and innovate.

Private, public, and nonprofit entities may even face allegations of intentional discrimination for policies and practices designed to increase internet adoption and use by protected groups. In particular, programs intended to increase broadband adoption among low-income and price-sensitive consumers could run afoul of the digital-discrimination rules. George Ford provides an example of such a program:[92]

For example, Cox Communications offers 100 Mbps broadband service for $49.99 per month, but ACP eligible households can get the same service for $30 per month. Higher-income households may not avail themselves of the discounted price.

In Tennessee, Hamilton County Schools’ EdConnect program offers free high-speed internet access to eligible students, where eligibility is based on income level—i.e., students who receive free or reduced-cost lunch, attend any school where every student receives free or reduced-cost lunch, or whose family participates in the Supplemental Nutrition Assistance Program (SNAP) or other economic-assistance programs.[93] Both the school district and the nonprofit that runs the program would also be covered entities. The fact that the price (free) is available only to those of a certain income level is explicit, intentional discrimination.

The FCC’s digital-discrimination rules will almost surely increase the regulatory burden and compliance costs for providers. Small and rural providers may be disproportionately burdened, as these providers tend to have more limited resources and face technical and economic challenges in deploying and maintaining broadband networks in unserved and underserved areas. The FCC’s proposal that broadband providers submit an annual report on their substantial broadband projects could likewise give larger providers an advantage, as they are more likely to have the resources to comply with this requirement. For example, the Wireless Internet Service Providers Association commented to the FCC:[94]

Annual reporting and record retention rules and the requirement to adopt and certify to the existence and compliance with an internal digital discrimination compliance plan would impose significant burdens on broadband providers, especially smaller providers that may not track investment data and lack the resources to develop a compliance program with ongoing obligations. The burdens are overly egregious given that smaller providers do not have any record of engaging in digital discrimination.

Further complicating the evaluation of digital-discrimination claims based on income is that, not only is income a key factor influencing whether a given consumer will adopt broadband, but it is also highly correlated with race, ethnicity, national origin, age, education level, and home-computer ownership and usage. The FCC’s digital-discrimination rules fail to recognize this “income conundrum” and will invite costly and time-consuming litigation based on allegations of digital discrimination either where it does not exist or where it is excused by economic-feasibility considerations. Moreover, by specifying pricing as an area subject to digital-discrimination scrutiny, the FCC’s rules allow for ex-post regulation of rates, prompting Commissioner Carr to characterize the agency’s digital-discrimination rules and Title II rules as “fraternal twins.”[95]

D. Title II and Net Neutrality

In 2015, the FCC issued the Open Internet Order (OIO), which reclassified broadband internet-access service as a telecommunications service subject to Title II of the Communications Act. Proponents of the OIO contend that the Title II classification was necessary to ensure net neutrality—that is, that internet service providers (ISPs) would treat all internet traffic equally. In 2018, the Title II classification was repealed by the FCC’s Restoring Internet Freedom Order (RIFO).

One month after ICLE’s white paper was published in 2021, President Joe Biden issued an executive order that “encouraged” the FCC to “[r]estore Net Neutrality rules undone by the prior administration.” Last year, Anna Gomez was confirmed as an FCC commissioner, providing the commission a 3-2 Democratic majority. One day after her confirmation, FCC Chair Rosenworcel announced the agency’s proposal to reimpose Title II regulation on internet services. Soon thereafter, the FCC issued its “Notice of Proposed Rulemaking for the Safeguarding and Securing the Open Internet Order,” which would again reclassify broadband under Title II.[96] On April 25, 2024, the commission approved the order on a 3-2 party-line vote.[97]

While the FCC provides several reasons for reclassifying broadband, most of the justifications are built on the same underlying premise: That broadband is an essential public utility and should be regulated as such. Of course, many other essentials—shelter, food, clothing—are provided by various suppliers in competitive markets. Utilities are considered distinct because they tend to have significant economies of scale such that:

  1. a single monopoly provider can provide the goods or services at a lower cost than multiple competing firms; and/or
  2. market demand is insufficient to support more than a single supplier.[98]

Under this definition, water, sewer, electricity, and natural gas constitute examples typically cited as “natural” monopolies.[99] In some cases, not only are these industries treated as regulated monopolies, but their monopoly status is solidified by laws forbidding competition.

At one time, local and long-distance telephone services were similarly treated as natural monopolies, as was cable television.[100] Various innovations eroded the “natural” monopolies in telephone and cable service over time.[101] As of the year 2000, 94% of U.S. households had a landline telephone, while only 42% had a mobile phone.[102] By 2018, those numbers flipped. In 2015, 73% of households subscribed to cable or satellite television service.[103] Today, fewer than half of U.S. households subscribe.[104] Much of that transition has been due to the enormous improvements in broadband speed, reliability, and affordability discussed in Section II. Similarly, innovations in 5G, fixed wireless, and satellite are eroding the already-tenuous claims that broadband internet service is akin to a utility.

The FCC’s latest reclassification of broadband under Title II prohibits blocking, throttling, or engaging in paid or affiliated prioritization arrangements.[105] In addition, it imposes “a general conduct standard that would prohibit unreasonable interference or unreasonable disadvantage to consumers or edge providers.” Under the OIO, the FCC invoked the general conduct standard to scrutinize providers’ “zero rating” programs.[106] Although Title II regulation explicitly allows for rate regulation of covered entities, the 2024 order forbears from rate regulation.[107]

Critics of Title II regulation have argued that some of the conduct prohibited under the FCC’s proposal may be pro-competitive practices that benefit consumers. For example, Hyun Ji Lee & Brian Whitacre found that low-income consumers were willing to pay for an extra GB of data each month, but were not willing to pay extra for a higher speed.[108] This data-speed tradeoff suggests those consumers would benefit from a plan that offered a larger data allowance, but throttled speeds if the allowance is exceeded. In 2014 comments to the FCC, ICLE and TechFreedom described a pro-competitive benefit of paid prioritization:[109]

Prioritization at least requires content providers to respond to incentives—to take congestion into account instead of using up a common resource without regard to cost. It also allows the gaming company to buy better service, which isn’t an option at all with neutrality, under which it just has to suffer congestion. The truth is that, if the game developer can’t afford to pay for clear access, then it may have a bad business model if it is built on an expectation that it will have unfettered, free access to a scarce, contestable resource.

Aside from the likely pro-competitive effects of the conduct the FCC seeks to prohibit, in the face of robust competition, consumers can readily switch away from providers who charge anticompetitive prices or impose harmful terms and conditions. In its 2019 Mozilla decision, the U.S. Court of Appeals for the D.C. Circuit concluded:[110]

[M]any customers can access edge provider’s content from multiple sources (i.e., fixed and mobile). In this way, there is no terminating monopoly. Additionally, the Commission argued that even if a terminating monopoly exists for some edge providers the commenters did not offer sufficient evidence in the record to demonstrate that the resulting prices will be inefficient. Given these reasons, we reject Petitioners’ claim that the Commission’s conclusion on terminating monopolies is without explanation.

In addition, the court noted:[111]

More importantly, the Commission contends that low churn rates do not per se indicate market power. Instead, they could be a function of competitive actions taken by broadband providers to attract and retain customers. And such action to convince customers to switch providers, the Commission argues, is indicia of material competition for new customers.

Regardless of the FCC’s intent in imposing Title II regulation, the effect will be a stifling of innovation in the delivery and pricing of broadband-internet service. In tandem with the agency’s digital-discrimination rules, the newly adopted “net neutrality” rules attempt to transition broadband to a commodity service with little differentiation between providers. In so doing, the FCC is eliminating, piece-by-piece, the dimensions along which broadband providers compete, resulting in both higher prices for consumers and lower returns for providers. Rather than a “virtuous cycle” of growth and innovation, the U.S. broadband market may instead experience a “doom loop” of stagnant internet adoption, depressed investment in deployment, and diminished broadband competition.

E. De-Facto Rate Regulation

Rate regulation—any mechanism whereby government intervenes in the pricing process—has long been a contentious issue in the realm of broadband services.[112] Historically, the FCC has been deeply involved in rate regulation, tasked with ensuring fair rates, reliable service, and universal access to telecommunications since 1934.[113] As the telecommunications landscape has evolved, however, so too has the FCC’s approach, which increasingly moved toward deregulation. That is, until recently.[114] Unfortunately, there are multiple ways that rates can be regulated, and—despite public disavowals—policymakers already appear to be imposing some forms of rate regulation on broadband providers.

Explicit rate regulation manifests primarily in two forms: price ceilings and floors.[115] Price ceilings limit the maximum price that can be charged, a common example being rent control. Price floors, on the other hand, set a minimum price, akin to minimum wage laws. Each of these forms impacts the broadband sector differently, potentially altering market dynamics and influencing consumer access and provider revenues.[116]

Policymakers can also resort to less-obvious means of regulating prices—de-facto rate regulation—such as rent stabilization or inflation-linked wage increases, which control the rate of price changes rather than the prices themselves.[117] Moreover, as discussed further infra, price controls are sometimes introduced laterally, as requirements to participate in various federal programs, with the effect that government agencies assume broad control over prices. Still other regulations may not explicitly regulate rates, but act in much the same way as direct rate regulation, as explained by Jonathan Nuechterlein and Howard Shelanski:[118]

Finally, but no less important, the line between “price” and “non-price” regulation is thin, and regulatory obligations can amount to rate regulation even when regulators do not perceive themselves as setting rates at either the retail or wholesale level.

The FCC’s 2015 OIO, while explicitly eschewing rate regulation, indirectly influenced pricing strategies in the broadband market.[119] By imposing common-carriage obligations, the OIO affected how ISPs invested in and priced their services. In this respect, the FCC’s 2024 rules are identical to the 2015 rules. But this time, Title II regulation will work hand-in-hand with the agency’s digital-discrimination rules. While the common-carrier rules explicitly eschew ex-ante rate regulation through forbearance, the digital-discrimination rules explicitly subject pricing policies and practices to ex-post discrimination scrutiny.

In some ways, the FCC may be imposing one of the worst possible rate-regulation regimes. Under an ex-ante approach to rate regulation, providers have—at a minimum—a framework within which to form their expectations about whether and how rates will be regulated. As discussed in Section III.C, however, under the ex-post approach that the FCC has adopted in its digital-discrimination rules, providers and any other “covered entity” lack any meaningful framework regarding how the agency may regulate rates or how to avoid liability.

Specifically, the FCC’s Digital Discrimination Order states:

The Commission need not prescribe prices for broadband internet access service, as some commenters have cautioned against, in order to determine whether prices are “comparable” within the meaning of the equal access definition. The record reflects support for the Commission ensuring pricing consistency as between different groups of consumers. We also find that the Commission is well situated to analyze comparability in pricing, as we must already do so in other contexts.[120]

While assessing the comparability of prices is not explicit rate regulation, a policy that holds entities liable for those disparities, such that an ISP must adjust its prices until they satisfy the FCC’s definitions of “comparable” and “consistency,” is tantamount to setting those rates.[121]

In addition to the FCC digital-discrimination and Title II rules, recent developments in broadband policy have introduced other forms of de-facto rate regulation. The BEAD program itself mandates a “low-cost” option be made available to recipients of the Affordable Connectivity Program by providers that receive a BEAD grant.[122] The NTIA’s NOFO for the BEAD program further mandates that participating states include an affordability plan that ensures access to affordable high-speed internet for all middle-class consumers.[123] This initiative might require providers to offer low-cost plans or to provide consumer subsidies. Similarly, the U.S. Department of Agriculture’s (USDA) ReConnect Loan and Grant Program awards funding preferences to applicants that adhere to net-neutrality rules and offer “affordable” options.[124] New York’s Affordable Broadband Act is another example of broadband rules that mandate ISPs provide low-cost internet-access plans to qualifying low-income households.[125]

Rate regulation, de facto or otherwise, has a major effect on providers’ ability to enter new markets and to improve service in those markets in which they already operate. Rate regulations lead to market distortions. By capping prices below the market rate, such regulations can increase demand without a corresponding increase in supply, potentially leading to shortages and discouraging providers from making output-improving investments.[126] For broadband providers, this can translate into reduced investment in network expansion and quality improvement, particularly in less profitable or more challenging areas. Moreover, binding rate regulations can lower the returns on investment, thereby discouraging deployments and slowing overall broadband expansion. Quality and service also may suffer under rate regulation. A regulated provider, constrained by price ceilings, cannot fully reap the benefits of service-quality improvements, leading to a reduced incentive to enhance that service quality.[127]
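
These textbook mechanics can be illustrated with a toy linear supply-and-demand model; all parameters below are hypothetical and chosen only to display the mechanism:

    # Toy linear market: demand Qd = a - b*P, supply Qs = c + d*P.
    # Parameters are hypothetical; the point is the mechanism, not any
    # estimate of actual broadband demand or supply.
    a, b = 100.0, 1.0   # demand intercept and slope
    c, d = 10.0, 2.0    # supply intercept and slope

    p_star = (a - c) / (b + d)       # market-clearing price
    q_star = a - b * p_star          # market-clearing quantity

    ceiling = 20.0                   # binding cap set below p_star
    demanded = a - b * ceiling
    supplied = c + d * ceiling
    print(f"equilibrium: P = {p_star:.0f}, Q = {q_star:.0f}")
    print(f"under the cap: demanded = {demanded:.0f}, supplied = {supplied:.0f}, "
          f"shortage = {demanded - supplied:.0f}")
    # The cap raises quantity demanded (80) while cutting quantity
    # supplied (50), producing a shortage of 30 units.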

F. Pole Attachments

Pole attachments are critically important to expanding broadband connectivity, even if utility-pole issues often fly under the radar. This is particularly true given their implications for competition in local broadband markets. Access to physical infrastructure is critical, and where providers cannot readily obtain that access, deployment can be delayed or made more costly.

The FCC has recognized the crucial role of pole attachments in a pending proceeding that seeks to address inefficiencies in access to pole attachments that lead to cost overruns and delays in deployment.[128] In December 2023, in an effort to expedite broadband deployment, the commission adopted several important pole-attachment reform measures.[129] These included introducing a streamlined process to resolve utility-pole attachment disputes, which could be pivotal to hasten broadband rollouts, especially in underserved areas.[130] The FCC also mandated that utilities provide comprehensive pole-inspection information to broadband attachers, which is expected to facilitate more informed planning and to reduce delays.[131] The commission has also refined its procedural rules to foster quicker resolutions through mediation and expedited adjudication via the Accelerated Docket.[132]

The FCC is on the right track: ensuring timely access to pole infrastructure is crucial to ensure that broadband markets remain competitive, and that the substantial investments in broadband infrastructure directed by programs like BEAD yield the intended benefits.

The goal of pole-attachment rules should be to equitably assess costs in ways that avoid inefficient rent extraction and ensure the smooth deployment of broadband infrastructure.[133] The FCC’s current rules, however, can impose on a requesting attacher the entire cost of pole replacement, which is economically suboptimal.[134] There is therefore a need to revisit the current formula to ensure that the incremental costs and benefits are appropriately allocated to each relevant party. In its recent order, the FCC expanded the definition of what constitutes a “red tagged” pole in need of replacement.[135] The extent to which this works in practice will, however, depend on how the FCC processes applications under its new “red tag” policy.

One critical concern is the emergence of hold-up and hold-out problems.[136] Section 224 of the Communications Act authorizes the FCC to ensure that the costs of pole attachments are just and reasonable.[137] This provision, however, also allows pole owners to deny access when there is insufficient capacity, creating a potential imbalance in bargaining power.[138] This imbalance is exacerbated by the pole owners’ superior knowledge of their cost structures and their ability to impose “take it or leave it” offers on prospective attachers.[139] Consequently, attachers might be, at the margin, discouraged from deploying in areas with capacity-constrained poles. Further, the “last attacher pays” model can inadvertently create a disincentive for pole owners to replace or upgrade poles until a new attacher is obligated to bear the full cost. This scenario may lead to delays in broadband deployment, especially in areas where the cost of deployment is already high. The recent FCC order aims to address these concerns by clarifying cost-causation principles and ensuring more equitable cost sharing for pole replacements and modifications.[140] But there again remains interpretive room within the framework the commission has established. Thus, it remains to be seen how effectively the new rules will mitigate the problem.

Any reconsideration of pole-attachment rules also must account for the fact that the pole market is highly regulated.[141] The actual cost for pole replacements in a free market, without regulatory intervention, would likely be some middle ground between the total replacement cost and the new rental price charged to attachers. The FCC must judiciously leverage its ability to set reasonable rental rates to approach the ideal price that would otherwise be discovered through market mechanisms.

Toward this end, the upfront “make-ready” charges for pole replacement should be limited to a pole owner’s incremental cost.[142] This approach acknowledges that early replacements simply shift the timing of the expense, rather than adding additional costs. The formula could incorporate the depreciated value of the pole being replaced and allocate the costs associated with increased capacity across all beneficiaries, including new attachers as well as the pole owner, who may realize additional revenue from the increased capacity.
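
One stylized way to operationalize that principle is sketched below, with hypothetical figures; any real formula would require depreciation schedules and rate-base detail:

    # Stylized incremental-cost make-ready charge, following the
    # principle above. All dollar figures are hypothetical.
    def make_ready_charge(new_pole_cost: float, like_for_like_cost: float,
                          old_pole_remaining_value: float,
                          attacher_capacity_share: float) -> float:
        # Early replacement shifts the timing of an expense the owner
        # would bear anyway, so the attacher covers only the unrecovered
        # value of the pole retired early ...
        timing_component = old_pole_remaining_value
        # ... plus its share of the cost of the added capacity, which is
        # spread across all beneficiaries, including the pole owner.
        capacity_component = ((new_pole_cost - like_for_like_cost)
                              * attacher_capacity_share)
        return timing_component + capacity_component

    # A $6,000 taller pole vs. a $5,000 like-for-like replacement; the old
    # pole is 80% depreciated ($1,000 of value remaining); the attacher
    # takes a third of the added capacity:
    print(make_ready_charge(6_000, 5_000, 1_000, 1/3))  # ~$1,333, not $6,000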

Beyond disputes over privately owned poles, a lacuna in the FCC’s authority over poles owned by certain public entities threatens to erect large roadblocks to deployment. This is particularly the case for poles owned by the Tennessee Valley Authority (TVA).[143] Such common TVA practices as refusing reasonable and nondiscriminatory pole-attachment agreements risk significantly slowing the deployment of broadband, especially in the rural areas the TVA serves.[144]

The source of this problem is a provision of Section 224 of the Communications Act that exempts municipal and electric-cooperative (coop) pole owners from FCC oversight.[145] This exemption allows the TVA to set its own rates for pole attachments, which are notably higher than FCC rates, and often to sidestep access requirements typically mandated by states and the FCC.[146]

Municipally owned electricity distributors constitute what economists call state-owned enterprises. As such, they face significantly different constraints than privately owned enterprises.[147] Private businesses must pass the market’s profit-and-loss test, while state-owned enterprises are not similarly constrained. Municipally owned electricity distributors are usually monopolies, either because private competitors are not allowed to compete, or because they receive government benefits not available to potential private competitors. As a result, they may pursue other goals in the “public interest,” such as providing their products and services at below-market prices.[148] This includes leveraging their electricity monopolies to enter broadband provision. The problem is that these municipally owned electricity distributors also have strong incentives to refuse to deal with private competitors in the broadband market who need access to the electric poles they own.[149]

Rural electric cooperatives (RECs), particularly those distributing electricity from the TVA, also hold a privileged position that allows them to act in potentially anticompetitive ways toward broadband providers seeking pole attachments. Unlike municipally owned electricity distributors, RECs need to earn sufficient revenues to remain operational. They are also, however, much more like state-owned enterprises in the governmental benefits they receive, including their insulation from the normal discipline of the market for corporate control.[150] This similarly incentivizes them to act anticompetitively, particularly as many enter or plan to enter the broadband market.[151]

These circumstances often lead RECs to refuse to deal with private broadband providers, thereby stifling competition and deployment in rural areas.[152] Furthermore, RECs often face little oversight from rate regulators regarding pole attachments, leading to significantly higher costs for broadband companies seeking to attach to poles owned by co-ops and municipalities outside FCC jurisdiction.[153]

This regulatory loophole not only leads to higher costs for broadband providers, but also raises concerns about the application of antitrust laws to these entities. Sen. Mike Lee (R-Utah) has argued that the U.S. Justice Department (DOJ) should examine the antitrust implications of these practices, emphasizing that these government-owned entities should be subject to antitrust laws when acting as market participants.[154] And FCC Commissioner Brendan Carr has noted ongoing concerns about delays and costs associated with attaching to poles owned by municipal and cooperative utilities.[155] Addressing this loophole is crucial to bridge the digital divide and ensure that the IIJA’s goals are met effectively.

G. Municipal/Co-Op Broadband

As previously noted, despite persistent interest in some quarters in promoting municipal broadband,[156] there are many challenges that contribute to such projects’ poor record. In particular, the financial prospects of municipal networks are typically dim, as many such projects generate negative cash flow and are unsustainable without substantial improvements in operations.[157] Only a small subset of municipalities—usually those with existing municipal-power utilities—might be well-positioned to venture into municipal broadband, due to potential cross-subsidization opportunities.[158] Even among those municipal-broadband projects that have been deemed successful, however, the repayment of project costs is daunting, often requiring substantial subsidies and cross-subsidization.[159] The prospects for municipal broadband have not improved since ICLE’s 2021 white paper.

In a study by Christopher Yoo et al., the authors examine the financial performance of every municipal fiber project operating in the United States from 2010 through 2019 that provided annual financial reports for its fiber operations.[160] Each of the 15 projects was located in an urban area, as defined by the U.S. Census Bureau. In addition, each project was built in areas already served by one or more private broadband providers—none were designed to serve previously unserved areas. In every case, the municipality issued revenue bonds to fund construction and initially expected the projects to repay their construction and operating costs from project revenues, rather than from taxes or interfund transfers. In some cases, the cities anticipated the projects would generate surpluses that would, in turn, allow the cities to lower taxes.

In contrast to these expectations, every project either needed infusions of cash from outside sources or debt relief through refinancing. Three projects defaulted on their debt, two of which were liquidated at significant losses.

Yoo et al. employed two measures of financial performance:

  1. adjusted net cash flow (ANCF), which measures the actual cash collected and spent by a fiber project; and
  2. net present value of cash flow from operations (NPV), which discounts cash flow using the project’s weighted average cost of capital (formalized below).
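The second measure is the standard discounted-cash-flow calculation; in our notation (not Yoo et al.’s):

\[
\mathrm{NPV} \;=\; \sum_{t=0}^{T} \frac{CF_t}{(1+\mathrm{WACC})^t},
\]

where CF_t is the project’s cash flow from operations in year t, T is the analysis horizon, and WACC is the project’s weighted average cost of capital. Because the WACC reflects both debt and equity costs, a project whose undiscounted cash flows look positive can still fail to break even once those flows are discounted.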

Based on ANCF, only two of the 15 projects have broken even or are expected to break even by the time their initial debt matures. Based on NPV, more than half of the projects were not on track to break even—even assuming a theoretical best-case performance in terms of capital expenditures and debt service.

Municipalities that are unable to cover their broadband projects’ costs of debt and operations must make up the shortfall from general tax revenues or default on their debt. Making up a shortfall from tax revenues means the city must enact some combination of tax increases or service cuts. A default will result in a downgrade in the municipality’s bond rating, which will increase the costs of financing all of the city’s operations, not just the broadband project. These additional costs must ultimately be paid by the municipality’s taxpayers.

In a separate analysis, George Ford notes that many municipal-broadband projects are located in cities that operate their own electric utilities.[161] Such an arrangement allows the broadband network’s debt and other expenses to be placed on the electric utility’s books, thereby improving the apparent financial condition of the broadband network. As electricity rates are based on cost of service, Ford argues that a shift of broadband costs to the electric utility would be expected to increase electricity rates.

To evaluate this hypothesis, he compares municipal electricity rates among four Tennessee cities that own and operate municipal broadband. Two cities financed the projects with general-obligation bonds funded by tax revenues and other sources of the municipality’s income. The other two cities used electric-utility profits to cover the broadband project’s financial losses. One of these cities is Chattanooga, which received $111 million in subsidies and in which the city’s electric utility assumed $162 million of debt to construct the broadband network and made $50 million of loans to the broadband division.

Ford’s statistical analysis finds that broadband projects are associated with a 5.4% increase in electricity rates in cities with utility-funded projects, relative to cities that issued general-obligation bonds. It should be emphasized that the higher rates are imposed on all electricity ratepayers, not just those who subscribe to the city’s broadband. These higher electricity rates are used to cross-subsidize municipal-broadband subscribers. For example, Ford reports that, in Chattanooga, the average monthly revenue per broadband subscriber was $147 in 2015. In addition, the average subscriber was associated with a monthly subsidy of $30. Thus, cross-subsidies from electricity ratepayers account for about 17% of the average monthly broadband-subscriber cost.
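The 17% figure follows directly from the reported numbers: the average subscriber’s total monthly cost is the $147 in revenue plus the $30 subsidy, so the ratepayer-funded share is

\[
\frac{\$30}{\$147 + \$30} \;=\; \frac{30}{177} \;\approx\; 17\%.
\]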

The conclusions from ICLE’s 2021 white paper remain valid today. Proposals to offer municipal broadband as a means to increase broadband adoption—either by attempting to increase supply, or to suppress prices—put the cart before the horse. That’s because private supply and demand conditions are usually sufficient to guarantee creation of adequate broadband networks throughout most of the country.

Some uneconomic locations (i.e., the unserved areas) may require interventions to ensure broadband access. In some cases, municipal broadband may be an effective option to subsidize hard-to-reach consumers. Municipal broadband should not, however, be considered the best or only option. Indeed, the evidence demonstrates that municipal broadband might best be considered a solution of last resort, used only when no private provider finds it economically viable to serve a particular area.

IV. Conclusion

By most measures, U.S. broadband competition is vibrant and has increased dramatically since the COVID-19 pandemic. Since 2021, more households are connected to the internet; broadband speeds have increased, while prices have declined; more households are served by more than a single provider; and new technologies like satellite and 5G have expanded internet access and intermodal competition among providers.

Broadband competition policy currently appears to be in a state of confusion: Some policies foster competition, while others hinder it. Programs such as the ACP and BEAD could do much to encourage competition by simultaneously increasing the demand for broadband and facilitating the buildout of supply. At the same time, some facets of these programs’ implementation act to stifle competition with onerous rules, reporting requirements, and—in some cases—de-facto rate regulation.

In addition, the FCC’s digital-discrimination rules explicitly subject broadband pricing and other dimensions of competition to ex-post scrutiny and enforcement. In reclassifying broadband internet-access services under Title II of the Communications Act, the FCC has rendered nearly every aspect of broadband deployment and delivery subject to its regulation or scrutiny.

Put simply, today, U.S. broadband competition is robust, innovative, and successful. At the same time, new and forthcoming regulations threaten broadband competition by eliminating or proscribing the policies and practices by which providers compete. As a result, the United States is at risk of slowing or shrinking broadband investment—thereby reducing innovation and harming the very consumers that policymakers claim they seek to help.

[1] Geoffrey A. Manne, Kristian Stout, & Ben Sperry, A Dynamic Analysis of Broadband Competition: What Concentration Numbers Fail to Capture, Int’l Ctr. for L. & Econ. (Jun. 2021), available at https://laweconcenter.org/wp-content/uploads/2021/06/A-Dynamic-Analysis-of-Broadband-Competition.pdf.

[2] See id. at 2-3; 35-37.

[3] CDC Museum COVID-19 Timeline, Ctrs. for Disease Control and Prevention (Mar. 15, 2023), https://www.cdc.gov/museum/timeline/covid19.html.

[4] H.R. 3684, 117th Cong. (2021).

[5] Eric Fruits & Kristian Stout, Finding Marginal Improvements for the ‘Good Enough’ Affordable Connectivity Program, Int’l Ctr. for L. & Econ. (Sep. 15, 2023), available at https://laweconcenter.org/wp-content/uploads/2023/09/ACP-Subsidies-Paper.pdf.

[6] Eric Fruits & Geoffrey A. Manne, Quack Attack: De Facto Rate Regulation in Telecommunications, Int’l Ctr. for L. & Econ. (Mar. 30, 2023), available at https://laweconcenter.org/wp-content/uploads/2023/03/De-Facto-Rate-Reg-Final-1.pdf.

[7] Eric Fruits & Kristian Stout, The Income Conundrum: Intent and Effects Analysis of Digital Discrimination, Int’l Ctr. for L. & Econ. (Nov. 14, 2022), available at https://laweconcenter.org/wp-content/uploads/2022/11/The-Income-Conundrum-Intent-and-Effects-Analysis-of-Digital-Discrimination.pdf.

[8] Wireline Competition Bureau Announces the Final Month of the Affordable Connectivity Program, WC Docket No. 21-450 (Mar. 4, 2024), available at https://docs.fcc.gov/public/attachments/DA-24-195A1.pdf; see also Brian Fung, FCC Ends Affordable Internet Program Due to Lack of Funds, CNN (May 31, 2024), https://www.cnn.com/2024/05/31/tech/fcc-affordable-connectivity-program-acp-close/index.html.

[9] Anthony Hennen, More Money, More Problems for National Broadband Expansion, The Center Square (Aug. 15, 2023), https://www.thecentersquare.com/pennsylvania/article_3124e98c-3bb3-11ee-ad87-7361f3872110.html.

[10] Lindsay McKenzie, BEAD Waiver Information Coming This Summer, NTIA Says, StateScoop (Aug. 17, 2023), https://statescoop.com/bead-broadband-waiver-summer-2023-ntia.

[11] BEAD Letter of Credit Concerns, $4.3M in ACP Outreach Grants, FCC Waives Rules for Hawaii Wildfires, Broadband Breakfast (Aug. 21, 2023), https://broadbandbreakfast.com/2023/08/bead-letter-of-credit-concerns-4-3m-in-acp-outreach-grants-fcc-waives-rules-for-hawaii-wildfires.

[12] Eric Fruits, Red Tape and Headaches Plague BEAD Rollout, Truth on the Market (Aug. 17, 2023), https://truthonthemarket.com/2023/08/17/red-tape-and-headaches-plague-bead-rollout.

[13] Fruits & Stout, supra note 6; see also Eric Fruits, Kristian Stout, & Ben Sperry, ICLE Reply Comments on Prevention and Elimination of Digital Discrimination, Notice of Proposed Rulemaking, In the Matter of Implementing the Infrastructure, Investment, and Jobs Act: Prevention and Elimination of Digital Discrimination, No. 22-69, at Part III, Int’l Ctr. for L. & Econ. (Apr. 20, 2023), https://laweconcenter.org/resources/icle-reply-comments-on-prevention-and-elimination-of-digital-discrimination.

[14] FCC, Report and Order and Further Notice of Proposed Rulemaking on Implementing the Infrastructure Investment and Jobs Act: Prevention and Elimination of Digital Discrimination, GN Docket No. 18-238, FCC 19-44 (Nov. 20, 2023), available at https://docs.fcc.gov/public/attachments/FCC-23-100A1.pdf [hereinafter “Digital Discrimination Order”]. See also Eric Fruits, Everyone Discriminates Under the FCC’s Proposed New Rules, Truth on the Market (Oct. 30, 2023), https://truthonthemarket.com/2023/10/30/everyone-discriminates-under-the-fccs-proposed-new-rules (reporting that, under the rules, “broadband service” includes every element of a consumer’s broadband-internet experience, including speeds, data caps, pricing, and discounts, and that the rules broadly apply to broadband providers as well as to “entities outside the communications industry” that “provide services that facilitate and affect consumer access to broadband,” which may include municipalities and property owners).

[15] Notice of Proposed Rulemaking, Safeguarding and Securing the Open Internet, WC Docket No. 23-320 (Sep. 28, 2023). [hereinafter “Title II NPRM”]

[16] Declaratory Ruling, Order, Report and Order, and Order on Reconsideration, Safeguarding and Securing the Open Internet, WC Docket No. 23-320, WC Docket No. 17-108 (adopted Apr. 25, 2024), available at https://docs.fcc.gov/public/attachments/DOC-401676A1.pdf [hereinafter “SSOIO” or “2024 Order”].

[17] See, e.g., Karl Bode, Colorado Eyes Killing State Law Prohibiting Community Broadband Networks, TechDirt (Mar. 30, 2023), https://www.techdirt.com/2023/03/30/colorado-eyes-killing-state-law-prohibiting-community-broadband-networks (local broadband monopolies are a “widespread market failure that’s left Americans paying an arm and a leg for what’s often spotty, substandard broadband access.”).

[18] FCC Chair Rosenworcel on Reinstating Net Neutrality Rules, C-Span (Sep. 25, 2023), https://www.c-span.org/video/?530731-1/fcc-chair-rosenworcel-reinstating-net-neutrality-rules (“Only one-fifth of the country has more than two choices at [100 Mbps download] speed. So, if your broadband provider mucks up your traffic, messes around with your ability to go where you want and do what you want online, you can’t just pick up and take your business to another provider. That provider may be the only game in town.”).

[19] U.S. Census Bureau, 2021 American Community Survey 1-Year Estimates, Table Id. S2801 (2021); U.S. Census Bureau, ACS 1-Year Estimates Public Use Microdata Sample 2021, Access to the Internet (ACCESSINET) (2021).

[20] In contrast, a 2021 NTIA survey reports that 14.4% of households do not use the internet at home, with three-quarters of these households indicating they have “no need/interest” and one quarter indicating it is “too expensive.” See, Michelle Cao & Rafi Goldberg, Switched Off: Why Are One in Five U.S. Households Not Online?, National Telecommunications and Information Administration (2022), https://ntia.gov/blog/2022/switched-why-are-one-five-us-households-not-online.

[21] National Center for Education Statistics, Children’s Internet Access at Home, Condition of Education, (U.S. Department of Education, Institute of Education Sciences, Aug. 2023), https://nces.ed.gov/programs/coe/indicator/cch.

[22] See FCC, 2015 Broadband Progress Report (2015), https://www.fcc.gov/reports-research/reports/broadband-progressreports/2015-broadband-progress-report (upgrading the standard speed from 4/1 Mbps to 25/3 Mbps). In March 2024, the FCC approved a report increasing the fixed-speed benchmark to 100/20 Mbps and setting an “aspirational goal” of 1 Gbps/500 Mbps. See, FCC, In the Matter of Inquiry Concerning the Deployment of Advanced Telecommunications Capability to All Americans in a Reasonable and Timely Fashion, GN Docket No. 22-270 (Mar. 14, 2024), available at https://docs.fcc.gov/public/attachments/DOC-400675A1.pdf. In November 2023, FCC Chair Jessica Rosenworcel proposed reaching a 1 Gbps/500 Mbps benchmark by the year 2030. See Eric Fruits, Gotta Go Fast: Sonic the Hedgehog Meets the FCC, Truth on the Market (Nov. 3, 2023), https://truthonthemarket.com/2023/11/03/gotta-go-fast-sonic-the-hedgehog-meets-the-fcc.

[23] Infrastructure Investment and Jobs Act, Pub. L. No. 117-58, § 60102 (a)(1)(A)(ii), 135 Stat. 429 (Nov. 15, 2021), available at https://www.congress.gov/117/plaws/publ58/PLAW-117publ58.pdf; Jake Varn, What Makes a Community “Unserved” or “Underserved” by Broadband?, Pew Charitable Trusts (May 3, 2023), available at https://www.pewtrusts.org/-/media/assets/2023/06/un–and-underserved-definitions-ta-memo-pdf.pdf.

[24] Id., IIJA.

[25] Mike Conlow, New FCC Broadband Map, Version 3, Mike’s Newsletter (Nov. 20, 2023), https://mikeconlow.substack.com/p/new-fcc-broadband-map-version-3.

[26] FCC, Communications Marketplace Report, GN Docket No 22-203, FCC 22-103, Appendix G (Dec. 20, 2022), https://www.fcc.gov/document/2022-communications-marketplace-report.

[27] Pursuant to the IIJA, the FCC and providers are working to provide new broadband-coverage maps. These numbers will change over time, but FCC Chair Jessica Rosenworcel noted: “Looking ahead, we expect that any changes in the number of locations will overwhelmingly reflect on-the-ground changes such as the construction of new housing.” See Brad Randall, FCC’s Updated Broadband Map Shows Increasing National Connectivity, Broadband Communities (Nov. 27, 2023), https://bbcmag.com/fccs-new-broadband-map-shows-increasing-national-connectivity.

[28] FCC Chair Rosenworcel on Reinstating Net Neutrality Rules, C-Span (Sep. 26, 2023), https://www.c-span.org/video/?530731-1/fcc-chair-rosenworcel-reinstating-net-neutrality-rules.

[29] FCC, Fixed Broadband Deployment (Jun. 2021), https://broadband477map.fcc.gov/#/area-summary?version=jun2021&type=nation&geoid=0&tech=acfw&speed=25_3&vlat=27.480205324799257&vlon=-41.52925368904516&vzoom=5.127403622197149.

[30] FCC, 2019 Broadband Deployment Report, GN Docket No. 18-238, FCC 19-44 at Fig. 4 (May 29, 2019), available at https://docs.fcc.gov/public/attachments/FCC-19-44A1.pdf.

[31] The FCC does not explain the differences between the information summarized in Table 1 and Table 2. The differences likely reflect different methodologies. For example, Table 1 may be at the household level and Table 2 at the population level.

[32] 2022 Communications Marketplace Report, GN Docket No. 22-203 (Dec. 30, 2022) at Fig. II.A.28, available at https://docs.fcc.gov/public/attachments/FCC-22-103A1.pdf.

[33] Dan Heming, Starlink No Longer Has a Waitlist for Standard Service, and 10 MPH Speed Enforcement Update, Mobile Internet Resource Center (Oct. 3, 2023), https://www.rvmobileinternet.com/starlink-no-longer-has-a-waitlist-for-standard-service-and-10-mph-speed-enforcement-update/#:~:text=In%20the%20latest%20update%2C%20the,order%20anywhere%20in%20the%20USA.

[34] Starlink Specifications, Starlink, https://www.starlink.com/legal/documents/DOC-1400-28829-70.

[35] Amazon Shares an Update on How Project Kuiper’s Test Satellites Are Performing, Amazon (Oct. 16, 2023), https://www.aboutamazon.com/news/innovation-at-amazon/amazon-project-kuiper-test-satellites-space-launch-october-2023-update.

[36] Kuiper Service to Start by End of 2024: Amazon, Communications Daily (Oct. 12, 2023), https://communicationsdaily.com/news/2023/10/12/Kuiper-Service-to-Start-by-End-of-2024-Amazon-2310110007.

[37] Why Is the Internet More Expensive in the USA than in Other Countries?, Community Tech Network (Feb. 2, 2023), https://communitytechnetwork.org/blog/why-is-the-internet-more-expensive-in-the-usa-than-in-other-countries.

[38] Dan Howdle, Global Broadband Pricing League Table 2023, Cable.co.uk (2023), https://www.cable.co.uk/broadband/pricing/worldwide-comparison, data available at https://www.cable.co.uk/broadband/worldwide-pricing/2023/broadband_price_comparison_data.xlsx.

[39] This is qualitatively consistent with the FCC’s finding that the United States has the seventh-lowest prices per gigabit of data consumption, and that Australia has the 12th-lowest among OECD countries. FCC, 2022 Communications Marketplace Report, Docket No. 22-103, Appendix G (Dec. 30, 2022), available at https://docs.fcc.gov/public/attachments/FCC-19-44A1.pdf.

[40] Median Country Speeds, Speedtest Global Index (Oct. 2023), https://www.speedtest.net/global-index (last visited Dec. 7, 2023).

[41] See Christian Dippon, et al., Adding a Warning Label to Rewheel’s International Price Comparison and Competitiveness Rankings (Nov. 30, 2020), available at https://laweconcenter.org/wp-content/uploads/2020/11/Rewheel_Review_Final.pdf.

[42] Fruits & Stout, supra note 6; see also Giuseppe Colangelo, Regulatory Myopia and the Fair Share of Network Costs: Learning from Net Neutrality’s Mistakes, Int’l Ctr. for L. & Econ. (Comments to European Commission Exploratory Consultation, The Future of the Electronic Communications Sector and Its Infrastructure, May 18, 2023), https://laweconcenter.org/resources/regulatory-myopia-and-the-fair-share-of-network-costs-learning-from-net-neutralitys-mistakes.

[43] Id. at 14.

[44] Arthur Menko Business Planning Inc., 2023 Broadband Pricing Index, USTelecom (Oct. 2023), available at https://ustelecom.org/wp-content/uploads/2023/10/USTelecom-2023-BPI-Report-final.pdf.

[45] U.S. Bureau of Labor Statistics, Producer Price Index by Commodity: Telecommunication, Cable, and Internet User Services: Residential Internet Access Services [WPU374102], retrieved from FRED, Federal Reserve Bank of St. Louis (Aug. 29, 2023), https://fred.stlouisfed.org/series/WPU374102.

[46] United States Median Country Speeds July 2023, Speedtest Global Index (2023), https://www.speedtest.net/global-index/united-states. Prior years retrieved from Internet Archive. See also Camryn Smith, The Average Internet Speed in the U.S. Has Increased by Over 100 Mbps since 2017, Allconnect (Aug. 4, 2023), https://www.allconnect.com/blog/internet-speeds-over-time (average download speed in the United States was 30.7 Mbps in 2017 and 138.9 Mbps in the first half of 2023).

[47] George S. Ford, Confusing Relevance and Price: Interpreting and Improving Surveys on Internet Non-adoption, 45 Telecomm. Pol’y 102084 (2021).

[48] Smaller surveys and focus groups that allow more opportunities for follow-up questions, however, suggest that price may be more important than is suggested by Census Bureau surveys. For example, one study in Detroit, Michigan, used surveys and focus groups to examine internet adoption and use in three low-income urban neighborhoods. Participants who reported lacking at-home internet mentioned lack of interest and high costs at roughly equal rates. See, Colin Rhinesmith, Bianca Reisdorf, & Madison Bishop, The Ability to Pay For Broadband, 5 Comm. Res. Pract. 121 (2019).

[49] Ford, supra note 47.

[50] Michelle Cao & Rafi Goldberg, New Analysis Shows Offline Households Are Willing to Pay $10-a-Month on Average for Home Internet Service, Though Three in Four Say Any Cost Is Too Much, National Telecommunications and Information Administration (Oct. 6, 2022), https://ntia.gov/blog/2022/new-analysis-shows-offline-households-are-willing-pay-10-month-average-home-internet.

[51] Michelle Cao & Rafi Goldberg, Switched Off: Why Are One in Five U.S. Households Not Online?, National Telecommunications and Information Administration (2022), https://ntia.gov/blog/2022/switched-why-are-one-five-us-households-not-online.

[52] Jamie Greig & Hannah Nelson, Federal Funding Challenges Inhibit a Twenty-First Century “New Deal” for Rural Broadband, 37 Choices 1 (2022).

[53] Andrew Perrin, Mobile Technology and Home Broadband 2021, Pew Research Center (Jun. 3, 2021), https://www.pewresearch.org/internet/2021/06/03/mobile-technology-and-home-broadband-2021.

[54] Rhinesmith, et al., supra note 48.

[55] 2022 Broadband Capex Report, USTelecom (Sep. 8, 2023), available at https://ustelecom.org/wp-content/uploads/2023/09/2022-Broadband-Capex-Report-final.pdf.

[56] Wolfgang Briglauer, Carlo Cambini, Klaus Gugler, & Volker Stocker, Net Neutrality and High-Speed Broadband Networks: Evidence from OECD Countries, 55 Eur. J. L. & Econ. 533 (2023).

[57] Eric Fruits, Justin (Gus) Hurwitz, Geoffrey A. Manne, Julian Morris, & Alec Stapp, Static and Dynamic Effects of Mergers: A Review of the Empirical Evidence in the Wireless Telecommunications Industry, (OECD Directorate for Financial and Enterprise Affairs Competition Committee, Global Forum on Competition, DAF/COMP/GF(2019)13, Dec. 6, 2019), available at https://one.oecd.org/document/DAF/COMP/GF(2019)13/en/pdf.

[58] Manne, Stout, & Sperry, supra note 1.

[59] Kenneth Flamm & Pablo Varas, Effects of Market Structure on Broadband Quality in Local U.S. Residential Service Markets, 12 J. Info. Pol’y 234 (2022).

[60] Andrew Kearns, Does Competition From Cable Providers Spur the Deployment of Fiber? (Jul. 27, 2023), https://ssrn.com/abstract=4523529 or http://dx.doi.org/10.2139/ssrn.4523529.

[61] Manne, Stout, & Sperry, supra note 1.

[62] Fruits, et al., supra note 57.

[63] FCC, Affordable Connectivity Program (Oct. 2, 2023), https://www.fcc.gov/acp.

[64] Eric Fruits & Kristian Stout, Finding Marginal Improvements for the ‘Good Enough’ Affordable Connectivity Program (Int’l. Ctr. for L. & Econ. Issue Brief, Sep. 15, 2023), available at https://laweconcenter.org/wp-content/uploads/2023/09/ACP-Subsidies-Paper.pdf.

[65] See Paul Winfree, Bidenomics Goes Online: Increasing the Costs of High-Speed Internet, Econ. Pol’y Innovation Ctr (Jan. 8, 2024), available at https://epicforamerica.org/wp-content/uploads/2024/01/Bidenomics-Goes-Online_01.08.24-1.pdf (Finding ACP subsidies are associated with higher prices for all broadband plans, especially lower-speed plans, but these costs are more than offset by the subsidies for those who receive them. Thus, the ACP provides lower prices net of subsidy to ACP beneficiaries, but higher prices for those who are not.).

[66] Id.

[67] Id.

[68] Fruits & Stout, supra note 5.

[69] Universal Service Administrative Co., ACP Enrollment and Claims Tracker (Feb. 8, 2024), https://www.usac.org/about/affordable-connectivity-program/acp-enrollment-and-claims-tracker. Beginning Feb. 8, 2024, the ACP ceased enrollment.

[70] Wireline Competition Bureau Announces the Final Month of the Affordable Connectivity Program, WC Docket No. 21-450 (Mar. 4, 2024), available at https://docs.fcc.gov/public/attachments/DA-24-195A1.pdf.

[71] Biden-Harris Administration Announces State Allocations for $42.45 Billion High-Speed Internet Grant Program as Part of Investing in America Agenda, Nat’l Telecomms and Info. Admin. (Jun. 26, 2023), https://www.ntia.gov/press-release/2023/biden-harris-administration-announces-state-allocations-4245-billion-high-speed.

[72] Id.

[73] U.S. Dep’t of Com., Internet For All Frequently Asked Questions and Answers Draft Answers Version 2.0 Broadband, Equity, Access, and Deployment (BEAD) Program, Nat’l Telecomms and Info. Admin. (Sep. 2022), available at https://broadbandusa.ntia.doc.gov/sites/default/files/2022-09/BEAD-Frequently-Asked-Questions-%28FAQs%29_Version-2.0.pdf.

[74] Infrastructure Investment and Jobs Act Overview, BroadbandUSA, https://broadbandusa.ntia.doc.gov/resources/grant-programs (last visited Dec. 7, 2023).

[75] U.S. Dep’t of Com., Notice of Funding Opportunity, Broadband Equity, Access, and Deployment Program, NTIA-BEAD-2022, Nat’l Telecomms and Info. Admin. (May 2022), available at https://broadbandusa.ntia.doc.gov/sites/default/files/2022-05/BEAD%20NOFO.pdf. [hereinafter “BEAD NOFO”]

[76] Id. See also, Ted Cruz, Red Light Report, Stop Waste, Fraud, and Abuse in Federal Broadband Funding, U.S. S. Comm. on Com., Science, and Transp. (Sep. 2023), https://www.commerce.senate.gov/services/files/0B6D8C56-7DFD-440F-8BCC-F448579964A3.

[77] U.S. Dep’t of Com., Notice of Funding Opportunity, Broadband Equity, Access, and Deployment Program, NTIA-BEAD-2022, NTIA (May 2022), available at https://broadbandusa.ntia.doc.gov/sites/default/files/2022-05/BEAD%20NOFO.pdf (note that the IIJA itself did not include this requirement, as it was an addition by NTIA as part of the NOFO process; thus, it is unclear the extent to which this represents a valid requirement by NTIA under the BEAD program).

[78] Id. at 67.

[79] George S. Ford, Middle-Class Affordability of Broadband: An Empirical Look at the Threshold Question, Phoenix Ctr. for Adv. Leg. & Econ. Pub. Pol’y Stud., Pol’y Bull. No. 61 (Oct. 2022), available at https://phoenix-center.org/PolicyBulletin/PCPB61Final.pdf.

[80] Id.

[81] John W. Mayo, Gregory L. Rosston & Scott J. Wallsten, From a Silk Purse to a Sow’s Ear? Implementing the Broadband, Equity, Access and Deployment Act, Geo. U. McDonough Sch. of Bus. Ctr. for Bus. & Pub. Pol’y (Aug. 2022), https://georgetown.app.box.com/s/yonks8t7eclccb0fybxdpy3eqmw1l2da?mc_cid=95d011c7c1&mc_eid=dc30181b39.

[82] BEAD Letter of Credit Concerns, $4.3M in ACP Outreach Grants, FCC Waives Rules for Hawaii Wildfires, Broadband Breakfast (Aug. 21, 2023), https://broadbandbreakfast.com/2023/08/bead-letter-of-credit-concerns-4-3m-in-acp-outreach-grants-fcc-waives-rules-for-hawaii-wildfires.

[83] NTIA, Ensuring Robust Participation in the BEAD Program (Nov. 1, 2023), https://www.internetforall.gov/blog/ensuring-robust-participation-bead-program.

[84] FCC, Report and Order and Further Notice of Proposed Rulemaking, GN Docket No. 22-69, FCC 23-100 (Nov. 20, 2023), available at https://docs.fcc.gov/public/attachments/FCC-23-100A1.pdf

[85] Id. at 3.

[86] Id.

[87] Id.

[88] Id.

[89] Fruits, supra note 12.

[90] U.S. Chamber of Commerce, In the Matter of Implementing the Infrastructure Investment and Jobs Act: Prevention and Elimination of Digital Discrimination, GN Docket No. 22-69 (Nov. 6, 2023), https://www.fcc.gov/ecfs/document/110620347626/2 (citations omitted).

[91] FCC, Dissenting Statement of Commissioner Brendan Carr Regarding the Implementing the Infrastructure Investment and Jobs Act: Prevention and Elimination of Digital Discrimination, GN Docket No. 22-69, Report and Order and Further Notice of Proposed Rulemaking, FCC 23-100 (2023), available at https://docs.fcc.gov/public/attachments/FCC-23-100A3.pdf.

[92] George S. Ford, Will Digital Discrimination Policies End Discount Plans for Low-Income Consumers? (Phoenix Ctr. for Advanced Legal & Econ. Pub. Pol’y Stud., Nov. 1, 2023), https://www.fcc.gov/ecfs/document/1103079827403/5.

[93] HCS EdConnect, Welcome to HCS EdConnect (2023), https://www.edconnect.org.

[94] WISPA, In the Matter of Implementing the Infrastructure Investment and Jobs Act: Prevention and Elimination of Digital Discrimination, GN Docket No. 22-69 (Nov. 8, 2023), https://www.fcc.gov/ecfs/document/1108944918538/1.

[95] Testimony of Brendan Carr, Commissioner, Federal Communications Commission, Before the Subcommittee on Communications and Technology of the United States House of Representatives Committee on Energy and Commerce, “Oversight of President Biden’s Broadband Takeover” (Nov. 30, 2023), available at https://d1dth6e84htgma.cloudfront.net/11_30_23_Carr_Testimony_3163ea4363.pdf.

[96] Title II NPRM, supra note 15.

[97] SSOIO, supra note 16.

[98] See, Paul Krugman & Robin Wells, Economics (4th ed. 2015) at 389 (“So the natural monopolist has increasing returns to scale over the entire range of output for which any firm would want to remain in the industry—the range of output at which the firm would at least break even in the long run. The source of this condition is large fixed costs: when large fixed costs are required to operate, a given quantity of output is produced at lower average total cost by one large firm than by two or more smaller firms.”)

[99] Id. (“The most visible natural monopolies in the modern economy are local utilities—water, gas, and sometimes electricity. As we’ll see, natural monopolies pose a special challenge to public policy.”)

[100] Richard H. K. Vietor, Contrived Competition (1994) at 167 (“[I]n the early part of the twentieth century, American Telephone and Telegraph (AT&T) set itself the goal of providing universal telephone services through an end-to-end national monopoly. … By [the 1960s], however, the distortions of regulatory cross-subsidy had diverged too far from the economics of technological change.”). Thomas W. Hazlett, Cable TV Franchises as Barriers to Video Competition, 2 Va. J.L. & Tech. 1 (2007) (“Traditionally, municipal cable TV franchises were advanced as consumer protection to counter “natural monopoly” video providers. …  Now, marketplace changes render even this weak traditional case moot. … [V]ideo rivalry has proven viable, with inter-modal competition from satellite TV and local exchange carriers (LECs) offering “triple play” services.”)

[101] Id.

[102] Share of United States Households Using Specific Technologies, Our World in Data (n.d.), https://ourworldindata.org/grapher/technology-adoption-by-households-in-the-united-states.

[103] Edward Carlson, Cutting the Cord: NTIA Data Show Shift to Streaming Video as Consumers Drop Pay-TV, NTIA (2019), https://www.ntia.gov/blog/2019/cutting-cord-ntia-data-show-shift-streaming-video-consumers-drop-pay-tv.

[104] Karl Bode, A New Low: Just 46% Of U.S. Households Subscribe to Traditional Cable TV, TechDirt (Sep. 18, 2023), https://www.techdirt.com/2023/09/18/a-new-low-just-46-of-u-s-households-subscribe-to-traditional-cable-tv. See also, Shira Ovide, Cable TV Is the New Landline, New York Times (Jan. 6, 2022), https://www.nytimes.com/2022/01/06/technology/cable-tv.html.

[105] SSOIO, supra note 16.

[106] FCC, Wireless Telecommunications Bureau Report: Policy Review of Mobile Broadband Operators’ Sponsored Data Offerings for Zero-Rated Content and Services (Jan. 2017), available at https://transition.fcc.gov/Daily_Releases/Daily_Business/2017/db0111/DOC-342987A1.pdf.

[107] SSOIO, supra note 16.

[108] Hyun Ji Lee & Brian Whitacre, Estimating Willingness-to-Pay for Broadband Attributes among Low-Income Consumers: Results from Two FCC Lifeline Pilot Projects, 41 Telecomm. Pol’y. 769 (Oct. 2017).

[109] Geoffrey A. Manne, Ben Sperry, Berin Szóka, & Tom Struble, ICLE & TechFreedom Policy Comments (Jul. 14, 2014), available at https://laweconcenter.org/images/articles/icle-tf_nn_policy_comments.pdf.

[110] Mozilla Corp. v. Fed. Commc’ns Comm’n, 940 F.3d 1 (D.C. Cir. 2019) (citations omitted).

[111] Id.

[112] In 2015, when the FCC voted to enact the 2015 Open Internet Order, Chair Tom Wheeler promised to forebear from applying such rate regulation, stating flatly that “we are not trying to regulate rates.” FCC Reauthorization: Oversight of the Commission, Hearing Before the Subcommittee on Communications and Technology, Committee on Energy and Commerce, House of Representatives, 114 Cong. 27 (Mar. 19, 2015) (Statement of Tom Wheeler). Standing as a nominee to the FCC, Gigi Sohn was asked during a 2021 confirmation hearing before the U.S. Senate Commerce Committee if she would support the agency’s regulation of broadband rates. She responded: “No. That was an easy one.” David Shepardson, FCC Nominee Does Not Support U.S. Internet Rate Regulation, Reuters (Dec. 1, 2021), https://www.reuters.com/world/us/fcc-nominee-does-not-support-us-internet-rate-regulation-2021-12-01. In September 2023, in a speech announcing the FCC’s proposal to regulate broadband internet under Title II of the Communications Act, Chair Jessica Rosenworcel was emphatic: “They say this is a stalking horse for rate regulation. Nope. No how, no way.” FCC Chair Rosenworcel on Reinstating Net Neutrality Rules, C-Span (Sep. 26, 2023), https://www.c-span.org/video/?530731-1/fcc-chair-rosenworcel-reinstating-net-neutrality-rules.

[113] Vietor, supra note 100.

[114] Id. See also, Illinois Economic and Fiscal Commission, Telecommunications Deregulation Issues and Impacts: A Special Report (Apr. 2001), available at https://www.ilga.gov/commission/cgfa/archives/telecom_dereg.PDF and Kevin J. Martin, Balancing Deregulation and Consumer Protection, 17 Commlaw Conspectus (2008), available at https://transition.fcc.gov/commissioners/previous/martin/MartinSpeech011609.pdf.

[115] Fruits & Manne, supra note 6, at 1.

[116] Id.

[117] Id. at 7.

[118] Jonathan E. Nuechterlein & Howard Shelanski, Building on What Works: An Analysis of U.S. Broadband Policy, 73 Fed. Comm. L.J. 219 (2021).

[119] Fruits & Manne, supra note 6, at 13.

[120] Digital Discrimination Order, supra note 14 [emphasis added].

[121] Brief of the International Center for Law & Economics and the Information Technology & Innovation Foundation as Amici Curiae in Support of Petitioners and Setting Aside the Commission’s Order, Minnesota Telecom Alliance v. FCC, No. 24-1179 (8th Cir. Apr. 29, 2024) available at https://laweconcenter.org/wp-content/uploads/2024/04/2024-04-29-ICLE-ITIF-Amicus-Brief.pdf.

[122] IIJA § 60102(h)(4)(B).

[123] U.S. Dep’t of Com., supra note 75, at 66. States have begun to follow this lead by prescribing obligations to local providers for quality and price on deployments that have speeds and capabilities far above what BEAD and the FCC consider as the baseline for a “served” household. See, e.g., ConnectLA, BEAD Initial Proposal, vol. 2 (Aug. 2023), available at https://connect.la.gov/media/3gylvrgc/bead-vol-2-final.pdf (prescribing a complex system for preferencing providers that deploy “affordable” fiber and other high-speed service to middle-class homes).

[124] Rural eConnectivity Program, Fed. Reg. Vol. 87, No. 149 (U.S.D.A. Rural Utilities Service, Aug. 4, 2022), https://www.federalregister.gov/documents/2022/08/04/2022-16694/rural-econnectivity-program; see also Preparing for ReConnect Round 4 (U.S.D.A. Rural Development), available at https://www.rd.usda.gov/sites/default/files/Preparing-for-ReConnect-Round-4.pdf.

[125] New York State Telecommunications Association, Inc. v. James, No. 21-1075 (2nd Cir. Apr. 26, 2024), available at https://www.courthousenews.com/wp-content/uploads/2024/04/ny-broadband-law-opinion-second-circuit.pdf. See also, Randolph J. May & Seth L. Cooper, Second Circuit Hears Preemption Challenge to New York’s Broadband Rate Regulation Law, FedSoc Blog (Feb. 7, 2023), https://fedsoc.org/commentary/fedsoc-blog/second-circuit-hears-preemption-challenge-to-new-york-s-broadband-rate-regulation-law.

[126] Fruits & Manne, supra note 6, at 16.

[127] Id. at 1.

[128] FCC, Fourth Report and Order, Declaratory Ruling, and Third Further Notice of Proposed Rulemaking Accelerating Wireline Broadband Deployment by Removing Barriers to Infrastructure Investment, WC Docket No. 17-84 (Dec. 15, 2023), available at https://docs.fcc.gov/public/attachments/FCC-23-109A1.pdf [hereinafter “Poles Order”].

[129] Id.

[130] Id. at ¶ 7.

[131] Id.

[132] Id.

[133] Kristian Stout & Eric Fruits, Reply Comments of the International Center for Law & Economics, In the Matter of Accelerating Wireline Broadband Deployment by Removing Barriers to Infrastructure Investment, WC Docket No. 17-84 at 4 (submitted Aug. 26, 2022), available at https://laweconcenter.org/wp-content/uploads/2022/08/Pole-Attachments-Reply-Comments-2022-08-27-v2.pdf.

[134] Id.

[135] See Poles Order at ¶ 42.

[136] Id.

[137] Id. at 8.

[138] Id. at 9.

[139] Id.

[140] See Poles Order at ¶ 42.

[141] Id.

[142] Id. at 10.

[143] Ben Sperry, Geoffrey A. Manne, & Kristian Stout, The Role of Antitrust and Pole-Attachment Oversight in TVA Broadband Deployment (Int’l Ctr. for L. & Econ. Issue Brief 2023-09-04, 2023), available at https://laweconcenter.org/wp-content/uploads/2023/08/TVA-Pole-Attachments-Issue-Brief.pdf.

[144] Id. at 2.

[145] Id. at 3.

[146] Id.

[147] Id. at 4.

[148] Id.

[149] Id.

[150] Id. at 6-9.

[151] Id. at 10.

[152] Id.

[153] Id. at 11.

[154] Sen. Michael S. Lee, Letter to DOJ Re: Tennessee Valley Authority (TVA) – Supporting Broadband Deployment (June 22, 2023), in Ben Sperry, Geoffrey A. Manne, & Kristian Stout, The Role of Antitrust and Pole-Attachment Oversight in TVA Broadband Deployment (Int’l. Ctr. for L. & Econ. Issue Brief, Sep. 4, 2023) available at https://laweconcenter.org/wp-content/uploads/2023/08/TVA-Pole-Attachments-Issue-Brief.pdf.

[155] Sperry, Manne, & Stout, supra note 143, at 16.

[156] See, e.g., BEAD NOFO, supra note 75.

[157] Manne, Stout, & Sperry, supra note 1.

[158] Id.

[159] Id.

[160] Christopher S. Yoo, Jesse Lambert & Timothy P. Pfenninger, Municipal Fiber in the United States: A Financial Assessment, 46 Telecomm. Pol. 102292 (Jun. 2022).

[161] George S. Ford, Electricity Rates and the Funding of Municipal Broadband Networks: An Empirical Analysis, 102 Energy Econ. 105475 (2021).

Telecommunications & Regulated Utilities

Brief of ICLE and ITIF to 8th Circuit in Minnesota Telecom Alliance v FCC

Amicus Brief

STATEMENTS OF INTEREST

The International Center for Law & Economics (“ICLE”) is a nonprofit, non-partisan global research and policy center that builds intellectual foundations for sensible, economically grounded policy. ICLE promotes the use of law and economics methodologies and economic learning to inform policy debates and has longstanding expertise evaluating law and policy.

ICLE scholars have written extensively in the areas of telecommunications and broadband policy. This includes white papers, law journal articles, and amicus briefs touching on issues related to the provision and regulation of broadband Internet service.

The FCC’s final rule by Report and Order concerning “digital discrimination” (the Order), published in the Federal Register on January 22, 2024, constitutes a significant change in economic policy. Broadband alone is a $112 billion industry with over 125 million customers. If permitted to stand, the FCC’s broad Order will harm the dynamic marketplace for broadband that presently exists in the United States.

The Information Technology and Innovation Foundation (“ITIF”) is an independent non-profit, non-partisan think tank. ITIF’s mission is to formulate, evaluate, and promote policy solutions that accelerate innovation and boost productivity to spur growth, opportunity, and progress. To that end, ITIF strives to provide policymakers around the world with high-quality information, analysis, and recommendations they can trust. ITIF adheres to the highest standards of research integrity, guided by an internal code of ethics grounded in analytical rigor, policy pragmatism, and independence from external direction or bias.

ITIF’s mission is to advance public policies that accelerate the progress of technological innovation. ITIF believes that innovation can almost always be a force for good. It is the major driver of human advancement and the essential means for improving societal welfare. A robust rate of innovation makes it possible to achieve many other goals—including increases in median per-capita income, improved health, transportation mobility, and a cleaner environment. ITIF engages in policy and legal debates, both directly and indirectly, by presenting policymakers, courts, and other policy influencers with compelling data, analysis, arguments, and proposals to advance effective innovation policies and oppose counterproductive ones.

The FCC’s Order will have a significant impact on the speed and adoption of technological innovation in the United States. The Order not only raises the cost of deployment investments, but it also increases the risk of liability for discrimination, thereby increasing the uncertainty of the investments’ returns. As a result, the Order will not only stifle new deployment to unserved areas, but also will delay network upgrades and maintenance out of fear of alleged disparate effects.

Pursuant to Federal Rule of Appellate Procedure 29(a)(2), ICLE and ITIF have obtained consent of the parties to file the instant Brief of the International Center for Law & Economics and the Information Technology and Innovation Foundation as Amici Curiae In Support of Petitioners.

INTRODUCTION AND SUMMARY OF ARGUMENT

The present marketplace for broadband in the United States is dynamic and generally serves consumers well. See Geoffrey A. Manne, Kristian Stout, & Ben Sperry, A Dynamic Analysis of Broadband Competition: What Concentration Numbers Fail to Capture (ICLE White Paper, Jun. 2021), https://laweconcenter.org/wp-content/uploads/2021/06/A-Dynamic-Analysis-of-Broadband-Competition.pdf. Broadband providers acting in the marketplace have invested $2.1 trillion in building, maintaining, and improving their networks since 1996, including $102.4 billion in 2022 alone. See USTelecom, 2022 Broadband Capex Report (Sept. 8, 2023), https://www.ustelecom.org/research/2022-broadband-capex/. The FCC’s own data suggests that 91% of Americans have access to high-speed broadband under its new and faster definition. See 2024 706 Report, FCC 24-27, GN Docket No. 22-270, at paras. 20, 22 (Mar. 18, 2024).

Despite this, there are areas of the country, primarily those with low population density, where serving consumers is prohibitively expensive. Moreover, affordability remains a concern for some lower-income groups. To address these concerns, Congress passed the Infrastructure Investment and Jobs Act (IIJA), Pub. L. No. 117-58, 135 Stat. 429, which invested $42.45 billion in building out broadband to rural areas through the Broadband Equity, Access, and Deployment (BEAD) Program, and billions more in the Affordable Connectivity Program (ACP), which provided low-income individuals a $30 per month voucher. Congress’s passage of the IIJA was consistent with sustaining the free and dynamic market for broadband.

In addition, to address concerns that broadband providers could engage in discriminatory behavior in deployment decisions, Section 60506(b) of IIJA requires that “[n]ot later than 2 years after November 15, 2021, the Commission shall adopt final rules to facilitate equal access to broadband internet access services, taking into account the issues of technical and economic feasibility presented by that objective, including… preventing digital discrimination of access based on income level, race, ethnicity, color, religion, or national origin.” Pub. L. No. 117-58, § 60506(b)(1), 135 Stat. 429, 1246.

The FCC’s final rule by Report and Order was published in the Federal Register on January 22, 2024. See 89 Fed. Reg. 4128 (Jan. 22, 2024) [hereinafter “Order”], attached as the Addendum to Petitioners’ Brief (“Pet. Add.”). But the digital-discrimination rule issued in this Order is inconsistent with the IIJA, is so expansive as to claim regulatory authority over major political and economic questions, and is arbitrary and capricious. As a result, this Court must vacate it.

The FCC could have issued a final rule consistent with the statute and the dynamic broadband marketplace. Such a rule would have recognized that the limited purpose of the statute was to outlaw intentional discrimination by broadband providers in deployment decisions, i.e., conduct that treats a person or group of persons less favorably than others because of a listed protected trait. This rule would be workable, leaving the FCC to focus its attention on cases where broadband providers fail to invest in deploying networks due to animus against those groups.

Instead, the FCC chose to create an expansive regulatory scheme that gives it essentially unlimited discretion over anything that would affect the adoption of broadband. It did this by adopting a differential impact standard that applies not only to broadband providers, but to anyone that could “otherwise affect consumer access to broadband internet access service,” see 47 CFR §16.2 (definition of “Covered entity”), which includes considerations of price among the “comparable terms and conditions.” See Pet. Add. 59, Order at para. 111 (“Indeed, pricing is often the most important term that consumers consider when purchasing goods and services… this is no less true with respect to broadband internet access services.”). Taken together, these departures from the text of Section 60506 would give the FCC nearly unlimited authority over broadband providers, and even a great deal of authority over other entities that can affect broadband access.

To interpret Section 60506 to encompass a “differential impact” standard, as the agency has done here, leads to a situation in which covered entities that have no intent to discriminate, or that even take active measures to help protected classes, could still be found in violation of the rules. This standard opens nearly everything to FCC review, because profit-relevant factors that are not covered by the statute are often correlated with the traits that are covered by the statute.

Income level, race, ethnicity, color, religion, and national origin are often incidentally associated with some other non-protected factor important for investment decisions. Specifically, population density is widely recognized as one of the determinants of expected profitability for broadband deployment. See Eric Fruits & Kristian Stout, The Income Conundrum: Intent and Effects Analysis of Digital Discrimination (ICLE Issue Brief 2022-11-14), available at https://laweconcenter.org/wp-content/uploads/2022/11/The-Income-Conundrum-Intent-and-Effects-Analysis-of-Digital-Discrimination.pdf, citing U.S. Gov’t Accountability Office, GAO-06-426, Telecommunications: Broadband Deployment Is Extensive Throughout the United States, but It Is Difficult to Assess the Extent of Deployment Gaps in Rural Areas 19 (2006) (population density is the “most frequently cited cost factor affecting broadband deployment” and “a critical determinant of companies’ deployment decisions”). But population density is also correlated with income level, with higher density associated with higher incomes. See Daniel Hummel, The Effects of Population and Housing Density in Urban Areas on Income in the United States, 35 Loc. Econ. 27 (Feb. 7, 2020) (showing a statistically significant positive relationship between income and both population and housing density). Higher population density is also correlated with greater racial, ethnic, religious, and national-origin diversity. See, e.g., Barrett A. Lee & Gregory Sharp, Diversity Across the Rural-Urban Continuum, 672 Annals Am. Acad. Pol. & Soc. Sci. 26 (2017).

Consider a hypothetical provider who eschews discrimination against any of the protected traits in its deployment practices by prioritizing its investments solely on population density, deploying to high-density areas first and to lower-density areas later. If higher-density areas are also areas with higher incomes, then it would be relatively easy to produce a statistical analysis showing that lower-income areas are associated with lower rates of deployment. Similarly, because of the relationships between population density and race, ethnicity, color, religion, and national origin, it would be relatively easy to produce a statistical analysis showing disparate impacts across these protected traits.
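A small simulation makes the point concrete. The sketch below is ours alone; the tract counts, variable names, and correlation strength are illustrative assumptions, not findings from the cited studies. It deploys service using only population density, then tabulates deployment rates by income:

# Illustrative simulation: a provider that deploys purely on population
# density can still show a statistical "disparate impact" by income,
# because density and income are correlated.
import random

random.seed(0)

# Simulate 10,000 census tracts with correlated density and income.
tracts = []
for _ in range(10_000):
    density = random.gauss(0, 1)                 # standardized population density
    income = 0.5 * density + random.gauss(0, 1)  # income correlates with density
    deployed = density > 0                       # deploy only to high-density tracts
    tracts.append((income, deployed))

# Compare deployment rates for lower- vs. higher-income tracts.
low = [d for inc, d in tracts if inc <= 0]
high = [d for inc, d in tracts if inc > 0]
print(f"deployment rate, lower-income tracts:  {sum(low)/len(low):.1%}")
print(f"deployment rate, higher-income tracts: {sum(high)/len(high):.1%}")

On a typical run, higher-income tracts show deployment rates of roughly 65% and lower-income tracts roughly 35%, a stark “disparity” produced by a rule that never considered income at all.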

With so many possible spurious correlations, it is almost impossible for any covered entity to know with any certainty whether its policies or practices could be actionable for differential impacts. Nobel laureate Ronald Coase is reported to have said, “If you torture the data long enough, it will confess.” Garson O’Toole, If You Torture the Data Long Enough, It Will Confess, Quote Investigator (Jan. 18, 2021), https://quoteinvestigator.com/2021/01/18/confess. The FCC’s Order amounts to an open invitation to torture the data.

While it is possible that the FCC could determine that the costs of deployment due to population density or another profit-relevant reason go to “technical or economic feasibility,” the burden to prove infeasibility is on the covered entity under a preponderance-of-the-evidence standard. See 47 CFR §16.5(c)-(d). This may include “proof that available, less discriminatory alternatives were not reasonably achievable.” See 47 CFR §16.5(c). In its case-by-case review process, there is no guarantee that the Commission will agree that “technical or economic feasibility” warrants an exception in any given dispute. See 47 CFR §16.5(e). This rule will put a great deal of pressure on covered entities to avoid possible litigation by getting all plans pre-approved by the FCC through its advisory-opinion authority. See 47 CFR §16.7. This sets up the FCC to be a central planner for nearly everything related to broadband, from deployment to policies and practices that affect even adoption itself, including the price of the service. This is inconsistent with preserving the ability of businesses to make “practical business choices and profit-related decisions that sustain a vibrant and dynamic free-enterprise system.” Texas Dep’t of Hous. & Cmty. Affs. v. Inclusive Communities Project, Inc., 576 U.S. 519, 533 (2015). The Order will thus dampen investment incentives because “the specter of disparate-impact litigation” will cause private broadband providers to “no longer construct or renovate” their networks, leading to a situation where the FCC’s rule “undermines its own purpose” under the IIJA “as well as the free market system.” Id. at 544.

ARGUMENT

The FCC’s Order is unlawful. First, the Order’s interpretation of Section 60506 is inconsistent with the structure of the IIJA. Second, the Order is inconsistent with the clear meaning of Section 60506. Third, the Order raises major questions of political and economic significance by giving the FCC nearly unlimited authority over broadband deployment decisions, including price. Fourth, the Order is arbitrary and capricious because the rule it adopts will end up reducing broadband providers’ incentives to invest in deploying and improving broadband service, a result inconsistent with the purpose of the IIJA. Finally, the Order’s vagueness leaves a person of ordinary intelligence no ability to know whether they are subject to the law and thus gives the FCC the ability to engage in arbitrary and discriminatory enforcement.

I. The Order’s Interpretation of Section 60506 Is Inconsistent with the Structure of the IIJA

“It is a fundamental canon of statutory construction that the words of a statute must be read in their context and with a view to their place in the overall statutory scheme.” Davis v. Michigan Dept. of Treasury, 489 U.S. 803, 809 (1989). The structure of the IIJA as a whole, as well as the fact that Section 60506, in particular, was not placed within the larger Communications Act (47 U.S.C. §150 et seq.) that gives the FCC authority, suggests that the Order claims authority far beyond what Congress has granted the FCC.

The IIJA divided broadband policy priorities between different agencies and circumscribed the scope of each program or rulemaking it delegated to agencies. Section 60102 addressed the issue of universal broadband deployment by creating the Broadband Equity, Access, and Deployment (BEAD) Program. See IIJA §60102. The statute designated the National Telecommunications and Information Administration (NTIA) to administer this $42.45 billion program, with funds to be first allocated to deploy broadband service to all areas that currently lack access to high-speed broadband Internet. See IIJA §60102(b), (h). BEAD is, therefore, Congress’s chosen method to remedy disparities in broadband deployment due to cost-based barriers like low population density. Section 60502 then created the Affordable Connectivity Program (ACP), which provided low-income individuals a $30 per month voucher, and delegated its administration to the FCC. See IIJA §60502. ACP is, therefore, Congress’s chosen method to remedy broadband affordability for households whose low income is a barrier to broadband adoption. Title V of Division F of the IIJA goes on to create several more broadband programs, each with a specific and limited scope. See IIJA § 60101 et seq.

In short, Congress was intentional about circumscribing the different problems with broadband deployment and access, as well as the scope of the programs it designed to fix them. Section 60506’s authorization for the FCC to prevent “digital discrimination” fits neatly into this statutory scheme if it targets disparate treatment in deployment decisions based upon protected status—i.e., intentional harmful actions that are distinct from deployment decisions based on costs of deployment or projected demand for broadband service. But the FCC’s Order vastly exceeds this statutory scope and claims authority over virtually every aspect of the broadband marketplace, including infrastructure deployment decisions due to cost generally and the potential market for the networks once deployed. Indeed, the FCC envisions scenarios in which its rules conflict with other federal funding programs but nevertheless says that compliance with those programs is no safe harbor from liability for disparate impacts that compliance creates. See Pet. Add. 69-70, Order at para. 142. The Order thus dramatically exceeds the boundaries Congress set in Section 60506. Congress cannot have meant for Section 60506 to remedy all deployment disparities or all issues of affordability, because it created BEAD and ACP for those purposes.

Moreover, Section 60506 was not incorporated into the Communications Act, unlike other parts of the IIJA. In other words, the FCC’s general enforcement authority does not extend to the regulatory scheme of Section 60506. The IIJA was not meant to give the FCC vast authority over broadband deployment and adoption by implication. The FCC must rely on Section 60506 alone for any authority it was given to combat digital discrimination.

II. The Order Is Inconsistent with the Clear Meaning of the Text of Section 60506

The text of Section 60506 plainly shows that Congress intended to combat digital discrimination through circumscribed rules aimed at preventing intentional discrimination in broadband providers’ deployment decisions. The statute starts with a statement of policy in part (a) and then gives the Commission direction to fulfill that purpose in parts (b) and (c).

The statement of policy in Section 60506(a) is exactly that: a statement of policy. Courts have long held that statutory sections like Section 60506(a)(1) and (a)(3), which use words like “should,” are “precatory.” See Emergency Coal. to Def. Educ. Travel v. U.S. Dep’t of Treasury, 498 F. Supp. 2d 150, 165 (D.D.C. 2007) (“Courts have repeatedly held that such ‘sense of Congress’ language is merely precatory and non-binding.”), aff’d, 545 F.3d 4 (D.C. Cir. 2008). While the statement of policy helps illuminate the goal of the provision at issue, it does not actually give the FCC authority. The goal of the statute is clear: to make sure the Commission prevents intentional discrimination in deployment decisions. For instance, Section 60506(c) empowers the Commission (and the Attorney General) to ensure federal policies promote equal access by prohibiting intentional deployment discrimination. See Section 60506(c) (“The Commission and the Attorney General shall ensure that Federal policies promote equal access to robust broadband internet access service by prohibiting deployment discrimination…”). Moreover, the definition of equal access as “equal opportunity to subscribe,” see 47 U.S.C. §1754(a)(2), does not imply a disparate-impact analysis. See Brnovich v. Democratic Nat’l Comm., 141 S. Ct. 2321, 2339 (2021) (“[T]he mere fact there is some disparity in impact does not necessarily mean… that it does not give everyone an equal opportunity.”).

There is no evidence that the IIJA’s drafters intended the law to be read as broadly as the Commission has done in its rules. The legislative record on Section 60506 is exceedingly sparse, containing almost no discussion of the provision beyond assertions that “broadband ought to be available to all Americans,” 167 Cong. Rec. 6046 (2021), and that the IIJA was not to be used as a basis for the “regulation of internet rates.” 167 Cong. Rec. 6053 (2021). The FCC argues that since “there is little evidence in the legislative history… that impediments to broadband internet access service are the result of intentional discrimination,” Congress must have desired a disparate-impact standard. See Pet. Add. 25, Order at para. 47. But the limited nature of the problem suggests a limited solution in the form of a framework aimed at preventing such discrimination. Given the sparse evidence on legislative intent, Section 60506 should be read as granting a limited authority to the Commission.

With Section 60506(b), Congress gave the Commission a set of tools to identify and remedy acts of intentional discrimination by broadband providers in deployment decisions. As we explain below, under both the text of Section 60506 and the Supreme Court’s established jurisprudence, the Commission was not empowered to employ a disparate-impact (or “differential impact”) analysis under its digital discrimination rules.

Among the primary justifications for disparate-impact analysis is to remedy historical patterns of de jure segregation that left an indelible mark on minority communities. See Inclusive Communities, 576 U.S. at 528-29. While racial discrimination has not been purged from society, broadband only became prominent in the United States well after all forms of de jure segregation were made illegal, and after Congress and the courts had invested decades in rooting out impermissible de facto discrimination. In enacting rules that give it presumptive authority over nearly all decisions related to broadband deployment and adoption, the FCC failed to adequately take this history into account.

Beyond the policy questions, however, Section 60506 cannot be reasonably construed as authorizing disparate-impact analysis. While the Supreme Court has allowed disparate-impact analysis in the context of civil-rights law, it has imposed some important limitations. To find disparate impact, the statute must be explicitly directed “to the consequences of an action rather than the actor’s intent.” Inclusive Communities, 576 U.S. at 534. There, the Fair Housing Act made it unlawful:

To refuse to sell or rent after the making of a bona fide offer, or to refuse to negotiate for the sale or rental of, or otherwise make unavailable or deny, a dwelling to any person because of race, color, religion, sex, familial status, or national origin.

42 U.S.C. §3604(a) (emphasis added). The Court noted that the presence of language like “otherwise make unavailable” is critical to construing a statute as demanding an effects-based analysis. Inclusive Communities, 576 U.S. at 534. Such phrases, the Court found, “refer[] to the consequences of an action rather than the actor’s intent.” Id. Further, the structure of a statute’s language matters:

The relevant statutory phrases… play an identical role in the structure common to all three statutes: Located at the end of lengthy sentences that begin with prohibitions on disparate treatment, they serve as catchall phrases looking to consequences, not intent. And all [of these] statutes use the word “otherwise” to introduce the results-oriented phrase. “Otherwise” means “in a different way or manner,” thus signaling a shift in emphasis from an actor’s intent to the consequences of his actions.

Id. at 534-35.

Previous Court opinions help parse the distinction between statutes limited to intentional discrimination claims and those that allow for disparate impact claims. Particularly relevant here, the Court looked at language from Section 601 of the Civil Rights Act stating that “[n]o person in the United States shall, on the ground of race, color, or national origin, be excluded from participation in, be denied the benefits of, or be subjected to discrimination under any program or activity receiving Federal financial assistance,” 42 U.S.C. §2000d (emphasis added), and found it “beyond dispute—and no party disagrees—that [it] prohibits only intentional discrimination.”  Alexander v. Sandoval, 532 U.S. 275, 280 (2001).

Here, the language of Section 60506 (“based on”) mirrors the language of Section 601 of the Civil Rights Act (“on the ground of”). This reading is also consistent with the reasoning of Inclusive Communities as to when a statute allows for disparate-impact analysis. Inclusive Communities primarily based its opinion on the “otherwise make unavailable” language at issue, with a particular focus on “otherwise” creating a more open-ended inquiry. See Inclusive Communities, 576 U.S. at 534 (“Here, the phrase ‘otherwise make unavailable’ is of central importance to the analysis that follows”). Such language is absent in Section 60506. Moreover, the closest analogy for Section 60506’s “based on” language is the “on the ground of” language of Title VI of the Civil Rights Act, which also does not include the “otherwise” language found to be so important in Inclusive Communities. Compare 42 U.S.C. §2000d with Inclusive Communities, 576 U.S. at 534-35 (focusing on how “otherwise” is a catch-all phrase looking to consequences instead of intent). If the Court has found that “on the ground of” reaches only intentional discrimination, it is hard to see how “based on” would lead to a different conclusion.

Thus, because Section 60506 was drafted without “results-oriented language,” and instead frames the prohibition against digital discrimination as one “based on income level, race, ethnicity, color, religion, or national origin,” the rule falls squarely within the realm of prohibitions on intentional discrimination. That is, to be discriminatory, the decision to deploy or not to deploy must have been intentionally made based on the protected characteristic. Mere statistical correlation between deployment and protected characteristics is insufficient.

In enacting the IIJA, Congress was undoubtedly aware of the Court’s history with disparate-impact analysis. Had it chosen to do so, it could have made the requirements of Section 60506 align with the requirements of that precedent. But it chose not to do so.

III. Congress Did Not Clearly Authorize the FCC to Decide a Major Question in this Order

To read Section 60506 of the IIJA as broadly as the FCC does in the Order invites a challenge under the major-questions doctrine. There are “extraordinary cases” where the “history and the breadth of the authority” that an agency asserts and the “economic and political significance” of that asserted authority provide “reason to hesitate before concluding that Congress” meant to confer such authority. See West Virginia v. EPA, 597 U.S. 697, 721 (2022) (quoting FDA v. Brown & Williamson, 529 U.S. 120, 159-60 (2000)). In such cases, “something more than a merely plausible textual basis for agency action is necessary. The agency instead must point to ‘clear congressional authorization’ for the power it claims.” Id. at 723 (quoting Utility Air Regulatory Group v. EPA, 573 U.S. 302, 324 (2014)).

Here, the FCC has claimed dramatic new powers over the deployment of broadband Internet access, and it has exercised that alleged authority to create a process for inquiry into generalized civil-rights claims. Such a system is as unprecedented as it is important to the political and economic environment of the country. The FCC itself implicitly recognizes this fact when it emphasizes the critical importance of Internet access as necessary “to meet basic needs.” Broadband alone is a $112 billion industry with over 125 million customers. See The History of US Broadband, S&P Global (last accessed May 11, 2023), https://www.spglobal.com/marketintelligence/en/news-insights/research/the-history-of-us-broadband. And that figure does not even capture all the entities covered by this Order, which extends to all those who could “otherwise affect consumer access to broadband internet access service.” See 47 CFR §16.2. There is, therefore, no doubt that the Order is of great economic and political significance.

This would be fine if the statute clearly delegated such power to the FCC. But the only potential source of authority for the Order is Section 60506, and since the text of Section 60506 can be (and is better) read as not giving the FCC such authority, it simply cannot be an unambiguous delegation of authority.

As argued above, Congress knows how to write a disparate-impact statute in light of Supreme Court jurisprudence. Put simply, Congress did not write a disparate-impact statute here, because there is no catch-all language comparable to what the Supreme Court has pointed to in statutes like the FHA. Cf. Inclusive Communities, 576 U.S. at 533 (finding a statute includes disparate-impact liability when the “text refers to the consequences of actions and not just the mindset of actors”). At best, Section 60506 is ambiguous as to whether it authorizes the FCC to use disparate-impact analysis. That is simply not enough when regulating an area of great economic and political significance.

In addition to the major question of whether the FCC may enact its vast disparate-impact apparatus, the FCC claims vast authority over the economically and politically significant arena of broadband rates despite no clear authorization to do so in Section 60506. Indeed, the legislative record shows that Congress explicitly wanted to avoid the possibility that the IIJA would be used as the basis for the “regulation of internet rates.” 167 Cong. Rec. 6053 (2021). The FCC disclaims the authority to engage in rate regulation, but it does claim authority for “ensuring pricing consistency.” See Pet. Add. 56-57, Order at para. 105. While the act of assessing the comparability of prices is not rate regulation in the sense that the Communications Act contemplates, a policy that holds entities liable for price disparities, such that an ISP must adjust its prices until they match an FCC definition of “comparable,” is tantamount to setting those rates. See Eric Fruits & Geoffrey Manne, Quack Attack: De Facto Rate Regulation in Telecommunications (ICLE Issue Brief 2023-03-30), available at https://laweconcenter.org/wp-content/uploads/2023/03/De-Facto-Rate-Reg-Final-1.pdf (describing how the FCC often engages in rate regulation in practice even when it does not call it that).

Furthermore, the Order could also allow the FCC to use the rule to demand higher service quality under the “comparable terms and conditions” language, even if consumers may prefer lower speeds for less money. That increased quality comes at a cost that will necessarily increase the market price of broadband. In this way, the Order would allow the FCC to set a price floor even if it never explicitly requires ISPs to submit their rates for approval.

The elephant of rate regulation is not hiding in the mousehole of Section 60506. Cf. Whitman v. American Trucking Assns., Inc., 531 U.S. 457, 468 (2001). Indeed, the FCC itself forswears rate regulation in an ongoing proceeding in which the relevant statute would clearly authorize it. See Safeguarding and Securing the Open Internet, 88 Fed. Reg. 76048 (proposed Nov. 3, 2023) (to be codified at 47 CFR pts. 8, 20). Nevertheless, the FCC recognized that rate regulation is inappropriate for the broadband marketplace and has declined its application in that proceeding. Even here, the FCC has denied that including pricing within the scope of the rules is “an attempt to institute rate regulation.” See Pet. Add. 59, Order at para. 111. But despite its denials, the FCC’s claim of authority would allow it to regulate prices despite nothing in Section 60506 granting it authority to do so. The FCC should not be able to recognize a politically significant consensus against rate regulation one minute and then smuggle that disfavored policy in through a statute that never mentions it the next.

Finally, as noted above, since many of the protected characteristics, but especially income, can be correlated with many factors relevant to profitability, it would be no surprise if almost any policy or practice of a covered entity under the Order could be subject to FCC enforcement. And since there is no guarantee that the FCC would agree in a particular case that technical or economic feasibility justifies a particular policy or practice, nearly everything broadband providers and other covered entities do would likely need pre-approval under the FCC’s advisory-opinion process. This would essentially make the FCC a central planner of everything related to broadband. In other words, the FCC has claimed authority far beyond what Congress could have imagined, without any clear authorization to do so.

IV. The Order Is Arbitrary and Capricious Because It Will Produce Results Inconsistent with the Purpose of the Statute

As noted above, the purposes of the broadband provisions of the IIJA are to encourage broadband deployment, enhance broadband affordability, and prevent discrimination in broadband access. Put simply, the purpose is to get more Americans to adopt more broadband, regardless of income level, race, ethnicity, color, religion, or national origin. The FCC’s Order may curtail discrimination, but the aggressive and expansive police powers the agency grants itself will surely diminish investments in broadband deployment and efforts to encourage adoption. We urge the Court to vacate the Order and require the FCC to adopt rules limited to preventing intentional discrimination in deployment by broadband Internet access service providers. More narrowly tailored rules would satisfy Section 60506’s mandates while preserving incentives to invest in deployment and encourage adoption. Cf. Cin. Bell Tel. Co. v. FCC, 69 F.3d 752, 761 (6th Cir. 1995) (“The FCC is required to give [a reasoned] explanation when it declines to adopt less restrictive measures in promulgating its rules.”). But the current Order is arbitrary and capricious because the predictable results of the rules would be inconsistent with the purpose of the IIJA in promoting broadband deployment. See Motor Vehicle Mfrs. Ass’n v. State Farm Mutual Auto. Ins. Co., 463 U.S. 29, 43 (1983) (“[A]n agency rule would be arbitrary and capricious if the agency has… offered an explanation for its decision that runs counter to the evidence before the agency, or is so implausible that it could not be ascribed to a difference in view or the product of agency expertise”).

The Order spans nearly every aspect of broadband deployment, including, but not limited to, network infrastructure deployment, network reliability, network upgrades, and network maintenance. Pet. Add. 58, Order ¶ 108. In addition, the Order covers a wide range of policies and practices that, while not directly related to deployment, affect the profitability of deployment investments, such as pricing, discounts, credit checks, marketing or advertising, service suspension, and account termination. Pet. Add. 58, Order ¶ 108.

Like all firms, broadband providers have limited resources with which to make their investments. While profitability (i.e., economic feasibility) is a necessary precondition for investment, not all profitable investments can be undertaken. Among the universe of economically feasible projects, firms are likely to give priority to those that promise greater returns on investment relative to those with lower returns. Returns on investment in broadband depend on several factors. Population density, terrain, regulations, and taxes are all important cost factors, while a given consumer population’s willingness to adopt and pay for broadband are key demand-related factors. Anything that raises the expected cost of deployment or reduces the demand for service can turn a profitable investment into an unprofitable prospect or downgrade its priority relative to other investment opportunities.

The Order not only raises the cost of deployment investments, but it also increases the risk of liability for discrimination, thereby increasing the uncertainty of the investments’ returns. Because of the well-known and widely accepted risk-return tradeoff, firms that face increased uncertainty in investment returns will demand higher expected returns from the investments they pursue. This demand for higher returns means that some projects that would have been pursued under more limited digital discrimination rules will not be pursued under the current Order.
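One stylized way to illustrate this mechanism (our own sketch, not a formula drawn from the record) is a simple risk-adjusted hurdle: suppose a provider pursues a project only when the project’s expected return clears

\[
\mathbb{E}[R] \;\geq\; r^{*} + \lambda\sigma,
\]

where \(r^{*}\) is the return available on the next-best project, \(\sigma\) measures the uncertainty of the project’s returns, and \(\lambda > 0\) captures the firm’s aversion to risk. Open-ended disparate-impact liability raises \(\sigma\) without raising \(\mathbb{E}[R]\), so projects near the margin fall below the hurdle and go unfunded.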

The Order will not only stifle new deployment to unserved areas, but will also delay network upgrades and maintenance out of fear of alleged disparate effects. At the extreme, providers will be faced with the choice to upgrade everyone or upgrade no one. Because they cannot afford to upgrade everyone, they will upgrade no one.

It might be argued that providers could avoid some of the ex post regulatory risk by seeking ex ante pre-approval under the FCC’s advisory-opinion process. But such processes are costly and are not certain to result in approval. Even if a plan is approved, the FCC reserves the right to rescind the pre-approval. See Pet. Add. 75, Order ¶ 156 (“[A]dvisory opinions will be issued without prejudice to the Enforcement Bureau’s or the Commission’s ability to reconsider the questions involved, and rescind the opinion. Because advisory opinions would be issued by the Enforcement Bureau, they would also be issued without prejudice to the Commission’s right to later rescind or revoke the findings.”). Under the Order’s informal complaint procedures, third parties can allege discriminatory effects associated with pre-approved policies and practices, which could result in the rescission of pre-approval. The result is an unambiguous increase in deployment and operating costs, even with pre-approval.

Moreover, by imposing liability for disparate impacts outside the control of covered broadband providers, the Order produces results inconsistent with the purpose of the IIJA because parties cannot conform their conduct to the rules. Among the 7% of households who do not use the internet at home, more than half of Current Population Survey (CPS) respondents indicated that they “don’t need it or [are] not interested.” George S. Ford, Confusing Relevance and Price: Interpreting and Improving Surveys on Internet Non-adoption, 45 Telecomm. Pol’y, Mar. 2021. ISPs sell broadband service, but they cannot force uninterested people to buy their product.

Only 2-3% of U.S. households that have not adopted at-home broadband indicate it is because of a lack of access. Eric Fruits & Geoffrey Manne, Quack Attack: De Facto Rate Regulation in Telecommunications (ICLE Issue Brief 2023-03-30) at Table 1, available at https://laweconcenter.org/wp-content/uploads/2023/03/De-Facto-Rate-Reg-Final-1.pdf. And even this tiny fraction is driven by factors such as topography, population density, and projected consumer demand. Differences in these factors will be linked to differences in broadband deployment, but there is little that an ISP can do to change them. If the FCC’s command could make mountainous regions into flat plains, it would have done so already. It is nonsensical to hold a company liable for attempting to overcome obstacles to deployment simply because it cannot do so everywhere simultaneously. And it is not a rational course of action to address a digital divide by imposing liability on entities that cannot fix the underlying causes driving it.

Punishment exacted on an ISP will not produce the broadband access the statute envisions for all Americans. In fact, it will put that access further out of reach by incentivizing ISPs to reduce the speed of deployments and upgrades so that they do not produce inadvertent statistical disparities. Given the statute’s objective of enhancing broadband access, the FCC’s rulemaking must contain a process for achieving greater access. The Order does the opposite and, therefore, cannot be what Congress intended. Cf. Inclusive Communities, 576 U.S. at 544 (“If the specter of disparate-impact litigation causes private developers to no longer construct or renovate housing units for low-income individuals, then the FHA would have undermined its own purpose as well as the free-market system.”).

The Order will result in less broadband investment by essentially making the FCC the central planner of all deployment and pricing decisions. This is inconsistent with the purpose of Section 60506, making the rule arbitrary and capricious.

V. The Order’s Vagueness Gives the FCC Unbounded Power

The Order’s digital discrimination rule is vague because it does not have “sufficient definiteness that ordinary people can understand what conduct is prohibited.” Kolender v. Lawson, 461 U.S. 352, 357 (1983). As a result, the FCC has claimed unbounded power to engage in “arbitrary and discriminatory enforcement.” Id. As argued above, the disparate impact standard means that anything that is correlated with income, which includes many things that may be benignly relevant to deployment and pricing decisions, could give rise to a possible violation of the Order.

While a covered entity could argue that there are economic or technical feasibility reasons for a policy or practice, the case-by-case nature of enforcement outlined in the Order means that no one can be sure of whether they are on the right side of the law. See 47 CFR §16.5(e) (“The Commission will determine on a case-by-case basis whether genuine issues of technical or economic feasibility justified the adoption, implementation, or utilization of a [barred] policy or practice…”).

This vagueness is not cured by the presence of the Order’s advisory-opinion process, because the FCC retains the right to bring an enforcement action anyway after reconsidering, rescinding, or revoking an opinion. See 47 CFR §16.7 (“An advisory opinion states only the enforcement intention of the Enforcement Bureau as of the date of the opinion, and it is not binding on any party. Advisory opinions will be issued without prejudice to the Enforcement Bureau or the Commission to reconsider the questions involved, or to rescind or revoke the opinion. Advisory opinions will not be subject to appeal or further review”). In other words, there is no basis for concluding a covered entity has “the ability to clarify the meaning of the regulation by its own inquiry, or by resort to an administrative process.” Cf. Village of Hoffman Estates v. Flipside, Hoffman Estates, Inc., 455 U.S. 489, 498 (1982). The FCC may engage in utterly arbitrary and discriminatory enforcement under the Order.

Moreover, the Order’s expansive definition of covered entities to include any “entities that provide services that facilitate and affect consumer access to broadband internet access service,” 47 CFR § 16.2 (definition of “Covered entity,” which includes “[e]ntities that otherwise affect consumer access to broadband internet access service”), also leads to vagueness as to whom the digital discrimination rules apply. This definition arguably sweeps in state and local governments, nonprofits, and multi-family housing owners, many of whom may have no idea they are subject to the FCC’s digital discrimination rules, much less how to comply with them.

The Order is therefore void for vagueness because it does not allow a person of ordinary intelligence to know whether they are complying with the law and gives the FCC nearly unlimited enforcement authority.

CONCLUSION

For the foregoing reasons, ICLE and ITIF urge the Court to set aside the FCC’s Order.


ICLE/ITIF Amicus Brief Urges Court to Set Aside FCC’s Digital-Discrimination Rules


The Federal Communications Commission (FCC) recently adopted sweeping new rules designed to prevent so-called “digital discrimination” in the deployment, access, and adoption of broadband internet services. But an amicus brief filed by the International Center for Law & Economics (ICLE) and the Information Technology & Innovation Foundation (ITIF) with the 8th U.S. Circuit Court of Appeals argues that the rules go far beyond what Congress authorized.

It appears to us quite likely the court will vacate the new rules, because they exceed the authority Congress granted the FCC and undermine the very broadband investment and deployment that Congress wanted to encourage. In effect, the rules would set the FCC up as a central planner of all things broadband-related. In combination with the commission’s recent reclassification of broadband as a Title II service, the FCC has stretched its authority far beyond the breaking point.

Read the full piece here.


ICLE Reply Comments to FCC Re: Customer Blackout Rebates


I. Introduction

The International Center for Law & Economics (“ICLE”) thanks the Federal Communications Commission (“FCC” or “the Commission”) for the opportunity to offer reply comments to this notice of proposed rulemaking (“NPRM”), as the Commission proposes to require cable operators and direct-broadcast satellite (DBS) providers to grant their subscribers rebates when those subscribers are deprived of video programming they expected to receive during programming blackouts that resulted from failed retransmission-consent negotiations or failed non-broadcast carriage negotiations.[1]

As noted in the NPRM, the Communications Act of 1934 requires that cable operators and satellite-TV providers obtain a broadcast TV station’s consent in order to lawfully retransmit that station’s signal to subscribers. Commercial stations or networks may either (1) demand carriage pursuant to the Commission’s must-carry rules or (2) elect retransmission consent and negotiate for compensation in exchange for carriage. If a channel elects retransmission consent but is unable to reach an agreement for carriage, the cable operator or DBS provider loses the right to carry that signal. As a result, the cable operator or DBS provider’s subscribers typically lose access entirely to the channel’s signal unless and until the parties are able to reach an agreement, a situation that is often described as a “blackout.”

Blackouts tend to generate eye-catching headlines and often annoy affected consumers.[2] This annoyance is amplified when consumers don’t receive a rebate for the loss of signal, especially when they believe that they are merely bystanders in the dispute between the cable operator or DBS provider and the channel.[3] The Commission appears to echo these concerns, concluding that its proposed rebate mandate would ensure “subscribers are made whole when they face interruptions of service that are outside their control” and would prevent subscribers “from being charged for services for the period that they did not receive them.”[4]

This framing, however, oversimplifies retransmission-consent negotiations and mischaracterizes consumers’ agency in subscribing to and using multichannel-video-programming distributors (“MVPDs”). Moreover, there are numerous questions raised by the NPRM regarding the proposal’s feasibility, including how to identify which consumers would qualify for rebates, how those rebates would be calculated, and how they would be distributed. Several comments submitted in this proceeding suggest that any implementation of this proposal would be arbitrary and unfair to cable operators, DBS providers, and consumers. In particular:

  • Blackouts result from a temporary or permanent failure to reach an agreement in negotiations between channels and either cable operators or DBS providers. The Commission’s proposal explicitly and unfairly assigns liability for blackouts to the cable operator or DBS provider. As a result, the proposal would provide channels with additional negotiating leverage relative to the status quo. Smaller cable operators may be especially disadvantaged.
  • Each consumer is unique in how much they value a particular channel and how much they would be economically harmed by a blackout. For example, in the event of a cable or DBS blackout, some consumers can receive the programming via an over-the-air antenna or a streaming platform and would suffer close to no economic harm. Other consumers may assign no value to the blacked-out channel’s programming and would likewise suffer no harm.
  • Complexities and confidentiality in programming contracts would make it impossible to accurately or fairly calculate the price or cost associated with any given channel over some set period of time. For example, cable operators and DBS providers typically sell bundles of channels, not a la carte offerings, making it impossible to calculate an appropriate rebate for one specific channel or set of channels.
  • Even if it were possible to calculate an appropriate rebate, any mandated rebate based on such calculations would constitute prohibited rate regulation.

These reply comments respond to many of the issues raised in comments on this matter. We conclude that the Commission is proposing a set of unworkable and arbitrary rules. Even if rebates could be reasonably and fairly calculated, the amount of such rebates would likely be only a few dollars and may be as little as a few pennies. In such cases, the enormous cost to the Commission, cable operators, and DBS providers would be many times greater than the amount of rebates provided to consumers. It would be a much better use of the FCC’s and MVPD providers’ resources to abandon this rulemaking process and refrain from mandating rebates for programming blackouts.

II. Who Is to Blame for Blackouts?

As discussed above, it appears the FCC’s view is that consumers who experience blackouts are mere bystanders in a dispute, as the Commission invokes “consumer protection” and “customer service” as justifications for the proposed rules mandating rebates.[5] If we believe both that consumers are bystanders and that they are harmed by blackouts, then it is crucial to identify the parties to whom blame should be assigned for those blackouts. A key principle of the law & economics approach is that the party better-positioned to avoid the blackout should bear more—or, in some cases, all—of its costs.[6]

In comments submitted by Dish Network, William Zarakas and Jeremy Verlinda note that: “Programming fees are established through bilateral negotiations between content providers and MVPDs, and depend in large part on the relative bargaining position of the two sides.”[7] This comment illustrates the obvious but important fact that both content providers and MVPD operators must reach agreement and, in any given negotiation, either side may have more bargaining power. Because of this reality, it is impossible to draw general conclusions about which party will be the least-cost avoider of blackouts, as borne out in the submitted comments.

On the one hand, the ATVA argues that programmers are the cause of blackouts: “Blackouts happen to cable and satellite providers and their subscribers.”[8] NTCA supports this claim and reports that “[s]mall providers lack negotiating power in retransmission consent discussions.”[9] On the other hand, the NAB claims the “leading cause of such disruptions” is “the pay TV industry’s desire to use consumers as pawns to push for a change in law” and that MVPDs have a “strategy of creating negotiating impasses” in order to obtain a policy change.[10] Writing in Truth on the Market, Eric Fruits concludes:

With the wide range of programming and delivery options, it’s probably unwise to generalize who has the greater bargaining power in the current system. But if one had to choose, it seems that networks and, to a lesser extent, local broadcasters are in a slightly superior position. They have the right to choose must carry or retransmission and, in some cases, have alternative outlets (such as streaming) to distribute their programming.[11]

Peer-reviewed published research by Eun-A Park, Rob Frieden, and Krishna Jayakar attempts to identify the “predictors” of blackouts using a database of nearly 400 retransmission agreements executed between 2011 and 2018.[12] The authors identify three factors associated with more blackouts and longer blackouts:

  1. Cable, satellite, and other MVPDs with larger customer bases are associated with more frequent and longer blackouts;
  2. Multi-station broadcaster groups with network affiliations are associated with more frequent but shorter blackouts; and
  3. The National Football League (“NFL”) season (e.g., “must-see” real-time programming) has no significant relationship with blackout frequency, but when blackouts occur during the season, they are significantly shorter.

The simplistic takeaway is that everyone is to blame and, at the same time, no one is. Ultimately, Park and her co-authors conclude that “the statistical analysis is not able to identify the parties or the tactics responsible for blackouts.”[13] Based on this research, it is not clear which parties in given negotiations are more likely to be the least-cost avoiders of blackouts.

Nevertheless, the Commission’s proposal explicitly assigns liability for blackouts to cable operators and DBS providers.[14] Under the proposed rules, not only would cable operators and DBS providers suffer financial consequences, but they also would be made to face reputational harms stemming from a federal agency suggesting the fault for any retransmission-consent or carriage-agreement blackouts falls squarely on their shoulders.

Such reputational damage is almost certain to increase subscriber churn and impose additional subscriber-acquisition and retention costs on cable operators and DBS providers.[15] In comments on the Commission’s proposed rules for cable-operator and DBS-provider billing practices, ICLE reported that these costs are substantial and that, in addition to these costs, churn increases the uncertainty of cable-operator and DBS-provider revenues and profits.[16]

III. Consumers Are Not Bystanders

As noted earlier in these comments, the Commission’s proposal appears to be rooted in the belief that, when consumers experience a blackout, they are mere bystanders in a dispute between channels and cable operators or DBS providers. The Commission further seems to believe that the full force of the federal government is needed for these consumers to be “made whole.”[17] The implication is that consumers lack the foresight to anticipate the possibility of blackouts or the ability to respond to blackouts when they occur.

As the NPRM notes, subscribers are often informed of the risk of blackouts—and their consequences—in their service agreements with cable operators or DBS providers.[18] This is supported in ATVA’s comments:

Cable and satellite carriers make this quite clear in the contracts they offer subscribers—existing contracts which the Commission seeks to abrogate here. This language also makes clear that cable and satellite operators can and do change the programming offered in those bundles from time to time. … Cable and satellite providers add and subtract programming from their offerings to consumers frequently, and subscription agreements do not promise that all channels in a particular tier will be carried in perpetuity, let alone (with limited exception) assign a specific value to particular programming.[19]

The NPRM asks, “if a subscriber initiates service during a blackout, would that subscriber be entitled to a rebate or a lower rate?”[20] The question implicitly acknowledges that, for these subscribers, blackouts are not just a possibility, but a certainty. Yet they nonetheless enter into such agreements, knowing they may not be compensated for the interruption of service.

Many cable operators and DBS providers do offer credits[21] or other accommodations[22] to requesting subscribers affected by a blackout. In addition, many consumers have a number of options to circumvent a blackout by obtaining the programming elsewhere. Comments in this proceeding indicate that these options include the use of over-the-air antennas[23] or streaming services.[24] Given the many alternatives available in so many cases, it is unlikely that a blackout would deprive these consumers of the desired programming and any economic harm to them would be de minimis.

If cable or DBS blackouts are (or become) widespread or pernicious, consumers also have the ability to terminate service and switch providers, including by switching to streaming options. This is demonstrated by the well-known and widespread phenomenon of “cord cutting.” ATVA’s comments note that, in the third quarter of 2023, nearly one million subscribers canceled their traditional linear-television service, with just under 55% of occupied households now subscribing, the lowest share since 1989.[25] NYPSC concludes that, if the current trend of cord-cutting continues, “any final rules adopted here could become obsolete over time.”[26]

Due in part to cord cutting, ATVA reported that last year “several cable television companies either had already shut down their television services or were in the process of doing so.”[27] NTCA reports that nearly 40% of surveyed rural providers indicated they are not likely to continue service or already have plans to discontinue service, with many of them blaming the “difficulty negotiating retransmission consent agreements.”[28]

The fact that so many consumers are switching to alternatives to cable and DBS is a clear demonstration that they have the opportunity and ability to obtain programming from a wide range of competitive providers. This places them in the driver’s seat, rather than leaving them as helpless bystanders. It is telling that neither the NPRM nor any of the comments submitted to date offer any estimate of the cost to consumers associated with blackouts from retransmission-consent or carriage negotiations. This is likely because any such costs are either literally incalculable (i.e., impossible to calculate) or so small as to discourage any efforts at estimation. In either case, the Commission’s proposal to mandate and enforce blackout rebates looks to be a costly and time-consuming exercise that would yield little to no noticeable consumer benefits.

IV. Mandatory Rebates Will Increase Programmer Bargaining Power and Increase Prices to Cable and DBS Subscribers

A common theme of comments submitted in this matter is that the proposed rules would “place a thumb on the scale” in favor of channels relative to cable operators and DBS providers.[29] Without delving deeply into the esoteric details of bargaining theory, the comments identify two key factors that have, over time, improved programmers’ bargaining position relative to cable operators and DBS providers:

  1. Increased competition among MVPD providers, which has reduced cable and DBS bargaining power;[30] and
  2. Consolidation in the broadcast industry, which has increased programmer bargaining power.[31]

The Commission’s proposed rules are intended and designed to impose an additional cost on cable operators and DBS providers who do not reach an agreement with stations and networks, thereby diminishing the providers’ relative bargaining position. Because stations and networks are profit-maximizing enterprises, it would be reasonable to expect them to exploit this additional bargaining power to extract higher retransmission fees or other concessions.

Jeffrey Eisenach notes that the first “significant” retransmission agreement to involve monetary compensation from a cable provider to a broadcaster occurred in 2005.[32] By 2008, retransmission fees totaled $500 million, according to Variety.[33] By 2020, S&P Global reported that annual retransmission fees were approximately $12 billion.[34] This represents an average annual increase of 30% between 2008 and 2020. This is in line with Zarakas & Verlinda’s estimate that retransmission fees charged by local network stations have increased at annual growth rates of 9.8% to 61.0% since 2009.[35] According to information reported by the Pew Research Center, revenues from retransmission fees for local stations now nearly equal those stations’ advertising revenues (Figure 1).

[Figure 1: Local TV stations’ revenues from retransmission fees compared with advertising revenues (Pew Research Center)][36]
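For clarity, the 30% figure cited above is the compound annual growth rate implied by those two endpoints, computed over the 12 years between 2008 and 2020:

\[
\left(\frac{\$12 \text{ billion}}{\$0.5 \text{ billion}}\right)^{1/12} - 1 \;=\; 24^{1/12} - 1 \;\approx\; 0.30,
\]

or roughly 30% per year.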

Dish Network indicated that programmers have been engaged in an “aggressive campaign of imposing steep retransmission and carriage price increases on MVPDs.”[37] Simultaneous with these steep increases in retransmission fees, networks began imposing “reverse transmission compensation” on their affiliates.[38] Previously, networks paid local affiliates for airtime in order to run network advertisements during their programming. The new arrangements have reversed that flow of compensation, such that affiliates are now expected to compensate the networks, as explained in Variety:

Station owners also face increased pressure to secure top fees for their retrans rights because their Big Four network partners now demand that affiliate stations fork over a portion of their retrans windfall to help them pay for pricey franchises like the NFL, “American Idol” and high-end scripted series.[39]

Dish Network concludes: “While MVPDs and OVDs compete aggressively with each other, the programming price increases will likely be passed through to consumers despite that competition. The reason is that all MVPDs will face the same programming price increase.”[40] NCTA further notes that increased programming costs are “borne by the cable operator or passed onto the consumer.”[41]

The most recent research cited in the comments reports that MVPDs pass through approximately 100% of retransmission-fee increases in the form of higher subscription prices.[42] Aaron Heresco and Stephanie Figueroa provided examples of how increased retransmission fees are passed on to subscribers:

On the other side of the simplified ESPN transaction are MVPD ranging from global conglomerates like Spectrum/Time Warner to small local or independent cable carriers. These MVPD pay ESPN $7.21/subscriber/month for the right to carry/transmit ESPN content to subscribing households. MVPD, with a keen eye on profits and shareholder value, pass through the costs to consumers (irrespective of if subscribers actually watch ESPN or any other network) in the form of increased monthly cable bills. Not only does this suggest that the “free lunch” of TV programming isn’t free, it also indicates that the dynamic of revenue generation via viewership is changing. As another example, consider the case of the Weather Channel, which in 2014 asked for a $.01 increase in retransmission fees despite a 20% drop in ratings (Sahagian 2014). Viewers may demand access to the channel in case of weather emergencies but may only tune in to the channel a handful of times per year. Nonetheless, the demand for access to channels drive up retransmission revenue even if the day-to-day or week-to-week ratings are weak.[43]

In some cases, however, increased retransmission fees cannot be passed on in the form of higher subscription prices. As we noted above, NTCA reports that nearly 40% of surveyed rural providers indicated they are unlikely to continue service or already have plans to discontinue service, with many of them blaming the “difficulty negotiating retransmission consent agreements.”[44] The Commission’s proposed rules would not only lead to higher prices for consumers, but they may also reduce MVPD options for some consumers, as cable operators exit the industry.

V. Proposed Rebate Mandate Would be Arbitrary and Unworkable

The NPRM asks for comments on how to implement the proposed rebate mandate. In doing so, the NPRM identifies numerous factors that illustrate the arbitrary and unworkable nature of the Commission’s proposal:[45]

  • Should cable operators and DBS providers be required to pay rebates or provide credits?
  • Should rebates apply to any channel that is blacked out?
  • What if the parties never reach an agreement for carriage? For example, should subscribers be entitled to rebates in perpetuity?
  • How should rebates be calculated when terms of the retransmission-consent agreements are confidential?
  • Should the rebate be based on the cost that the cable operator or DBS provider paid to the programmer to retransmit or carry the channel prior to the carriage impasse?
  • How should rebates account for bundling?
  • If a subscriber initiates or renews a contract during a blackout, should the subscriber receive a rebate?
  • Should the Commission deem unenforceable service agreements that explicitly specify that the cable operator or DBS provider is not liable for credits or refunds if programming becomes unavailable? Should existing service agreements be abrogated?
  • How should rebates account for, e.g., advertising time as a component of the retransmission-consent agreement?

As we note above, when blackouts occur, many cable operators and DBS providers offer credits or other accommodations to requesting subscribers.[46] The NPRM “tentatively concludes” there is no legal distinction between “rebates,” “refunds,” and “credits.”[47] If the Commission approves rules mandating rebates in the event of blackouts, the rules should be sufficiently flexible to allow credits or other accommodations—such as providing over-the-air antennas or programming upgrades—to satisfy them.

The NPRM asks whether the proposed rebate rules should apply to any channel that is blacked out,[48] citing news stories regarding The Weather Channel.[49] The NPRM provides no context for these citations, but the cited articles suggest that The Weather Channel is of minimal value to most consumers. The channel had 105,000 primetime viewers in February 2024, slightly fewer than PopTV and slightly more than Disney Junior and VH1.[50] The Deadline article cited in the NPRM indicates that The Weather Channel averages 13 cents per subscriber, per month across pay-TV systems.[51] Much of the channel’s content is freely available on its website (weather.com) and app, and similar weather content is freely available across numerous sources and media.

The NPRM’s singling out of the Weather Channel highlights several flaws with the Commission’s proposal. The channel has low viewership, numerous competing substitutes for content, and is relatively low-cost. During a blackout, few subscribers would notice. Even fewer would suffer any harm and, if they did, the harm would be about 13 cents a month. It seems a waste of valuable resources to impose a complex regulatory regime to “make consumers whole” to the tune of pennies a month.

The NPRM asks whether the Commission should require rebates if the parties never reach a carriage agreement and, if so, whether those rebates should be provided in perpetuity.[52] NCTA points out that it would be impossible for any regulator to determine whether any particular blackout is the result of a negotiation impasse or of a business decision by the cable operator or DBS provider to no longer carry the channel.[53] For example, a channel may be dropped because of changes to the programming available on the channel.[54] Indeed, the programming offered at the beginning of a retransmission-consent agreement may be very different from the content provided at the time of renegotiation.[55] Moreover, it would be impossible to know with any certainty whether any carriage termination is temporary or permanent.[56] Verizon is correct to call this inquiry “absurd,”[57] as it proposes a “Hotel California” approach to carriage agreements, in which cable operators and DBS providers can check out, but they can never leave.

To illustrate the challenges of calculating a reasonable and economically coherent rebate, Dish Network offered a hypothetical set of three options for carriage of a local station and the Tennis Channel, both owned by Sinclair.[58]

  1. $4 for the local station on a tier serving all subscribers, no carriage of Tennis Channel;
  2. $2 for the local station and $2 for the Tennis Channel, both on tiers serving all subscribers; or
  3. $2 for the local station on a tier serving all subscribers and $4 for the Tennis Channel on a tier serving 50% of subscribers.

In this hypothetical, the cable operator or DBS provider is indifferent to the details of how the package is priced. Similarly, consumers are indifferent to the pricing details of the agreement. Under the Commission’s proposal, however, these details become critical to how a rebate would be calculated. In the event of a Tennis Channel blackout, either no subscriber would receive a rebate, every subscriber would receive a $2 rebate, or half of all subscribers would receive a $4 rebate—with the amount of rebate depending on how the agreement’s pricing was structured.
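To spell out the arithmetic (a simple illustration using only the hypothetical’s own figures), the per-subscriber rebate implied by each option during a Tennis Channel blackout would be:

\[
\begin{aligned}
\text{Option 1:}\quad & \$0 \text{ (channel not carried)} \\
\text{Option 2:}\quad & \$2 \text{ to every subscriber} \\
\text{Option 3:}\quad & \$4 \text{ to the } 50\% \text{ of subscribers on the relevant tier}
\end{aligned}
\]

Note that Options 2 and 3 imply the same average payout per subscriber (\(0.5 \times \$4 = \$2\)), yet individual subscribers would receive entirely different rebates depending solely on how the parties happened to structure a contract to whose terms the subscribers were never privy.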

Dish Network’s hypothetical demonstrates another consequence of the Commission’s proposal: the easiest way to avoid the risk of paying a rebate is to forgo carrying the channel. The hypothetical assumes a cable operator “does not particularly want to carry” the Tennis Channel, but is willing to do so in exchange for an agreement with Sinclair for the local station.[59] Under the Commission’s proposed rules, the risk of incurring the cost of providing rebates introduces another incentive to eschew carriage of the Tennis Channel.

One reason Dish Network presented a hypothetical instead of an “actual” example is that, as noted in several comments, carriage agreements are subject to confidentiality provisions.[60] Separate and apart from the impossibility of allocating a rebate across the various terms of an agreement, even if the terms were known, such an exercise would require abrogating these confidentiality agreements between the negotiating parties.

The NPRM asks whether it would be reasonable to require a cable operator or DBS provider to rebate the cost that it paid to the programmer to retransmit or carry the channel prior to the carriage impasse.[61] The NPRM cites Spectrum Northeast LLC v. Frey, a case involving early-termination fees in which the 1st U.S. Circuit Court of Appeals stated that “[a] termination event ends cable service, and a rebate on termination falls outside the ‘provision of cable service.’”[62] In the NPRM, the Commission “tentatively conclude[s] that the courts’ logic” in Spectrum Northeast “applies to the rebate requirement for blackouts.”[63]

If the Commission accepts the court’s logic that a termination event ends service on the consumer side, then it would be reasonable to conclude that the end of a retransmission or carriage agreement similarly ends service. To base a rebate on a prior agreement would mean basing the rebate on a fiction—an agreement that does not exist.

To illustrate, consider Dish Network’s hypothetical. Assume the initial agreement is Option 2 ($2 for the local station and $2 for the Tennis Channel, both on tiers serving all subscribers). The negotiations stall, leading to a blackout. Assume the parties eventually agree to Option 1, in which the Tennis Channel is no longer carried. Would subscribers be due a rebate for a channel that is no longer carried? Or, if the parties instead agree to Option 3 ($2 for the local station on a tier serving all subscribers and $4 for the Tennis Channel on a tier serving 50% of subscribers), would all subscribers be due a $2 rebate for the Tennis Channel, or would half of subscribers be due a $4 rebate? There is no “good” answer because any answer is necessarily arbitrary and devoid of economic logic.

As noted above, many retransmission and carriage agreements involve “bundles” of programming,[64] as well as “a wide range of pricing and non-pricing terms.”[65] Moreover, ATVA reports that subscribers purchase bundled programming, rather than individual channels, and that consumers are well-aware of bundling when they enter into service agreements with cable operators and DBS providers.[66] NCTA reports that bundling complicates the already-complex challenge of allocating costs across specific channels over specific periods of time.[67] Thus, any attempt to do so with an eye toward mandating rebates during blackouts is likewise arbitrary and devoid of economic logic.

In summary, the Commission is proposing a set of unworkable and arbitrary rules to distribute rebates to consumers during programming blackouts. Even if such rebates could be reasonably and fairly calculated, the sums involved would likely be only a few dollars, and may be as little as a few pennies. In these cases, the enormous costs to the Commission, cable operators, and DBS providers would be many times greater than the rebates provided to consumers. It would be a much better use of the FCC’s and MVPDs’ resources to abandon this rulemaking process and refrain from mandating rebates for programming blackouts.

[1] Notice of Proposed Rulemaking, In the Matter of Customer Rebates for Undelivered Video Programming During Blackouts, MB Docket No. 24-20 (Jan. 17, 2024), available at https://docs.fcc.gov/public/attachments/FCC-24-2A1.pdf [hereinafter “NPRM”], at para. 1.

[2] See id. at nn. 5, 7.

[3] Eric Fruits, Blackout Rebates: Tipping the Scales at the FCC, Truth on the Market (Mar. 6, 2024), https://truthonthemarket.com/2024/03/06/blackout-rebates-tipping-the-scales-at-the-fcc.

[4] NPRM, supra note 1 at para. 10.

[5] NPRM, supra note 1 at para. 13 (proposed rules “provide basic protections for cable customers”) and para. 7 (“How would requiring cable operators and DBS providers to provide rebates or credits change providers’ current customer service relations during a blackout?”).

[6] This is known as the “least-cost avoider” or “cheapest-cost avoider” principle. See Harold Demsetz, When Does the Rule of Liability Matter?, 1 J. Legal Stud. 13, 28 (1972); see generally Ronald Coase, The Problem of Social Cost, 3 J. L. & Econ. 1 (1960).

[7] Comments of DISH Network LLC, MB Docket No. 24-20 (Mar. 8, 2024), https://www.fcc.gov/ecfs/document/1030975783920/1 [hereinafter “DISH Comments”], Exhibit 1, Declaration of William Zarakas & Jeremy Verlinda [hereinafter “Zarakas & Verlinda”] at ¶ 8.

[8] Comments of the American Television Alliance, MB Docket No. 24-20 (Mar. 8, 2024), https://www.fcc.gov/ecfs/document/103082522212825/1 [hereinafter “ATVA Comments”] at i and 2 (“Broadcasters and programmers cause blackouts. This is, of course, true as a legal matter, as cable and satellite providers cannot lawfully deliver programming to subscribers without the permission of the rightsholder. It makes no sense to say that a cable or satellite provider has ‘blacked out’ programming by failing to obtain permission to carry it. A programmer ‘blacks out’ programming by declining to grant such permission.”).

[9] Comments of NTCA—The Rural Broadband Association, MB Docket No. 24-20 (Mar. 8, 2024), https://www.fcc.gov/ecfs/document/10308589412414/1 [hereinafter “NTCA Comments”] at 2.

[10] Comments of the National Association of Broadcasters, MB Docket No. 24-20 (Mar. 8, 2024), https://www.fcc.gov/ecfs/document/1030894019700/1 [hereinafter “NAB Comments”] at 4-5.

[11] Fruits, supra note 3.

[12] Eun-A Park, Rob Frieden, & Krishna Jayakar, Factors Affecting the Frequency and Length of Blackouts in Retransmission Consent Negotiations: A Quantitative Analysis, 22 Int’l. J. Media Mgmt. 117 (2020).

[13] Id. at 131.

[14] NPRM, supra note 1 at paras. 4, 6 (“We seek comment on whether and how to require cable operators and DBS providers to give their subscribers rebates when they blackout a channel due to a retransmission consent dispute or a failed negotiation for carriage of a non-broadcast channel.”); id. at para. 9 (“We tentatively conclude that sections 335 and 632 of the Act provide us with authority to require cable operators and DBS providers to issue a rebate to their subscribers when they blackout a channel.”) [emphasis added].

[15] See Zarakas & Verlinda, supra note 7 at para. 14 (blackouts are costly “in the form of lost subscribers and higher incidence of retention rebates”).

[16] Comments of the International Center for Law & Economics, MB Docket No. 23-405 (Feb. 5, 2024), https://www.fcc.gov/ecfs/document/10204246609086/1 at 9-10 (“In its latest quarterly report to the Securities and Exchange Commission, DISH Network reported that it incurs ‘significant upfront costs to acquire Pay-TV’ subscribers, amounting to subscriber acquisition costs of $1,065 per new DISH TV subscriber. The company also reported that it incurs ‘significant’ costs to retain existing subscribers. These retention costs include upgrading and installing equipment, as well as free programming and promotional pricing, ‘in exchange for a contractual commitment to receive service for a minimum term.’”).

[17] See NPRM, supra note 1 at paras. 4, 8, 10 (using “make whole” language).

[18] See id. at n. 7, citing Spectrum Residential Video Service Agreement (“In the event particular programming becomes unavailable, either on a temporary or permanent basis, due to a dispute between Spectrum and a third party programmer, Spectrum shall not be liable for compensation, damages (including compensatory, direct, indirect, incidental, special, punitive or consequential losses or damages), credits or refunds of fees for the missing or omitted programming. Your sole recourse in such an event shall be termination of the Video Services in accordance with the Terms of Service.”) and para. 6 (“To the extent that the existing terms of service between a cable operator or DBS provider and its subscriber specify that the cable operator or DBS provider is not liable for credits or refunds in the event that programming becomes unavailable, we seek comment on whether to deem such provisions unenforceable if we were to adopt a rebate requirement.”).

[19] ATVA Comments, supra note 8 at 11.

[20] NPRM, supra note 1 at para. 6.

[21] See ATVA Comments, supra note 8 at 3 (“The Commission seeks information on the extent to which MVPDs grant rebates today. The answer is that, in today’s competitive marketplace, many ATVA members provide credits, with significant variations both among providers and among classes of subscribers served by individual providers. This, in turn, suggests that cable and satellite companies already address the issues identified by the Commission, but in a more nuanced and individualized manner than proposed in the Notice.”). See also id. at 5-6 (reporting DIRECTV provides credits to existing customers and makes the offer of credits easy to find online or via customer service representatives). See also id. at 7 (reporting DIRECTV and DISH provide credits to requesting subscribers and Verizon compensates subscribers “in certain circumstances”).

[22] See Zarakas & Verlinda, supra note 7 at para. 21 (“DISH provides certain offers to requesting customers in the case of programming blackouts, which may include a $5 per month credit, a free over-the-air antenna for big 4 local channel blackouts, or temporary free programming upgrades for cable network blackouts.”).

[23] See id. at para. 21.

[24] See ATVA Comments, supra note 8 at 4 (“If Disney blacks out ESPN on a cable system, for example, subscribers still have many ways to get ESPN. This includes both traditional competitors to cable (which are losing subscribers) and a wide array of online video providers (which are gaining subscribers).”); Comments of Verizon, MB Docket No. 24-20 (Mar. 8, 2024), https://www.fcc.gov/ecfs/document/10308316105453/1 [hereinafter “Verizon Comments”] at 12 (“In today’s competitive marketplace, consumers have many options for viewing broadcasters’ content in the event of a blackout — they can switch among MVPDs, or forgo MVPD services altogether and watch on a streaming platform or over the air. And when a subscriber switches or cancels service, it is extremely costly for video providers to win them back.”); DISH Comments, supra note 7 at 7 (“[L]ocal network stations have also been able to use another lever: the phenomenal success of over-the-top video streaming and the emergence of several online video distributors (‘OVDs’), some of which have begun incorporating local broadcast stations in their offerings.”); Comments of the New York State Public Service Commission, MB Docket 24-20 (Mar. 8, 2024), https://www.fcc.gov/ecfs/document/10308156370046/1 [hereinafter “NYPSC Comments”] at 2 (identifying streaming services and Internet Protocol Television (IPTV) providers such as YouTube TV, Sling, and DirecTV Stream as available alternatives).

[25] See ATVA Comments, supra note 8 at 4.

[26] NYPSC Comments, supra note 24 at 2.

[27] ATVA Comments, supra note 8 at 4-5.

[28] NTCA Comments, supra note 9 at 3; see Luke Bouma, Another Cable TV Company Announces It Will Shut Down Its TV Service Because of “Extreme Price Increases from Programmers,” Cord Cutters News (Dec. 10, 2023), https://cordcuttersnews.com/another-cable-tv-company-announces-it-will-shut-down-its-tv-service-because-of-extreme-price-increases-from-programmers (reporting the announced shutdown of DUO Broadband’s cable TV and streaming TV services because of increased programming fees, affecting several Kentucky counties).

[29] ATVA Comments, supra note 8 at n. 15; DISH Comments, supra note 7 at 3, 8; NAB Comments, supra note 10 at 5; Comments of NCTA—The Internet & Television Association, MB Docket No. 24-20 (Mar. 8, 2024), https://www.fcc.gov/ecfs/document/1030958439598/1 [hereinafter “NCTA Comments”] at 2, 11.

[30] See ATVA Comments, supra note 8 at n. 19 (“With more distributors, programmers ‘lose less’ if they fail to reach agreement with any individual cable or satellite provider.”); Zarakas & Verlinda, supra note 7 at para. 6 (“This bargaining power has been further exacerbated by the increase in the number of distribution platforms coming from the growth of online video distributors. The bargaining leverage of cable networks has also received a boost from the proliferation of distribution platforms.”); id. at para. 13 (“Growth of OVDs has reduced MVPD bargaining leverage”).

[31] See DISH Comments, supra note 7 at 6 (“For one thing, the consolidation of the broadcast industry over the last ten years has exacerbated the imbalance further. This consolidation, fueled itself by the broadcasters’ interest in ever-steeper retransmission price increases, has effectively been a game of “and then there were none,” with small independent groups of two or three stations progressively vanishing from the picture.”); Zarakas & Verlinda, supra note 7 at para. 6 (concluding consolidation among local networks is associated with increased retransmission fees).

[32] See Jeffrey A. Eisenach, The Economics of Retransmission Consent, at 9 n.22 (Empiris LLC, Mar. 2009), available at https://nab.org/documents/resources/050809EconofRetransConsentEmpiris.pdf.

[33] See Robert Marich, TV Faces Blackout Blues, Variety (Dec. 10, 2011), https://variety.com/2011/tv/news/tv-faces-blackout-blues-1118047261.

[34] See Economics of Broadcast TV Retransmission Revenue 2020, S&P Global Mkt. Intelligence (2020), https://www.spglobal.com/marketintelligence/en/news-insights/blog/economics-of-broadcast-tv-retransmission-revenue-2020.

[35] Cf. Zarakas & Verlinda, supra note 7 at para. 6.

[36] Retransmission Fee Revenue for U.S. Local TV Stations, Pew Research Center (Jul. 2022), https://www.pewresearch.org/journalism/chart/sotnm-local-tv-u-s-local-tv-station-retransmission-fee-revenue; Advertising Revenue for Local TV, Pew Research Center (Jul. 13, 2021), https://www.pewresearch.org/journalism/chart/sotnm-local-tv-advertising-revenue-for-local-tv.

[37] DISH Comments, supra note 7 at 4.

[38] Park, et al., supra note 12 at 118 (“With stations receiving more retransmission compensation, a new phenomenon has also emerged since the 2010s: reverse retransmission revenues, whereby networks receive a portion of their affiliates and owned-and-operated stations’ retransmission revenues. As retransmission fees have become more important to television stations, broadcast networks and MVPDs, negotiations over contract terms and fees have become more contentious and protracted.”).

[39] Marich, supra note 33.

[40] DISH Comments, supra note 7 at 11.

[41] NCTA Comments, supra note 29 at 2.

[42] See Zarakas & Verlinda, supra note 7 at para. 15 (citing George S. Ford, A Retrospective Analysis of Vertical Mergers in Multichannel Video Programming Distribution Markets: The Comcast-NBCU Merger, Phoenix Ctr. for Advanced L. & Econ. Pub. Pol’y Studies (Dec. 2017), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3138713).

[43] Aaron Heresco & Stephanie Figueroa, Over the Top: Retransmission Fees and New Commodities in the U.S. Television Industry, 29 Democratic Communiqué 19, 36 (2020).

[44] NTCA Comments, supra note 9 at 3.

[45] NPRM, supra note 1 at paras. 6-8.

[46] See supra notes 21-22 and accompanying text.

[47] NPRM, supra note 1 at n. 9.

[48] See id. at para. 6.

[49] See id. at n. 12 (citing Alex Weprin, Weather Channel Brushes Off a Blackout, Politico (Feb. 6, 2014), https://www.politico.com/media/story/2014/02/weather-channel-brushes-off-a-blackout-001667); David Lieberman, The Weather Channel Returns To DirecTV, Deadline (Apr. 8, 2014), https://deadline.com/2014/04/the-weatherchannel-returns-directv-deal-711602.

[50] See U.S. Television Networks, USTVDB (retrieved Mar. 28, 2024), https://ustvdb.com/networks.

[51] See Lieberman, supra note 49.

[52] See NPRM, supra note 1 at para. 6.

[53] See NCTA Comments, supra note 29 at 5.

[54] See id. at 3; see also Lieberman, supra note 49 (indicating that carriage consent agreement ending a blackout of The Weather Channel on DIRECTV required The Weather Channel to cut its reality programming by half on weekdays).

[55] See Alex Weprin & Lesley Goldberg, What’s Next for Freeform After Being Dropped by Charter, Hollywood Reporter (Dec. 14, 2023), https://www.hollywoodreporter.com/tv/tv-news/freeform-disney-charter-hulu-1235589827 (reporting that Freeform is a Disney-owned cable channel that currently caters to younger women; the channel began as a spinoff of the Christian Broadcasting Network, was subsequently rebranded as The Family Channel, then Fox Family Channel, and then ABC Family, before rebranding as Freeform).

[56] See NCTA Comments, supra note 29 at 5.

[57] Verizon Comments, supra note 24 at 13 (“Also, as the Commission points out, ‘What if the parties never reach an agreement for carriage? Would subscribers be entitled to rebates in perpetuity and how would that be calculated?’ The absurdity of these questions underscores the absurdity of the proposed regulation.”)

[58] See DISH Comments, supra note 7 at 13.

[59] Id.; see also id. at 22 (“Broadcasters increasingly demand that an MVPD agree to carry other broadcast stations or cable networks as a condition of obtaining retransmission consent for the broadcaster’s primary signal, without giving a real economic alternative to carrying just the primary signal(s).”)

[60] ATVA Comments, supra note 8 at 13 (“[t]here is the additional complication that cable and satellite companies generally agree to confidentiality provisions with broadcasters and programmers—typically at the insistence of the broadcaster or programmer”); DISH Comments, supra note 7 at 21 (reporting broadcasters and programmers “insist” on confidentiality); NCTA Comments, supra note 29 at 6 (“It also bears emphasis that this approach would necessarily publicly expose per-subscriber rates and other highly confidential business information, and that the contracts between the parties prohibit disclosure of this and other information that each find competitively sensitive.”).

[61] NPRM, supra note 1 at para. 8.

[62] Spectrum Northeast, LLC v. Frey, 22 F.4th 287, 293 (1st Cir. 2022), cert. denied, 143 S. Ct. 562 (2023); see also In the Matter of Promoting Competition in the American Economy: Cable Operator and DBS Provider Billing Practices, MB Docket No. 23-405, at n. 55 (Jan. 5, 2024), available at https://docs.fcc.gov/public/attachments/DOC-398660A1.pdf.

[63] NPRM, supra note 1 at para. 13.

[64] See supra note 59 and accompanying text for an example of a bundle.

[65] NCTA Comments, supra note 29 at 6.

[66] ATVA Comments, supra note 8 at 11.

[67] NCTA Comments, supra note 29 at 6.


ICLE Comments to NTIA on Dual-Use Foundation AI Models with Widely Available Model Weights


I. Introduction

We thank the National Telecommunications and Information Administration (NTIA) for the opportunity to contribute to this request for comments (RFC) in the “Dual Use Foundation Artificial Intelligence Models with Widely Available Model Weights” proceeding. In these comments, we endeavor to offer recommendations to foster the innovative and responsible production of artificial intelligence (AI), encompassing both open-source and proprietary models. Our comments are guided by a belief in the transformative potential of AI, while recognizing NTIA’s critical role in guiding the development of regulations that not only protect consumers but also enable this dynamic field to flourish. The agency should seek to champion a balanced and forward-looking approach toward AI technologies that allows them to evolve in ways that maximize their social benefits, while navigating the complexities and challenges inherent in their deployment.

NTIA’s question “How should [the] potentially competing interests of innovation, competition, and security be addressed or balanced?”[1] gets to the heart of ongoing debates about AI regulation. There is no panacea to be discovered, as all regulatory choices require balancing tradeoffs. It is crucial to bear this in mind when evaluating, e.g., regulatory proposals that implicitly treat AI as inherently dangerous and regard it as obvious that stringent regulation is the only effective strategy to mitigate such risks.[2] Such presumptions discount AI’s unknown but potentially enormous capacity to produce innovation, and inadequately account for other tradeoffs inherent to imposing a risk-based framework (e.g., requiring disclosure of trade secrets or particular kinds of transparency that could yield new cybersecurity attack vectors). Adopting an overly cautious stance risks not only stifling AI’s evolution, but also precluding a full exploration of its potential to foster social, economic, and technological advancement. A more restrictive regulatory environment may also render AI technologies more homogeneous and smother development of the kinds of diverse AI applications needed to foster robust competition and innovation.

We observe this problematic framing in the executive order (EO) that serves as the provenance of this RFC.[3] The EO repeatedly proclaims the importance of “[t]he responsible development and use of AI” in order to “mitigat[e] its substantial risks.”[4] Specifically, the order highlights concerns over “dual-use foundation models”—i.e., AI systems that, while beneficial, could pose serious risks to national security, national economic security, national public health, or public safety.[5] Concerningly, one of the categories the EO flags as illicit “dual use” is systems “permitting the evasion of human control or oversight through means of deception or obfuscation.”[6] This open-ended category could be interpreted so broadly that essentially any general-purpose generative-AI system would qualify.

The EO also repeatedly distinguishes “open” versus “closed” approaches to AI development, while calling for “responsible” innovation and competition.[7] On our reading, the emphasis the EO places on this distinction raises alarm bells about the administration’s inclination to stifle innovation through overly prescriptive regulatory frameworks, diminishment of the intellectual property rights that offer incentives for innovation, and regulatory capture that favors incumbents over new entrants. In favoring one model of AI development over another, the EO’s prescriptions could inadvertently hamper the dynamic competitive processes that are crucial both for technological progress and for the discovery of solutions to the challenges that AI technology poses.

Given the inchoate nature of AI technology—to say nothing of the uncertain markets in which that technology will ultimately be deployed and commercialized—NTIA has an important role to play in elucidating for policymakers the nuances that might lead innovators to choose an open or closed development model, without presuming that one model is inherently better than the other—or that either is necessarily “dangerous.” Ultimately, the preponderance of AI risks will almost certainly emerge idiosyncratically. It will be incumbent on policymakers to address such risks in an iterative fashion as they become apparent. For now, it is critical to resist the urge to enshrine crude and blunt categories for the heterogeneous suite of technologies currently gathered under the broad banner of “AI.”

Section II of these comments highlights the importance of grounding AI regulation in actual harms, rather than speculative risks, while outlining the diversity of existing AI technologies and the need for tailored approaches. Section III begins with a discussion of some of the benefits and challenges posed by both open and closed approaches to AI development, while cautioning against overly prescriptive definitions of “openness” and advocating flexibility in regulatory frameworks. It proceeds to examine the EO’s prescription to regulate so-called “dual-use” foundation models, underscoring some potential unintended consequences for open-source AI development and international collaboration. Section IV offers some principles to craft an effective regulatory model for AI, including distinguishing between low-risk and high-risk applications, avoiding static regulatory approaches, and adopting adaptive mechanisms like regulatory sandboxes and iterative rulemaking. Section V concludes.

II. Risk Versus Harm in AI Regulation

In many of the debates surrounding AI regulation, disproportionate focus is placed on the need to mitigate risks, without sufficient consideration of the immense benefits that AI technologies could yield. Moreover, because these putative risks remain largely hypothetical, proposals to regulate AI descend quickly into an exercise in shadowboxing.

Indeed, there is no single coherent definition of what even constitutes “AI.” The term encompasses a wide array of technologies, methodologies, and applications, each with distinct characteristics, capabilities, and implications for society. From foundational models that can generate human-like text, to algorithms capable of diagnosing diseases with greater accuracy than human doctors, to “simple” algorithms that facilitate a more tailored online experience, AI applications and their underlying technologies are as varied as they are transformative.

This diversity has profound implications for the regulation and development of AI. Very different regulatory considerations are relevant to AI systems designed for autonomous vehicles than for those used in financial algorithms or creative-content generation. Each application domain comes with its own set of risks, benefits, ethical dilemmas, and potential social impacts, necessitating tailored approaches to each use case. And none of these properties of AI map clearly onto the “open” and “closed” designations highlighted by the EO and this RFC. This counsels for focus on specific domains and specific harms, rather than how such technologies are developed.[8]

As in prior episodes of fast-evolving technologies, what is considered cutting-edge AI today may be obsolete tomorrow. This rapid pace of innovation further complicates the task of crafting policies and regulations that will be both effective and enduring. Policymakers and regulators must navigate this terrain with a nuanced understanding of AI’s multifaceted nature, including by embracing flexible and adaptive regulatory frameworks that can accommodate AI’s continuing evolution.[9] A one-size-fits-all approach could inadvertently stifle innovation or entrench the dominance of a few large players by imposing barriers that disproportionately affect smaller entities or emerging technologies.

Experts in law and economics have long scrutinized both market conduct and regulatory rent seeking that serve to enhance or consolidate market power by disadvantaging competitors, particularly through increasing the costs incurred by rivals.[10] Various tactics may be employed to undermine competitors or exclude them from the market that do not involve direct price competition. It is widely recognized that “engaging with legislative bodies or regulatory authorities to enact regulations that negatively impact competitors” produces analogous outcomes.[11] It is therefore critical that the emerging markets for AI technologies not engender opportunities for firms to acquire regulatory leverage over rivals. Instead, recognizing the plurality of AI technologies and encouraging a multitude of approaches to AI development could help to cultivate a more vibrant and competitive ecosystem, driving technological progress forward and maximizing AI’s potential social benefits.

This overarching approach counsels skepticism about risk-based regulatory frameworks that fail to acknowledge how the theoretical harms of one type of AI system may be entirely different from those of another. Obviously, the regulation of autonomous drones is a very different sort of problem than the regulation of predictive policing or automated homework tutors. Even within a single circumscribed domain of generative AI—such as “smart chatbots” like ChatGPT or Claude—different applications may present entirely different kinds of challenges. A highly purpose-built version of such a system might be employed by government researchers to develop new materiel for the U.S. Armed Forces, while a general-purpose commercial chatbot would employ layers of protection to ensure that ordinary users couldn’t learn how to make advanced weaponry. Rather than treating “chatbots” as possible vectors for weapons development, a more appropriate focus would target high-capability systems designed to assist in developing such systems. If a general-purpose chatbot inadvertently revealed some information on building weapons, its creators would have every incentive to treat that as a bug to fix, not a feature to expand.

Take, for example, the recent public response to the much less problematic AI-system malfunctions that accompanied Google’s release of its Gemini program.[12] Gemini was found to generate historically inaccurate images, such as ethnically diverse U.S. senators from the 1800s, including women.[13] Google quickly acknowledged that it did not intend for Gemini to create inaccurate historical images and turned off the image-generation feature to allow time for the company to work on significant improvements before re-enabling it.[14] While Google blundered in its initial release, it had every incentive to discover and remedy the problem. The market response provided further incentive for Google to get it right in the future.[15] Placing the development of such systems under regulatory scrutiny because some users might be able to jailbreak a model and generate some undesirable material would create disincentives to the production of AI systems more generally, with little gained in terms of public safety.

Rather than focus on the speculative risks of AI, it is essential to ground regulation in the need to address tangible harms that stem from the observed impacts of AI technologies on society. Moreover, focusing on realistic harms would facilitate a more dynamic and responsive regulatory approach. As AI technologies evolve and new applications emerge, so too will the potential harms. A regulatory framework that prioritizes actual harms can adapt more readily to these changes, enabling regulators to update or modify policies in response to new evidence or social impacts. This flexibility is particularly important for a field like AI, where technological advancements could quickly outpace regulation, creating gaps in oversight that may leave individuals and communities vulnerable to harm.

Furthermore, like any other body of regulatory law, AI regulation must be grounded in empirical evidence and data-driven decision making. Demanding a solid evidentiary basis as a threshold for intervention would help policymakers to avoid the pitfalls of reacting to sensationalized or unfounded AI fears. This would not only enhance regulators’ credibility with stakeholders, but would also ensure that resources are dedicated to addressing the most pressing and substantial issues arising from the development of AI.

III. The Regulation of Foundation Models

NTIA is right to highlight the tremendous promise that attends the open development of AI technologies:

Dual use foundation models with widely available weights (referred to here as open foundation models) could play a key role in fostering growth among less resourced actors, helping to widely share access to AI’s benefits…. Open foundation models can be readily adapted and fine-tuned to specific tasks and possibly make it easier for system developers to scrutinize the role foundation models play in larger AI systems, which is important for rights- and safety-impacting AI systems (e.g. healthcare, education, housing, criminal justice, online platforms etc.)

…Historically, widely available programming libraries have given researchers the ability to simultaneously run and understand algorithms created by other programmers. Researchers and journals have supported the movement towards open science, which includes sharing research artifacts like the data and code required to reproduce results.[16]

The RFC proceeds to seek input on how to define “open” and “widely available.”[17] These, however, are the wrong questions. NTIA should instead proceed from the assumption that there are no harms inherent to either “open” or “closed” development models; it should be seeking input on anything that might give rise to discrete harms in either open or closed systems.

NTIA can play a valuable role by recommending useful alterations to existing law where gaps currently exist, regardless of the business or distribution model employed by the AI developer. In short, there is nothing necessarily more or less harmful about adopting an “open” or a “closed” approach to software systems. The decision to pursue one path over the other will be made based on the relevant tradeoffs that particular firms face. Embedding such distinctions in regulation is arbitrary, at best, and counterproductive to the fruitful development of AI, at worst.

A. ‘Open’ or ‘Widely Available’ Model Weights

To the extent that NTIA is committed to drawing distinctions between “open” and “closed” approaches to developing foundation models, it should avoid overly prescriptive definitions of what constitutes “open” or “widely available” model weights that could significantly hamper the progress and utility of AI technologies.

Narrow definitions risk creating artificial boundaries that fail to accurately reflect AI’s technical and operational realities. They could also inadvertently exclude or marginalize innovative AI models that fall outside those rigid parameters, despite their potential to contribute positively to technological advancement and social well-being. For instance, a definition of “open” that requires complete public accessibility without any form of control or restriction might discourage organizations from sharing their models, fearing misuse or loss of intellectual property.

Moreover, prescriptive definitions could stifle the organic growth and evolution of AI technologies. The AI field is characterized by its rapid pace of change, where today’s cutting-edge models may become tomorrow’s basic tools. Prescribing fixed criteria for what constitutes “openness” or “widely available” risks anchoring the regulatory landscape to this specific moment in time, leaving the regulatory framework less able to adapt to future developments and innovations.

Given AI developers’ vast array of applications, methodologies, and goals, it is imperative that any definitions of “open” or “widely available” model weights embrace flexibility. A flexible approach would acknowledge how the various stakeholders within the AI ecosystem have differing needs, resources, and objectives, from individual developers and academic researchers to startups and large enterprises. A one-size-fits-all definition of “openness” would fail to accommodate this diversity, potentially privileging certain forms of innovation over others and skewing the development of AI technologies in ways that may not align with broader social needs.

Moreover, flexibility in defining “open” and “widely available” must allow for nuanced understandings of accessibility and control. There can, for example, be legitimate reasons to limit openness, such as protecting sensitive data, ensuring security, and respecting intellectual-property rights, while still promoting a culture of collaboration and knowledge sharing. A flexible regulatory approach would seek a balanced ecosystem where the benefits of open AI models are maximized, and potential risks are managed effectively.

B. The Benefits of ‘Open’ vs ‘Closed’ Business Models

NTIA asks:

What benefits do open model weights offer for competition and innovation, both in the AI marketplace and in other areas of the economy? In what ways can open dual-use foundation models enable or enhance scientific research, as well as education/training in computer science and related fields?[18]

An open approach to AI development has obvious benefits, as NTIA has itself acknowledged in other contexts.[19] Open-foundation AI models represent a transformative force, characterized by their accessibility, adaptability, and potential for widespread application across various sectors. The openness of these models may serve to foster an environment conducive to innovation, wherein developers, researchers, and entrepreneurs can build on existing technologies to create novel solutions tailored to diverse needs and challenges.

The inherent flexibility of open-foundation models can also catalyze a competitive market, encouraging a healthy ecosystem where entities ranging from startups to established corporations may all participate on roughly equal footing. By lowering some entry barriers related to access to basic AI technologies, this competitive environment can further drive technological advancements and price efficiencies, ultimately benefiting consumers and society at large.

But more “closed” approaches can also prove very valuable. As NTIA notes in this RFC, it is rarely the case that a firm pursues a purely open or closed approach. These terms exist along a continuum, and firms blend models as necessary.[20] And just as firms readily mix elements of open and closed business models, a regulator should be agnostic about the precise mix that firms employ, which ultimately must align with the realities of market dynamics and consumer preferences.

Both open and closed approaches offer distinct benefits and potential challenges. For instance, open approaches might excel in fostering a broad and diverse ecosystem of applications, thereby appealing to users and developers who value customization and variety. They can also facilitate a more rapid dissemination of innovation, as they typically impose fewer restrictions on the development and distribution of new applications. Conversely, closed approaches, with their curated ecosystems, often provide enhanced security, privacy, and a more streamlined user experience. This can be particularly attractive to users less inclined to navigate the complexities of open systems. Under the right conditions, closed systems can likewise foster a healthy ecosystem of complementary products.

The experience of modern digital platforms demonstrates that there is no universally optimal approach to structuring business activities, thus illustrating the tradeoffs inherent in choosing among open and closed business models. The optimal choice depends on the specific needs and preferences of the relevant market participants. As Jonathan M. Barnett has noted:

Open systems may yield no net social gain over closed systems, can pose a net social loss under certain circumstances, and . . . can impose a net social gain under yet other circumstances.[21]

Similar considerations apply in the realm of AI development. Closed or semi-closed ecosystems can offer such advantages as enhanced security and curated offerings, which may appeal to certain users and developers. These benefits, however, may come at the cost of potentially limited innovation, as a firm must rely on its own internal processes for research and development. Open models, on the other hand, while fostering greater collaboration and creativity, may also introduce risks related to quality control, intellectual-property protection, and a host of other concerns that may be better controlled in a closed business model. Even along innovation dimensions, closed platforms can in many cases outperform open models.

With respect to digital platforms like the App Store and Google Play Store, there is a “fundamental welfare tradeoff between two-sided proprietary…platforms and two-sided platforms which allow ‘free entry’ on both sides of the market.”[22] Consequently, “it is by no means obvious which type of platform will create higher product variety, consumer adoption and total social welfare.”[23]

To take another example, consider the persistently low adoption rates for consumer versions of the open-source Linux operating system, versus more popular alternatives like Windows or MacOS.[24] A closed model like Apple’s MacOS is able to outcompete open solutions by better leveraging network effects and developing a close relationship with end users.[25] Even in this example, adoption of open versus closed models varies across user types, with, e.g., developers showing a strong preference for Linux over Mac, and only a slight preference for Windows over Linux.[26] This underscores the point that the suitability of an open or closed model varies not only by firm and product, nor even solely by user, but by the unique fit of a particular model for a particular user in a particular context. Many of those Linux-using developers will likely not use it on their home computing device, for example, even if they prefer it for work.

The dynamics among consumers and developers further complicate prevailing preferences for open or closed models. For some users, the security and quality assurance provided by closed ecosystems outweigh the benefits of open systems’ flexibility. On the developer side, more controlled ecosystems can smooth the transaction costs associated with developing and marketing applications, lowering barriers to entry in ways that democratize application development and potentially lead to greater innovation within those ecosystems. Moreover, distinctions between open and closed models can play a critical role in shaping inter-brand competition. A regulator placing its thumb on the business-model scale would push the relevant markets toward less choice and lower overall welfare.[27]

By differentiating themselves through a focus on ease-of-use, quality, security, and user experience, closed systems contribute to a vibrant competitive landscape where consumers have clear choices between differing “brands” of AI. Forcing an AI developer to adopt practices that align with a regulator’s preconceptions about the relative value of “open” and “closed” risks homogenizing the market and diminishing the very competition that spurs innovation and consumer choice.

Consider some of the practical benefits sought by deployers when choosing between open and closed models. For example, it is not straightforward to say that closed is inherently better than open when considering issues of data sharing or security; even here, there are tradeoffs. Open innovation in AI—characterized by the sharing of data, algorithms, and methodologies within the research community and beyond—can mitigate many of the risks associated with model development. This openness fosters a culture of transparency and accountability, where AI models and their applications are subject to scrutiny by a broad community of experts, practitioners, and the general public. This collective oversight can help to identify and address potential safety and security concerns early in the development process, thus enhancing AI technologies’ overall trustworthiness.

By contrast, a closed system may implement and enforce standardized security protocols more quickly. A closed system may have a sharper, more centralized focus on providing data security to users, which may perform better along some dimensions. And while the availability of code may provide security in some contexts, in other circumstances, closed systems perform better.[28]

In considering ethical AI development, different types of firms should be free to experiment with different approaches, even blending them where appropriate. For example, Anthropic’s approach to “Collective Constitutional AI” adopts what is arguably a “semi-open” model, blending proprietary elements with certain aspects of openness to foster innovation, while also maintaining a level of control.[29] This model might strike an appropriate balance, in that it ensures some degree of proprietary innovation and competitive advantage while still benefiting from community feedback and collaboration.

On the other hand, fully open-source development could lead to a different, potentially superior result that meets a broader set of needs through community-driven evolution and iteration. There is no way to determine, ex ante, that either an open or a closed approach to AI development will inherently provide superior results for developing “ethical” AI. Each has its place, and, most likely, the optimal solutions will involve elements of both approaches.

In essence, codifying a regulatory preference for one business model over the other would oversimplify the intricate balance of tradeoffs inherent to platform ecosystems. Economic theory and empirical evidence suggest that both open and closed platforms can drive innovation, serve consumer interests, and stimulate healthy competition, with all of these considerations depending heavily on context. Regulators should therefore aim for flexible policies that support coexistence of diverse business models, fostering an environment where innovation can thrive across the continuum of openness.

C. Dual-Use Foundation Models and Transparency Requirements

The EO and the RFC both focus extensively on so-called “dual-use” foundation models:

Foundation models are typically defined as, “powerful models that can be fine-tuned and used for multiple purposes.” Under the Executive Order, a “dual-use foundation model” is “an AI model that is trained on broad data; generally uses self-supervision, contains at least tens of billions of parameters; is applicable across a wide range of contexts; and that exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters….”[30]

But this framing will likely do more harm than good. As noted above, the terms “AI” or “AI model” are frequently invoked to refer to very different types of systems. Further defining these models as “dual use” is also unhelpful, as virtually any tool in existence can be “dual use” in this sense. Certainly, from a certain perspective, all software—particularly highly automated software—can pose a serious risk to “national security” or “safety.” Encryption and other privacy-protecting tools certainly fit this definition.[31] While it is crucial to mitigate harms associated with the misuse of AI technologies, the blanket treatment of all foundation models under this category is overly simplistic.

The EO identifies certain clear risks, such as the possibility that models could aid in the creation of chemical, biological, or nuclear weaponry. These categories are obvious subjects for regulatory control, but the EO then appears to open a giant definitional loophole that threatens to subsume virtually any useful AI system. It employs expansive terminology to describe a more generalized threat—specifically, that dual-use models could “[permit] the evasion of human control or oversight through means of deception or obfuscation.”[32] Such language could encompass a wide array of general-purpose AI models. Furthermore, by labeling systems capable of bypassing human decision making as “dual use,” the order implicitly suggests that all AI could pose such risk as warrants national-security levels of scrutiny.

Given the EO’s broad definition of AI as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments,” numerous software systems not typically even considered AI might be categorized as “dual-use” models.[33] Essentially, any sufficiently sophisticated statistical-analysis tool could qualify under this definition.

A significant repercussion of the EO’s very broad reporting mandates for dual-use systems, and one directly relevant to the RFC’s interest in promoting openness, is that these might chill open-source AI development.[34] Firms dabbling in AI technologies—many of which might not consider their projects to be dual use—might keep their initiatives secret until they are significantly advanced. Faced with the financial burden of adhering to the EO’s reporting obligations, companies that lack a sufficiently robust revenue model to cover both development costs and legal compliance might be motivated to dodge regulatory scrutiny in the initial phases, consequently dampening the prospects for transparency.

It is hard to imagine how open-source AI projects could survive in such an environment. Open-source AI code libraries like TensorFlow[35] and PyTorch[36] foster remarkable innovation by allowing developers to create new applications that use cutting-edge models. How could a paradigmatic startup developer working out of a garage genuinely commit to open-source development if tools like these fall under the EO’s jurisdiction? Restricting access to the weights that models use—let alone avoiding open-source development entirely—may hinder independent researchers’ ability to advance the forefront of AI technology.
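
To make concrete how low these barriers currently are, consider the following minimal, purely illustrative PyTorch sketch of the kind of downstream adaptation of openly published weights that reporting obligations could chill (the 10-class task is hypothetical, and the snippet assumes torchvision 0.13 or later):

```python
import torch
from torchvision import models

# Download an openly published model together with its pretrained weights
# (an ImageNet-trained ResNet-18 distributed through torchvision).
model = models.resnet18(weights="IMAGENET1K_V1")

# Adapt ("fine-tune") the open model to a new, hypothetical 10-class task
# by replacing its classification head.
model.fc = torch.nn.Linear(model.fc.in_features, 10)

# Freeze the pretrained backbone so that only the new head is trained.
for name, param in model.named_parameters():
    param.requires_grad = name.startswith("fc")
```

A garage developer who must first assess whether this kind of routine reuse of open weights triggers dual-use reporting obligations faces a very different calculus.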

Moreover, scientific endeavors typically benefit from the contributions of researchers worldwide, as collaborative efforts on a global scale are known to fast-track innovation. The pressure the EO applies to open-source development of AI tools could curtail international cooperation, thereby distancing American researchers from crucial insights and collaborations. For example, AI’s capacity to propel progress in numerous scientific areas is potentially vast—e.g., utilizing MRI images and deep learning for brain-tumor diagnoses[37] or employing machine learning to push the boundaries of materials science.[38] Such research does not benefit from stringent secrecy, but thrives on collaborative development. Enabling a broader community to contribute to and expand upon AI advancements supports this process.

Individuals respond to incentives. Just as well-intentioned seatbelt laws paradoxically led to an uptick in risky driving behaviors,[39] ill-considered obligations placed on open-source AI developers could unintentionally stifle the exchange of innovative concepts crucial to maintaining the United States’ leadership in AI innovation.

IV. Regulatory Models that Support Innovation While Managing Risks Effectively

In the rapidly evolving landscape of artificial intelligence, it is paramount to establish governance and regulatory frameworks that both encourage innovation and ensure safety and ethical integrity. An effective regulatory model for AI should be adaptive and principles-based, and should foster a collaborative environment among regulators, developers, researchers, and the broader community. A number of principles can help in developing this regime.

A. Low-Risk vs High-Risk AI

First, a clear distinction should be made between low-risk AI applications that enhance operational efficiency or consumer experience and high-risk applications that could have significant safety implications. Low-risk applications like search algorithms and chatbots should be governed by a set of baseline ethical guidelines and best practices that encourage innovation, while ensuring basic standards are met. On the other hand, high-risk applications—such as those used by law enforcement or the military—would require more stringent review processes, including impact assessments, ethical reviews, and ongoing monitoring to mitigate potentially adverse effects.

Contrast this with the recently enacted AI Act in the European Union, and its decision to create presumptions of risk for general-purpose AI (GPAI) systems, such as large language models (LLMs), that present what the EU terms “systemic risk.”[40] Article 3(65) of the AI Act defines systemic risk as “a risk that is specific to the high-impact capabilities of general-purpose AI models, having a significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain.”[41]

This definition bears similarities to the “Hand formula” in U.S. tort law, which balances the burden of precautions against the probability and severity of potential harm to determine negligence.[42] The AI Act’s notion of systemic risk, however, is applied more broadly to entire categories of AI systems based on their theoretical potential for widespread harm, rather than on a case-by-case basis.
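
For reference, the Hand formula is conventionally stated as finding negligence when the burden of precautions is less than the expected harm:

$$ B < P \cdot L $$

where B is the burden of taking precautions, P is the probability of harm, and L is the gravity of the resulting loss. The AI Act's systemic-risk designation, by contrast, attaches to entire model categories without estimating P or L for any concrete use.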

The designation of LLMs as posing “systemic risk” is problematic for several reasons. It creates a presumption of risk merely based on a GPAI system’s scale of operations, without any consideration of the actual likelihood or severity of harm in specific use cases. This could lead to unwarranted regulatory intervention and unintended consequences that hinder the development and deployment of beneficial AI technologies. And this broad definition of systemic risk gives regulators significant leeway to intervene in how firms develop and release their AI products, potentially blocking access to cutting-edge tools for European citizens, even in the absence of tangible harms.

While it is important to address potential risks associated with AI systems, the AI Act’s approach risks stifling innovation and hindering the development of beneficial AI technologies within the EU.

B. Avoid Static Regulatory Approaches

AI regulators are charged with overseeing a dynamic and rapidly developing market, and should therefore avoid erecting rigid frameworks that force new innovations into ill-fitting categories. The “regulatory sandbox” may provide a better model to balance innovation with risk management. By allowing developers to test and refine AI technologies in a controlled environment under regulatory oversight, sandboxes can be used to help identify and address potential issues before wider deployment, all while facilitating dialogue between innovators and regulators. This approach not only accelerates the development of safe and ethical AI solutions, but also builds mutual understanding and trust. Where possible, NTIA should facilitate policy experimentation with regulatory sandboxes in the AI context.

Meta’s Open Loop program is an example of this kind of experimentation.[43] This program is a policy prototyping research project focused on evaluating the National Institute of Standards and Technology (NIST) AI Risk Management Framework (RMF) 1.0.[44] The goal is to assess whether the framework is understandable, applicable, and effective in assisting companies to identify and manage risks associated with generative AI. It also provides companies an opportunity to familiarize themselves with the NIST AI RMF and its application in risk-management processes for generative AI systems. Additionally, it aims to collect data on existing practices and offer feedback to NIST, potentially influencing future RMF updates.
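
To illustrate the sort of exercise such a program involves, the sketch below organizes a hypothetical self-assessment around the NIST AI RMF 1.0's four core functions (Govern, Map, Measure, Manage); the checklist questions are our own illustrative examples, not NIST's or Meta's:

```python
# Hypothetical self-assessment skeleton keyed to the four core functions of
# the NIST AI RMF 1.0; the questions are illustrative examples only.
rmf_assessment = {
    "Govern":  ["Is there a documented, organization-wide AI risk-management policy?"],
    "Map":     ["Are the system's intended uses and foreseeable misuses catalogued?"],
    "Measure": ["Are there tracked metrics for each identified generative-AI risk?"],
    "Manage":  ["Is there a process to prioritize and mitigate measured risks?"],
}

for function, questions in rmf_assessment.items():
    for q in questions:
        print(f"[{function}] {q}")
```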

1. Regulation as a discovery process

Another key principle is to ensure that regulatory mechanisms are adaptive. Some examples of adaptive mechanisms are iterative rulemaking and feedback loops that allow regulations to be updated continuously in response to new developments and insights. Such mechanisms enable policymakers to respond swiftly to technological breakthroughs, ensuring that regulations remain relevant and effective, without stifling innovation.

Geoffrey Manne & Gus Hurwitz have recently proposed a framework for “regulation as a discovery process” that could be adapted to AI.[45] They argue for a view of regulation not merely as a mechanism for enforcing rules, but as a process for discovering information that can inform and improve regulatory approaches over time. This perspective is particularly pertinent to AI, where the pace of innovation and the complexity of technologies often outstrip regulators’ understanding and ability to predict future developments. This framework:

in its simplest formulation, asks regulators to consider that they might be wrong. That they might be asking the wrong questions, collecting the wrong information, analyzing it the wrong way—or even that Congress has given them the wrong authority or misunderstood the problem that Congress has tasked them to address.[46]

That is to say, an adaptive approach to regulation requires epistemic humility, with the understanding that, particularly for complex, dynamic industries:

there is no amount of information collection or analysis that is guaranteed to be “enough.” As Coase said, the problem of social cost isn’t calculating what those costs are so that we can eliminate them, but ascertaining how much of those social costs society is willing to bear.[47]

In this sense, modern regulators’ core challenge is to develop processes that allow for iterative development of knowledge, which is always in short supply. This requires a shift in how an agency conceptualizes its mission, from one of writing regulations to one of assisting lawmakers to assemble, filter, and focus on the most relevant and pressing information needed to understand a regulatory subject’s changing dynamics.[48]

As Hurwitz & Manne note, existing efforts to position some agencies as information-gathering clearinghouses suffer from a number of shortcomings—most notably, that they tend to operate on an ad hoc basis, reporting to Congress in response to particular exigencies.[49] The key to developing a “discovery process” for AI regulation would instead require setting up ongoing mechanisms to gather and report on data, as well as directing the process toward “specifications for how information should be used, or what the regulator anticipated to find in the information, prior to its collection.”[50]

Embracing regulation as a discovery process means acknowledging the limits of our collective knowledge about AI’s potential risks and benefits. This underscores why regulators should prioritize generating and utilizing new information through regulatory experiments, iterative rulemaking, and feedback loops. A more adaptive regulatory framework could respond to new developments and insights in AI technologies, thereby ensuring that regulations remain relevant and effective, without stifling innovation.

Moreover, Hurwitz & Manne highlight the importance of considering regulation as an information-producing activity.[51] In AI regulation, this could involve setting up mechanisms that allow regulators, innovators, and the public to contribute to and benefit from a shared pool of knowledge about AI’s impacts. This could include public databases of AI incidents, standardized reporting of AI-system performance, or platforms for sharing best practices in AI safety and ethics.

Static regulatory approaches may fail to capture the evolving landscape of AI applications and their societal implications. A dynamic, information-centric strategy that treats regulation as a discovery process could instead facilitate beneficial innovations, while identifying and mitigating harms.

V. Conclusion

As the NTIA navigates the complex landscape of AI regulation, it should adopt a nuanced, forward-looking approach that balances the need to foster innovation against the demands of public safety and ethical integrity. The rapid evolution of AI technologies calls for a regulatory framework that is both adaptive and principles-based, eschewing static snapshots of the current state of the art in favor of flexible mechanisms that can accommodate the dynamic nature of this field.

Central to this approach is recognizing that the field of AI encompasses a diverse array of technologies, methodologies, and applications, each with distinct characteristics, capabilities, and implications for society. A one-size-fits-all regulatory model would not only be ill-suited to the task at hand, but would also risk stifling innovation and hindering the United States’ ability to maintain its leadership in the global AI industry. The NTIA should instead focus on developing tailored approaches that distinguish between low-risk and high-risk applications, ensuring that regulatory interventions are commensurate with the identifiable harms and benefits associated with specific AI use cases.

Moreover, the NTIA must resist the temptation to rely on overly prescriptive definitions of “openness” or to favor particular business models over others. The coexistence of open and closed approaches to AI development is essential to foster a vibrant, competitive ecosystem that drives technological progress and maximizes social benefits. By embracing a flexible regulatory framework that allows for experimentation and iteration, the NTIA can create an environment conducive to innovation while still ensuring that appropriate safeguards are in place to mitigate potential risks.

Ultimately, the success of the U.S. AI industry will depend on the ability of regulators, developers, researchers, and the broader community to collaborate in developing governance frameworks that are both effective and adaptable. By recognizing the importance of open development and diverse business models, the NTIA can play a crucial role in shaping the future of AI in ways that promote innovation, protect public interests, and solidify the United States’ position as a global leader in this transformative field.

[1] Dual Use Foundation Artificial Intelligence Models With Widely Available Model Weights, Docket No. 240216-0052, 89 FR 14059, National Telecommunications and Information Administration (Mar. 27, 2024) at 14063, question 8(a) [hereinafter “RFC”].

[2] See, e.g., Kristian Stout, Systemic Risk and Copyright in the EU AI Act, Truth on the Market (Mar. 19, 2024), https://truthonthemarket.com/2024/03/19/systemic-risk-and-copyright-in-the-eu-ai-act.

[3] Exec. Order No. 14110, 88 FR 75191 (2023), https://www.federalregister.gov/documents/2023/11/01/2023-24283/safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence?_fsi=C0CdBzzA [hereinafter “EO”].

[4] See, e.g., EO at §§ 1, 2(c), 5.2(e)(ii), 8(c).

[5] Id. at § 3(k).

[6] Id. at § 3(k)(iii).

[7] Id. at § 4.6. As NTIA notes, the administration refers to “widely available model weights,” which is equivalent to “open foundation models” in this proceeding. RFC at 14060.

[8] For more on the “open” vs “closed” distinction and its poor fit as a regulatory lens, see, infra, at nn. 19-41 and accompanying text.

[9] Adaptive regulatory frameworks are discussed, infra, at nn. 42-53 and accompanying text.

[10] See Steven C. Salop & David T. Scheffman, Raising Rivals’ Costs, 73:2 Am. Econ. Rev. 267, 267–71 (1983), http://www.jstor.org/stable/1816853.

[11] See Steven C. Salop & David T. Scheffman, Cost-Raising Strategies, 36:1 J. Indus. Econ. 19 (1987), https://doi.org/10.2307/2098594.

[12] Cindy Gordon, Google Pauses Gemini AI Model After Latest Debacle, Forbes (Feb. 29, 2024), https://www.forbes.com/sites/cindygordon/2024/02/29/google-latest-debacle-has-paused-gemini-ai-model/?sh=3114d093536c.

[13] Id.

[14] Id.

[15] Breck Dumas, Google Loses $96B in Value on Gemini Fallout as CEO Does Damage Control, Yahoo Finance (Feb. 28, 2024), https://finance.yahoo.com/news/google-loses-96b-value-gemini-233110640.html.

[16] RFC at 14060.

[17] RFC at 14062, question 1.

[18] RFC at 14062, question 3(a).

[19] Department of Commerce, Competition in the Mobile Application Ecosystem (2023), https://www.ntia.gov/report/2023/competition-mobile-app-ecosystem (“While retaining appropriate latitude for legitimate privacy, security, and safety measures, Congress should enact laws and relevant agencies should consider measures (such as rulemaking) designed to open up distribution of lawful apps, by prohibiting… barriers to the direct downloading of applications.”).

[20] RFC at 14061 (“‘openness’ or ‘wide availability’ of model weights are also terms without clear definition or consensus. There are gradients of ‘openness,’ ranging from fully ‘closed’ to fully ‘open’”).

[21] See Jonathan M. Barnett, The Host’s Dilemma: Strategic Forfeiture in Platform Markets for Informational Goods, 124 Harv. L. Rev. 1861, 1927 (2011).

[22] Id. at 2.

[23] Id. at 3.

[24] Desktop Operating System Market Share Worldwide Feb 2023 – Feb 2024, statcounter, https://gs.statcounter.com/os-market-share/desktop/worldwide (last visited Mar. 27, 2024).

[25] Andrei Hagiu, Proprietary vs. Open Two-Sided Platforms and Social Efficiency (Harv. Bus. Sch. Strategy Unit, Working Paper No. 09-113, 2006).

[26] Joey Sneddon, More Developers Use Linux than Mac, Report Shows, Omg Linux (Dec. 28, 2022), https://www.omglinux.com/devs-prefer-linux-to-mac-stackoverflow-survey.

[27] See Michael L. Katz & Carl Shapiro, Systems Competition and Network Effects, 8 J. Econ. Persp. 93, 110 (1994) (“[T]he primary cost of standardization is loss of variety: consumers have fewer differentiated products to pick from, especially if standardization prevents the development of promising but unique and incompatible new systems”).

[28] See, e.g., Nokia, Threat Intelligence Report 2020 (2020), https://www.nokia.com/networks/portfolio/cyber-security/threat-intelligence-report-2020; Randal C. Picker, Security Competition and App Stores, Network Law Review (Aug. 23, 2021), https://www.networklawreview.org/picker-app-stores.

[29] Collective Constitutional AI: Aligning a Language Model with Public Input, Anthropic (Oct. 17, 2023), https://www.anthropic.com/news/collective-constitutional-ai-aligning-a-language-model-with-public-input.

[30] RFC at 14061.

[31] Encryption and the “Going Dark” Debate, Congressional Research Service (2017), https://crsreports.congress.gov/product/pdf/R/R44481.

[32] EO at § 3(k)(iii).

[33] EO at § 3(b).

[34] EO at § 4.2 (requiring companies developing dual-use foundation models to provide ongoing reports to the federal government on their activities, security measures, model weights, and red-team testing results).

[35] An End-to-End Platform for Machine Learning, TensorFlow, https://www.tensorflow.org (last visited Mar. 27, 2024).

[36] Learn the Basics, PyTorch, https://pytorch.org/tutorials/beginner/basics/intro.html (last visited Mar. 27, 2024).

[37] Akmalbek Bobomirzaevich Abdusalomov, Mukhriddin Mukhiddinov, & Taeg Keun Whangbo, Brain Tumor Detection Based on Deep Learning Approaches and Magnetic Resonance Imaging, 15(16) Cancers (Basel) 4172 (2023), available at https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10453020.

[38] Keith T. Butler, et al., Machine Learning for Molecular and Materials Science, 559 Nature 547 (2018), available at https://www.nature.com/articles/s41586-018-0337-2.

[39] The Peltzman Effect, The Decision Lab, https://thedecisionlab.com/reference-guide/psychology/the-peltzman-effect (last visited Mar. 27, 2024).

[40] European Parliament, European Parliament Legislative Resolution of 13 March 2024 on the Proposal for a Regulation of the European Parliament and of the Council on Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts, COM/2021/206, available at https://www.europarl.europa.eu/doceo/document/TA-9-2024-0138_EN.html [hereinafter “EU AI Act”].

[41] Id. at Art. 3(65).

[42] See Stephen G. Gilles, On Determining Negligence: Hand Formula Balancing, the Reasonable Person Standard, and the Jury, 54 Vanderbilt L. Rev. 813, 842-49 (2001).

[43] See Open Loop’s First Policy Prototyping Program in the United States, Meta, https://www.usprogram.openloop.org (last visited Mar. 27, 2024).

[44] Id.

[45] Justin (Gus) Hurwitz & Geoffrey A. Manne, Pigou’s Plumber: Regulation as a Discovery Process, SSRN (2024), available at https://laweconcenter.org/resources/pigous-plumber.

[46] Id. at 32.

[47] Id. at 33.

[48] See id. at 28-29.

[49] Id. at 37.

[50] Id. at 37-38.

[51] Id.


Spectrum Pipeline Act a Promising Start that Needs Balance

Popular Media

Given how important digital connections are to Americans’ daily lives, it’s urgent that Congress move to renew the Federal Communications Commission’s authority to auction parts of the public airwaves.

That authority lapsed a little over a year ago, and efforts to reinstate it have been repeatedly stuck in partisan gridlock.

Read the full piece here.
