
L’Intelligence Artificielle Générative et Actifs Concurrentiels Critiques : Discussion de L’Essentialité des Données

Scholarship

Résumé

Le développement de l’Intelligence Artificielle (IA) générative fait l’objet d’une attention particulière de la part des autorités de concurrence. Ses impacts peuvent être déterminants en ce qu’elle peut aussi bien rebattre les cartes du jeu concurrentiel, c’est-à-dire affaiblir les positions de force des grandes firmes pivot des grands écosystèmes numériques actuels, que donner lieu à une nouvelle consolidation, en leur permettant d’étendre leur contrôle à cette technologie d’usage général qui est appelée à exercer un rôle déterminant dans la structuration de notre économie. Le ressort des initiatives des régulateurs de la concurrence tient à la crainte que le contrôle de certaines ressources essentielles conduise à étendre la puissance économique de ces acteurs vers ce nouveau marché. Les autorités de concurrence feraient dès lors face aux mêmes enjeux que ceux induits par les situations de dominance et de verrouillage des écosystèmes actuels : difficultés dans la définition et dans la mise en œuvre de remèdes concurrentiels effectifs ou encore nécessité d’instaurer des réglementations spécifiques pour prévenir les dommages concurrentiels.

Abstract

Competition authorities are paying particular attention to the development of generative Artificial Intelligence (AI). Its impact could be decisive: it could reshuffle the competitive deck, weakening the entrenched positions of the major firms at the heart of today’s large digital ecosystems, or it could give rise to new consolidation by enabling those firms to extend their control over this general-purpose technology, which is destined to play a decisive role in structuring our economy. The driving force behind competition regulators’ initiatives is the fear that control of certain essential resources will allow these players to extend their economic power into this new market. Competition authorities would then face the same challenges as those arising from dominance and foreclosure in today’s ecosystems: difficulties in defining and implementing effective competitive remedies, as well as the need to introduce specific regulations to prevent competitive harm.

Innovation & the New Economy

It’s Risk, Jerry, The Game of Broadband Conquest

TOTM

The big news in telecommunications policy last week wasn’t really news at all—the Federal Communications Commission (FCC) released its proposed rules to classify broadband internet under Title II of the Communications Act. Supporters frame the proposed rules as “net neutrality,” but those provisions—a ban on blocking, throttling, or engaging in paid or affiliated-prioritization arrangements—actually comprise just a small part of the 435-page document.

Read the full piece here.

Telecommunications & Regulated Utilities

ICLE Reply Comments to FCC Re: Customer Blackout Rebates

Regulatory Comments

I. Introduction

The International Center for Law & Economics (“ICLE”) thanks the Federal Communications Commission (“FCC” or “the Commission”) for the opportunity to offer reply comments to this notice of proposed rulemaking (“NPRM”), as the Commission proposes to require cable operators and direct-broadcast satellite (DBS) providers to grant their subscribers rebates when those subscribers are deprived of video programming they expected to receive during programming blackouts that resulted from failed retransmission-consent negotiations or failed non-broadcast carriage negotiations.[1]

As noted in the NPRM, the Communications Act of 1934 requires that cable operators and satellite-TV providers obtain a broadcast TV station’s consent in order to lawfully retransmit that station’s signal to subscribers. Commercial stations may either (1) demand carriage pursuant to the Commission’s must-carry rules or (2) elect retransmission consent and negotiate for compensation in exchange for carriage. If a station elects retransmission consent but is unable to reach an agreement for carriage, the cable operator or DBS provider loses the right to carry that station’s signal. As a result, the provider’s subscribers typically lose access to the signal entirely unless and until the parties are able to reach an agreement, a situation often described as a “blackout.”

Blackouts tend to generate eye-catching headlines and often annoy affected consumers.[2] This annoyance is amplified when consumers don’t receive a rebate for the loss of signal, especially when they believe they are merely bystanders in the dispute between the cable operator or DBS provider and the channel.[3] The Commission appears to echo these concerns, concluding that its proposed rebate mandate would ensure “subscribers are made whole when they face interruptions of service that are outside their control” and would prevent subscribers “from being charged for services for the period that they did not receive them.”[4]

This framing, however, oversimplifies retransmission-consent negotiations and mischaracterizes consumers’ agency in subscribing to and using multichannel-video-programming distributors (“MVPDs”). Moreover, there are numerous questions raised by the NPRM regarding the proposal’s feasibility, including how to identify which consumers would qualify for rebates, how those rebates would be calculated, and how they would be distributed. Several comments submitted in this proceeding suggest that any implementation of this proposal would be arbitrary and unfair to cable operators, DBS providers, and consumers. In particular:

  • Blackouts result from a temporary or permanent failure to reach an agreement in negotiations between channels and either cable operators or DBS providers. The Commission’s proposal explicitly and unfairly assigns liability for blackouts to the cable operator or DBS provider. As a result, the proposal would provide channels with additional negotiating leverage relative to the status quo. Smaller cable operators may be especially disadvantaged.
  • Each consumer is unique in how much they value a particular channel and how much they would be economically harmed by a blackout. For example, in the event of a cable or DBS blackout, some consumers can receive the programming via an over-the-air antenna or a streaming platform and would suffer close to no economic harm. Other consumers may assign no value to the blacked-out channel’s programming and would likewise suffer no harm.
  • Complexities and confidentiality in programming contracts would make it impossible to accurately or fairly calculate the price or cost associated with any given channel over some set period of time. For example, cable operators and DBS providers typically sell bundles of channels, not a la carte offerings, making it impossible to calculate an appropriate rebate for one specific channel or set of channels.
  • Even if it were possible to calculate an appropriate rebate, any mandated rebate based on such calculations would constitute prohibited rate regulation.

These reply comments respond to many of the issues raised in comments on this matter. We conclude that the Commission is proposing a set of unworkable and arbitrary rules. Even if rebates could be reasonably and fairly calculated, the amount of such rebates would likely be only a few dollars and may be as little as a few pennies. In such cases, the enormous cost to the Commission, cable operators, and DBS providers would be many times greater than the amount of rebates provided to consumers. It would be a much better use of the FCC’s and MVPD providers’ resources to abandon this rulemaking process and refrain from mandating rebates for programming blackouts.

II. Who Is to Blame for Blackouts?

As discussed above, it appears the FCC’s view is that consumers who experience blackouts are mere bystanders in a dispute, as the Commission invokes “consumer protection” and “customer service” as justifications for the proposed rules mandating rebates.[5] If we believe both that consumers are bystanders and that they are harmed by blackouts, then it is crucial to identify the parties to whom blame should be assigned for those blackouts. A key principle of the law & economics approach is that the party better-positioned to avoid the blackout should bear more—or, in some cases, all—of its costs.[6]
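The least-cost-avoider principle can be reduced to a one-line comparison. The sketch below uses purely hypothetical avoidance costs; as the comments discussed in this section show, the record does not support assigning any particular numbers to either party:

```python
# Least-cost-avoider principle: liability is most efficiently placed on the
# party that can prevent the harm most cheaply. All figures are hypothetical.
expected_harm = 100.0  # expected consumer cost of a blackout
avoidance_cost = {
    "programmer": 20.0,  # cost for the channel to avoid the blackout
    "provider": 60.0,    # cost for the cable operator or DBS provider
}

# Liability goes to whichever party can avoid the harm at least cost,
# provided that cost is below the harm itself.
avoider = min(avoidance_cost, key=avoidance_cost.get)
if avoidance_cost[avoider] < expected_harm:
    print(f"Assign liability to: {avoider}")  # here, the programmer
```

The difficulty, as the remainder of this section explains, is that in real retransmission disputes neither the avoidance costs nor the harm can be observed, so the efficient assignment of liability cannot simply be read off in this way.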

In comments submitted by Dish Network, William Zarakas and Jeremy Verlinda note that: “Programming fees are established through bilateral negotiations between content providers and MVPDs, and depend in large part on the relative bargaining position of the two sides.”[7] This comment illustrates the obvious but important fact that both content providers and MVPD operators must reach agreement and, in any given negotiation, either side may have more bargaining power. Because of this reality, it is impossible to draw general conclusions about which party will be the least-cost avoider of blackouts, as borne out in the submitted comments.

On the one hand, the ATVA argues that programmers are the cause of blackouts: “Blackouts happen to cable and satellite providers and their subscribers.”[8] NTCA supports this claim and reports that “[s]mall providers lack negotiating power in retransmission consent discussions.”[9] On the other hand, the NAB claims the “leading cause of such disruptions” is “the pay TV industry’s desire to use consumers as pawns to push for a change in law” and that MVPDs have a “strategy of creating negotiating impasses” in order to obtain a policy change.[10] Writing in Truth on the Market, Eric Fruits concludes:

With the wide range of programming and delivery options, it’s probably unwise to generalize who has the greater bargaining power in the current system. But if one had to choose, it seems that networks and, to a lesser extent, local broadcasters are in a slightly superior position. They have the right to choose must carry or retransmission and, in some cases, have alternative outlets (such as streaming) to distribute their programming.[11]

Peer-reviewed research by Eun-A Park, Rob Frieden, and Krishna Jayakar attempts to identify the “predictors” of blackouts using a database of nearly 400 retransmission agreements executed between 2011 and 2018.[12] The authors report three findings regarding the frequency and length of blackouts:

  1. Cable, satellite, and other MVPDs with larger customer bases are associated with more frequent and longer blackouts;
  2. Multi-station broadcaster groups with network affiliations are associated with more frequent but shorter blackouts; and
  3. The National Football League (“NFL”) season (e.g., “must see” real-time programming) has no significant relationship with blackout frequency, but when blackouts occur during the season, they are significantly shorter.

The simplistic takeaway is that everyone is to blame and, at the same time, no one is. Ultimately, Park and her co-authors conclude that “the statistical analysis is not able to identify the parties or the tactics responsible for blackouts.”[13] Based on this research, it is not clear which party in any given negotiation is more likely to be the least-cost avoider of blackouts.

Nevertheless, the Commission’s proposal explicitly assigns liability for blackouts to cable operators and DBS providers.[14] Under the proposed rules, not only would cable operators and DBS providers suffer financial consequences, but they also would be made to face reputational harms stemming from a federal agency suggesting the fault for any retransmission-consent or carriage-agreement blackouts falls squarely on their shoulders.

Such reputational damage is almost certain to increase subscriber churn and impose additional subscriber-acquisition and retention costs on cable operators and DBS providers.[15] In comments on the Commission’s proposed rules for cable-operator and DBS-provider billing practices, ICLE reported that these costs are substantial and that, in addition to these costs, churn increases the uncertainty of cable-operator and DBS-provider revenues and profits.[16]

III. Consumers Are Not Bystanders

As noted earlier in these comments, the Commission’s proposal appears to be rooted in the belief that, when consumers experience a blackout, they are mere bystanders in a dispute between channels and cable operators or DBS providers. The Commission further seems to believe that the full force of the federal government is needed for these consumers to be “made whole.”[17] The implication is that consumers lack the foresight to anticipate the possibility of blackouts or the ability to respond to blackouts when they occur.

As the NPRM notes, subscribers are often informed of the risk of blackouts—and their consequences—in their service agreements with cable operators or DBS providers.[18] This is supported in ATVA’s comments:

Cable and satellite carriers make this quite clear in the contracts they offer subscribers—existing contracts which the Commission seeks to abrogate here. This language also makes clear that cable and satellite operators can and do change the programming offered in those bundles from time to time. … Cable and satellite providers add and subtract programming from their offerings to consumers frequently, and subscription agreements do not promise that all channels in a particular tier will be carried in perpetuity, let alone (with limited exception) assign a specific value to particular programming.[19]

The NPRM asks, “if a subscriber initiates service during a blackout, would that subscriber be entitled to a rebate or a lower rate?”[20] The question implicitly acknowledges that, for these subscribers, blackouts are not just a possibility, but a certainty. Yet they nonetheless enter into such agreements, knowing they may not be compensated for the interruption of service.

Many cable operators and DBS providers do offer credits[21] or other accommodations[22] to requesting subscribers affected by a blackout. In addition, many consumers have a number of options to circumvent a blackout by obtaining the programming elsewhere. Comments in this proceeding indicate that these options include the use of over-the-air antennas[23] or streaming services.[24] Given the many alternatives available in so many cases, it is unlikely that a blackout would deprive these consumers of the desired programming and any economic harm to them would be de minimis.

If cable or DBS blackouts are (or become) widespread or pernicious, consumers also have the ability to terminate service and switch providers, including by switching to streaming options. This is demonstrated by the well-known and widespread phenomenon of “cord cutting.” ATVA’s comments note that, in the third quarter of 2023, nearly one million subscribers canceled their traditional linear-television service, with just under 55% of occupied households now subscribing, the lowest share since 1989.[25] NYPSC concludes that, if the current trend of cord-cutting continues, “any final rules adopted here could become obsolete over time.”[26]

Due in part to cord cutting, ATVA reported that last year “several cable television companies either had already shut down their television services or were in the process of doing so.”[27] NTCA reports that nearly 40% of surveyed rural providers indicated they are not likely to continue service or already have plans to discontinue service, with many of them blaming the “difficulty negotiating retransmission consent agreements.”[28]

The fact that so many consumers are switching to alternatives to cable and DBS is a clear demonstration that they have the opportunity and ability to obtain programming from a wide range of competitive providers. This places them in the driver’s seat, rather than leaving them as helpless bystanders. It is telling that neither the NPRM nor any of the comments submitted to date offers any estimate of the cost to consumers associated with blackouts from retransmission-consent or carriage negotiations. This is likely because any such costs are literally incalculable (i.e., impossible to calculate) or so small as to discourage any effort at estimation. In either case, the Commission’s proposal to mandate and enforce blackout rebates looks to be a costly and time-consuming exercise that would yield little to no noticeable consumer benefit.

IV. Mandatory Rebates Will Increase Programmer Bargaining Power and Increase Prices to Cable and DBS Subscribers

A common theme of comments submitted in this matter is that the proposed rules would “place a thumb on the scale” in favor of channels relative to cable operators and DBS providers.[29] Without delving deeply into the esoteric details of bargaining theory, the comments identify two key factors that have, over time, improved programmers’ bargaining position relative to cable operators and DBS providers:

  1. Increased competition among MVPD providers, which has reduced cable and DBS bargaining power;[30] and
  2. Consolidation in the broadcast industry, which has increased programmer bargaining power.[31]

The Commission’s proposed rules are intended and designed to impose an additional cost on cable operators and DBS providers who do not reach an agreement with stations and networks, thereby diminishing the providers’ relative bargaining position. As profit-maximizing enterprises, it would be reasonable to expect stations and networks to exploit this additional bargaining power to extract higher retransmission fees or other concessions.

Jeffrey Eisenach notes that the first “significant” retransmission agreement to involve monetary compensation from a cable provider to a broadcaster occurred in 2005.[32] By 2008, retransmission fees totaled $500 million, according to Variety.[33] By 2020, S&P Global reported that annual retransmission fees were approximately $12 billion.[34] This represents an average annual increase of 30% between 2008 and 2020. This is in line with Zarakas & Verlinda’s estimate that retransmission fees charged by local network stations have increased at annual growth rates of 9.8% to 61.0% since 2009.[35] According to information reported by the Pew Research Center, revenues from retransmission fees for local stations now nearly equal those stations’ advertising revenues (Figure 1).

[36]
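The 30% figure can be checked with a quick compound-annual-growth-rate calculation on the industry totals cited above:

```python
# Quick check of the average annual growth in retransmission fees implied by
# the totals cited above: ~$500 million (2008) to ~$12 billion (2020).
fees_2008 = 0.5   # $ billions (Variety)
fees_2020 = 12.0  # $ billions (S&P Global)
years = 2020 - 2008

cagr = (fees_2020 / fees_2008) ** (1 / years) - 1
print(f"Implied average annual increase: {cagr:.0%}")  # prints "30%"
```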

Dish Network indicated that programmers have been engaged in an “aggressive campaign of imposing steep retransmission and carriage price increases on MVPDs.”[37] Simultaneous with these steep increases in retransmission fees, networks began imposing “reverse transmission compensation” on their affiliates.[38] Previously, networks paid local affiliates for airtime in order to run network advertisements during their programming. The new arrangements have reversed that flow of compensation, such that affiliates are now expected to compensate the networks, as explained in Variety:

Station owners also face increased pressure to secure top fees for their retrans rights because their Big Four network partners now demand that affiliate stations fork over a portion of their retrans windfall to help them pay for pricey franchises like the NFL, “American Idol” and high-end scripted series.[39]

Dish Network concludes: “While MVPDs and OVDs compete aggressively with each other, the programming price increases will likely be passed through to consumers despite that competition. The reason is that all MVPDs will face the same programming price increase.”[40] NCTA further notes that increased programming costs are “borne by the cable operator or passed onto the consumer.”[41]

The most recent research cited in the comments reports that MVPDs pass through approximately 100% of retransmission-fee increases in the form of higher subscription prices.[42] Aaron Heresco and Stephanie Figueroa provided examples of how increased retransmission fees are passed on to subscribers:

On the other side of the simplified ESPN transaction are MVPD ranging from global conglomerates like Spectrum/Time Warner to small local or independent cable carriers. These MVPD pay ESPN $7.21/subscriber/month for the right to carry/transmit ESPN content to subscribing households. MVPD, with a keen eye on profits and shareholder value, pass through the costs to consumers (irrespective of if subscribers actually watch ESPN or any other network) in the form of increased monthly cable bills. Not only does this suggest that the “free lunch” of TV programming isn’t free, it also indicates that the dynamic of revenue generation via viewership is changing. As another example, consider the case of the Weather Channel, which in 2014 asked for a $.01 increase in retransmission fees despite a 20% drop in ratings (Sahagian 2014). Viewers may demand access to the channel in case of weather emergencies but may only tune in to the channel a handful of times per year. Nonetheless, the demand for access to channels drive up retransmission revenue even if the day-to-day or week-to-week ratings are weak.[43]

In some cases, however, increased retransmission fees cannot be passed on in the form of higher subscription prices. As we noted above, NTCA reports that nearly 40% of surveyed rural providers indicated they are unlikely to continue service or already have plans to discontinue service, with many of them blaming the “difficulty negotiating retransmission consent agreements.”[44] The Commission’s proposed rules would not only lead to higher prices for consumers, but they may also reduce MVPD options for some consumers, as cable operators exit the industry.

V. Proposed Rebate Mandate Would Be Arbitrary and Unworkable

The NPRM asks for comments on how to implement the proposed rebate mandate. In doing so, the NPRM identifies numerous factors that illustrate the arbitrary and unworkable nature of the Commission’s proposal:[45]

  • Should cable operators and DBS providers be required to pay rebates or provide credits?
  • Should rebates apply to any channel that is blacked out?
  • What if the parties never reach an agreement for carriage? For example, should subscribers be entitled to rebates in perpetuity?
  • How should rebates be calculated when terms of the retransmission-consent agreements are confidential?
  • Should the rebate be based on the cost that the cable operator or DBS provider paid to the programmer to retransmit or carry the channel prior to the carriage impasse?
  • How should rebates account for bundling?
  • If a subscriber initiates or renews a contract during a blackout, should the subscriber receive a rebate?
  • Should the Commission deem unenforceable service agreements that explicitly specify that the cable operator or DBS provider is not liable for credits or refunds if programming becomes unavailable? Should existing service agreements be abrogated?
  • How should rebates account for other components of the retransmission-consent agreement (e.g., advertising time)?

As we note above, when blackouts occur, many cable operators and DBS providers offer credits or other accommodations to requesting subscribers affected by a blackout.[46] The NPRM “tentatively concludes” there is no legal distinction between “rebates,” “refunds,” and “credits.”[47] If the Commission approves rules mandating rebates in the event of blackouts, the rules should be sufficiently flexible to allow credits or other accommodations—such as providing over-the-air antennas or programming upgrades—to satisfy the rules.

The NPRM asks whether the proposed rebate rules should apply to any channel that is blacked out,[48] citing news stories regarding The Weather Channel.[49] The NPRM provides no context for these citations, but the cited articles suggest that The Weather Channel is of minimal value to most consumers. The channel had 105,000 primetime viewers in February 2024, slightly fewer than PopTV and slightly more than Disney Junior and VH1.[50] The Deadline article cited in the NPRM indicates that The Weather Channel averages 13 cents per subscriber, per month, across pay-TV systems.[51] Much of the channel’s content is freely available on its website (weather.com) and app, and similar weather content is freely available across numerous sources and media.

The NPRM’s singling out of the Weather Channel highlights several flaws with the Commission’s proposal. The channel has low viewership, numerous competing substitutes for content, and is relatively low-cost. During a blackout, few subscribers would notice. Even fewer would suffer any harm and, if they did, the harm would be about 13 cents a month. It seems a waste of valuable resources to impose a complex regulatory regime to “make consumers whole” to the tune of pennies a month.

The NPRM asks whether the Commission should require rebates if the parties never reach a carriage agreement and, if so, whether those rebates should be provided in perpetuity.[52] NCTA points out that it would be impossible for any regulator to determine whether any particular blackout is the result of a negotiation impasse or business decision by the cable operator or DBS provider to no longer carry the channel.[53] For example, a channel may be dropped because of changes to the programming available on the channel.[54] Indeed, the programming offered at the beginning of a retransmission-consent agreement may be very different from the content provided at the time of renegotiation.[55] Moreover, it would be impossible to know with any certainty whether any carriage termination is temporary or permanent.[56] Verizon is correct to call this inquiry “absurd,”[57] as it proposes a “Hotel California” approach to carriage agreements, in which cable operators and DBS providers can check out, but they can never leave.

To illustrate the challenges of calculating a reasonable and economically coherent rebate, Dish Network offered a hypothetical set of three options for carriage of a local station and the Tennis Channel, both owned by Sinclair.[58]

  1. $4 for the local station on a tier serving all subscribers, no carriage of Tennis Channel;
  2. $2 for the local station and $2 for the Tennis Channel, both on tiers serving all subscribers; or
  3. $2 for the local station on a tier serving all subscribers and $4 for the Tennis Channel on a tier serving 50% of subscribers.

In this hypothetical, the cable operator or DBS provider is indifferent to the details of how the package is priced. Similarly, consumers are indifferent to the pricing details of the agreement. Under the Commission’s proposal, however, these details become critical to how a rebate would be calculated. In the event of a Tennis Channel blackout, either no subscriber would receive a rebate, every subscriber would receive a $2 rebate, or half of all subscribers would receive a $4 rebate—with the amount of rebate depending on how the agreement’s pricing was structured.
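This arbitrariness can be made concrete with a short calculation over the three options. The fees and tier shares come from Dish Network's hypothetical above; the 100,000-subscriber base is an illustrative assumption:

```python
# Rebate outcomes for the same Tennis Channel blackout under Dish Network's
# three hypothetical pricing structures, which are economically equivalent to
# both the provider and its subscribers. Subscriber count is illustrative.
SUBSCRIBERS = 100_000

options = {
    "Option 1": {"tennis_fee": 0.0, "tier_share": 0.0},  # channel not carried
    "Option 2": {"tennis_fee": 2.0, "tier_share": 1.0},  # $2, all subscribers
    "Option 3": {"tennis_fee": 4.0, "tier_share": 0.5},  # $4, half of subscribers
}

for name, o in options.items():
    eligible = int(SUBSCRIBERS * o["tier_share"])  # subscribers owed a rebate
    total = eligible * o["tennis_fee"]             # monthly rebate outlay
    print(f"{name}: {eligible:,} subscribers x ${o['tennis_fee']:.2f} "
          f"= ${total:,.2f}/month")
```

Under Option 1 no rebate is owed at all; under Options 2 and 3 the aggregate outlay happens to coincide at $200,000 per month, but which subscribers are compensated, and by how much, turns entirely on how the contract happened to be papered.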

Dish Network’s hypothetical demonstrates another consequence of the Commission’s proposal: the easiest way to avoid the risk of paying a rebate is to forgo carrying the channel. The hypothetical assumes a cable operator “does not particularly want to carry” the Tennis Channel, but is willing to do so in exchange for an agreement with Sinclair for the local station.[59] Under the Commission’s proposed rules, the risk of incurring the cost of providing rebates introduces another incentive to eschew carriage of the Tennis Channel.

One reason Dish Network presented a hypothetical rather than an “actual” example is that, as noted in several comments, carriage agreements are subject to confidentiality provisions.[60] Separate and apart from the impossibility of allocating a rebate across the various terms of an agreement, even if those terms were known, such an exercise would require abrogating the confidentiality agreements between the negotiating parties.

The NPRM asks whether it would be reasonable to require a cable operator or DBS provider to rebate the cost that it paid to the programmer to retransmit or carry the channel prior to the carriage impasse.[61] The NPRM cites Spectrum Northeast LLC v. Frey, a case involving early-termination fees in which the 1st U.S. Circuit Court of Appeals stated that “[a] termination event ends cable service, and a rebate on termination falls outside the ‘provision of cable service.’”[62] In the NPRM, the Commission “tentatively conclude[s] that the courts’ logic” in Spectrum Northeast “applies to the rebate requirement for blackouts.”[63]

If the Commission accepts the court’s logic that a termination event ends service on the consumer side, then it would be reasonable to conclude that the end of a retransmission or carriage agreement similarly ends service. To base a rebate on a prior agreement would mean basing the rebate on a fiction—an agreement that does not exist.

To illustrate, consider Dish Network’s hypothetical. Assume the initial agreement is Option 2 ($2 for the local station and $2 for the Tennis Channel, both on tiers serving all subscribers). The negotiations stall, leading to a blackout. Assume the parties eventually agree to Option 1, in which the Tennis Channel is no longer carried. Would subscribers be due a rebate for a channel that is no longer carried? Or, if the parties instead agree to Option 3 ($2 for the local station on a tier serving all subscribers and $4 for the Tennis Channel on a tier serving 50% of subscribers), would all subscribers be due a $2 rebate for the Tennis Channel, or would half of subscribers be due a $4 rebate? There is no “good” answer because any answer is necessarily arbitrary and devoid of economic logic.

As noted above, many retransmission and carriage agreements involve “bundles” of programming,[64] as well as “a wide range of pricing and non-pricing terms.”[65] Moreover, ATVA reports that subscribers purchase bundled programming, rather than individual channels, and that consumers are well-aware of bundling when they enter into service agreements with cable operators and DBS providers.[66] NCTA reports that bundling complicates the already-complex challenge of allocating costs across specific channels over specific periods of time.[67] Thus, any attempt to do so with an eye toward mandating rebates during blackouts is likewise arbitrary and devoid of economic logic.

In summary, the Commission is proposing a set of unworkable and arbitrary rules to distribute rebates to consumers during programming blackouts. Even if such rebates could be reasonably and fairly calculated, the sums involved would likely be only a few dollars, and may be as little as a few pennies. In these cases, the enormous costs to the Commission, cable operators, and DBS providers would be many times greater than the rebates provided to consumers. It would be a much better use of the FCC’s and MVPD providers’ resources to abandon this rulemaking process and refrain from mandating rebates for programming blackouts.

[1] Notice of Proposed Rulemaking, In the Matter of Customer Rebates for Undelivered Video Programming During Blackouts, MB Docket No. 24-20 (Jan. 17, 2024), available at https://docs.fcc.gov/public/attachments/FCC-24-2A1.pdf [hereinafter “NPRM”], at para. 1.

[2] See id. at nn. 5, 7.

[3] Eric Fruits, Blackout Rebates: Tipping the Scales at the FCC, Truth on the Market (Mar. 6, 2024), https://truthonthemarket.com/2024/03/06/blackout-rebates-tipping-the-scales-at-the-fcc.

[4] NPRM, supra note 1 at para. 10.

[5] NPRM, supra note 1 at para. 13 (proposed rules “provide basic protections for cable customers”) and para. 7 (“How would requiring cable operators and DBS providers to provide rebates or credits change providers’ current customer service relations during a blackout?”).

[6] This is known as the “least-cost avoider” or “cheapest-cost avoider” principle. See Harold Demsetz, When Does the Rule of Liability Matter?, 1 J. Legal Stud. 13, 28 (1972); see generally Ronald Coase, The Problem of Social Cost, 3 J. L. & Econ. 1 (1960).

[7] Comments of DISH Network LLC, MB Docket No. 24-20 (Mar. 8, 2024), https://www.fcc.gov/ecfs/document/1030975783920/1 [hereinafter “DISH Comments”], Exhibit 1, Declaration of William Zarakas & Jeremy Verlinda [hereinafter “Zarakas & Verlinda”] at ¶ 8.

[8] Comments of the American Television Alliance, MB Docket No. 24-20 (Mar. 8, 2024), https://www.fcc.gov/ecfs/document/103082522212825/1 [hereinafter “ATVA Comments”] at i and 2 (“Broadcasters and programmers cause blackouts. This is, of course, true as a legal matter, as cable and satellite providers cannot lawfully deliver programming to subscribers without the permission of the rightsholder. It makes no sense to say that a cable or satellite provider has ‘blacked out’ programming by failing to obtain permission to carry it. A programmer ‘blacks out’ programming by declining to grant such permission.”).

[9] Comments of NTCA—The Rural Broadband Association, MB Docket No. 24-20 (Mar. 8, 2024), https://www.fcc.gov/ecfs/document/10308589412414/1 [hereinafter “NTCA Comments”] at 2.

[10] Comments of the National Association of Broadcasters, MB Docket No. 24-20 (Mar. 8, 2024), https://www.fcc.gov/ecfs/document/1030894019700/1 [hereinafter “NAB Comments”] at 4-5.

[11] Fruits, supra note 3.

[12] Eun-A Park, Rob Frieden, & Krishna Jayakar, Factors Affecting the Frequency and Length of Blackouts in Retransmission Consent Negotiations: A Quantitative Analysis, 22 Int’l. J. Media Mgmt. 117 (2020).

[13] Id. at 131.

[14] NPRM, supra note 1 at paras. 4, 6 (“We seek comment on whether and how to require cable operators and DBS providers to give their subscribers rebates when they blackout a channel due to a retransmission consent dispute or a failed negotiation for carriage of a non-broadcast channel.”); id. at para. 9 (“We tentatively conclude that sections 335 and 632 of the Act provide us with authority to require cable operators and DBS providers to issue a rebate to their subscribers when they blackout a channel.”) [emphasis added].

[15] See Zarakas & Verlinda, supra note 7 at para. 14 (blackouts are costly “in the form of lost subscribers and higher incidence of retention rebates”).

[16] Comments of the International Center for Law & Economics, MB Docket No. 23-405 (Feb. 5, 2024), https://www.fcc.gov/ecfs/document/10204246609086/1 at 9-10 (“In its latest quarterly report to the Securities and Exchange Commission, DISH Network reported that it incurs ‘significant upfront costs to acquire Pay-TV’ subscribers, amounting to subscriber acquisition costs of $1,065 per new DISH TV subscriber. The company also reported that it incurs ‘significant’ costs to retain existing subscribers. These retention costs include upgrading and installing equipment, as well as free programming and promotional pricing, ‘in exchange for a contractual commitment to receive service for a minimum term.’”).

[17] See NPRM, supra note 1 at paras. 4, 8, 10 (using “make whole” language).

[18] See id. at n. 7, citing Spectrum Residential Video Service Agreement (“In the event particular programming becomes unavailable, either on a temporary or permanent basis, due to a dispute between Spectrum and a third party programmer, Spectrum shall not be liable for compensation, damages (including compensatory, direct, indirect, incidental, special, punitive or consequential losses or damages), credits or refunds of fees for the missing or omitted programming. Your sole recourse in such an event shall be termination of the Video Services in accordance with the Terms of Service.”) and para. 6 (“To the extent that the existing terms of service between a cable operator or DBS provider and its subscriber specify that the cable operator or DBS provider is not liable for credits or refunds in the event that programming becomes unavailable, we seek comment on whether to deem such provisions unenforceable if we were to adopt a rebate requirement.”).

[19] ATVA Comments, supra note 8 at 11.

[20] NPRM, supra note 1 at para. 6.

[21] See ATVA Comments, supra note 8 at 3 (“The Commission seeks information on the extent to which MVPDs grant rebates today. The answer is that, in today’s competitive marketplace, many ATVA members provide credits, with significant variations both among providers and among classes of subscribers served by individual providers. This, in turn, suggests that cable and satellite companies already address the issues identified by the Commission, but in a more nuanced and individualized manner than proposed in the Notice.”). See also id. at 5-6 (reporting DIRECTV provides credits to existing customers and makes the offer of credits easy to find online or via customer service representatives). See also id. at 7 (reporting DIRECTV and DISH provide credits to requesting subscribers and Verizon compensates subscribers “in certain circumstances”).

[22] See Zarakas & Verlinda, supra note 7 at para. 21 (“DISH provides certain offers to requesting customers in the case of programming blackouts, which may include a $5 per month credit, a free over-the-air antenna for big 4 local channel blackouts, or temporary free programming upgrades for cable network blackouts.”).

[23] See id. at para. 21.

[24] See ATVA Comments, supra note 8 at 4 (“If Disney blacks out ESPN on a cable system, for example, subscribers still have many ways to get ESPN. This includes both traditional competitors to cable (which are losing subscribers) and a wide array of online video providers (which are gaining subscribers).”); Comments of Verizon, MB Docket No. 24-20 (Mar. 8, 2024), https://www.fcc.gov/ecfs/document/10308316105453/1 [hereinafter “Verizon Comments”] at 12 (“In today’s competitive marketplace, consumers have many options for viewing broadcasters’ content in the event of a blackout — they can switch among MVPDs, or forgo MVPD services altogether and watch on a streaming platform or over the air. And when a subscriber switches or cancels service, it is extremely costly for video providers to win them back.”); DISH Comments, supra note 7 at 7 (“[L]ocal network stations have also been able to use another lever: the phenomenal success of over-the-top video streaming and the emergence of several online video distributors (‘OVDs’), some of which have begun incorporating local broadcast stations in their offerings.”); Comments of the New York State Public Service Commission, MB Docket 24-20 (Mar. 8, 2024), https://www.fcc.gov/ecfs/document/10308156370046/1 [hereinafter “NYPSC Comments”] at 2 (identifying streaming services and Internet Protocol Television (IPTV) providers such as YouTube TV, Sling, and DirecTV Stream as available alternatives).

[25] See ATVA Comments, supra note 8 at 4.

[26] NYPSC Comments, supra note 24 at 2.

[27] ATVA Comments, supra note 8 at 4-5.

[28] NTCA Comments, supra note 9 at 3; see Luke Bouma, Another Cable TV Company Announces It Will Shut Down Its TV Service Because of “Extreme Price Increases from Programmers,” Cord Cutters News (Dec. 10, 2023), https://cordcuttersnews.com/another-cable-tv-company-announces-it-will-shut-down-its-tv-service-because-of-extreme-price-increases-from-programmers (reporting the announced shutdown of DUO Broadband’s cable TV and streaming TV services because of increased programming fees, affecting several Kentucky counties).

[29] ATVA Comments, supra note 8 at n. 15; DISH Comments, supra note 7 at 3, 8; NAB Comments, supra note 10 at 5; Comments of NCTA—The Internet & Television Association, MB Docket No. 24-20 (Mar. 8, 2024), https://www.fcc.gov/ecfs/document/1030958439598/1 [hereinafter “NCTA Comments”] at 2, 11.

[30] See ATVA Comments, supra note 8 at n. 19 (“With more distributors, programmers ‘lose less’ if they fail to reach agreement with any individual cable or satellite provider.”); Zarakas & Verlinda, supra note 7 at para. 6 (“This bargaining power has been further exacerbated by the increase in the number of distribution platforms coming from the growth of online video distributors. The bargaining leverage of cable networks has also received a boost from the proliferation of distribution platforms.”); id. at para. 13 (“Growth of OVDs has reduced MVPD bargaining leverage”).

[31] See DISH Comments, supra note 7 at 6 (“For one thing, the consolidation of the broadcast industry over the last ten years has exacerbated the imbalance further. This consolidation, fueled itself by the broadcasters’ interest in ever-steeper retransmission price increases, has effectively been a game of ‘and then there were none,’ with small independent groups of two or three stations progressively vanishing from the picture.”); Zarakas & Verlinda, supra note 7 at para. 6 (concluding consolidation among local networks is associated with increased retransmission fees).

[32] See Jeffrey A. Eisenach, The Economics of Retransmission Consent, at 9 n.22 (Empiris LLC, Mar. 2009), available at https://nab.org/documents/resources/050809EconofRetransConsentEmpiris.pdf.

[33] See Robert Marich, TV Faces Blackout Blues, Variety (Dec. 10, 2011), https://variety.com/2011/tv/news/tv-faces-blackout-blues-1118047261.

[34] See Economics of Broadcast TV Retransmission Revenue 2020, S&P Global Mkt. Intelligence (2020), https://www.spglobal.com/marketintelligence/en/news-insights/blog/economics-of-broadcast-tv-retransmission-revenue-2020.

[35] Cf. Zarakas & Verlinda, supra note 7 at para. 6.

[36] Retransmission Fee Revenue for U.S. Local TV Stations, Pew Research Center (Jul. 2022), https://www.pewresearch.org/journalism/chart/sotnm-local-tv-u-s-local-tv-station-retransmission-fee-revenue; Advertising Revenue for Local TV, Pew Research Center (Jul. 13, 2021), https://www.pewresearch.org/journalism/chart/sotnm-local-tv-advertising-revenue-for-local-tv.

[37] DISH Comments, supra note 7 at 4.

[38] Park et al., supra note 12 at 118 (“With stations receiving more retransmission compensation, a new phenomenon has also emerged since the 2010s: reverse retransmission revenues, whereby networks receive a portion of their affiliates and owned-and-operated stations’ retransmission revenues. As retransmission fees have become more important to television stations, broadcast networks and MVPDs, negotiations over contract terms and fees have become more contentious and protracted.”).

[39] Marich, supra note 33.

[40] DISH Comments, supra note 7 at 11.

[41] NCTA Comments, supra note 29 at 2.

[42] See Zarakas & Verlinda, supra note 7 at para. 15 (citing George S. Ford, A Retrospective Analysis of Vertical Mergers in Multichannel Video Programming Distribution Markets: The Comcast-NBCU Merger, Phoenix Ctr. for Advanced L. & Econ. Pub. Pol’y Studies (Dec. 2017), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3138713).

[43] Aaron Heresco & Stephanie Figueroa, Over the Top: Retransmission Fees and New Commodities in the U.S. Television Industry, 29 Democratic Communiqué 19, 36 (2020).

[44] NTCA Comments, supra note 9 at 3.

[45] NPRM, supra note 1 at paras. 6-8.

[46] See supra notes 21-22 and accompanying text.

[47] NPRM, supra note 1 at n. 9.

[48] See id. at para. 6.

[49] See id. at n. 12 (citing Alex Weprin, Weather Channel Brushes Off a Blackout, Politico (Feb. 6, 2014), https://www.politico.com/media/story/2014/02/weather-channel-brushes-off-a-blackout-001667); David Lieberman, The Weather Channel Returns To DirecTV, Deadline (Apr. 8, 2014), https://deadline.com/2014/04/the-weatherchannel-returns-directv-deal-711602.

[50] See U.S. Television Networks, USTVDB (retrieved Mar. 28, 2024), https://ustvdb.com/networks.

[51] See Lieberman, supra note 49.

[52] See NPRM, supra note 1 at para. 6.

[53] See NCTA Comments, supra note 29 at 5.

[54] See id. at 3; see also Lieberman, supra note 49 (indicating that carriage consent agreement ending a blackout of The Weather Channel on DIRECTV required The Weather Channel to cut its reality programming by half on weekdays).

[55] See Alex Weprin & Lesley Goldberg, What’s Next for Freeform After Being Dropped by Charter, Hollywood Reporter (Dec. 14, 2023), https://www.hollywoodreporter.com/tv/tv-news/freeform-disney-charter-hulu-1235589827 (reporting that Freeform is a Disney-owned cable channel that currently caters to younger women; the channel began as a spinoff of the Christian Broadcasting Network, was subsequently rebranded as The Family Channel, then Fox Family Channel, and then ABC Family, before rebranding as Freeform).

[56] See NCTA Comments, supra note 29 at 5.

[57] Verizon Comments, supra note 24 at 13 (“Also, as the Commission points out, ‘What if the parties never reach an agreement for carriage? Would subscribers be entitled to rebates in perpetuity and how would that be calculated?’ The absurdity of these questions underscores the absurdity of the proposed regulation.”).

[58] See DISH Comments, supra note 7 at 13.

[59] Id.; see also id. at 22 (“Broadcasters increasingly demand that an MVPD agree to carry other broadcast stations or cable networks as a condition of obtaining retransmission consent for the broadcaster’s primary signal, without giving a real economic alternative to carrying just the primary signal(s).”).

[60] ATVA Comments, supra note 8 at 13 (“[T]here is the additional complication that cable and satellite companies generally agree to confidentiality provisions with broadcasters and programmers—typically at the insistence of the broadcaster or programmer”); DISH Comments, supra note 7 at 21 (reporting broadcasters and programmers “insist” on confidentiality); NCTA Comments, supra note 29 at 6 (“It also bears emphasis that this approach would necessarily publicly expose per-subscriber rates and other highly confidential business information, and that the contracts between the parties prohibit disclosure of this and other information that each find competitively sensitive.”).

[61] NPRM, supra note 1 at para. 8.

[62] Spectrum Northeast, LLC v. Frey, 22 F.4th 287, 293 (1st Cir. 2022), cert denied, 143 S. Ct. 562 (2023); see also In the Matter of Promoting Competition in the American Economy: Cable Operator and DBS Provider Billing Practices, MB Docket No. 23-405, at n. 55 (Jan. 5, 2024), available at https://docs.fcc.gov/public/attachments/DOC-398660A1.pdf.

[63] NPRM, supra note 1 at para. 13.

[64] See supra note 59 and accompanying text for an example of a bundle.

[65] NCTA Comments, supra note 29 at 6.

[66] ATVA Comments, supra note 8 at 11.

[67] NCTA Comments, supra note 29 at 6.


Does the DMA Let Gatekeepers Protect Data Privacy and Security?

TOTM

It’s been an eventful two weeks for those following the story of the European Union’s implementation of the Digital Markets Act. On April 18, the European Commission began a series of workshops with the companies designated as “gatekeepers” under the DMA: Apple, Meta, Alphabet, Amazon, ByteDance, and Microsoft. And even as those workshops were still ongoing, the Commission announced noncompliance investigations against Alphabet, Apple, and Meta. Finally, the European Parliament’s Internal Market and Consumer Protection Committee (IMCO) held its own session on DMA implementation.

Many aspects of those developments are worth commenting on, and you can expect more competition-related analysis on Truth on the Market soon. Here, I will focus on what these developments mean for data privacy and security.



Antitrust at the Agencies Roundup: The Supply Chain, Part Deux

TOTM

For all my carping about this or that program or enforcement matter, it seems to me a very good thing that Congress passed—and President Joe Biden signed into law—the spending package that will keep much of the federal government up and running for Fiscal Year 2024 (see here for the news, and here and here for a couple of the consolidated appropriations bills just signed into law).



Liya Palagashvili on Gig Work

Presentations & Interviews

ICLE Academic Affiliate Liya Palagashvili was a guest on the Free the Economy podcast to discuss full- and part-time jobs, contractors, gig work, California’s Assembly Bill 5, recent U.S. Labor Department rules, and flexible workplace benefits. Video of the full interview is embedded below.


Lessons from GDPR for AI Policymaking

Scholarship

Abstract

The ChatGPT chatbot has not just caught the public imagination; it is also amplifying concern across industry, academia, and government policymakers interested in the regulation of Artificial Intelligence (AI) about how to understand the risks and threats associated with AI applications. Following the release of ChatGPT, some EU regulators proposed changes to the EU AI Act to classify AI systems like ChatGPT that generate complex texts without any human oversight as “high-risk” AI systems that would fall under the law’s requirements. That classification was a controversial one, with other regulators arguing that technologies like ChatGPT, which merely generate text, are “not risky at all.” This controversy risks disrupting coherent discussion and progress toward formulating sound AI regulations for Large Language Models (LLMs), AI, or ICTs more generally. It remains unclear where ChatGPT fits within AI and where AI fits within the larger context of digital policy and the regulation of ICTs in spite of nascent efforts by OECD.AI and the EU.

This paper aims to address two research questions around AI policy: (1) How are LLMs like ChatGPT shifting the policy discussions around AI regulations? (2) What lessons can regulators learn from the EU’s General Data Protection Regulation (GDPR) and other data protection policymaking efforts that can be applied to AI policymaking?

The first part of the paper addresses the question of how ChatGPT and other LLMs have changed the policy discourse in the EU and other regions around regulating AI, and what the broader implications of these shifts may be for AI regulation more widely. This section reviews the existing proposal for an EU AI Act and its accompanying classification of high-risk AI systems, considers the changes prompted by the release of ChatGPT, and examines how LLMs appear to have altered policymakers’ conceptions of the risks presented by AI. Finally, we present a framework for understanding how the security and safety risks posed by LLMs fit within the larger context of risks presented by AI and current efforts to formulate a regulatory framework for AI.

The second part of the paper considers the similarities and differences between the proposed AI Act and GDPR in terms of (1) organizations being regulated, or scope, (2) reliance on organizations’ self-assessment of potential risks, or degree of self-regulation, (3) penalties, and (4) technical knowledge required for effective enforcement, or complexity. For each of these areas, we consider how regulators scoped or implemented GDPR to make it manageable, enforceable, meaningful, and consistent across a wide range of organizations handling many different kinds of data as well as the extent to which they were successful in doing so. We then examine different ways in which those same approaches may or may not be applicable to the AI Act and the ways in which AI may prove more difficult to regulate than issues of data protection and privacy covered by GDPR. We also look at the ways in which AI may make it more difficult to enforce and comply with GDPR since the continued evolution of AI technologies may create cybersecurity tools and threats that will impact the efficacy of GDPR and privacy policies. This section argues that the extent to which the proposed AI Act relies on self-regulation and the technical complexity of enforcement are likely to pose significant challenges to enforcement based on the implementation of the most technologically and self-regulation-focused elements of GDPR.


ICLE Comments to NTIA on Dual-Use Foundation AI Models with Widely Available Model Weights

Regulatory Comments

I. Introduction

We thank the National Telecommunications and Information Administration (NTIA) for the opportunity to contribute to this request for comments (RFC) in the “Dual Use Foundation Artificial Intelligence Models with Widely Available Model Weights” proceeding. In these comments, we endeavor to offer recommendations to foster the innovative and responsible production of artificial intelligence (AI), encompassing both open-source and proprietary models. Our comments are guided by a belief in the transformative potential of AI, while recognizing NTIA’s critical role in guiding the development of regulations that not only protect consumers but also enable this dynamic field to flourish. The agency should seek to champion a balanced and forward-looking approach toward AI technologies that allows them to evolve in ways that maximize their social benefits, while navigating the complexities and challenges inherent in their deployment.

NTIA’s question “How should [the] potentially competing interests of innovation, competition, and security be addressed or balanced?”[1] gets to the heart of ongoing debates about AI regulation. There is no panacea to be discovered, as all regulatory choices require balancing tradeoffs. It is crucial to bear this in mind when evaluating, e.g., regulatory proposals that implicitly treat AI as inherently dangerous and regard as obvious that stringent regulation is the only effective strategy to mitigate such risks.[2] Such presumptions discount AI’s unknown but potentially enormous capacity to produce innovation, and inadequately account for other tradeoffs inherent to imposing a risk-based framework (e.g., requiring disclosure of trade secrets or particular kinds of transparency that could yield new cybersecurity attack vectors). Adopting an overly cautious stance risks not only stifling AI’s evolution, but may also preclude a fulsome exploration of its potential to foster social, economic, and technological advancement. A more restrictive regulatory environment may also render AI technologies more homogenous and smother development of the kinds of diverse AI applications needed to foster robust competition and innovation.

We observe this problematic framing in the executive order (EO) that serves as the provenance of this RFC.[3] The EO repeatedly proclaims the importance of “[t]he responsible development and use of AI” in order to “mitigat[e] its substantial risks.”[4] Specifically, the order highlights concerns over “dual-use foundation models”—i.e., AI systems that, while beneficial, could pose serious risks to national security, national economic security, national public health, or public safety.[5] Concerningly, one of the categories the EO flags as illicit “dual use” is systems “permitting the evasion of human control or oversight through means of deception or obfuscation.”[6] This open-ended category could be interpreted so broadly that essentially any general-purpose generative-AI system could qualify.

The EO also repeatedly distinguishes “open” versus “closed” approaches to AI development, while calling for “responsible” innovation and competition.[7] On our reading, the emphasis the EO places on this distinction raises alarm bells about the administration’s inclination to stifle innovation through overly prescriptive regulatory frameworks, diminishment of the intellectual property rights that offer incentives for innovation, and regulatory capture that favors incumbents over new entrants. In favoring one model of AI development over another, the EO’s prescriptions could inadvertently hamper the dynamic competitive processes that are crucial both for technological progress and for the discovery of solutions to the challenges that AI technology poses.

Given the inchoate nature of AI technology—much less the uncertain markets in which that technology will ultimately be deployed and commercialized—NTIA has an important role to play in elucidating for policymakers the nuances that might lead innovators to choose an open or closed development model, without presuming that one model is inherently better than the other—or that either is necessarily “dangerous.” Ultimately, the preponderance of AI risks will almost certainly emerge idiosyncratically. It will be incumbent on policymakers to address such risks in an iterative fashion as they become apparent. For now, it is critical to resist the urge to enshrine crude and blunt categories for the heterogeneous suite of technologies currently gathered under the broad banner of “AI.”

Section II of these comments highlights the importance of grounding AI regulation in actual harms, rather than speculative risks, while outlining the diversity of existing AI technologies and the need for tailored approaches. Section III starts with a discussion of some of the benefits and challenges posed by both open and closed approaches to AI development, while cautioning against overly prescriptive definitions of “openness” and advocating flexibility in regulatory frameworks. It proceeds to examine the EO’s prescription to regulate so-called “dual-use” foundation models, underscoring some potential unintended consequences for open-source AI development and international collaboration. Section IV offers some principles to craft an effective regulatory model for AI, including distinguishing between low-risk and high-risk applications, avoiding static regulatory approaches, and adopting adaptive mechanisms like regulatory sandboxes and iterative rulemaking. Section V concludes.

II. Risk Versus Harm in AI Regulation

In many of the debates surrounding AI regulation, disproportionate focus is placed on the need to mitigate risks, without sufficient consideration of the immense benefits that AI technologies could yield. Moreover, because these putative risks remain largely hypothetical, proposals to regulate AI descend quickly into an exercise in shadowboxing.

Indeed, there is no single coherent definition of what even constitutes “AI.” The term encompasses a wide array of technologies, methodologies, and applications, each with distinct characteristics, capabilities, and implications for society. From foundational models that can generate human-like text, to algorithms capable of diagnosing diseases with greater accuracy than human doctors, to “simple” algorithms that facilitate a more tailored online experience, AI applications and their underlying technologies are as varied as they are transformative.

This diversity has profound implications for the regulation and development of AI. Very different regulatory considerations are relevant to AI systems designed for autonomous vehicles than for those used in financial algorithms or creative-content generation. Each application domain comes with its own set of risks, benefits, ethical dilemmas, and potential social impacts, necessitating tailored approaches to each use case. And none of these properties of AI map clearly onto the “open” and “closed” designations highlighted by the EO and this RFC. This counsels for focus on specific domains and specific harms, rather than how such technologies are developed.[8]

As in prior episodes of fast-evolving technologies, what is considered cutting-edge AI today may be obsolete tomorrow. This rapid pace of innovation further complicates the task of crafting policies and regulations that will be both effective and enduring. Policymakers and regulators must navigate this terrain with a nuanced understanding of AI’s multifaceted nature, including by embracing flexible and adaptive regulatory frameworks that can accommodate AI’s continuing evolution.[9] A one-size-fits-all approach could inadvertently stifle innovation or entrench the dominance of a few large players by imposing barriers that disproportionately affect smaller entities or emerging technologies.

Experts in law and economics have long scrutinized both market conduct and regulatory rent seeking that serve to enhance or consolidate market power by disadvantaging competitors, particularly through increasing the costs incurred by rivals.[10] Various tactics may be employed to undermine competitors or exclude them from the market that do not involve direct price competition. It is widely recognized that “engaging with legislative bodies or regulatory authorities to enact regulations that negatively impact competitors” produces analogous outcomes.[11] It is therefore critical that the emerging markets for AI technologies not engender opportunities for firms to acquire regulatory leverage over rivals. Instead, recognizing the plurality of AI technologies and encouraging a multitude of approaches to AI development could help to cultivate a more vibrant and competitive ecosystem, driving technological progress forward and maximizing AI’s potential social benefits.

This overarching approach counsels skepticism about risk-based regulatory frameworks that fail to acknowledge how the theoretical harms of one type of AI system may be entirely different from those of another. Obviously, the regulation of autonomous drones is a very different sort of problem than the regulation of predictive policing or automated homework tutors. Even within a single circumscribed domain of generative AI—such as “smart chatbots” like ChatGPT or Claude—different applications may present entirely different kinds of challenges. A highly purpose-built version of such a system might be employed by government researchers to develop new materiel for the U.S. Armed Forces, while a general-purpose commercial chatbot would employ layers of protection to ensure that ordinary users couldn’t learn how to make advanced weaponry. Rather than treating “chatbots” as possible vectors for weapons development, a more appropriate focus would target high-capability systems designed to assist in developing such systems. Were it the case that a general-purpose chatbot inadvertently revealed some information on building weapons, all incentives would push that AI’s creators to treat it as a bug to fix, not a feature to expand.

Take, for example, the recent public response to the much less problematic AI-system malfunctions that accompanied Google’s release of its Gemini program.[12] Gemini was found to generate historically inaccurate images, such as ethnically diverse U.S. senators from the 1800s, including women.[13] Google quickly acknowledged that it did not intend for Gemini to create inaccurate historical images and turned off the image-generation feature to allow time for the company to work on significant improvements before re-enabling it.[14] While Google blundered in its initial release, it had every incentive to discover and remedy the problem. The market response provided further incentive for Google to get it right in the future.[15] Placing the development of such systems under regulatory scrutiny because some users might be able to jailbreak a model and generate some undesirable material would create disincentives to the production of AI systems more generally, with little gained in terms of public safety.

Rather than focus on the speculative risks of AI, it is essential to ground regulation in the need to address tangible harms that stem from the observed impacts of AI technologies on society. Moreover, focusing on realistic harms would facilitate a more dynamic and responsive regulatory approach. As AI technologies evolve and new applications emerge, so too will the potential harms. A regulatory framework that prioritizes actual harms can adapt more readily to these changes, enabling regulators to update or modify policies in response to new evidence or social impacts. This flexibility is particularly important for a field like AI, where technological advancements could quickly outpace regulation, creating gaps in oversight that may leave individuals and communities vulnerable to harm.

Furthermore, like any other body of regulatory law, AI regulation must be grounded in empirical evidence and data-driven decision making. Demanding a solid evidentiary basis as a threshold for intervention would help policymakers to avoid the pitfalls of reacting to sensationalized or unfounded AI fears. This would not only enhance regulators’ credibility with stakeholders, but would also ensure that resources are dedicated to addressing the most pressing and substantial issues arising from the development of AI.

III. The Regulation of Foundation Models

NTIA is right to highlight the tremendous promise that attends the open development of AI technologies:

Dual use foundation models with widely available weights (referred to here as open foundation models) could play a key role in fostering growth among less resourced actors, helping to widely share access to AI’s benefits…. Open foundation models can be readily adapted and fine-tuned to specific tasks and possibly make it easier for system developers to scrutinize the role foundation models play in larger AI systems, which is important for rights- and safety-impacting AI systems (e.g. healthcare, education, housing, criminal justice, online platforms etc.)

…Historically, widely available programming libraries have given researchers the ability to simultaneously run and understand algorithms created by other programmers. Researchers and journals have supported the movement towards open science, which includes sharing research artifacts like the data and code required to reproduce results.[16]

The RFC proceeds to seek input on how to define “open” and “widely available.”[17] These, however, are the wrong questions. NTIA should instead proceed from the assumption that there are no harms inherent to either “open” or “closed” development models; it should be seeking input on anything that might give rise to discrete harms in either open or closed systems.

NTIA can play a valuable role by recommending useful alterations to existing law where gaps currently exist, regardless of the business or distribution model employed by the AI developer. In short, there is nothing necessarily more or less harmful about adopting an “open” or a “closed” approach to software systems. The decision to pursue one path over the other will be made based on the relevant tradeoffs that particular firms face. Embedding such distinctions in regulation is arbitrary, at best, and counterproductive to the fruitful development of AI, at worst.

A. ‘Open’ or ‘Widely Available’ Model Weights

To the extent that NTIA is committed to drawing distinctions between “open” and “closed” approaches to developing foundation models, it should avoid overly prescriptive definitions of what constitutes “open” or “widely available” model weights that could significantly hamper the progress and utility of AI technologies.

Imposing narrow definitions risks creating artificial boundaries that fail to accurately reflect AI’s technical and operational realities. They could also inadvertently exclude or marginalize innovative AI models that fall outside those rigid parameters, despite their potential to contribute positively to technological advancement and social well-being. For instance, a definition of “open” that requires complete public accessibility without any form of control or restriction might discourage organizations from sharing their models, fearing misuse or loss of intellectual property.

Moreover, prescriptive definitions could stifle the organic growth and evolution of AI technologies. The AI field is characterized by its rapid pace of change, where today’s cutting-edge models may become tomorrow’s basic tools. Prescribing fixed criteria for what constitutes “openness” or “widely available” risks anchoring the regulatory landscape to this specific moment in time, leaving the regulatory framework less able to adapt to future developments and innovations.

Given AI developers’ vast array of applications, methodologies, and goals, it is imperative that any definitions of “open” or “widely available” model weights embrace flexibility. A flexible approach would acknowledge how the various stakeholders within the AI ecosystem have differing needs, resources, and objectives, from individual developers and academic researchers to startups and large enterprises. A one-size-fits-all definition of “openness” would fail to accommodate this diversity, potentially privileging certain forms of innovation over others and skewing the development of AI technologies in ways that may not align with broader social needs.

Moreover, flexibility in defining “open” and “widely available” must allow for nuanced understandings of accessibility and control. There can, for example, be legitimate reasons to limit openness, such as protecting sensitive data, ensuring security, and respecting intellectual-property rights, while still promoting a culture of collaboration and knowledge sharing. A flexible regulatory approach would seek a balanced ecosystem where the benefits of open AI models are maximized, and potential risks are managed effectively.

B. The Benefits of ‘Open’ vs ‘Closed’ Business Models

NTIA asks:

What benefits do open model weights offer for competition and innovation, both in the AI marketplace and in other areas of the economy? In what ways can open dual-use foundation models enable or enhance scientific research, as well as education/training in computer science and related fields?[18]

An open approach to AI development has obvious benefits, as NTIA has itself acknowledged in other contexts.[19] Open-foundation AI models represent a transformative force, characterized by their accessibility, adaptability, and potential for widespread application across various sectors. The openness of these models may serve to foster an environment conducive to innovation, wherein developers, researchers, and entrepreneurs can build on existing technologies to create novel solutions tailored to diverse needs and challenges.

The inherent flexibility of open-foundation models can also catalyze a competitive market, encouraging a healthy ecosystem where entities ranging from startups to established corporations may all participate on roughly equal footing. By lowering some entry barriers related to access to basic AI technologies, this competitive environment can further drive technological advancements and price efficiencies, ultimately benefiting consumers and society at large.

But more “closed” approaches can also prove very valuable. As NTIA notes in this RFC, it is rarely the case that a firm pursues a purely open or closed approach. These terms exist along a continuum, and firms blend models as necessary.[20] And just as firms readily mix elements of open and closed business models, a regulator should be agnostic about the precise mix that firms employ, which ultimately must align with the realities of market dynamics and consumer preferences.

Both open and closed approaches offer distinct benefits and potential challenges. For instance, open approaches might excel in fostering a broad and diverse ecosystem of applications, thereby appealing to users and developers who value customization and variety. They can also facilitate a more rapid dissemination of innovation, as they typically impose fewer restrictions on the development and distribution of new applications. Conversely, closed approaches, with their curated ecosystems, often provide enhanced security, privacy, and a more streamlined user experience. This can be particularly attractive to users less inclined to navigate the complexities of open systems. Under the right conditions, closed systems can likewise foster a healthy ecosystem of complementary products.

The experience of modern digital platforms demonstrates that there is no universally optimal approach to structuring business activities, thus illustrating the tradeoffs inherent in choosing among open and closed business models. The optimal choice depends on the specific needs and preferences of the relevant market participants. As Jonathan M. Barnett has noted:

Open systems may yield no net social gain over closed systems, can pose a net social loss under certain circumstances, and . . . can impose a net social gain under yet other circumstances.[21]

Similar considerations apply in the realm of AI development. Closed or semi-closed ecosystems can offer such advantages as enhanced security and curated offerings, which may appeal to certain users and developers. These benefits, however, may come at the cost of potentially limited innovation, as a firm must rely on its own internal processes for research and development. Open models, on the other hand, while fostering greater collaboration and creativity, may also introduce risks related to quality control, intellectual-property protection, and a host of other concerns that may be better controlled in a closed business model. Even along innovation dimensions, closed platforms can in many cases outperform open models.

With respect to digital platforms like the App Store and Google Play Store, there is a “fundamental welfare tradeoff between two-sided proprietary…platforms and two-sided platforms which allow ‘free entry’ on both sides of the market.”[22] Consequently, “it is by no means obvious which type of platform will create higher product variety, consumer adoption and total social welfare.”[23]

To take another example, consider the persistently low adoption rates for consumer versions of the open-source Linux operating system, versus more popular alternatives like Windows or MacOS.[24] A closed model like Apple’s MacOS is able to outcompete open solutions by better leveraging network effects and developing a close relationship with end users.[25] Even in this example, adoption of open versus closed models varies across user types, with, e.g., developers showing a strong preference for Linux over Mac, and only a slight preference for Windows over Linux.[26] This underscores the point that the suitability of an open or closed model varies not only by firm and product, nor even solely by user, but by the unique fit of a particular model for a particular user in a particular context. Many of those Linux-using developers will likely not use it on their home computing device, for example, even if they prefer it for work.

The dynamics among consumers and developers further complicate prevailing preferences for open or closed models. For some users, the security and quality assurance provided by closed ecosystems outweigh the benefits of open systems’ flexibility. On the developer side, the lower barriers to entry in more controlled ecosystems that smooth the transaction costs associated with developing and marketing applications can democratize application development, potentially leading to greater innovation within those ecosystems. Moreover, distinctions between open and closed models can play a critical role in shaping inter-brand competition. A regulator placing its thumb on the business-model scale would push the relevant markets toward less choice and lower overall welfare.[27]

By differentiating themselves through a focus on ease-of-use, quality, security, and user experience, closed systems contribute to a vibrant competitive landscape where consumers have clear choices between differing “brands” of AI. Forcing an AI developer to adopt practices that align with a regulator’s preconceptions about the relative value of “open” and “closed” risks homogenizing the market and diminishing the very competition that spurs innovation and consumer choice.

Consider some of the practical benefits sought by deployers when choosing between open and closed models. For example, it’s not straightforward to say that closed is inherently better than open when considering issues of data sharing or security; even here, there are tradeoffs. Open innovation in AI—characterized by the sharing of data, algorithms, and methodologies within the research community and beyond—can mitigate many of the risks associated with model development. This openness fosters a culture of transparency and accountability, where AI models and their applications are subject to scrutiny by a broad community of experts, practitioners, and the general public. This collective oversight can help to identify and address potential safety and security concerns early in the development process, thus enhancing AI technologies’ overall trustworthiness.

By contrast, a closed system may implement and enforce standardized security protocols more quickly. A closed system may have a sharper, more centralized focus on providing data security to users, which may perform better along some dimensions. And while the availability of code may provide security in some contexts, in other circumstances, closed systems perform better.[28]

In considering ethical AI development, different types of firms should be free to experiment with different approaches, even blending them where appropriate. For example, Anthropic’s “Collective Constitutional AI” approach, used in developing its Claude models, adopts what is arguably a “semi-open” model, blending proprietary elements with certain aspects of openness to foster innovation, while also maintaining a level of control.[29] This model might strike an appropriate balance, in that it ensures some degree of proprietary innovation and competitive advantage while still benefiting from community feedback and collaboration.

On the other hand, fully open-source development could lead to a different, potentially superior result that meets a broader set of needs through community-driven evolution and iteration. There is no way to determine, ex ante, that either an open or a closed approach to AI development will inherently provide superior results for developing “ethical” AI. Each has its place, and, most likely, the optimal solutions will involve elements of both approaches.

In essence, codifying a regulatory preference for one business model over the other would oversimplify the intricate balance of tradeoffs inherent to platform ecosystems. Economic theory and empirical evidence suggest that both open and closed platforms can drive innovation, serve consumer interests, and stimulate healthy competition, with all of these considerations depending heavily on context. Regulators should therefore aim for flexible policies that support coexistence of diverse business models, fostering an environment where innovation can thrive across the continuum of openness.

C. Dual-Use Foundation Models and Transparency Requirements

The EO and the RFC both focus extensively on so-called “dual-use” foundation models:

Foundation models are typically defined as, “powerful models that can be fine-tuned and used for multiple purposes.” Under the Executive Order, a “dual-use foundation model” is “an AI model that is trained on broad data; generally uses self-supervision, contains at least tens of billions of parameters; is applicable across a wide range of contexts; and that exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters….”[30]

But this framing will likely do more harm than good. As noted above, the terms “AI” or “AI model” are frequently invoked to refer to very different types of systems. Further defining these models as “dual use” is also unhelpful, as virtually any tool in existence can be “dual use” in this sense. Indeed, from a certain perspective, all software—particularly highly automated software—can pose a serious risk to “national security” or “safety.” Encryption and other privacy-protecting tools certainly fit this definition.[31] While it is crucial to mitigate harms associated with the misuse of AI technologies, the blanket treatment of all foundation models under this category is overly simplistic.

The EO identifies certain clear risks, such as the possibility that models could aid in the creation of chemical, biological, or nuclear weaponry. These categories are obvious subjects for regulatory control, but the EO then appears to open a giant definitional loophole that threatens to subsume virtually any useful AI system. It employs expansive terminology to describe a more generalized threat—specifically, that dual-use models could “[permit] the evasion of human control or oversight through means of deception or obfuscation.”[32] Such language could encompass a wide array of general-purpose AI models. Furthermore, by labeling systems capable of bypassing human decision making as “dual use,” the order implicitly suggests that all AI could pose risks warranting national-security levels of scrutiny.

Given the EO’s broad definition of AI as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments,” numerous software systems not typically even considered AI might be categorized as “dual-use” models.[33] Essentially, any sufficiently sophisticated statistical-analysis tool could qualify under this definition.

A significant repercussion of the EO’s very broad reporting mandates for dual-use systems, and one directly relevant to the RFC’s interest in promoting openness, is that these might chill open-source AI development.[34] Firms dabbling in AI technologies—many of which might not consider their projects to be dual use—might keep their initiatives secret until they are significantly advanced. Faced with the financial burden of adhering to the EO’s reporting obligations, companies that lack a sufficiently robust revenue model to cover both development costs and legal compliance might be motivated to dodge regulatory scrutiny in the initial phases, consequently dampening the prospects for transparency.

It is hard to imagine how open-source AI projects could survive in such an environment. Open-source AI code libraries like TensorFlow[35] and PyTorch[36] foster remarkable innovation by allowing developers to create new applications that use cutting-edge models. How could a paradigmatic startup developer working out of a garage genuinely commit to open-source development if tools like these fall under the EO’s jurisdiction? Restricting access to the weights that models use—let alone avoiding open-source development entirely—may hinder independent researchers’ ability to advance the forefront of AI technology.
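To make the mechanics concrete, the following toy sketch illustrates what an “open weights” release amounts to in practice. It uses only NumPy and entirely made-up data—no real model, library API, or release format is implied: an upstream developer publishes a model’s learned parameters, and a downstream developer loads and fine-tunes them for a local task.

```python
import numpy as np

# Toy illustration only: an "open weights" release is, at bottom,
# the publication of a model's learned parameters so others can
# run and adapt the model without retraining it from scratch.

rng = np.random.default_rng(0)

# 1. An upstream lab fits a tiny linear model and publishes its weights.
X = rng.normal(size=(200, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=200)
published_weights, *_ = np.linalg.lstsq(X, y, rcond=None)  # the "release"

# 2. A downstream developer loads the published weights and fine-tunes
#    them on a small amount of local data via gradient descent.
X_local = rng.normal(size=(20, 3))
y_local = X_local @ np.array([1.5, -2.0, 1.0])  # local task differs slightly
w = published_weights.copy()
for _ in range(500):
    grad = 2 * X_local.T @ (X_local @ w - y_local) / len(y_local)
    w -= 0.05 * grad

# The fine-tuned weights fit the local task better than the originals.
err_published = np.mean((X_local @ published_weights - y_local) ** 2)
err_finetuned = np.mean((X_local @ w - y_local) ** 2)
assert err_finetuned < err_published
```

Real foundation models involve billions of parameters rather than three, but the economic logic the RFC highlights is the same: once weights are published, adaptation is cheap relative to training from scratch, which is precisely what lowers barriers for less-resourced actors.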

Moreover, scientific endeavors typically benefit from the contributions of researchers worldwide, as collaborative efforts on a global scale are known to fast-track innovation. The pressure the EO applies to open-source development of AI tools could curtail international cooperation, thereby distancing American researchers from crucial insights and collaborations. For example, AI’s capacity to propel progress in numerous scientific areas is potentially vast—e.g., utilizing MRI images and deep learning for brain-tumor diagnoses[37] or employing machine learning to push the boundaries of materials science.[38] Such research does not benefit from stringent secrecy, but thrives on collaborative development. Enabling a broader community to contribute to and expand upon AI advancements supports this process.

Individuals respond to incentives. Just as well-intentioned seatbelt laws paradoxically led to an uptick in risky driving behaviors,[39] ill-considered obligations placed on open-source AI developers could unintentionally stifle the exchange of innovative concepts crucial to maintaining the United States’ leadership in AI innovation.

IV. Regulatory Models that Support Innovation While Managing Risks Effectively

In the rapidly evolving landscape of artificial intelligence, it is paramount to establish governance and regulatory frameworks that both encourage innovation and ensure safety and ethical integrity. An effective regulatory model for AI should be adaptive, principles-based, and foster a collaborative environment among regulators, developers, researchers, and the broader community. A number of principles can help in developing this regime.

A. Low-Risk vs High-Risk AI

First, a clear distinction should be made between low-risk AI applications that enhance operational efficiency or consumer experience and high-risk applications that could have significant safety implications. Low-risk applications like search algorithms and chatbots should be governed by a set of baseline ethical guidelines and best practices that encourage innovation, while ensuring basic standards are met. On the other hand, high-risk applications—such as those used by law enforcement or the military—would require more stringent review processes, including impact assessments, ethical reviews, and ongoing monitoring to mitigate potentially adverse effects.

Contrast this with the recently enacted AI Act in the European Union, and its decision to create presumptions of risk for general-purpose AI (GPAI) systems, such as large language models (LLMs), that present what the EU terms “systemic risk.”[40] Article 3(65) of the AI Act defines systemic risk as “a risk that is specific to the high-impact capabilities of general-purpose AI models, having a significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain.”[41]

This definition bears similarities to the “Hand formula” in U.S. tort law, which balances the burden of precautions against the probability and severity of potential harm to determine negligence.[42] The AI Act’s notion of systemic risk, however, is applied more broadly to entire categories of AI systems based on their theoretical potential for widespread harm, rather than on a case-by-case basis.
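For reference, the Hand formula from United States v. Carroll Towing can be stated compactly: a party is negligent when the burden of adequate precautions is less than the expected harm, i.e., the probability of harm multiplied by the gravity of the resulting loss:

```latex
% Learned Hand formula: negligence is found where the burden of
% adequate precautions (B) is less than the probability of harm (P)
% multiplied by the gravity of the resulting injury (L):
B < P \cdot L
```

The contrast drawn above is thus that the Hand formula weighs $B$, $P$, and $L$ for a particular defendant's conduct in a particular case, whereas the AI Act assigns "systemic risk" to whole classes of models before any case-specific probability or severity of harm is assessed.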

The designation of LLMs as posing “systemic risk” is problematic for several reasons. It creates a presumption of risk merely based on a GPAI system’s scale of operations, without any consideration of the actual likelihood or severity of harm in specific use cases. This could lead to unwarranted regulatory intervention and unintended consequences that hinder the development and deployment of beneficial AI technologies. And this broad definition of systemic risk gives regulators significant leeway to intervene in how firms develop and release their AI products, potentially blocking access to cutting-edge tools for European citizens, even in the absence of tangible harms.

While it is important to address potential risks associated with AI systems, the AI Act’s approach risks stifling innovation and hindering the development of beneficial AI technologies within the EU.

B. Avoid Static Regulatory Approaches

AI regulators are charged with overseeing a dynamic and rapidly developing market, and should therefore avoid erecting a rigid framework that forces new innovations into ill-fitting categories. The “regulatory sandbox” may provide a better model to balance innovation with risk management. By allowing developers to test and refine AI technologies in a controlled environment under regulatory oversight, sandboxes can be used to help identify and address potential issues before wider deployment, all while facilitating dialogue between innovators and regulators. This approach not only accelerates the development of safe and ethical AI solutions, but also builds mutual understanding and trust. Where possible, NTIA should facilitate policy experimentation with regulatory sandboxes in the AI context.

Meta’s Open Loop program is an example of this kind of experimentation.[43] This program is a policy prototyping research project focused on evaluating the National Institute of Standards and Technology (NIST) AI Risk Management Framework (RMF) 1.0.[44] The goal is to assess whether the framework is understandable, applicable, and effective in assisting companies to identify and manage risks associated with generative AI. It also provides companies an opportunity to familiarize themselves with the NIST AI RMF and its application in risk-management processes for generative AI systems. Additionally, it aims to collect data on existing practices and offer feedback to NIST, potentially influencing future RMF updates.

1. Regulation as a discovery process

Another key principle is to ensure that regulatory mechanisms are adaptive. Some examples of adaptive mechanisms are iterative rulemaking and feedback loops that allow regulations to be updated continuously in response to new developments and insights. Such mechanisms enable policymakers to respond swiftly to technological breakthroughs, ensuring that regulations remain relevant and effective, without stifling innovation.

Geoffrey Manne & Gus Hurwitz have recently proposed a framework for “regulation as a discovery process” that could be adapted to AI.[45] They argue for a view of regulation not merely as a mechanism for enforcing rules, but as a process for discovering information that can inform and improve regulatory approaches over time. This perspective is particularly pertinent to AI, where the pace of innovation and the complexity of technologies often outstrip regulators’ understanding and ability to predict future developments. This framework:

in its simplest formulation, asks regulators to consider that they might be wrong. That they might be asking the wrong questions, collecting the wrong information, analyzing it the wrong way—or even that Congress has given them the wrong authority or misunderstood the problem that Congress has tasked them to address.[46]

That is to say, an adaptive approach to regulation requires epistemic humility, with the understanding that, particularly for complex, dynamic industries:

there is no amount of information collection or analysis that is guaranteed to be “enough.” As Coase said, the problem of social cost isn’t calculating what those costs are so that we can eliminate them, but ascertaining how much of those social costs society is willing to bear.[47]

In this sense, modern regulators’ core challenge is to develop processes that allow for iterative development of knowledge, which is always in short supply. This requires a shift in how an agency conceptualizes its mission, from one of writing regulations to one of assisting lawmakers to assemble, filter, and focus on the most relevant and pressing information needed to understand a regulatory subject’s changing dynamics.[48]

As Hurwitz & Manne note, existing efforts to position some agencies as information-gathering clearinghouses suffer from a number of shortcomings—most notably, that they tend to operate on an ad hoc basis, reporting to Congress in response to particular exigencies.[49] The key to developing a “discovery process” for AI regulation would instead require setting up ongoing mechanisms to gather and report on data, as well as directing the process toward “specifications for how information should be used, or what the regulator anticipated to find in the information, prior to its collection.”[50]

Embracing regulation as a discovery process means acknowledging the limits of our collective knowledge about AI’s potential risks and benefits. This underscores why regulators should prioritize generating and utilizing new information through regulatory experiments, iterative rulemaking, and feedback loops. A more adaptive regulatory framework could respond to new developments and insights in AI technologies, thereby ensuring that regulations remain relevant and effective, without stifling innovation.

Moreover, Hurwitz & Manne highlight the importance of considering regulation as an information-producing activity.[51] In AI regulation, this could involve setting up mechanisms that allow regulators, innovators, and the public to contribute to and benefit from a shared pool of knowledge about AI’s impacts. This could include public databases of AI incidents, standardized reporting of AI-system performance, or platforms for sharing best practices in AI safety and ethics.

Static regulatory approaches may fail to capture the evolving landscape of AI applications and their societal implications. Instead, a dynamic, information-centric regulatory strategy that embraces the market as a discovery process could better facilitate beneficial innovations, while identifying and mitigating harms.

V. Conclusion

As the NTIA navigates the complex landscape of AI regulation, it is imperative to adopt a nuanced, forward-looking approach that balances the need to foster innovation with the imperatives of ensuring public safety and ethical integrity. The rapid evolution of AI technologies necessitates a regulatory framework that is both adaptive and principles-based, eschewing static snapshots of the current state of the art in favor of flexible mechanisms that could accommodate the dynamic nature of this field.

Central to this approach is to recognize that the field of AI encompasses a diverse array of technologies, methodologies, and applications, each with its distinct characteristics, capabilities, and implications for society. A one-size-fits-all regulatory model would not only be ill-suited to the task at hand, but would also risk stifling innovation and hindering the United States’ ability to maintain its leadership in the global AI industry. NTIA should focus instead on developing tailored approaches that distinguish between low-risk and high-risk applications, ensuring that regulatory interventions are commensurate with the potential identifiable harms and benefits associated with specific AI use cases.

Moreover, the NTIA must resist the temptation to rely on overly prescriptive definitions of “openness” or to favor particular business models over others. The coexistence of open and closed approaches to AI development is essential to foster a vibrant, competitive ecosystem that drives technological progress and maximizes social benefits. By embracing a flexible regulatory framework that allows for experimentation and iteration, the NTIA can create an environment conducive to innovation while still ensuring that appropriate safeguards are in place to mitigate potential risks.

Ultimately, the success of the U.S. AI industry will depend on the ability of regulators, developers, researchers, and the broader community to collaborate in developing governance frameworks that are both effective and adaptable. By recognizing the importance of open development and diverse business models, the NTIA can play a crucial role in shaping the future of AI in ways that promote innovation, protect public interests, and solidify the United States’ position as a global leader in this transformative field.

[1] Dual Use Foundation Artificial Intelligence Models With Widely Available Model Weights, Docket No. 240216-0052, 89 FR 14059, National Telecommunications and Information Administration (Mar. 27, 2024) at 14063, question 8(a) [hereinafter “RFC”].

[2] See, e.g., Kristian Stout, Systemic Risk and Copyright in the EU AI Act, Truth on the Market (Mar. 19, 2024), https://truthonthemarket.com/2024/03/19/systemic-risk-and-copyright-in-the-eu-ai-act.

[3] Exec. Order No. 14110, 88 F.R. 75191 (2023), https://www.federalregister.gov/documents/2023/11/01/2023-24283/safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence?_fsi=C0CdBzzA [hereinafter “EO”].

[4] See, e.g., EO at §§ 1, 2(c), 5.2(e)(ii), and 8(c).

[5] Id. at § 3(k).

[6] Id. at § 3(k)(iii).

[7] Id. at § 4.6. As NTIA notes, the administration refers to “widely available model weight,” which is equivalent to “open foundation models” in this proceeding. RFC at 14060.

[8] For more on the “open” vs “closed” distinction and its poor fit as a regulatory lens, see, infra, at nn. 19-41 and accompanying text.

[9] Adaptive regulatory frameworks are discussed, infra, at nn. 42-53 and accompanying text.

[10] See Steven C. Salop & David T. Scheffman, Raising Rivals’ Costs, 73:2 Am. Econ. R. 267, 267–71 (1983), http://www.jstor.org/stable/1816853.

[11] See Steven C. Salop & David T. Scheffman, Cost-Raising Strategies, 36:1 J. Indus. Econ. 19 (1987), https://doi.org/10.2307/2098594.

[12] Cindy Gordon, Google Pauses Gemini AI Model After Latest Debacle, Forbes (Feb. 29, 2024), https://www.forbes.com/sites/cindygordon/2024/02/29/google-latest-debacle-has-paused-gemini-ai-model/?sh=3114d093536c.

[13] Id.

[14] Id.

[15] Breck Dumas, Google Loses $96B in Value on Gemini Fallout as CEO Does Damage Control, Yahoo Finance (Feb. 28, 2024), https://finance.yahoo.com/news/google-loses-96b-value-gemini-233110640.html.

[16] RFC at 14060.

[17] RFC at 14062, question 1.

[18] RFC at 14062, question 3(a).

[19] Department of Commerce, Competition in the Mobile Application Ecosystem (2023), https://www.ntia.gov/report/2023/competition-mobile-app-ecosystem (“While retaining appropriate latitude for legitimate privacy, security, and safety measures, Congress should enact laws and relevant agencies should consider measures (such as rulemaking) designed to open up distribution of lawful apps, by prohibiting… barriers to the direct downloading of applications.”).

[20] RFC at 14061 (“‘openness’ or ‘wide availability’ of model weights are also terms without clear definition or consensus. There are gradients of ‘openness,’ ranging from fully ‘closed’ to fully ‘open’”).

[21] See Jonathan M. Barnett, The Host’s Dilemma: Strategic Forfeiture in Platform Markets for Informational Goods, 124 Harv. L. Rev. 1861, 1927 (2011).

[22] Id. at 2.

[23] Id. at 3.

[24] Desktop Operating System Market Share Worldwide Feb 2023 – Feb 2024, statcounter, https://gs.statcounter.com/os-market-share/desktop/worldwide (last visited Mar. 27, 2024).

[25] Andrei Hagiu, Proprietary vs. Open Two-Sided Platforms and Social Efficiency (Harv. Bus. Sch. Strategy Unit, Working Paper No. 09-113, 2006).

[26] Joey Sneddon, More Developers Use Linux than Mac, Report Shows, Omg Linux (Dec. 28, 2022), https://www.omglinux.com/devs-prefer-linux-to-mac-stackoverflow-survey.

[27] See Michael L. Katz & Carl Shapiro, Systems Competition and Network Effects, 8 J. Econ. Persp. 93, 110 (1994) (“[T]he primary cost of standardization is loss of variety: consumers have fewer differentiated products to pick from, especially if standardization prevents the development of promising but unique and incompatible new systems”).

[28] See, e.g., Nokia, Threat Intelligence Report 2020 (2020), https://www.nokia.com/networks/portfolio/cyber-security/threat-intelligence-report-2020; Randal C. Picker, Security Competition and App Stores, Network Law Review (Aug. 23, 2021), https://www.networklawreview.org/picker-app-stores.

[29] Collective Constitutional AI: Aligning a Language Model with Public Input, Anthropic (Oct. 17, 2023), https://www.anthropic.com/news/collective-constitutional-ai-aligning-a-language-model-with-public-input.

[30] RFC at 14061.

[31] Encryption and the “Going Dark” Debate, Congressional Research Service (2017), https://crsreports.congress.gov/product/pdf/R/R44481.

[32] EO at § 3(k)(iii).

[33] EO at § 3(b).

[34] EO at § 4.2 (requiring companies developing dual-use foundation models to provide ongoing reports to the federal government on their activities, security measures, model weights, and red-team testing results).

[35] An End-to-End Platform for Machine Learning, TensorFlow, https://www.tensorflow.org (last visited Mar. 27, 2024).

[36] Learn the Basics, PyTorch, https://pytorch.org/tutorials/beginner/basics/intro.html (last visited Mar. 27, 2024).

[37] Akmalbek Bobomirzaevich Abdusalomov, Mukhriddin Mukhiddinov, & Taeg Keun Whangbo, Brain Tumor Detection Based on Deep Learning Approaches and Magnetic Resonance Imaging, 15(16) Cancers (Basel) 4172 (2023), available at https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10453020.

[38] Keith T. Butler, et al., Machine Learning for Molecular and Materials Science, 559 Nature 547 (2018), available at https://www.nature.com/articles/s41586-018-0337-2.

[39] The Peltzman Effect, The Decision Lab, https://thedecisionlab.com/reference-guide/psychology/the-peltzman-effect (last visited Mar. 27, 2024).

[40] European Parliament, European Parliament Legislative Resolution of 13 March 2024 on the Proposal for a Regulation of the European Parliament and of the Council on Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts, COM/2021/206, available at https://www.europarl.europa.eu/doceo/document/TA-9-2024-0138_EN.html [hereinafter “EU AI Act”].

[41] Id. at Art. 3(65).

[42] See Stephen G. Gilles, On Determining Negligence: Hand Formula Balancing, the Reasonable Person Standard, and the Jury, 54 Vanderbilt L. Rev. 813, 842-49 (2001).

[43] See Open Loop’s First Policy Prototyping Program in the United States, Meta, https://www.usprogram.openloop.org (last visited Mar. 27, 2024).

[44] Id.

[45] Justin (Gus) Hurwitz & Geoffrey A. Manne, Pigou’s Plumber: Regulation as a Discovery Process, SSRN (2024), available at https://laweconcenter.org/resources/pigous-plumber.

[46] Id. at 32.

[47] Id. at 33.

[48] See id. at 28-29.

[49] Id. at 37.

[50] Id. at 37-38.

[51] Id.


FCC’s Digital-Discrimination Rules Could Delay Broadband

Popular Media

When Congress passed the Infrastructure Investment and Jobs Act (IIJA) near the end of 2021, it included a short provision that required the Federal Communications Commission to adopt rules to prevent “digital discrimination.” At the time, it was understood the law intended to prohibit broadband providers from intentionally discriminating in their deployment decisions based on “income level, race, ethnicity, color, religion, or national origin.”

Read the full piece here.
