
Brief of ICLE and ITIF to 8th Circuit in Minnesota Telecom Alliance v. FCC


STATEMENTS OF INTEREST

The International Center for Law & Economics (“ICLE”) is a nonprofit, non-partisan global research and policy center that builds intellectual foundations for sensible, economically grounded policy. ICLE promotes the use of law and economics methodologies and economic learning to inform policy debates and has longstanding expertise evaluating law and policy.

ICLE scholars have written extensively in the areas of telecommunications and broadband policy. This includes white papers, law journal articles, and amicus briefs touching on issues related to the provision and regulation of broadband Internet service.

The FCC’s final rule by Report and Order concerning “digital discrimination” (the Order), published in the Federal Register on January 22, 2024, constitutes a significant change in economic policy. Broadband alone is a $112 billion industry with over 125 million customers. If permitted to stand, the FCC’s broad Order will harm the dynamic marketplace for broadband that presently exists in the United States.

The Information Technology and Innovation Foundation (“ITIF”) is an independent non-profit, non-partisan think tank. ITIF’s mission is to formulate, evaluate, and promote policy solutions that accelerate innovation and boost productivity to spur growth, opportunity, and progress. To that end, ITIF strives to provide policymakers around the world with high-quality information, analysis, and recommendations they can trust. ITIF adheres to the highest standards of research integrity, guided by an internal code of ethics grounded in analytical rigor, policy pragmatism, and independence from external direction or bias.

ITIF’s mission is to advance public policies that accelerate the progress of technological innovation. ITIF believes that innovation can almost always be a force for good. It is the major driver of human advancement and the essential means for improving societal welfare. A robust rate of innovation makes it possible to achieve many other goals—including increases in median per-capita income, improved health, transportation mobility, and a cleaner environment. ITIF engages in policy and legal debates, both directly and indirectly, by presenting policymakers, courts, and other policy influencers with compelling data, analysis, arguments, and proposals to advance effective innovation policies and oppose counterproductive ones.

The FCC’s Order will have a significant impact on the speed and adoption of technological innovation in the United States. The Order not only raises the cost of deployment investments, but it also increases the risk of liability for discrimination, thereby increasing the uncertainty of the investments’ returns. As a result, the Order will not only stifle new deployment to unserved areas, but also will delay network upgrades and maintenance out of fear of alleged disparate effects.

Pursuant to Federal Rule of Appellate Procedure 29(a)(2), ICLE and ITIF have obtained consent of the parties to file the instant Brief of the International Center for Law & Economics and the Information Technology and Innovation Foundation as Amici Curiae In Support of Petitioners.

INTRODUCTION AND SUMMARY OF ARGUMENT

The present marketplace for broadband in the United States is dynamic and generally serves consumers well. See Geoffrey A. Manne, Kristian Stout, & Ben Sperry, A Dynamic Analysis of Broadband Competition: What Concentration Numbers Fail to Capture (ICLE White Paper, Jun. 2021), https://laweconcenter.org/wp-content/uploads/2021/06/A-Dynamic-Analysis-of-Broadband-Competition.pdf. Broadband providers acting in the marketplace have invested $2.1 trillion in building, maintaining, and improving their networks since 1996, including $102.4 billion in 2022 alone. See USTelecom, 2022 Broadband Capex Report (Sept. 8, 2023), https://www.ustelecom.org/research/2022-broadband-capex/. The FCC’s own data suggests that 91% of Americans have access to high-speed broadband under its new and faster definition. See 2024 706 Report, FCC 24-27, GN Docket No. 22-270, at paras. 20, 22 (Mar. 18, 2024).

Despite this, there are areas in the country, primarily due to low population density, where serving consumers is prohibitively expensive. Moreover, affordability remains a concern for some lower-income groups. To address these concerns, Congress passed the Infrastructure Investment and Jobs Act (IIJA), Pub. L. No. 117-58, 135 Stat. 429, which invested $42.45 billion in building out broadband to rural areas through the Broadband Equity, Access, and Deployment (BEAD) Program, and billions more in the Affordable Connectivity Program (ACP), which provided low-income individuals a $30 per month voucher. Congress’s passage of the IIJA was consistent with sustaining the free and dynamic market for broadband.

In addition, to address concerns that broadband providers could engage in discriminatory behavior in deployment decisions, Section 60506(b) of IIJA requires that “[n]ot later than 2 years after November 15, 2021, the Commission shall adopt final rules to facilitate equal access to broadband internet access services, taking into account the issues of technical and economic feasibility presented by that objective, including… preventing digital discrimination of access based on income level, race, ethnicity, color, religion, or national origin.” Pub. L. No. 117-58, § 60506(b)(1), 135 Stat. 429, 1246.

The FCC’s final rule by Report and Order was published in the Federal Register on January 22, 2024. See 89 Fed. Reg. 4128 (Jan. 22, 2024) [hereinafter “Order”], attached as the Addendum to Petitioners’ Brief (“Pet. Add.”). But the digital discrimination rule issued in this Order is inconsistent with the IIJA, is so expansive as to claim regulatory authority over major political and economic questions, and is arbitrary and capricious. As a result, this Court must vacate it.

The FCC could have issued a final rule consistent with the statute and the dynamic broadband marketplace. Such a rule would have recognized that the statute’s limited purpose was to outlaw intentional discrimination by broadband providers in deployment decisions: conduct that treats a person or group of persons less favorably than others because of a listed protected trait. Such a rule would be workable, leaving the FCC to focus its attention on cases where broadband providers fail to invest in deploying networks due to animus against those groups.

Instead, the FCC chose to create an expansive regulatory scheme that gives it essentially unlimited discretion over anything that would affect the adoption of broadband. It did this by adopting a differential impact standard that applies not only to broadband providers, but to anyone that could “otherwise affect consumer access to broadband internet access service,” see 47 CFR §16.2 (definition of “Covered entity”), which includes considerations of price among the “comparable terms and conditions.” See Pet. Add. 59, Order at para. 111 (“Indeed, pricing is often the most important term that consumers consider when purchasing goods and services… this is no less true with respect to broadband internet access services.”). Taken together, these departures from the text of Section 60506 would give the FCC nearly unlimited authority over broadband providers, and even a great deal of authority over other entities that can affect broadband access.

To interpret Section 60506 to encompass a “differential impact” standard, as the agency has done here, leads to a situation in which covered entities that have no intent to discriminate, or that even take active measures to help protected classes, could still be found in violation of the rules. This standard opens nearly everything to FCC review, because profit-maximizing considerations not covered by the statute are often correlated with characteristics that are.

Income level, race, ethnicity, color, religion, and national origin are often incidentally associated with some other non-protected factor important for investment decisions. Specifically, population density is widely recognized as one of the determinants of expected profitability for broadband deployment. See Eric Fruits & Kristian Stout, The Income Conundrum: Intent and Effects Analysis of Digital Discrimination (ICLE Issue Brief 2022-11-14), available at https://laweconcenter.org/wp-content/uploads/2022/11/The-Income-Conundrum-Intent-and-Effects-Analysis-of-Digital-Discrimination.pdf, citing U.S. Gov’t Accountability Office, GAO-06-426, Telecommunications: Broadband Deployment Is Extensive Throughout the United States, but It Is Difficult to Assess the Extent of Deployment Gaps in Rural Areas 19 (2006) (population density is the “most frequently cited cost factor affecting broadband deployment” and “a critical determinant of companies’ deployment decisions”). But population density is also correlated with income level, with higher density associated with higher incomes. See Daniel Hummel, The Effects of Population and Housing Density in Urban Areas on Income in the United States, 35 Loc. Econ. 27 (Feb. 7, 2020) (showing a statistically significant positive relationship between income and both population and housing density). Higher population density is also correlated with greater racial, ethnic, religious, and national origin diversity. See, e.g., Barrett A. Lee & Gregory Sharp, Diversity Across the Rural-Urban Continuum, 672 Annals Am. Acad. Pol. & Soc. Sci. 26 (2017).

Consider a hypothetical provider who eschews discrimination against any of the protected traits in its deployment practices by prioritizing its investments solely on population density, deploying to high-density areas first, then to lower-density areas later. If higher-density areas are also areas with higher incomes, then it would be relatively easy to produce a statistical analysis showing that lower-income areas are associated with lower rates of deployment. Similarly, because of the relationships between population density and race, ethnicity, color, religion, and national origin, it would be relatively easy to produce a statistical analysis showing disparate impacts across these protected traits.
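The statistical mechanics of this hypothetical are easy to demonstrate. The short simulation below is purely illustrative and ours, not part of the record; every number in it is hypothetical. It deploys solely on population density, lets income be merely correlated with density, and still produces the kind of gap a differential-impact test would flag:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000  # hypothetical census tracts

# Density drives deployment; income is merely correlated with density.
log_density = rng.normal(0.0, 1.0, n)
income = 50_000 * np.exp(0.4 * log_density + rng.normal(0.0, 0.6, n))

# The deployment rule uses density ONLY -- no protected trait enters it.
deployed = log_density > np.median(log_density)

low_income = income < np.median(income)
print(f"deployment rate, lower-income tracts:  {deployed[low_income].mean():.1%}")
print(f"deployment rate, higher-income tracts: {deployed[~low_income].mean():.1%}")
# A naive disparate-impact test flags the gap (roughly 30% vs. 70%)
# even though the deployment rule is entirely income-blind.
```

Because the decision rule never consults income, the disparity here is pure correlation, which is precisely why a differential-impact standard sweeps in conduct the statute’s text does not reach.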

With so many possible spurious correlations, it is almost impossible for any covered entity to know with any certainty whether its policies or practices could be actionable for differential impacts. Nobel laureate Ronald Coase is reported to have said, “If you torture the data long enough, it will confess.” Garson O’Toole, If You Torture the Data Long Enough, It Will Confess, Quote Investigator (Jan. 18, 2021), https://quoteinvestigator.com/2021/01/18/confess. The FCC’s Order amounts to an open invitation to torture the data.

While it is possible that the FCC could determine that the costs of deployment due to population density or another profit-relevant reason go to “technical or economic feasibility,” the burden is on the covered entity to prove infeasibility by a preponderance of the evidence. See 47 CFR §16.5(c)-(d). This may include “proof that available, less discriminatory alternatives were not reasonably achievable.” See 47 CFR §16.5(c). In its case-by-case review process, there is no guarantee that the Commission will agree that “technical or economic feasibility” warrants an exception in any given dispute. See 47 CFR §16.5(e). This rule will put a great deal of pressure on covered entities to avoid possible litigation by getting all plans pre-approved by the FCC through its advisory opinion authority. See 47 CFR §16.7. This sets up the FCC to be a central planner for nearly everything related to broadband, from deployment to policies and practices that affect even adoption itself, including the price of the service. This is inconsistent with preserving the ability of businesses to make “practical business choices and profit-related decisions that sustain a vibrant and dynamic free-enterprise system.” Texas Dep’t of Hous. & Cmty. Affs. v. Inclusive Communities Project, Inc., 576 U.S. 519, 533 (2015). The Order will thus dampen investment incentives because “the specter of disparate-impact litigation” will cause private broadband providers to “no longer construct or renovate” their networks, leading to a situation where the FCC’s rule “undermines its own purpose” under the IIJA “as well as the free market system.” Id. at 544.

ARGUMENT

The FCC’s Order is unlawful. First, the Order’s interpretation of Section 60506 is inconsistent with the structure of the IIJA. Second, the Order is inconsistent with the clear meaning of Section 60506. Third, the Order raises major questions of political and economic significance by giving the FCC nearly unlimited authority over broadband deployment decisions, including price. Fourth, the Order is arbitrary and capricious because the rule it adopts will reduce broadband providers’ incentives to deploy and improve broadband service, contrary to the purpose of the IIJA. Finally, the Order’s vagueness leaves a person of ordinary intelligence no ability to know whether they are subject to the law and thus gives the FCC the ability to engage in arbitrary and discriminatory enforcement.

I. The Order’s Interpretation of Section 60506 Is Inconsistent with the Structure of the IIJA

“It is a fundamental canon of statutory construction that the words of a statute must be read in their context and with a view to their place in the overall statutory scheme.” Davis v. Michigan Dept. of Treasury, 489 U.S. 803, 809 (1989). The structure of the IIJA as a whole, as well as the fact that Section 60506, in particular, was not placed within the larger Communications Act (47 U.S.C. §150 et seq.) that gives the FCC authority, suggests that the Order claims authority far beyond what Congress has granted the FCC.

The IIJA divided broadband policy priorities between different agencies and circumscribed the scope of each program or rulemaking it delegated to agencies. Section 60102 addressed the issue of universal broadband deployment by creating the Broadband Equity, Access, and Deployment (BEAD) Program. See IIJA §60102. The statute designated the National Telecommunications and Information Administration (NTIA) to administer this $42.45 billion program, with funds to be first allocated to deploy broadband service to all areas that currently lack access to high-speed broadband Internet. See IIJA §60102(b), (h). BEAD is, therefore, Congress’s chosen method to remedy disparities in broadband deployment due to cost-based barriers like low population density. Section 60502 then created the Affordable Connectivity Program (ACP), which provided low-income individuals a $30 per month voucher, and delegated its administration to the FCC. See IIJA §60502. ACP is, therefore, Congress’s chosen method to remedy broadband affordability for households whose low income is a barrier to broadband adoption. Title V of Division F of the IIJA goes on to create several more broadband programs, each with a specific and limited scope. See IIJA § 60101 et seq.

In short, Congress was intentional about circumscribing the different problems with broadband deployment and access, as well as the scope of the programs it designed to fix them. Section 60506’s authorization for the FCC to prevent “digital discrimination” fits neatly into this statutory scheme if it targets disparate treatment in deployment decisions based upon protected status—i.e., intentional harmful actions that are distinct from deployment decisions based on costs of deployment or projected demand for broadband service. But the FCC’s Order vastly exceeds this statutory scope and claims authority over virtually every aspect of the broadband marketplace, including deployment decisions driven by costs generally and by the potential market for the networks once deployed. Indeed, the FCC envisions scenarios in which its rules conflict with other federal funding programs but nevertheless says that compliance with them is no safe harbor from liability for disparate impacts that compliance creates. See Pet. Add. 69-70, Order at para. 142. The Order thus dramatically exceeds the boundaries Congress set in Section 60506. Congress cannot have meant for Section 60506 to remedy all deployment disparities or all issues of affordability because it created BEAD and ACP for those purposes.

Moreover, Section 60506 was not incorporated into the Communications Act, unlike other parts of the IIJA. In other words, the FCC’s general enforcement authority does not apply to the regulatory scheme of Section 60506. The IIJA was not meant to give the FCC vast authority over broadband deployment and adoption by implication. The FCC must rely on Section 60506 alone for any authority it was given to combat digital discrimination.

II. The Order Is Inconsistent with the Clear Meaning of the Text of Section 60506

The text of Section 60506 plainly shows that Congress intended to combat digital discrimination through circumscribed rules aimed at preventing intentional discrimination in deployment decisions by broadband providers. The statute starts with a statement of policy in part (a) and then gives the Commission direction to fulfill that purpose in parts (b) and (c).

The statement of policy in Section 60506(a) is exactly that: a statement of policy. Courts have long held that statutory sections like Section 60506(a)(1) and (a)(3) using words like “should” are “precatory.” See Emergency Coal. to Def. Educ. Travel v. U.S. Dep’t of Treasury, 498 F. Supp. 2d 150, 165 (D.D.C. 2007) (“Courts have repeatedly held that such ‘sense of Congress’ language is merely precatory and non-binding.”), aff’d, 545 F.3d 4 (D.C. Cir. 2008). While the statement of policy helps illuminate the goal of the provision at issue, it does not actually give the FCC authority. The goal of the statute is clear: to make sure the Commission prevents intentional discrimination in deployment decisions. For instance, Section 60506(c) empowers the Commission (and the Attorney General) to ensure federal policies promote equal access by prohibiting intentional deployment discrimination. See Section 60506(c) (“The Commission and the Attorney General shall ensure that Federal policies promote equal access to robust broadband internet access service by prohibiting deployment discrimination…”). Moreover, the definition of equal access as “equal opportunity to subscribe,” see 47 U.S.C. §1754(a)(2), does not imply a disparate impact analysis. See Brnovich v. Democratic Nat’l Comm., 141 S. Ct. 2321, 2339 (2021) (“[T]he mere fact there is some disparity in impact does not necessarily mean… that it does not give everyone an equal opportunity.”).

There is no evidence that the IIJA’s drafters intended the law to be read as broadly as the Commission has done in its rules. The legislative record on Section 60506 is exceedingly sparse, containing almost no discussion of the provision beyond assertions that “broadband ought to be available to all Americans,” 167 Cong. Rec. 6046 (2021), and that the IIJA was not to be used as a basis for the “regulation of internet rates.” 167 Cong. Rec. 6053 (2021). The FCC argues that since “there is little evidence in the legislative history… that impediments to broadband internet access service are the result of intentional discrimination,” Congress must have desired a disparate impact standard. See Pet. Add. 25, Order at para. 47. But the limited nature of the problem suggests a limited solution in the form of a framework aimed at preventing such discrimination. Given the sparse evidence on legislative intent, Section 60506 should be read as granting a limited authority to the Commission.

With Section 60506(b), Congress gave the Commission a set of tools to identify and remedy acts of intentional discrimination by broadband providers in deployment decisions. As we explain below, under both the text of Section 60506 and the Supreme Court’s established jurisprudence, the Commission was not empowered to employ a disparate-impact (or “differential impact”) analysis under its digital discrimination rules.

Among the primary justifications for disparate-impact analysis is to remedy historical patterns of de jure segregation that left an indelible mark on minority communities. See Inclusive Communities, 576 U.S. at 528-29. While racial discrimination has not been purged from society, broadband only became prominent in the United States well after all forms of de jure segregation were made illegal, and after Congress and the courts had invested decades in rooting out impermissible de facto discrimination. In enacting rules that give it presumptive authority over nearly all decisions related to broadband deployment and adoption, the FCC failed to adequately take this history into account.

Beyond the policy questions, however, Section 60506 cannot be reasonably construed as authorizing disparate-impact analysis. While the Supreme Court has allowed disparate-impact analysis in the context of civil-rights law, it has imposed some important limitations. To find disparate impact, the statute must be explicitly directed “to the consequences of an action rather than the actor’s intent.” Inclusive Communities, 576 U.S. at 534. There, the Fair Housing Act made it unlawful:

To refuse to sell or rent after the making of a bona fide offer, or to refuse to negotiate for the sale or rental of, or otherwise make unavailable or deny, a dwelling to any person because of race, color, religion, sex, familial status, or national origin.

42 U.S.C. §3604(a) (emphasis added). The Court noted that the presence of language like “otherwise make unavailable” is critical to construing a statute as demanding an effects-based analysis. Inclusive Communities, 576 U.S. at 534. Such phrases, the Court found, “refer[] to the consequences of an action rather than the actor’s intent.” Id. Further, the structure of a statute’s language matters:

The relevant statutory phrases… play an identical role in the structure common to all three statutes: Located at the end of lengthy sentences that begin with prohibitions on disparate treatment, they serve as catchall phrases looking to consequences, not intent. And all [of these] statutes use the word “otherwise” to introduce the results-oriented phrase. “Otherwise” means “in a different way or manner,” thus signaling a shift in emphasis from an actor’s intent to the consequences of his actions.

Id. at 534-35.

Previous Court opinions help parse the distinction between statutes limited to intentional discrimination claims and those that allow for disparate impact claims. Particularly relevant here, the Court looked at language from Section 601 of the Civil Rights Act stating that “[n]o person in the United States shall, on the ground of race, color, or national origin, be excluded from participation in, be denied the benefits of, or be subjected to discrimination under any program or activity receiving Federal financial assistance,” 42 U.S.C. §2000d (emphasis added), and found it “beyond dispute—and no party disagrees—that [it] prohibits only intentional discrimination.” Alexander v. Sandoval, 532 U.S. 275, 280 (2001).

Here, the language of Section 60506 (“based on”) mirrors the language of Section 601 of the Civil Rights Act (“on the ground of”). Moreover, it is consistent with the reasoning of Inclusive Communities that determines when a statute allows for disparate impact analysis. Inclusive Communities primarily based its opinion on the “otherwise make unavailable” language at issue, with a particular focus on “otherwise” creating a more open-ended inquiry. See Inclusive Communities, 576 U.S. at 534 (“Here, the phrase ‘otherwise make unavailable’ is of central importance to the analysis that follows”). Such language is absent in Section 60506. Moreover, the closest analogy for Section 60506’s “based on” language is the “on the ground of” language of Title VI of the Civil Rights Act, which also does not include the “otherwise” language found to be so important in Inclusive Communities. Compare 42 U.S.C. §2000d with Inclusive Communities, 576 U.S. at 534-35 (focusing on how “otherwise” is a catch-all phrase looking to consequences instead of intent). If the Court has found “on the ground of” means only intentional discrimination, then it is hard to see how “based on” would not lead to the same conclusion.

Thus, since Section 60506 was drafted without “results-oriented language” and instead frames the prohibition against digital discrimination as “based on income level, race, ethnicity, color, religion, or national origin,” the prohibition falls squarely within the realm of prohibitions on intentional discrimination. That is, to be discriminatory, the decision to deploy or not to deploy must have been intentionally made based on or grounded on the protected characteristic. Mere statistical correlation between deployment and protected characteristics is insufficient.

In enacting the IIJA, Congress was undoubtedly aware of the Court’s history with disparate-impact analysis. Had it chosen to do so, it could have made the requirements of Section 60506 align with the requirements of that precedent. But it chose not to do so.

III. Congress Did Not Clearly Authorize the FCC to Decide a Major Question in this Order

To read Section 60506 of the IIJA as broadly as the FCC does in the Order invites a challenge under the major-questions doctrine. There are “extraordinary cases” where the “history and the breadth of the authority” that an agency asserts and the “economic and political significance” of that asserted authority provide “reason to hesitate before concluding that Congress” meant to confer such authority. See West Virginia v. EPA, 597 U.S. 697, 721 (2022) (quoting FDA v. Brown & Williamson, 529 U.S. 120, 159-60 (2000)). In such cases, “something more than a merely plausible textual basis for agency action is necessary. The agency instead must point to ‘clear congressional authorization’ for the power it claims.” Id. at 723 (quoting Utility Air Regulatory Group v. EPA, 573 U.S. 302, 324 (2014)).

Here, the FCC has claimed dramatic new powers over the deployment of broadband Internet access, and it has exercised that alleged authority to create a process for inquiry into generalized civil rights claims. Such a system is as unprecedented as it is important to the political and economic environment of the country. The FCC itself implicitly recognizes this fact when it emphasizes the critical importance of Internet access as necessary “to meet basic needs.” Broadband alone is a $112 billion industry with over 125 million customers. See The History of US Broadband, S&P Global (last accessed May 11, 2023), https://www.spglobal.com/marketintelligence/en/news-insights/research/the-history-of-us-broadband. This does not even include all the entities covered by this Order, which also includes all those who could “otherwise affect consumer access to broadband internet access service.” See 47 CFR §16.2. There is, therefore, no doubt that the Order is of great economic and political significance.

This would be fine if the statute clearly delegated such power to the FCC. But the only potential source of authority for the Order is Section 60506. Since the text of Section 60506 can be (and is better) read as not giving the FCC such authority, it simply cannot be an unambiguous delegation of authority.

As argued above, Congress knows how to write a disparate-impact statute in light of Supreme Court jurisprudence. Put simply, Congress did not write a disparate-impact statute here because there is no catch-all language comparable to what the Supreme Court has pointed to in statutes like the FHA. Cf. Inclusive Communities, 576 U.S. at 533 (finding a statute includes disparate-impact liability when the “text refers to the consequences of actions and not just the mindset of actors”). At best, Section 60506 is ambiguous in giving the authority to the FCC to use disparate impact analysis. That is simply not enough when regulating an area of great economic and political significance.

In addition to the major question of whether the FCC may enact its vast disparate impact apparatus, the FCC claims vast authority over the economically and politically significant arena of broadband rates despite no clear authorization to do so in Section 60506. In fact, the legislative record shows that Congress explicitly wanted to avoid the possibility that the IIJA would be used as the basis for the “regulation of internet rates.” 167 Cong. Rec. 6053 (2021). The FCC disclaims the authority to engage in rate regulation, but it does claim authority for “ensuring pricing consistency.” See Pet. Add. 56-57, Order at para. 105. While the act of assessing the comparability of prices is not rate regulation in the sense that the Communications Act contemplates, a policy that holds entities liable for those disparities, such that an ISP must adjust its prices until it matches an FCC definition of “comparable,” is tantamount to setting that rate. See Eric Fruits & Geoffrey Manne, Quack Attack: De Facto Rate Regulation in Telecommunications (ICLE Issue Brief 2023-03-30), available at https://laweconcenter.org/wp-content/uploads/2023/03/De-Facto-Rate-Reg-Final-1.pdf (describing how the FCC often engages in rate regulation in practice even when it does not call it that).

Furthermore, the Order could also allow the FCC to use the rule to demand higher service quality under the “comparable terms and conditions” language, even if consumers may prefer lower speeds for less money. That increased quality comes at a cost that will necessarily increase the market price of broadband. In this way, the Order would allow the FCC to set a price floor even if it never explicitly requires ISPs to submit their rates for approval.

The elephant of rate regulation is not hiding in the mousehole of Section 60506. Cf. Whitman v. American Trucking Assns., Inc., 531 U.S. 457, 468 (2001). Indeed, the FCC itself forswears rate regulation in an ongoing proceeding in which the relevant statute would clearly authorize it. See Safeguarding and Securing the Open Internet, 88 Fed. Reg. 76048 (proposed Nov. 3, 2023) (to be codified at 47 CFR pts. 8, 20). Nevertheless, the FCC recognized that rate regulation is inappropriate for the broadband marketplace and has declined its application in that proceeding. Even here, the FCC has denied that including pricing within the scope of the rules is “an attempt to institute rate regulation.” See Pet. Add. 59, Order at para. 111. But despite its denials, the FCC’s claim of authority would allow it to regulate prices despite nothing in Section 60506 granting it authority to do so. The FCC should not be able to recognize a politically significant consensus against rate regulation one minute and then smuggle that disfavored policy in through a statute that never mentions it the next.

Finally, as noted above, since many of the protected characteristics, but especially income, can be correlated with many factors relevant to profitability, it would be no surprise if almost any policy or practice of a covered entity under the Order could be subject to FCC enforcement. And since there is no guarantee that the FCC would agree in a particular case that technical or economic feasibility justifies a particular policy or practice, nearly everything broadband providers or other covered entities do would likely need pre-approval under the FCC’s advisory opinion process. This would essentially make the FCC a central planner of everything related to broadband. In other words, the FCC has claimed authority far beyond what Congress could have imagined, without any clear authorization to do so.

IV. The Order Is Arbitrary and Capricious Because It Will Produce Results Inconsistent with the Purpose of the Statute

As noted above, the purposes of the broadband provisions of the IIJA are to encourage broadband deployment, enhance broadband affordability, and prevent discrimination in broadband access. Put simply, the purpose is to get more Americans to adopt more broadband, regardless of income level, race, ethnicity, color, religion, or national origin. The FCC’s Order should curtail discrimination, but the aggressive and expansive police powers the agency grants itself will surely diminish investments in broadband deployment and efforts to encourage adoption. We urge the Court to vacate the Order and require the FCC to adopt rules limited to preventing intentional discrimination in deployment by broadband Internet access service providers. More narrowly tailored rules would satisfy Section 60506’s mandates while preserving incentives to invest in deployment and encourage adoption. Cf. Cin. Bell Tel. Co. v. FCC, 69 F.3d 752, 761 (6th Cir. 1995) (“The FCC is required to give [a reasoned] explanation when it declines to adopt less restrictive measures in promulgating its rules.”). But the current Order is arbitrary and capricious because the predictable results of the rules would be inconsistent with the purpose of the IIJA in promoting broadband deployment. See Motor Vehicle Mfrs. Ass’n v. State Farm Mutual Auto. Ins. Co., 463 U.S. 29, 43 (1983) (“[A]n agency rule would be arbitrary and capricious if the agency has… offered an explanation for its decision that runs counter to the evidence before the agency, or is so implausible that it could not be ascribed to a difference in view of the product of agency expertise”).

The Order spans nearly every aspect of broadband deployment, including, but not limited to, network infrastructure deployment, network reliability, network upgrades, and network maintenance. Pet. Add. 58, Order ¶ 108. In addition, the Order covers a wide range of policies and practices that, while not directly related to deployment, affect the profitability of deployment investments, such as pricing, discounts, credit checks, marketing or advertising, service suspension, and account termination. Pet. Add. 58, Order ¶ 108.

Like all firms, broadband providers have limited resources with which to make their investments. While profitability (i.e., economic feasibility) is a necessary precondition for investment, not all profitable investments can be undertaken. Among the universe of economically feasible projects, firms are likely to give priority to those that promise greater returns on investment relative to those with lower returns. Returns on investment in broadband depend on several factors. Population density, terrain, regulations, and taxes are all important cost factors, while a given consumer population’s willingness to adopt and pay for broadband are key demand-related factors. Anything that raises the expected cost of deployment or reduces the demand for service can turn a profitable investment into an unprofitable prospect or downgrade its priority relative to other investment opportunities.

The Order not only raises the cost of deployment investments, but it also increases the risk of liability for discrimination, thereby increasing the uncertainty of the investments’ returns. Because of the well-known and widely accepted risk-return tradeoff, firms that face increased uncertainty in investment returns will demand higher expected returns from the investments they pursue. This demand for higher returns means that some projects that would have been pursued under more limited digital discrimination rules will not be pursued under the current Order.
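A stylized calculation can make the risk-return point concrete. The numbers below are hypothetical and ours, not drawn from the record; the point is only that a modest litigation-risk premium added to the hurdle rate flips a marginal build from viable to unviable:

```python
# Illustrative only: a regulatory risk premium can flip a marginal
# deployment project from viable to non-viable. All numbers hypothetical.

def npv(cash_flow: float, rate: float, years: int, upfront: float) -> float:
    """Net present value of a level annual cash flow, net of upfront cost."""
    pv = sum(cash_flow / (1 + rate) ** t for t in range(1, years + 1))
    return pv - upfront

UPFRONT = 1_000_000  # hypothetical cost to build out one area
CASH = 105_000       # hypothetical annual net cash flow over a 20-year horizon

print(f"NPV at an 8% hurdle rate: {npv(CASH, 0.08, 20, UPFRONT):>10,.0f}")
print(f"NPV at a 10% hurdle rate: {npv(CASH, 0.10, 20, UPFRONT):>10,.0f}")
# The same project clears an 8% hurdle (NPV ~ +$31,000) but fails once
# perceived litigation risk pushes the required return to 10% (NPV ~ -$106,000).
```

Projects near the margin, which are precisely the higher-cost, lower-density builds the IIJA most wants to encourage, are the first to drop out of the plan.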

The Order will not only stifle new deployment to unserved areas, but also will delay network upgrades and maintenance out of fear of alleged disparate effects. At the extreme, providers will be faced with the choice to upgrade everyone or upgrade no one. Because they cannot afford to upgrade everyone, they will upgrade no one.

It might be argued that providers could avoid some of the ex post regulatory risk by seeking pre-approval ex ante under the FCC’s advisory opinion process. Such processes are costly and are not certain to result in approval. Even if approved, the FCC reserves the right to rescind the pre-approval. See Pet. Add. 75, Order ¶ 156 (“[A]dvisory opinions will be issued without prejudice to the Enforcement Bureau’s or the Commission’s ability to reconsider the questions involved, and rescind the opinion. Because advisory opinions would be issued by the Enforcement Bureau, they would also be issued without prejudice to the Commission’s right to later rescind or revoke the findings.”). Under the Order’s informal complaint procedures, third parties can allege discriminatory effects associated with pre-approved policies and practices that could result in the rescission of pre-approval. The result is an unambiguous increase in deployment and operating costs, even with pre-approval.

Moreover, by imposing liability for disparate impacts outside the control of covered broadband providers, the Order produces results inconsistent with the purpose of the IIJA because parties cannot conform their conduct to the rules. Among the 7% of households who do not use the internet at home, more than half of Current Population Survey (CPS) respondents indicated that they “don’t need it or [are] not interested.” George S. Ford, Confusing Relevance and Price: Interpreting and Improving Surveys on Internet Non-adoption, 45 Telecomm. Pol’y, Mar. 2021. ISPs sell broadband service, but they cannot force uninterested people to buy their product.

Only 2-3% of U.S. households that have not adopted at-home broadband indicate it is because of a lack of access. Eric Fruits & Geoffrey Manne, Quack Attack: De Facto Rate Regulation in Telecommunications (ICLE Issue Brief 2023-03-30) at Table 1, available at https://laweconcenter.org/wp-content/uploads/2023/03/De-Facto-Rate-Reg-Final-1.pdf. And even this tiny fraction is driven by factors such as topography, population density, and projected consumer demand. Differences in these factors will be linked to differences in broadband deployment, but there is little an ISP can do to change them. If the FCC’s command could make mountainous regions into flat plains, it would have done so already. It is nonsensical to hold liable a company attempting to overcome obstacles to deployment because it does not do so everywhere simultaneously. And it is not a rational course of action to address a digital divide by imposing liability on entities that cannot fix the underlying causes driving it.

Punishment exacted on an ISP will not produce the broadband access the statute envisions for all Americans. In fact, it will put that access further out of reach by incentivizing ISPs to reduce the speed of deployments and upgrades so that they do not produce inadvertent statistical disparities. Given the statute’s objective of enhancing broadband access, the FCC’s rulemaking must contain a process for achieving greater access. The Order does the opposite and, therefore, cannot be what Congress intended. Cf. Inclusive Communities, 576 U.S. at 544 (“If the specter of disparate-impact litigation causes private developers to no longer construct or renovate housing units for low-income individuals, then the FHA would have undermined its own purpose as well as the free-market system.”).

The Order will result in less broadband investment by essentially making the FCC the central planner of all deployment and pricing decisions. This is inconsistent with the purpose of Section 60506, making the rule arbitrary and capricious.

V. The Order’s Vagueness Gives the FCC Unbounded Power

The Order’s digital discrimination rule is vague because it does not have “sufficient definiteness that ordinary people can understand what conduct is prohibited.” Kolender v. Lawson, 461 U.S. 352, 357 (1983). As a result, the FCC has claimed unbounded power to engage in “arbitrary and discriminatory enforcement.” Id. As argued above, the disparate impact standard means that anything that is correlated with income, which includes many things that may be benignly relevant to deployment and pricing decisions, could give rise to a possible violation of the Order.

While a covered entity could argue that there are economic or technical feasibility reasons for a policy or practice, the case-by-case nature of enforcement outlined in the Order means that no one can be sure of whether they are on the right side of the law. See 47 CFR §16.5(e) (“The Commission will determine on a case-by-case basis whether genuine issues of technical or economic feasibility justified the adoption, implementation, or utilization of a [barred] policy or practice…”).

This vagueness is not cured by the presence of the Order’s advisory opinion process, because the FCC retains the right to bring an enforcement action anyway after reconsidering, rescinding, or revoking an opinion. See 47 CFR §16.7 (“An advisory opinion states only the enforcement intention of the Enforcement Bureau as of the date of the opinion, and it is not binding on any party. Advisory opinions will be issued without prejudice to the Enforcement Bureau or the Commission to reconsider the questions involved, or to rescind or revoke the opinion. Advisory opinions will not be subject to appeal or further review”). In other words, there is no basis for concluding a covered entity has “the ability to clarify the meaning of the regulation by its own inquiry, or by resort to an administrative process.” Cf. Village of Hoffman Estates v. Flipside, Hoffman Estates, Inc., 455 U.S. 489, 498 (1982). The FCC may engage in utterly arbitrary and discriminatory enforcement under the Order.

Moreover, the Order’s expansive definition of covered entities to include any “entities that provide services that facilitate and affect consumer access to broadband internet access service,” 47 CFR § 16.2 (definition of “Covered entity”, which includes “Entities that otherwise affect consumer access to broadband internet access service”), also leads to vagueness as to whom the digital discrimination rules apply. This would arguably include state and local governments and nonprofits, as well as multi-family housing owners, many of whom may have no idea they are subject to the FCC’s digital discrimination rules nor any idea of how to comply.

The Order is therefore void for vagueness because it does not allow a person of ordinary intelligence to know whether they are complying with the law and gives the FCC nearly unlimited enforcement authority.

CONCLUSION

For the foregoing reasons, ICLE and ITIF urge the Court to set aside the FCC’s Order.


ICLE/ITIF Amicus Brief Urges Court to Set Aside FCC’s Digital-Discrimination Rules


The Federal Communications Commission (FCC) recently adopted sweeping new rules designed to prevent so-called “digital discrimination” in the deployment, access, and adoption of broadband internet services. But an amicus brief filed by the International Center for Law & Economics (ICLE) and the Information Technology & Innovation Foundation (ITIF) with the 8th U.S. Circuit Court of Appeals argues that the rules go far beyond what Congress authorized.

It appears to us quite likely the court will vacate the new rules, because they exceed the authority Congress granted the FCC and undermine the very broadband investment and deployment that Congress wanted to encourage. In effect, the rules would set the FCC up as a central planner of all things broadband-related. In combination with the commission’s recent reclassification of broadband as a Title II service, the FCC has stretched its authority far beyond the breaking point.



ICLE Reply Comments to FCC Re: Customer Blackout Rebates


I. Introduction

The International Center for Law & Economics (“ICLE”) thanks the Federal Communications Commission (“FCC” or “the Commission”) for the opportunity to offer reply comments to this notice of proposed rulemaking (“NPRM”), as the Commission proposes to require cable operators and direct-broadcast satellite (DBS) providers to grant their subscribers rebates when those subscribers are deprived of video programming they expected to receive during programming blackouts that resulted from failed retransmission-consent negotiations or failed non-broadcast carriage negotiations.[1]

As noted in the NPRM, the Communications Act of 1934 requires that cable operators and satellite-TV providers obtain a broadcast TV station’s consent in order to lawfully retransmit that station’s signal to subscribers. Commercial stations or networks may either (1) demand carriage pursuant to the Commission’s must-carry rules or (2) elect retransmission consent and negotiate for compensation in exchange for carriage. If a channel elects for retransmission consent but is unable to reach agreement for carriage, the cable operator or DBS provider loses the right to carry that signal. As a result, the cable operator or DBS provider’s subscribers typically lose access entirely to the channel’s signal unless and until the parties are able to reach an agreement, a situation that is often described as a “blackout.”

Blackouts tend to generate eye-catching headlines and often annoy affected consumers.[2] This annoyance is amplified when consumers don’t receive a rebate for the loss of signal, especially when they believe that they are merely bystanders in the dispute between the cable operator or DBS provider and the channel.[3] The Commission appears to echo these concerns, concluding that its proposed rebate mandate would ensure “subscribers are made whole when they face interruptions of service that are outside their control” and would prevent subscribers “from being charged for services for the period that they did not receive them.”[4]

This framing, however, oversimplifies retransmission-consent negotiations and mischaracterizes consumers’ agency in subscribing to and using multichannel-video-programming distributors (“MVPDs”). Moreover, there are numerous questions raised by the NPRM regarding the proposal’s feasibility, including how to identify which consumers would qualify for rebates, how those rebates would be calculated, and how they would be distributed. Several comments submitted in this proceeding suggest that any implementation of this proposal would be arbitrary and unfair to cable operators, DBS providers, and consumers. In particular:

  • Blackouts result from a temporary or permanent failure to reach an agreement in negotiations between channels and either cable operators or DBS providers. The Commission’s proposal explicitly and unfairly assigns liability for blackouts to the cable operator or DBS provider. As a result, the proposal would provide channels with additional negotiating leverage relative to the status quo. Smaller cable operators may be especially disadvantaged.
  • Each consumer is unique in how much they value a particular channel and how much they would be economically harmed by a blackout. For example, in the event of a cable or DBS blackout, some consumers can receive the programming via an over-the-air antenna or a streaming platform and would suffer close to no economic harm. Other consumers may assign no value to the blacked-out channel’s programming and would likewise suffer no harm.
  • Complexities and confidentiality in programming contracts would make it impossible to accurately or fairly calculate the price or cost associated with any given channel over some set period of time. For example, cable operators and DBS providers typically sell bundles of channels, not a la carte offerings, making it impossible to calculate an appropriate rebate for one specific channel or set of channels.
  • Even if it were possible to calculate an appropriate rebate, any mandated rebate based on such calculations would constitute prohibited rate regulation.

These reply comments respond to many of the issues raised in comments on this matter. We conclude that the Commission is proposing a set of unworkable and arbitrary rules. Even if rebates could be reasonably and fairly calculated, the amount of such rebates would likely be only a few dollars, and may be as little as a few pennies. In such cases, the enormous cost to the Commission, cable operators, and DBS providers would be many times greater than the amount of rebates provided to consumers. It would be a much better use of the FCC’s and MVPDs’ resources to abandon this rulemaking process and refrain from mandating rebates for programming blackouts.

II. Who Is to Blame for Blackouts?

As discussed above, it appears the FCC’s view is that consumers who experience blackouts are mere bystanders in a dispute, as the Commission invokes “consumer protection” and “customer service” as justifications for the proposed rules mandating rebates.[5] If we believe both that consumers are bystanders and that they are harmed by blackouts, then it is crucial to identify the parties to whom blame should be assigned for those blackouts. A key principle of the law & economics approach is that the party better-positioned to avoid the blackout should bear more—or, in some cases, all—of its costs.[6]

In comments submitted by Dish Network, William Zarakas and Jeremy Verlinda note that “[p]rogramming fees are established through bilateral negotiations between content providers and MVPDs, and depend in large part on the relative bargaining position of the two sides.”[7] This comment illustrates the obvious but important fact that both content providers and MVPD operators must reach agreement and, in any given negotiation, either side may have more bargaining power. Because of this reality, it is impossible to draw general conclusions about which party will be the least-cost avoider of blackouts, as borne out in the submitted comments.

On the one hand, the ATVA argues that programmers are the cause of blackouts: “Blackouts happen to cable and satellite providers and their subscribers.”[8] NTCA supports this claim and reports that “[s]mall providers lack negotiating power in retransmission consent discussions.”[9] On the other hand, the NAB claims the “leading cause of such disruptions” is “the pay TV industry’s desire to use consumers as pawns to push for a change in law” and that MVPDs have a “strategy of creating negotiating impasses” in order to obtain a policy change.[10] Writing in Truth on the Market, Eric Fruits concludes:

With the wide range of programming and delivery options, it’s probably unwise to generalize who has the greater bargaining power in the current system. But if one had to choose, it seems that networks and, to a lesser extent, local broadcasters are in a slightly superior position. They have the right to choose must carry or retransmission and, in some cases, have alternative outlets (such as streaming) to distribute their programming.[11]

Peer-reviewed research by Eun-A Park, Rob Frieden, and Krishna Jayakar attempts to identify the “predictors” of blackouts using a database of nearly 400 retransmission agreements executed between 2011 and 2018.[12] The authors identify three factors associated with blackout frequency and duration:

  1. Cable, satellite, and other MVPDs with larger customer bases are associated with more frequent and longer blackouts;
  2. Multi-station broadcaster groups with network affiliations are associated with more frequent but shorter blackouts; and
  3. The National Football League (“NFL”) season (e.g., “must see” real-time programming) has no significant relationship with blackout frequency, but when blackouts occur during the season, they are significantly shorter.

The simplistic takeaway is that everyone is to blame and no one is to blame. Ultimately, Park and her co-authors conclude that “the statistical analysis is not able to identify the parties or the tactics responsible for blackouts.”[13] Based on this research, it is not clear which parties in given negotiations are more likely to be the least-cost avoider of blackouts.

Nevertheless, the Commission’s proposal explicitly assigns liability for blackouts to cable operators and DBS providers.[14] Under the proposed rules, not only would cable operators and DBS providers suffer financial consequences, but they also would be made to face reputational harms stemming from a federal agency suggesting the fault for any retransmission-consent or carriage-agreement blackouts falls squarely on their shoulders.

Such reputational damage is almost certain to increase subscriber churn and impose additional subscriber-acquisition and retention costs on cable operators and DBS providers.[15] In comments on the Commission’s proposed rules for cable-operator and DBS-provider billing practices, ICLE reported that these costs are substantial and that, in addition to these costs, churn increases the uncertainty of cable-operator and DBS-provider revenues and profits.[16]

III. Consumers Are Not Bystanders

As noted earlier in these comments, the Commission’s proposal appears to be rooted in the belief that, when consumers experience a blackout, they are mere bystanders in a dispute between channels and cable operators or DBS providers. The Commission further seems to believe that the full force of the federal government is needed for these consumers to be “made whole.”[17] The implication is that consumers lack the foresight to anticipate the possibility of blackouts or the ability to respond to blackouts when they occur.

As the NPRM notes, subscribers are often informed of the risk of blackouts—and their consequences—in their service agreements with cable operators or DBS providers.[18] This is supported in ATVA’s comments:

Cable and satellite carriers make this quite clear in the contracts they offer subscribers—existing contracts which the Commission seeks to abrogate here. This language also makes clear that cable and satellite operators can and do change the programming offered in those bundles from time to time. … Cable and satellite providers add and subtract programming from their offerings to consumers frequently, and subscription agreements do not promise that all channels in a particular tier will be carried in perpetuity, let alone (with limited exception) assign a specific value to particular programming.[19]

The NPRM asks, “if a subscriber initiates service during a blackout, would that subscriber be entitled to a rebate or a lower rate?”[20] The question implicitly acknowledges that, for these subscribers, blackouts are not just a possibility, but a certainty. Yet they nonetheless enter into such agreements, knowing they may not be compensated for the interruption of service.

Many cable operators and DBS providers do offer credits[21] or other accommodations[22] to requesting subscribers affected by a blackout. In addition, many consumers have a number of options to circumvent a blackout by obtaining the programming elsewhere. Comments in this proceeding indicate that these options include the use of over-the-air antennas[23] or streaming services.[24] Given the many alternatives available in so many cases, it is unlikely that a blackout would deprive these consumers of the desired programming and any economic harm to them would be de minimis.

If cable or DBS blackouts are (or become) widespread or pernicious, consumers also have the ability to terminate service and switch providers, including by switching to streaming options. This is demonstrated by the well-known and widespread phenomenon of “cord cutting.” ATVA’s comments note that, in the third quarter of 2023, nearly one million subscribers canceled their traditional linear-television service, with just under 55% of occupied households now subscribing, the lowest share since 1989.[25] NYPSC concludes that, if the current trend of cord-cutting continues, “any final rules adopted here could become obsolete over time.”[26]

Due in part to cord cutting, ATVA reported that last year “several cable television companies either had already shut down their television services or were in the process of doing so.”[27] NTCA reports that nearly 40% of surveyed rural providers indicated they are not likely to continue service or already have plans to discontinue service, with many of them blaming the “difficulty negotiating retransmission consent agreements.”[28]

The fact that so many consumers are switching to alternatives to cable and DBS is a clear demonstration that they have the opportunity and ability to obtain programming from a wide range of competitive providers. This places them in the driver’s seat, rather than leaving them to suffer as helpless bystanders. It is telling that neither the NPRM nor any of the comments submitted to date offer any estimate of the cost to consumers associated with blackouts from retransmission-consent or carriage negotiations. This is likely because any costs are literally incalculable (i.e., impossible to calculate) or so small as to discourage any effort at estimation. In either case, the Commission’s proposal to mandate and enforce blackout rebates looks to be a costly and time-consuming exercise that would yield little to no noticeable consumer benefit.

IV. Mandatory Rebates Will Increase Programmer Bargaining Power and Increase Prices to Cable and DBS Subscribers

A common theme of comments submitted in this matter is that the proposed rules would “place a thumb on the scale” in favor of channels relative to cable operators and DBS providers.[29] Without delving deeply into the esoteric details of bargaining theory, the comments identify two key factors that have, over time, improved programmers’ bargaining position relative to cable operators and DBS providers:

  1. Increased competition among MVPD providers, which has reduced cable and DBS bargaining power;[30] and
  2. Consolidation in the broadcast industry, which has increased programmer bargaining power.[31]

The Commission’s proposed rules are intended and designed to impose an additional cost on cable operators and DBS providers who do not reach an agreement with stations and networks, thereby diminishing the providers’ relative bargaining position. Because stations and networks are profit-maximizing enterprises, it is reasonable to expect them to exploit this additional bargaining power to extract higher retransmission fees or other concessions.

Jeffrey Eisenach notes that the first “significant” retransmission agreement to involve monetary compensation from a cable provider to a broadcaster occurred in 2005.[32] By 2008, retransmission fees totaled $500 million, according to Variety.[33] By 2020, S&P Global reported that annual retransmission fees were approximately $12 billion.[34] This represents an average annual increase of 30% between 2008 and 2020. This is in line with Zarakas & Verlinda’s estimate that retransmission fees charged by local network stations have increased at annual growth rates of 9.8% to 61.0% since 2009.[35] According to information reported by the Pew Research Center, revenues from retransmission fees for local stations now nearly equal those stations’ advertising revenues (Figure 1).

[Figure 1: Retransmission-fee revenue and advertising revenue for U.S. local TV stations (figure omitted)][36]
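As a quick check on the 30% growth rate noted above (our own back-of-the-envelope arithmetic, not drawn from the cited sources), growth from $500 million in 2008 to $12 billion in 2020 implies a compound annual growth rate of

\[
\left(\frac{\$12\ \text{billion}}{\$0.5\ \text{billion}}\right)^{1/12} - 1 \;=\; 24^{1/12} - 1 \;\approx\; 0.30,
\]

or roughly 30% per year over the 12-year period.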

Dish Network indicated that programmers have been engaged in an “aggressive campaign of imposing steep retransmission and carriage price increases on MVPDs.”[37] Simultaneously with these steep increases in retransmission fees, networks began imposing “reverse retransmission compensation” on their affiliates.[38] Previously, networks paid local affiliates for airtime in order to run network advertisements during their programming. The new arrangements have reversed that flow of compensation, such that affiliates are now expected to compensate the networks, as explained in Variety:

Station owners also face increased pressure to secure top fees for their retrans rights because their Big Four network partners now demand that affiliate stations fork over a portion of their retrans windfall to help them pay for pricey franchises like the NFL, “American Idol” and high-end scripted series.[39]

Dish Network concludes: “While MVPDs and OVDs compete aggressively with each other, the programming price increases will likely be passed through to consumers despite that competition. The reason is that all MVPDs will face the same programming price increase.”[40] NCTA further notes that increased programming costs are “borne by the cable operator or passed onto the consumer.”[41]

The most recent research cited in the comments reports that MVPDs pass through approximately 100% of retransmission-fee increases in the form of higher subscription prices.[42] Aaron Heresco and Stephanie Figueroa provided examples of how increased retransmission fees are passed on to subscribers:

On the other side of the simplified ESPN transaction are MVPD ranging from global conglomerates like Spectrum/Time Warner to small local or independent cable carriers. These MVPD pay ESPN $7.21/subscriber/month for the right to carry/transmit ESPN content to subscribing households. MVPD, with a keen eye on profits and shareholder value, pass through the costs to consumers (irrespective of if subscribers actually watch ESPN or any other network) in the form of increased monthly cable bills. Not only does this suggest that the “free lunch” of TV programming isn’t free, it also indicates that the dynamic of revenue generation via viewership is changing. As another example, consider the case of the Weather Channel, which in 2014 asked for a $.01 increase in retransmission fees despite a 20% drop in ratings (Sahagian 2014). Viewers may demand access to the channel in case of weather emergencies but may only tune in to the channel a handful of times per year. Nonetheless, the demand for access to channels drive up retransmission revenue even if the day-to-day or week-to-week ratings are weak.[43]

In some cases, however, increased retransmission fees cannot be passed on in the form of higher subscription prices. As we noted above, NTCA reports that nearly 40% of surveyed rural providers indicated they are unlikely to continue service or already have plans to discontinue service, with many of them blaming the “difficulty negotiating retransmission consent agreements.”[44] The Commission’s proposed rules would not only lead to higher prices for consumers, but they may also reduce MVPD options for some consumers, as cable operators exit the industry.

V. Proposed Rebate Mandate Would Be Arbitrary and Unworkable

The NPRM asks for comments on how to implement the proposed rebate mandate. In doing so, the NPRM identifies numerous factors that illustrate the arbitrary and unworkable nature of the Commission’s proposal:[45]

  • Should cable operators and DBS providers be required to pay rebates or provide credits?
  • Should rebates apply to any channel that is blacked out?
  • What if the parties never reach an agreement for carriage? For example, should subscribers be entitled to rebates in perpetuity?
  • How should rebates be calculated when terms of the retransmission-consent agreements are confidential?
  • Should the rebate be based on the cost that the cable operator or DBS provider paid to the programmer to retransmit or carry the channel prior to the carriage impasse?
  • How should rebates account for bundling?
  • If a subscriber initiates or renews a contract during a blackout, should the subscriber receive a rebate?
  • Should the Commission deem unenforceable service agreements that explicitly specify that the cable operator or DBS provider is not liable for credits or refunds if programming becomes unavailable? Should existing service agreements be abrogated?
  • How should rebates account for advertising time as a component of the retransmission-consent agreement?

As we note above, when blackouts occur, many cable operators and DBS providers offer credits or other accommodations to requesting subscribers.[46] The NPRM “tentatively concludes” there is no legal distinction between “rebates,” “refunds,” and “credits.”[47] If the Commission adopts rules mandating rebates in the event of blackouts, those rules should be sufficiently flexible to allow credits or other accommodations—such as providing over-the-air antennas or programming upgrades—to satisfy the requirement.

The NPRM asks whether the proposed rebate rules should apply to any channel that is blacked out,[48] citing news stories regarding The Weather Channel.[49] The NPRM provides no context for these citations, but the cited articles suggest that The Weather Channel is of minimal value to most consumers. The channel had 105,000 primetime viewers in February 2024, slightly fewer than PopTV and slightly more than Disney Junior and VH1.[50] The Deadline article cited in the NPRM indicates that The Weather Channel averages 13 cents per subscriber, per month across pay-TV systems.[51] Much of the channel’s content is freely available on its website (weather.com) and app, and similar weather content is freely available across numerous sources and media.

The NPRM’s singling out of The Weather Channel highlights several flaws in the Commission’s proposal. The channel has low viewership, faces numerous competing substitutes for its content, and is relatively low-cost. During a blackout, few subscribers would notice. Even fewer would suffer any harm and, if they did, the harm would amount to about 13 cents a month. It seems a waste of valuable resources to impose a complex regulatory regime to “make consumers whole” to the tune of pennies a month.

The NPRM asks whether the Commission should require rebates if the parties never reach a carriage agreement and, if so, whether those rebates should be provided in perpetuity.[52] NCTA points out that it would be impossible for any regulator to determine whether any particular blackout is the result of a negotiation impasse or business decision by the cable operator or DBS provider to no longer carry the channel.[53] For example, a channel may be dropped because of changes to the programming available on the channel.[54] Indeed, the programming offered at the beginning of a retransmission-consent agreement may be very different from the content provided at the time of renegotiation.[55] Moreover, it would be impossible to know with any certainty whether any carriage termination is temporary or permanent.[56] Verizon is correct to call this inquiry “absurd,”[57] as it proposes a “Hotel California” approach to carriage agreements, in which cable operators and DBS providers can check out, but they can never leave.

To illustrate the challenges of calculating a reasonable and economically coherent rebate, Dish Network offered a hypothetical set of three options for carriage of a local station and the Tennis Channel, both owned by Sinclair.[58]

  1. $4 for the local station on a tier serving all subscribers, no carriage of Tennis Channel;
  2. $2 for the local station and $2 for the Tennis Channel, both on tiers serving all subscribers; or
  3. $2 for the local station on a tier serving all subscribers and $4 for the Tennis Channel on a tier serving 50% of subscribers.

In this hypothetical, the cable operator or DBS provider is indifferent to the details of how the package is priced. Similarly, consumers are indifferent to the pricing details of the agreement. Under the Commission’s proposal, however, these details become critical to how a rebate would be calculated. In the event of a Tennis Channel blackout, either no subscriber would receive a rebate, every subscriber would receive a $2 rebate, or half of all subscribers would receive a $4 rebate—with the amount of rebate depending on how the agreement’s pricing was structured.
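To make the arbitrariness concrete, the following short sketch (our own illustration; the 1-million-subscriber base and the monthly framing are assumptions, not part of Dish Network’s comment) computes each option’s total programming cost and the rebate a mandate would require during a Tennis Channel blackout:

```python
# Dish Network's three-option hypothetical, worked through numerically.
# The 1,000,000-subscriber base is an assumed figure for illustration.
SUBSCRIBERS = 1_000_000

# Per-subscriber fees for the local station and the Tennis Channel, and the
# share of subscribers on the tier that carries the Tennis Channel.
options = {
    "Option 1": {"local": 4.0, "tennis": 0.0, "tennis_share": 0.0},
    "Option 2": {"local": 2.0, "tennis": 2.0, "tennis_share": 1.0},
    "Option 3": {"local": 2.0, "tennis": 4.0, "tennis_share": 0.5},
}

for name, o in options.items():
    # Total monthly programming cost is identical under every option...
    total = SUBSCRIBERS * o["local"] + SUBSCRIBERS * o["tennis_share"] * o["tennis"]
    # ...but the rebate owed for a Tennis Channel blackout is not.
    rebated_subs = int(SUBSCRIBERS * o["tennis_share"])
    print(f"{name}: total ${total:,.0f}/month; "
          f"rebate ${o['tennis']:.2f} to {rebated_subs:,} subscribers")
```

Each option costs the provider the same $4 million per month, yet the mandated rebate for the identical blackout would be nothing, $2 to every subscriber, or $4 to half of them, depending solely on how the contract happens to allocate fees.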

Dish Network’s hypothetical demonstrates another consequence of the Commission’s proposal: the easiest way to avoid the risk of paying a rebate is to forgo carrying the channel. The hypothetical assumes a cable operator “does not particularly want to carry” the Tennis Channel, but is willing to do so in exchange for an agreement with Sinclair for the local station.[59] Under the Commission’s proposed rules, the risk of incurring the cost of providing rebates introduces another incentive to eschew carriage of the Tennis Channel.

One reason Dish Network presented a hypothetical instead of an “actual” example is that, as noted in several comments, carriage agreements are subject to confidentiality provisions.[60] Separate and apart from the impossibility of allocating a rebate across the various terms of an agreement, even if the terms were known, such an exercise would require abrogating these confidentiality agreements between the negotiating parties.

The NPRM asks whether it would be reasonable to require a cable operator or DBS provider to rebate the cost that it paid to the programmer to retransmit or carry the channel prior to the carriage impasse.[61] The NPRM cites Spectrum Northeast LLC v. Frey, a case involving early-termination fees in which the 1st U.S. Circuit Court of Appeals stated that “[a] termination event ends cable service, and a rebate on termination falls outside the ‘provision of cable service.’”[62] In the NPRM, the Commission “tentatively conclude[s] that the courts’ logic” in Spectrum Northeast “applies to the rebate requirement for blackouts.”[63]

If the Commission accepts the court’s logic that a termination event ends service on the consumer side, then it would be reasonable to conclude that the end of a retransmission or carriage agreement similarly ends service. To base a rebate on a prior agreement would mean basing the rebate on a fiction—an agreement that does not exist.

To illustrate, consider Dish Network’s hypothetical. Assume the initial agreement is Option 2 ($2 for the local station and $2 for the Tennis Channel, both on tiers serving all subscribers). The negotiations stall, leading to a blackout. Assume the parties eventually agree to Option 1, in which the Tennis Channel is no longer carried. Would subscribers be due a rebate for a channel that is no longer carried? Or, if the parties instead agree to Option 3 ($2 for the local station on a tier serving all subscribers and $4 for the Tennis Channel on a tier serving 50% of subscribers), would all subscribers be due a $2 rebate for the Tennis Channel, or would half of subscribers be due a $4 rebate? There is no “good” answer because any answer is necessarily arbitrary and devoid of economic logic.

As noted above, many retransmission and carriage agreements involve “bundles” of programming,[64] as well as “a wide range of pricing and non-pricing terms.”[65] Moreover, ATVA reports that subscribers purchase bundled programming, rather than individual channels, and that consumers are well aware of bundling when they enter into service agreements with cable operators and DBS providers.[66] NCTA reports that bundling complicates the already-complex challenge of allocating costs across specific channels over specific periods of time.[67] Thus, any attempt to do so with an eye toward mandating rebates during blackouts is likewise arbitrary and devoid of economic logic.

In summary, the Commission is proposing a set of unworkable and arbitrary rules to distribute rebates to consumers during programming blackouts. Even if such rebates could be reasonably and fairly calculated, the sums involved would likely be only a few dollars, and may be as little as a few pennies. In these cases, the enormous costs to the Commission, cable operators, and DBS providers would be many times greater than the rebates provided to consumers. It would be a much better use of the FCC’s and MVPD providers’ resources to abandon this rulemaking process and refrain from mandating rebates for programming blackouts.

[1] Notice of Proposed Rulemaking, In the Matter of Customer Rebates for Undelivered Video Programming During Blackouts, MB Docket No. 24-20 (Jan. 17, 2024), available at https://docs.fcc.gov/public/attachments/FCC-24-2A1.pdf [hereinafter “NPRM”], at para. 1.

[2] See id. at nn. 5, 7.

[3] Eric Fruits, Blackout Rebates: Tipping the Scales at the FCC, Truth on the Market (Mar. 6, 2024), https://truthonthemarket.com/2024/03/06/blackout-rebates-tipping-the-scales-at-the-fcc.

[4] NPRM, supra note 1 at para. 10.

[5] NPRM, supra note 1 at para. 13 (proposed rules “provide basic protections for cable customers”) and para. 7 (“How would requiring cable operators and DBS providers to provide rebates or credits change providers’ current customer service relations during a blackout?”).

[6] This is known as the “least-cost avoider” or “cheapest-cost avoider” principle. See Harold Demsetz, When Does the Rule of Liability Matter?, 1 J. Legal Stud. 13, 28 (1972); see generally Ronald Coase, The Problem of Social Cost, 3 J. L. & Econ. 1 (1960).

[7] Comments of DISH Network LLC, MB Docket No. 24-20 (Mar. 8, 2024), https://www.fcc.gov/ecfs/document/1030975783920/1 [hereinafter “DISH Comments”], Exhibit 1, Declaration of William Zarakas & Jeremy Verlinda [hereinafter “Zarakas & Verlinda”] at ¶ 8.

[8] Comments of the American Television Alliance, MB Docket No. 24-20 (Mar. 8, 2024), https://www.fcc.gov/ecfs/document/103082522212825/1 [hereinafter “ATVA Comments”] at i and 2 (“Broadcasters and programmers cause blackouts. This is, of course, true as a legal matter, as cable and satellite providers cannot lawfully deliver programming to subscribers without the permission of the rightsholder. It makes no sense to say that a cable or satellite provider has ‘blacked out’ programming by failing to obtain permission to carry it. A programmer ‘blacks out’ programming by declining to grant such permission.”).

[9] Comments of NTCA—The Rural Broadband Association, MB Docket No. 24-20 (Mar. 8, 2024), https://www.fcc.gov/ecfs/document/10308589412414/1 [hereinafter “NTCA Comments”] at 2.

[10] Comments of the National Association of Broadcasters, MB Docket No. 24-20 (Mar. 8, 2024), https://www.fcc.gov/ecfs/document/1030894019700/1 [hereinafter “NAB Comments”] at 4-5.

[11] Fruits, supra note 3.

[12] Eun-A Park, Rob Frieden, & Krishna Jayakar, Factors Affecting the Frequency and Length of Blackouts in Retransmission Consent Negotiations: A Quantitative Analysis, 22 Int’l. J. Media Mgmt. 117 (2020).

[13] Id. at 131.

[14] NPRM, supra note 1 at paras. 4, 6 (“We seek comment on whether and how to require cable operators and DBS providers to give their subscribers rebates when they blackout a channel due to a retransmission consent dispute or a failed negotiation for carriage of a non-broadcast channel.”); id. at para. 9 (“We tentatively conclude that sections 335 and 632 of the Act provide us with authority to require cable operators and DBS providers to issue a rebate to their subscribers when they blackout a channel.”) [emphasis added].

[15] See Zarakas & Verlinda, supra note 7 at para. 14 (blackouts are costly “in the form of lost subscribers and higher incidence of retention rebates”).

[16] Comments of the International Center for Law & Economics, MB Docket No. 23-405 (Feb. 5, 2024), https://www.fcc.gov/ecfs/document/10204246609086/1 at 9-10 (“In its latest quarterly report to the Securities and Exchange Commission, DISH Network reported that it incurs ‘significant upfront costs to acquire Pay-TV’ subscribers, amounting to subscriber acquisition costs of $1,065 per new DISH TV subscriber. The company also reported that it incurs ‘significant’ costs to retain existing subscribers. These retention costs include upgrading and installing equipment, as well as free programming and promotional pricing, ‘in exchange for a contractual commitment to receive service for a minimum term.’”)

[17] See NPRM, supra note 1 at paras. 4, 8, 10 (using “make whole” language).

[18] See id. at n. 7, citing Spectrum Residential Video Service Agreement (“In the event particular programming becomes unavailable, either on a temporary or permanent basis, due to a dispute between Spectrum and a third party programmer, Spectrum shall not be liable for compensation, damages (including compensatory, direct, indirect, incidental, special, punitive or consequential losses or damages), credits or refunds of fees for the missing or omitted programming. Your sole recourse in such an event shall be termination of the Video Services in accordance with the Terms of Service.”) and para. 6 (“To the extent that the existing terms of service between a cable operator or DBS provider and its subscriber specify that the cable operator or DBS provider is not liable for credits or refunds in the event that programming becomes unavailable, we seek comment on whether to deem such provisions unenforceable if we were to adopt a rebate requirement.”)

[19] ATVA Comments, supra note 8 at 11.

[20] NPRM, supra note 1 at para. 6.

[21] See ATVA Comments, supra note 8 at 3 (“The Commission seeks information on the extent to which MVPDs grant rebates today. The answer is that, in today’s competitive marketplace, many ATVA members provide credits, with significant variations both among providers and among classes of subscribers served by individual providers. This, in turn, suggests that cable and satellite companies already address the issues identified by the Commission, but in a more nuanced and individualized manner than proposed in the Notice.”). See also id. at 5-6 (reporting DIRECTV provides credits to existing customers and makes the offer of credits easy to find online or via customer service representatives). See also id. at 7 (reporting DIRECTV and DISH provide credits to requesting subscribers and Verizon compensates subscribers “in certain circumstances”).

[22] See Zarakas & Verlinda, supra note 7 at para. 21 (“DISH provides certain offers to requesting customers in the case of programming blackouts, which may include a $5 per month credit, a free over-the-air antenna for big 4 local channel blackouts, or temporary free programming upgrades for cable network blackouts.”).

[23] See id. at para. 21.

[24] See ATVA Comments, supra note 8 at 4 (“If Disney blacks out ESPN on a cable system, for example, subscribers still have many ways to get ESPN. This includes both traditional competitors to cable (which are losing subscribers) and a wide array of online video providers (which are gaining subscribers).”); Comments of Verizon, MB Docket No. 24-20 (Mar. 8, 2024), https://www.fcc.gov/ecfs/document/10308316105453/1 [hereinafter “Verizon Comments”] at 12 (“In today’s competitive marketplace, consumers have many options for viewing broadcasters’ content in the event of a blackout — they can switch among MVPDs, or forgo MVPD services altogether and watch on a streaming platform or over the air. And when a subscriber switches or cancels service, it is extremely costly for video providers to win them back.”); DISH Comments, supra note 7 at 7 (“[L]ocal network stations have also been able to use another lever: the phenomenal success of over-the-top video streaming and the emergence of several online video distributors (‘OVDs’), some of which have begun incorporating local broadcast stations in their offerings.”); Comments of the New York State Public Service Commission, MB Docket No. 24-20 (Mar. 8, 2024), https://www.fcc.gov/ecfs/document/10308156370046/1 [hereinafter “NYPSC Comments”] at 2 (identifying streaming services and Internet Protocol Television (IPTV) providers such as YouTube TV, Sling, and DirecTV Stream as available alternatives).

[25] See ATVA Comments, supra note 8 at 4.

[26] NYPSC Comments, supra note 24 at 2.

[27] ATVA Comments, supra note 8 at 4-5.

[28] NTCA Comments, supra note 9 at 3; see Luke Bouma, Another Cable TV Company Announces It Will Shut Down Its TV Service Because of “Extreme Price Increases from Programmers,” Cord Cutters News (Dec. 10, 2023), https://cordcuttersnews.com/another-cable-tv-company-announces-it-will-shut-down-its-tv-service-because-of-extreme-price-increases-from-programmers (reporting the announced shutdown of DUO Broadband’s cable TV and streaming TV services because of increased programming fees, affecting several Kentucky counties).

[29] ATVA Comments, supra note 8 at note 15; DISH Comments, supra note 7 at 3, 8; NAB Comments, supra note 10 at 5; Comments of NCTA—The Internet & Television Association, MB Docket No. 24-20 (Mar. 8, 2024), https://www.fcc.gov/ecfs/document/1030958439598/1 [hereinafter “NCTA Comments”] at 2, 11.

[30] See ATVA Comments, supra note 8 at n. 19 (“With more distributors, programmers ‘lose less’ if they fail to reach agreement with any individual cable or satellite provider.”); Zarakas & Verlinda, supra note 7 at para. 6 (“This bargaining power has been further exacerbated by the increase in the number of distribution platforms coming from the growth of online video distributors. The bargaining leverage of cable networks has also received a boost from the proliferation of distribution platforms.”); id. at para. 13 (“Growth of OVDs has reduced MVPD bargaining leverage”).

[31] See DISH Comments, supra note 7 at 6 (“For one thing, the consolidation of the broadcast industry over the last ten years has exacerbated the imbalance further. This consolidation, fueled itself by the broadcasters’ interest in ever-steeper retransmission price increases, has effectively been a game of “and then there were none,” with small independent groups of two or three stations progressively vanishing from the picture.”); Zarakas & Verlinda, supra note 7 at para. 6 (concluding consolidation among local networks is associated with increased retransmission fees).

[32] See Jeffrey A. Eisenach, The Economics of Retransmission Consent, at 9 n.22 (Empiris LLC, Mar. 2009), available at https://nab.org/documents/resources/050809EconofRetransConsentEmpiris.pdf.

[33] See Robert Marich, TV Faces Blackout Blues, Variety (Dec. 10, 2011), https://variety.com/2011/tv/news/tv-faces-blackout-blues-1118047261.

[34] See Economics of Broadcast TV Retransmission Revenue 2020, S&P Global Mkt. Intelligence (2020), https://www.spglobal.com/marketintelligence/en/news-insights/blog/economics-of-broadcast-tv-retransmission-revenue-2020.

[35] Cf. Zarakas & Verlinda, supra note 7 at para. 6.

[36] Retransmission Fee Revenue for U.S. Local TV Stations, Pew Research Center (Jul. 2022), https://www.pewresearch.org/journalism/chart/sotnm-local-tv-u-s-local-tv-station-retransmission-fee-revenue; Advertising Revenue for Local TV, Pew Research Center (Jul. 13, 2021), https://www.pewresearch.org/journalism/chart/sotnm-local-tv-advertising-revenue-for-local-tv.

[37] DISH Comments, supra note 7 at 4.

[38] Park, et al., supra note 12 at 118 (“With stations receiving more retransmission compensation, a new phenomenon has also emerged since the 2010s: reverse retransmission revenues, whereby networks receive a portion of their affiliates and owned-and-operated stations’ retransmission revenues. As retransmission fees have become more important to television stations, broadcast networks and MVPDs, negotiations over contract terms and fees have become more contentious and protracted.”).

[39] Marich, supra note 33.

[40] DISH Comments, supra note 7 at 11.

[41] NCTA Comments, supra note 29 at 2.

[42] See Zarakas & Verlinda, supra note 7 at para. 15 (citing George S. Ford, A Retrospective Analysis of Vertical Mergers in Multichannel Video Programming Distribution Markets: The Comcast-NBCU Merger, Phoenix Ctr. for Advanced L. & Econ. Pub. Pol’y Studies (Dec. 2017), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3138713).

[43] Aaron Heresco & Stephanie Figueroa, Over the Top: Retransmission Fees and New Commodities in the U.S. Television Industry, 29 Democratic Communiqué 19, 36 (2020).

[44] NTCA Comments, supra note 9 at 3.

[45] NPRM, supra note 1 at paras. 6-8.

[46] See supra notes 21-22 and accompanying text.

[47] NPRM, supra note 1 at n. 9.

[48] See id. at para. 6.

[49] See id. at n.12 (citing Alex Weprin, Weather Channel Brushes Off a Blackout, Politico (Feb. 6, 2014), https://www.politico.com/media/story/2014/02/weather-channel-brushes-off-a-blackout-001667); David Lieberman, The Weather Channel Returns To DirecTV, Deadline (Apr. 8, 2014), https://deadline.com/2014/04/the-weatherchannel-returns-directv-deal-711602.

[50] See U.S. Television Networks, USTVDB (retrieved Mar. 28, 2024), https://ustvdb.com/networks.

[51] See Lieberman, supra note 49.

[52] See NPRM, supra note 1 at para. 6.

[53] See NCTA Comments, supra note 29 at 5.

[54] See id. at 3; see also Lieberman, supra note 49 (indicating that carriage consent agreement ending a blackout of The Weather Channel on DIRECTV required The Weather Channel to cut its reality programming by half on weekdays).

[55] See Alex Weprin & Lesley Goldberg, What’s Next for Freeform After Being Dropped by Charter, Hollywood Reporter (Dec. 14, 2023), https://www.hollywoodreporter.com/tv/tv-news/freeform-disney-charter-hulu-1235589827 (reporting that Freeform is a Disney-owned cable channel that currently caters to younger women; the channel began as a spinoff of the Christian Broadcasting Network, was subsequently rebranded as The Family Channel, then Fox Family Channel, and then ABC Family, before rebranding as Freeform).

[56] See NCTA Comments, supra note 29 at 5.

[57] Verizon Comments, supra note 24 at 13 (“Also, as the Commission points out, ‘What if the parties never reach an agreement for carriage? Would subscribers be entitled to rebates in perpetuity and how would that be calculated?’ The absurdity of these questions underscores the absurdity of the proposed regulation.”)

[58] See DISH Comments, supra note 7 at 13.

[59] Id.; see also id. at 22 (“Broadcasters increasingly demand that an MVPD agree to carry other broadcast stations or cable networks as a condition of obtaining retransmission consent for the broadcaster’s primary signal, without giving a real economic alternative to carrying just the primary signal(s).”)

[60] ATVA Comments, supra note 8 at 13 (“[t]here is the additional complication that cable and satellite companies generally agree to confidentiality provisions with broadcasters and programmers—typically at the insistence of the broadcaster or programmer”); DISH Comments, supra note 7 at 21 (reporting broadcasters and programmers “insist” on confidentiality); NCTA Comments, supra note 29 at 6 (“It also bears emphasis that this approach would necessarily publicly expose per-subscriber rates and other highly confidential business information, and that the contracts between the parties prohibit disclosure of this and other information that each find competitively sensitive.”).

[61] NPRM, supra note 1 at para. 8.

[62] Spectrum Northeast, LLC v. Frey, 22 F.4th 287, 293 (1st Cir. 2022), cert. denied, 143 S. Ct. 562 (2023); see also In the Matter of Promoting Competition in the American Economy: Cable Operator and DBS Provider Billing Practices, MB Docket No. 23-405, at n. 55 (Jan. 5, 2024), available at https://docs.fcc.gov/public/attachments/DOC-398660A1.pdf.

[63] NPRM, supra note 1 at para. 13.

[64] See supra note 59 and accompanying text for an example of a bundle.

[65] NCTA Comments, supra note 29 at 6.

[66] ATVA Comments, supra note 8 at 11.

[67] NCTA Comments, supra note 29 at 6.


ICLE Comments to NTIA on Dual-Use Foundation AI Models with Widely Available Model Weights

Regulatory Comments I. Introduction We thank the National Telecommunications and Information Administration (NTIA) for the opportunity to contribute to this request for comments (RFC) in the “Dual . . .

I. Introduction

We thank the National Telecommunications and Information Administration (NTIA) for the opportunity to contribute to this request for comments (RFC) in the “Dual Use Foundation Artificial Intelligence Models with Widely Available Model Weights” proceeding. In these comments, we endeavor to offer recommendations to foster the innovative and responsible production of artificial intelligence (AI), encompassing both open-source and proprietary models. Our comments are guided by a belief in the transformative potential of AI, while recognizing NTIA’s critical role in guiding the development of regulations that not only protect consumers but also enable this dynamic field to flourish. The agency should seek to champion a balanced and forward-looking approach toward AI technologies that allows them to evolve in ways that maximize their social benefits, while navigating the complexities and challenges inherent in their deployment.

NTIA’s question “How should [the] potentially competing interests of innovation, competition, and security be addressed or balanced?”[1] gets to the heart of ongoing debates about AI regulation. There is no panacea to be discovered, as all regulatory choices require balancing tradeoffs. It is crucial to bear this in mind when evaluating, e.g., regulatory proposals that implicitly treat AI as inherently dangerous and regard it as obvious that stringent regulation is the only effective strategy to mitigate such risks.[2] Such presumptions discount AI’s unknown but potentially enormous capacity to produce innovation, and inadequately account for other tradeoffs inherent in imposing a risk-based framework (e.g., requiring disclosure of trade secrets or particular kinds of transparency that could yield new cybersecurity attack vectors). Adopting an overly cautious stance risks not only stifling AI’s evolution, but also precluding a full exploration of its potential to foster social, economic, and technological advancement. A more restrictive regulatory environment may also render AI technologies more homogenous and smother development of the kinds of diverse AI applications needed to foster robust competition and innovation.

We observe this problematic framing in the executive order (EO) from which this RFC originates.[3] The EO repeatedly proclaims the importance of “[t]he responsible development and use of AI” in order to “mitigat[e] its substantial risks.”[4] Specifically, the order highlights concerns over “dual-use foundation models”—i.e., AI systems that, while beneficial, could pose serious risks to national security, national economic security, national public health, or public safety.[5] Concerningly, one of the categories the EO flags as illicit “dual use” is systems “permitting the evasion of human control or oversight through means of deception or obfuscation.”[6] This open-ended category could be interpreted so broadly that essentially any general-purpose generative-AI system would qualify.

The EO also repeatedly distinguishes “open” versus “closed” approaches to AI development, while calling for “responsible” innovation and competition.[7] On our reading, the emphasis the EO places on this distinction raises alarm bells about the administration’s inclination to stifle innovation through overly prescriptive regulatory frameworks, diminishment of the intellectual property rights that offer incentives for innovation, and regulatory capture that favors incumbents over new entrants. In favoring one model of AI development over another, the EO’s prescriptions could inadvertently hamper the dynamic competitive processes that are crucial both for technological progress and for the discovery of solutions to the challenges that AI technology poses.

Given the inchoate nature of AI technology—to say nothing of the uncertain markets in which that technology will ultimately be deployed and commercialized—NTIA has an important role to play in elucidating for policymakers the nuances that might lead innovators to choose an open or closed development model, without presuming that one model is inherently better than the other—or that either is necessarily “dangerous.” Ultimately, the preponderance of AI risks will almost certainly emerge idiosyncratically. It will be incumbent on policymakers to address such risks in an iterative fashion as they become apparent. For now, it is critical to resist the urge to enshrine crude and blunt categories for the heterogeneous suite of technologies currently gathered under the broad banner of “AI.”

Section II of these comments highlights the importance of grounding AI regulation in actual harms, rather than speculative risks, while outlining the diversity of existing AI technologies and the need for tailored approaches. Section III begins with a discussion of some of the benefits and challenges posed by both open and closed approaches to AI development, while cautioning against overly prescriptive definitions of “openness” and advocating flexibility in regulatory frameworks. It proceeds to examine the EO’s prescription to regulate so-called “dual-use” foundation models, underscoring some potential unintended consequences for open-source AI development and international collaboration. Section IV offers some principles to craft an effective regulatory model for AI, including distinguishing between low-risk and high-risk applications, avoiding static regulatory approaches, and adopting adaptive mechanisms like regulatory sandboxes and iterative rulemaking. Section V concludes.

II. Risk Versus Harm in AI Regulation

In many of the debates surrounding AI regulation, disproportionate focus is placed on the need to mitigate risks, without sufficient consideration of the immense benefits that AI technologies could yield. Moreover, because these putative risks remain largely hypothetical, proposals to regulate AI descend quickly into an exercise in shadowboxing.

Indeed, there is no single coherent definition of what even constitutes “AI.” The term encompasses a wide array of technologies, methodologies, and applications, each with distinct characteristics, capabilities, and implications for society. From foundational models that can generate human-like text, to algorithms capable of diagnosing diseases with greater accuracy than human doctors, to “simple” algorithms that facilitate a more tailored online experience, AI applications and their underlying technologies are as varied as they are transformative.

This diversity has profound implications for the regulation and development of AI. Very different regulatory considerations apply to AI systems designed for autonomous vehicles than to those used in financial algorithms or creative-content generation. Each application domain comes with its own set of risks, benefits, ethical dilemmas, and potential social impacts, necessitating tailored approaches to each use case. And none of these properties of AI map clearly onto the “open” and “closed” designations highlighted by the EO and this RFC. This counsels a focus on specific domains and specific harms, rather than on how such technologies are developed.[8]

As with prior fast-evolving technologies, what is considered cutting-edge AI today may be obsolete tomorrow. This rapid pace of innovation further complicates the task of crafting policies and regulations that will be both effective and enduring. Policymakers and regulators must navigate this terrain with a nuanced understanding of AI’s multifaceted nature, including by embracing flexible and adaptive regulatory frameworks that can accommodate AI’s continuing evolution.[9] A one-size-fits-all approach could inadvertently stifle innovation or entrench the dominance of a few large players by imposing barriers that disproportionately affect smaller entities or emerging technologies.

Experts in law and economics have long scrutinized both market conduct and regulatory rent seeking that serve to enhance or consolidate market power by disadvantaging competitors, particularly by raising rivals’ costs.[10] Firms may employ various tactics that do not involve direct price competition to undermine competitors or exclude them from the market. It is widely recognized that “engaging with legislative bodies or regulatory authorities to enact regulations that negatively impact competitors” produces analogous outcomes.[11] It is therefore critical that the emerging markets for AI technologies not engender opportunities for firms to acquire regulatory leverage over rivals. Instead, recognizing the plurality of AI technologies and encouraging a multitude of approaches to AI development could help to cultivate a more vibrant and competitive ecosystem, driving technological progress forward and maximizing AI’s potential social benefits.

This overarching approach counsels skepticism about risk-based regulatory frameworks that fail to acknowledge that the theoretical harms of one type of AI system may be entirely different from those of another. Obviously, the regulation of autonomous drones is a very different sort of problem than the regulation of predictive policing or automated homework tutors. Even within a single circumscribed domain of generative AI—such as “smart chatbots” like ChatGPT or Claude—different applications may present entirely different kinds of challenges. A highly purpose-built version of such a system might be employed by government researchers to develop new materiel for the U.S. Armed Forces, while a general-purpose commercial chatbot would employ layers of protection to ensure that ordinary users could not learn how to make advanced weaponry. Rather than treating “chatbots” as possible vectors for weapons development, a more appropriate focus would target high-capability systems designed to assist in developing such weaponry. Were a general-purpose chatbot inadvertently to reveal some information on building weapons, all incentives would lead that AI’s creators to treat the disclosure as a bug to fix, not a feature to expand.

Take, for example, the recent public response to the much less problematic AI-system malfunctions that accompanied Google’s release of its Gemini program.[12] Gemini was found to generate historically inaccurate images, such as ethnically diverse U.S. senators from the 1800s, including women.[13] Google quickly acknowledged that it did not intend for Gemini to create inaccurate historical images and turned off the image-generation feature to allow time for the company to work on significant improvements before re-enabling it.[14] While Google blundered in its initial release, it had every incentive to discover and remedy the problem. The market response provided further incentive for Google to get it right in the future.[15] Placing the development of such systems under regulatory scrutiny because some users might be able to jailbreak a model and generate some undesirable material would create disincentives to the production of AI systems more generally, with little gained in terms of public safety.

Rather than focus on the speculative risks of AI, it is essential to ground regulation in the need to address tangible harms that stem from the observed impacts of AI technologies on society. Moreover, focusing on realistic harms would facilitate a more dynamic and responsive regulatory approach. As AI technologies evolve and new applications emerge, so too will the potential harms. A regulatory framework that prioritizes actual harms can adapt more readily to these changes, enabling regulators to update or modify policies in response to new evidence or social impacts. This flexibility is particularly important for a field like AI, where technological advancements could quickly outpace regulation, creating gaps in oversight that may leave individuals and communities vulnerable to harm.

Furthermore, like any other body of regulatory law, AI regulation must be grounded in empirical evidence and data-driven decision making. Demanding a solid evidentiary basis as a threshold for intervention would help policymakers to avoid the pitfalls of reacting to sensationalized or unfounded AI fears. This would not only enhance regulators’ credibility with stakeholders, but would also ensure that resources are dedicated to addressing the most pressing and substantial issues arising from the development of AI.

III. The Regulation of Foundation Models

NTIA is right to highlight the tremendous promise that attends the open development of AI technologies:

Dual use foundation models with widely available weights (referred to here as open foundation models) could play a key role in fostering growth among less resourced actors, helping to widely share access to AI’s benefits…. Open foundation models can be readily adapted and fine-tuned to specific tasks and possibly make it easier for system developers to scrutinize the role foundation models play in larger AI systems, which is important for rights- and safety-impacting AI systems (e.g. healthcare, education, housing, criminal justice, online platforms etc.)

…Historically, widely available programming libraries have given researchers the ability to simultaneously run and understand algorithms created by other programmers. Researchers and journals have supported the movement towards open science, which includes sharing research artifacts like the data and code required to reproduce results.[16]

The RFC proceeds to seek input on how to define “open” and “widely available.”[17] These, however, are the wrong questions. NTIA should instead proceed from the assumption that there are no harms inherent to either “open” or “closed” development models; it should be seeking input on anything that might give rise to discrete harms in either open or closed systems.

NTIA can play a valuable role by recommending useful alterations to existing law where gaps currently exist, regardless of the business or distribution model employed by the AI developer. In short, there is nothing necessarily more or less harmful about adopting an “open” or a “closed” approach to software systems. The decision to pursue one path over the other will be made based on the relevant tradeoffs that particular firms face. Embedding such distinctions in regulation is arbitrary, at best, and counterproductive to the fruitful development of AI, at worst.

A. ‘Open’ or ‘Widely Available’ Model Weights

To the extent that NTIA is committed to drawing distinctions between “open” and “closed” approaches to developing foundation models, it should avoid overly prescriptive definitions of what constitutes “open” or “widely available” model weights that could significantly hamper the progress and utility of AI technologies.

Imposing narrow definitions risks creating artificial boundaries that fail to accurately reflect AI’s technical and operational realities. They could also inadvertently exclude or marginalize innovative AI models that fall outside those rigid parameters, despite their potential to contribute positively to technological advancement and social well-being. For instance, a definition of “open” that requires complete public accessibility without any form of control or restriction might discourage organizations from sharing their models, fearing misuse or loss of intellectual property.

Moreover, prescriptive definitions could stifle the organic growth and evolution of AI technologies. The AI field is characterized by its rapid pace of change, where today’s cutting-edge models may become tomorrow’s basic tools. Prescribing fixed criteria for what constitutes “openness” or “widely available” risks anchoring the regulatory landscape to this specific moment in time, leaving the regulatory framework less able to adapt to future developments and innovations.

Given AI developers’ vast array of applications, methodologies, and goals, it is imperative that any definitions of “open” or “widely available” model weights embrace flexibility. A flexible approach would acknowledge that the various stakeholders within the AI ecosystem have differing needs, resources, and objectives, from individual developers and academic researchers to startups and large enterprises. A one-size-fits-all definition of “openness” would fail to accommodate this diversity, potentially privileging certain forms of innovation over others and skewing the development of AI technologies in ways that may not align with broader social needs.

Moreover, flexibility in defining “open” and “widely available” must allow for nuanced understandings of accessibility and control. There can, for example, be legitimate reasons to limit openness, such as protecting sensitive data, ensuring security, and respecting intellectual-property rights, while still promoting a culture of collaboration and knowledge sharing. A flexible regulatory approach would seek a balanced ecosystem where the benefits of open AI models are maximized, and potential risks are managed effectively.

B. The Benefits of ‘Open’ vs ‘Closed’ Business Models

NTIA asks:

What benefits do open model weights offer for competition and innovation, both in the AI marketplace and in other areas of the economy? In what ways can open dual-use foundation models enable or enhance scientific research, as well as education/training in computer science and related fields?[18]

An open approach to AI development has obvious benefits, as NTIA has itself acknowledged in other contexts.[19] Open-foundation AI models represent a transformative force, characterized by their accessibility, adaptability, and potential for widespread application across various sectors. The openness of these models may serve to foster an environment conducive to innovation, wherein developers, researchers, and entrepreneurs can build on existing technologies to create novel solutions tailored to diverse needs and challenges.

The inherent flexibility of open-foundation models can also catalyze a competitive market, encouraging a healthy ecosystem where entities ranging from startups to established corporations may all participate on roughly equal footing. By lowering some entry barriers related to access to basic AI technologies, this competitive environment can further drive technological advancements and price efficiencies, ultimately benefiting consumers and society at-large.

But more “closed” approaches can also prove very valuable. As NTIA notes in this RFC, it is rarely the case that a firm pursues a purely open or closed approach. These terms exist along a continuum, and firms blend models as necessary.[20] And just as firms readily mix elements of open and closed business models, a regulator should be agnostic about the precise mix that firms employ, which ultimately must align with the realities of market dynamics and consumer preferences.

Both open and closed approaches offer distinct benefits and potential challenges. For instance, open approaches might excel in fostering a broad and diverse ecosystem of applications, thereby appealing to users and developers who value customization and variety. They can also facilitate a more rapid dissemination of innovation, as they typically impose fewer restrictions on the development and distribution of new applications. Conversely, closed approaches, with their curated ecosystems, often provide enhanced security, privacy, and a more streamlined user experience. This can be particularly attractive to users less inclined to navigate the complexities of open systems. Under the right conditions, closed systems can likewise foster a healthy ecosystem of complementary products.

The experience of modern digital platforms demonstrates that there is no universally optimal approach to structuring business activities, illustrating the tradeoffs inherent in choosing between open and closed business models. The optimal choice depends on the specific needs and preferences of the relevant market participants. As Jonathan M. Barnett has noted:

Open systems may yield no net social gain over closed systems, can pose a net social loss under certain circumstances, and . . . can impose a net social gain under yet other circumstances.[21]

Similar considerations apply in the realm of AI development. Closed or semi-closed ecosystems can offer such advantages as enhanced security and curated offerings, which may appeal to certain users and developers. These benefits, however, may come at the cost of potentially limited innovation, as a firm must rely on its own internal processes for research and development. Open models, on the other hand, while fostering greater collaboration and creativity, may also introduce risks related to quality control, intellectual-property protection, and a host of other concerns that may be better controlled in a closed business model. Even along innovation dimensions, closed platforms can in many cases outperform open models.

With respect to digital platforms like the App Store and Google Play Store, there is a “fundamental welfare tradeoff between two-sided proprietary…platforms and two-sided platforms which allow ‘free entry’ on both sides of the market.”[22] Consequently, “it is by no means obvious which type of platform will create higher product variety, consumer adoption and total social welfare.”[23]

To take another example, consider the persistently low adoption rates for consumer versions of the open-source Linux operating system, versus more popular alternatives like Windows or MacOS.[24] A closed model like Apple’s MacOS is able to outcompete open solutions by better leveraging network effects and developing a close relationship with end users.[25] Even in this example, adoption of open versus closed models varies across user types, with, e.g., developers showing a strong preference for Linux over Mac, and only a slight preference for Windows over Linux.[26] This underscores the point that the suitability of an open or closed model varies not only by firm and product, nor even solely by user, but by the unique fit of a particular model for a particular user in a particular context. Many of those Linux-using developers will likely not use it on their home computing device, for example, even if they prefer it for work.

The dynamics among consumers and developers further complicate prevailing preferences for open or closed models. For some users, the security and quality assurance provided by closed ecosystems outweigh the benefits of open systems’ flexibility. On the developer side, more controlled ecosystems can smooth the transaction costs associated with developing and marketing applications, lowering barriers to entry and democratizing application development, potentially leading to greater innovation within those ecosystems. Moreover, distinctions between open and closed models can play a critical role in shaping inter-brand competition. A regulator placing its thumb on the business-model scale would push the relevant markets toward less choice and lower overall welfare.[27]

By differentiating themselves through a focus on ease-of-use, quality, security, and user experience, closed systems contribute to a vibrant competitive landscape where consumers have clear choices between differing “brands” of AI. Forcing an AI developer to adopt practices that align with a regulator’s preconceptions about the relative value of “open” and “closed” risks homogenizing the market and diminishing the very competition that spurs innovation and consumer choice.

Consider some of the practical benefits sought by deployers when choosing between open and closed models. For example, it is not straightforward to say that closed is inherently better than open when considering issues of data sharing or security; even here, there are tradeoffs. Open innovation in AI—characterized by the sharing of data, algorithms, and methodologies within the research community and beyond—can mitigate many of the risks associated with model development. This openness fosters a culture of transparency and accountability, where AI models and their applications are subject to scrutiny by a broad community of experts, practitioners, and the general public. This collective oversight can help to identify and address potential safety and security concerns early in the development process, thus enhancing AI technologies’ overall trustworthiness.

By contrast, a closed system may implement and enforce standardized security protocols more quickly. A closed system may have a sharper, more centralized focus on providing data security to users, which may perform better along some dimensions. And while the availability of code may provide security in some contexts, in other circumstances, closed systems perform better.[28]

In considering ethical AI development, different types of firms should be free to experiment with different approaches, even blending them where appropriate. For example, Anthropic’s approach to “Collective Constitutional AI,” used in developing its Claude models, adopts what is arguably a “semi-open” model, blending proprietary elements with certain aspects of openness to foster innovation, while also maintaining a level of control.[29] This model might strike an appropriate balance, in that it ensures some degree of proprietary innovation and competitive advantage while still benefiting from community feedback and collaboration.

On the other hand, fully open-source development could lead to a different, potentially superior result that meets a broader set of needs through community-driven evolution and iteration. There is no way to determine, ex ante, that either an open or a closed approach to AI development will inherently provide superior results for developing “ethical” AI. Each has its place, and, most likely, the optimal solutions will involve elements of both approaches.

In essence, codifying a regulatory preference for one business model over the other would oversimplify the intricate balance of tradeoffs inherent to platform ecosystems. Economic theory and empirical evidence suggest that both open and closed platforms can drive innovation, serve consumer interests, and stimulate healthy competition, with all of these considerations depending heavily on context. Regulators should therefore aim for flexible policies that support coexistence of diverse business models, fostering an environment where innovation can thrive across the continuum of openness.

C. Dual-Use Foundation Models and Transparency Requirements

The EO and the RFC both focus extensively on so-called “dual-use” foundation models:

Foundation models are typically defined as “powerful models that can be fine-tuned and used for multiple purposes.” Under the Executive Order, a “dual-use foundation model” is “an AI model that is trained on broad data; generally uses self-supervision; contains at least tens of billions of parameters; is applicable across a wide range of contexts; and that exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters….”[30]

But this framing will likely do more harm than good. As noted above, the terms “AI” or “AI model” are frequently invoked to refer to very different types of systems. Further defining these models as “dual use” is also unhelpful, as virtually any tool in existence can be “dual use” in this sense. Indeed, from a certain perspective, all software—particularly highly automated software—can pose a serious risk to “national security” or “safety.” Encryption and other privacy-protecting tools certainly fit this definition.[31] While it is crucial to mitigate harms associated with the misuse of AI technologies, the blanket treatment of all foundation models under this category is overly simplistic.

The EO identifies certain clear risks, such as the possibility that models could aid in the creation of chemical, biological, or nuclear weaponry. These categories are obvious subjects for regulatory control, but the EO then appears to open a giant definitional loophole that threatens to subsume virtually any useful AI system. It employs expansive terminology to describe a more generalized threat—specifically, that dual-use models could “[permit] the evasion of human control or oversight through means of deception or obfuscation.”[32] Such language could encompass a wide array of general-purpose AI models. Furthermore, by labeling systems capable of bypassing human decision making as “dual use,” the EO implicitly suggests that all AI could pose risks warranting national-security levels of scrutiny.

Given the EO’s broad definition of AI as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments,” numerous software systems not typically considered AI might be categorized as “dual-use” models.[33] Essentially, any sufficiently sophisticated statistical-analysis tool could qualify under this definition.
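
To see how far this sweeps, consider a deliberately mundane sketch (our illustration; the data and setting are hypothetical): an ordinary least-squares regression “makes predictions” that can influence real-world decisions, and thus arguably fits the EO’s definition verbatim.

```python
# Illustrative only: a plain linear regression arguably satisfies the EO's
# definition of AI, since it is a machine-based system making "predictions"
# that influence real or virtual environments. All values are hypothetical.
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[1.0], [2.0], [3.0], [4.0]])  # e.g., an employee's years of tenure
y = np.array([41.0, 45.0, 52.0, 58.0])      # e.g., salary, in thousands of dollars

model = LinearRegression().fit(X, y)
print(model.predict(np.array([[5.0]])))     # a "prediction" informing a real decision
```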

A significant repercussion of the EO’s very broad reporting mandates for dual-use systems, and one directly relevant to the RFC’s interest in promoting openness, is that they might chill open-source AI development.[34] Firms dabbling in AI technologies—many of which might not consider their projects to be dual use—might keep their initiatives secret until they are significantly advanced. Faced with the financial burden of adhering to the EO’s reporting obligations, companies that lack a sufficiently robust revenue model to cover both development costs and legal compliance might be motivated to dodge regulatory scrutiny in the initial phases, consequently dampening the prospects for transparency.

It is hard to imagine how open-source AI projects could survive in such an environment. Open-source AI code libraries like TensorFlow[35] and PyTorch[36] foster remarkable innovation by allowing developers to create new applications that use cutting-edge models. How could a paradigmatic startup developer working out of a garage genuinely commit to open-source development if tools like these fall within the EO’s ambit? Restricting access to the weights that models use—let alone avoiding open-source development entirely—may hinder independent researchers’ ability to advance the forefront of AI technology.
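
For a sense of what such a regime would burden, consider a minimal sketch—our own, with hypothetical dimensions and data—of the everyday open development these libraries enable: a developer takes openly shared model components, freezes them, and trains only a small task-specific layer on top.

```python
# A minimal, hypothetical fine-tuning sketch using PyTorch primitives.
# "base" stands in for openly distributed model weights; a developer
# freezes it and adapts only a small task-specific head.
import torch
import torch.nn as nn
import torch.nn.functional as F

base = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 8))
for p in base.parameters():
    p.requires_grad = False  # reuse the shared layers as-is

head = nn.Linear(8, 2)       # the only part this developer trains
model = nn.Sequential(base, head)

opt = torch.optim.Adam(head.parameters(), lr=1e-3)
x, y = torch.randn(64, 16), torch.randint(0, 2, (64,))
for _ in range(10):          # a few illustrative gradient steps
    opt.zero_grad()
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    opt.step()
```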

Moreover, scientific endeavors typically benefit from the contributions of researchers worldwide, as collaborative efforts on a global scale are known to fast-track innovation. The pressure the EO applies to open-source development of AI tools could curtail international cooperation, thereby distancing American researchers from crucial insights and collaborations. For example, AI’s capacity to propel progress in numerous scientific areas is potentially vast—e.g., utilizing MRI images and deep learning for brain-tumor diagnoses[37] or employing machine learning to push the boundaries of materials science.[38] Such research does not benefit from stringent secrecy, but thrives on collaborative development. Enabling a broader community to contribute to and expand upon AI advancements supports this process.

Individuals respond to incentives. Just as well-intentioned seatbelt laws paradoxically led to an uptick in risky driving behaviors,[39] ill-considered obligations placed on open-source AI developers could unintentionally stifle the exchange of innovative concepts crucial to maintaining the United States’ leadership in AI innovation.

IV. Regulatory Models that Support Innovation While Managing Risks Effectively

In the rapidly evolving landscape of artificial intelligence, it is paramount to establish governance and regulatory frameworks that both encourage innovation and ensure safety and ethical integrity. An effective regulatory model for AI should be adaptive and principles-based, and should foster a collaborative environment among regulators, developers, researchers, and the broader community. Several principles can help in developing this regime.

A. Low-Risk vs High-Risk AI

First, a clear distinction should be made between low-risk AI applications that enhance operational efficiency or consumer experience and high-risk applications that could have significant safety implications. Low-risk applications like search algorithms and chatbots should be governed by a set of baseline ethical guidelines and best practices that encourage innovation, while ensuring basic standards are met. On the other hand, high-risk applications—such as those used by law enforcement or the military—would require more stringent review processes, including impact assessments, ethical reviews, and ongoing monitoring to mitigate potentially adverse effects.

Contrast this with the European Union’s recently enacted AI Act and its decision to create presumptions of risk for general-purpose AI (GPAI) systems, such as large language models (LLMs), that present what the EU terms “systemic risk.”[40] Article 3(65) of the AI Act defines systemic risk as “a risk that is specific to the high-impact capabilities of general-purpose AI models, having a significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain.”[41]

This definition bears similarities to the “Hand formula” in U.S. tort law, which balances the burden of precautions against the probability and severity of potential harm to determine negligence.[42] The AI Act’s notion of systemic risk, however, is applied more broadly to entire categories of AI systems based on their theoretical potential for widespread harm, rather than on a case-by-case basis.
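
For reference, the Hand formula finds negligence when the burden of taking a precaution is less than the expected harm it would prevent:

B < P × L

where B is the burden of the precaution, P is the probability of the harm, and L is the gravity of the resulting loss. The AI Act’s systemic-risk designation, by contrast, attaches to whole categories of models before any case-specific estimate of P or L is made.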

The designation of LLMs as posing “systemic risk” is problematic for several reasons. It creates a presumption of risk based merely on a GPAI system’s scale of operations, without any consideration of the actual likelihood or severity of harm in specific use cases. This could lead to unwarranted regulatory intervention and unintended consequences that hinder the development and deployment of beneficial AI technologies. This broad definition of systemic risk also gives regulators significant leeway to intervene in how firms develop and release their AI products, potentially blocking European citizens’ access to cutting-edge tools even in the absence of tangible harms.

While it is important to address potential risks associated with AI systems, the AI Act’s approach risks stifling innovation and hindering the development of beneficial AI technologies within the EU.

B. Avoid Static Regulatory Approaches

AI regulators are charged with overseeing a dynamic and rapidly developing market, and should therefore avoid erecting rigid frameworks that force new innovations into ill-fitting categories. The “regulatory sandbox” may provide a better model for balancing innovation with risk management. By allowing developers to test and refine AI technologies in a controlled environment under regulatory oversight, sandboxes can help identify and address potential issues before wider deployment, all while facilitating dialogue between innovators and regulators. This approach not only accelerates the development of safe and ethical AI solutions, but also builds mutual understanding and trust. Where possible, NTIA should facilitate policy experimentation with regulatory sandboxes in the AI context.

Meta’s Open Loop program is an example of this kind of experimentation.[43] This program is a policy prototyping research project focused on evaluating the National Institute of Standards and Technology (NIST) AI Risk Management Framework (RMF) 1.0.[44] The goal is to assess whether the framework is understandable, applicable, and effective in assisting companies to identify and manage risks associated with generative AI. It also provides companies an opportunity to familiarize themselves with the NIST AI RMF and its application in risk-management processes for generative AI systems. Additionally, it aims to collect data on existing practices and offer feedback to NIST, potentially influencing future RMF updates.

1. Regulation as a discovery process

Another key principle is to ensure that regulatory mechanisms are adaptive. Some examples of adaptive mechanisms are iterative rulemaking and feedback loops that allow regulations to be updated continuously in response to new developments and insights. Such mechanisms enable policymakers to respond swiftly to technological breakthroughs, ensuring that regulations remain relevant and effective, without stifling innovation.

Geoffrey Manne & Gus Hurwitz have recently proposed a framework for “regulation as a discovery process” that could be adapted to AI.[45] They argue for a view of regulation not merely as a mechanism for enforcing rules, but as a process for discovering information that can inform and improve regulatory approaches over time. This perspective is particularly pertinent to AI, where the pace of innovation and the complexity of technologies often outstrip regulators’ understanding and ability to predict future developments. This framework:

in its simplest formulation, asks regulators to consider that they might be wrong. That they might be asking the wrong questions, collecting the wrong information, analyzing it the wrong way—or even that Congress has given them the wrong authority or misunderstood the problem that Congress has tasked them to address.[46]

That is to say, an adaptive approach to regulation requires epistemic humility, with the understanding that, particularly for complex, dynamic industries:

there is no amount of information collection or analysis that is guaranteed to be “enough.” As Coase said, the problem of social cost isn’t calculating what those costs are so that we can eliminate them, but ascertaining how much of those social costs society is willing to bear.[47]

In this sense, modern regulators’ core challenge is to develop processes that allow for iterative development of knowledge, which is always in short supply. This requires a shift in how an agency conceptualizes its mission, from one of writing regulations to one of assisting lawmakers to assemble, filter, and focus on the most relevant and pressing information needed to understand a regulatory subject’s changing dynamics.[48]

As Hurwitz & Manne note, existing efforts to position some agencies as information-gathering clearinghouses suffer from a number of shortcomings—most notably, that they tend to operate on an ad hoc basis, reporting to Congress in response to particular exigencies.[49] The key to developing a “discovery process” for AI regulation would instead require setting up ongoing mechanisms to gather and report on data, as well as directing the process toward “specifications for how information should be used, or what the regulator anticipated to find in the information, prior to its collection.”[50]

Embracing regulation as a discovery process means acknowledging the limits of our collective knowledge about AI’s potential risks and benefits. This underscores why regulators should prioritize generating and utilizing new information through regulatory experiments, iterative rulemaking, and feedback loops. A more adaptive regulatory framework could respond to new developments and insights in AI technologies, thereby ensuring that regulations remain relevant and effective, without stifling innovation.

Moreover, Hurwitz & Manne highlight the importance of considering regulation as an information-producing activity.[51] In AI regulation, this could involve setting up mechanisms that allow regulators, innovators, and the public to contribute to and benefit from a shared pool of knowledge about AI’s impacts. This could include public databases of AI incidents, standardized reporting of AI-system performance, or platforms for sharing best practices in AI safety and ethics.

Static regulatory approaches may fail to capture the evolving landscape of AI applications and their societal implications. Instead, a dynamic, information-centric regulatory strategy that embraces the market as a discovery process could better facilitate beneficial innovations, while identifying and mitigating harms.

V. Conclusion

As the NTIA navigates the complex landscape of AI regulation, it is imperative to adopt a nuanced, forward-looking approach that balances the need to foster innovation with the imperatives of ensuring public safety and ethical integrity. The rapid evolution of AI technologies necessitates a regulatory framework that is both adaptive and principles-based, eschewing static snapshots of the current state of the art in favor of flexible mechanisms that can accommodate the dynamic nature of this field.

Central to this approach is recognizing that the field of AI encompasses a diverse array of technologies, methodologies, and applications, each with its distinct characteristics, capabilities, and implications for society. A one-size-fits-all regulatory model would not only be ill-suited to the task at hand, but would also risk stifling innovation and hindering the United States’ ability to maintain its leadership in the global AI industry. NTIA should focus instead on developing tailored approaches that distinguish between low-risk and high-risk applications, ensuring that regulatory interventions are commensurate with the identifiable harms and benefits associated with specific AI use cases.

Moreover, the NTIA must resist the temptation to rely on overly prescriptive definitions of “openness” or to favor particular business models over others. The coexistence of open and closed approaches to AI development is essential to foster a vibrant, competitive ecosystem that drives technological progress and maximizes social benefits. By embracing a flexible regulatory framework that allows for experimentation and iteration, the NTIA can create an environment conducive to innovation while still ensuring that appropriate safeguards are in place to mitigate potential risks.

Ultimately, the success of the U.S. AI industry will depend on the ability of regulators, developers, researchers, and the broader community to collaborate in developing governance frameworks that are both effective and adaptable. By recognizing the importance of open development and diverse business models, the NTIA can play a crucial role in shaping the future of AI in ways that promote innovation, protect public interests, and solidify the United States’ position as a global leader in this transformative field.

[1] Dual Use Foundation Artificial Intelligence Models With Widely Available Model Weights, Docket No. 240216-0052, 89 F.R. 14059, National Telecommunications and Information Administration (Mar. 27, 2024) at 14063, question 8(a) [hereinafter “RFC”].

[2] See, e.g., Kristian Stout, Systemic Risk and Copyright in the EU AI Act, Truth on the Market (Mar. 19, 2024), https://truthonthemarket.com/2024/03/19/systemic-risk-and-copyright-in-the-eu-ai-act.

[3] Exec. Order No. 14110, 88 F.R. 75191 (2023), https://www.federalregister.gov/documents/2023/11/01/2023-24283/safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence?_fsi=C0CdBzzA [hereinafter “EO”].

[4] See, e.g., EO at §§ 1, 2(c), 5.2(e)(ii), 8(c).

[5] Id. at § 3(k).

[6] Id. at § 3(k)(iii).

[7] Id. at § 4.6. As NTIA notes, the administration refers to “widely available model weights,” which is equivalent to “open foundation models” in this proceeding. RFC at 14060.

[8] For more on the “open” vs “closed” distinction and its poor fit as a regulatory lens, see, infra, at nn. 19-41 and accompanying text.

[9] Adaptive regulatory frameworks are discussed, infra, at nn. 42-53 and accompanying text.

[10] See Steven C. Salop & David T. Scheffman, Raising Rivals’ Costs, 73:2 Am. Econ. Rev. 267, 267–71 (1983), http://www.jstor.org/stable/1816853.

[11] See Steven C. Salop & David T. Scheffman, Cost-Raising Strategies, 36:1 J. Indus. Econ. 19 (1987), https://doi.org/10.2307/2098594.

[12] Cindy Gordon, Google Pauses Gemini AI Model After Latest Debacle, Forbes (Feb. 29, 2024), https://www.forbes.com/sites/cindygordon/2024/02/29/google-latest-debacle-has-paused-gemini-ai-model/?sh=3114d093536c.

[13] Id.

[14] Id.

[15] Breck Dumas, Google Loses $96B in Value on Gemini Fallout as CEO Does Damage Control, Yahoo Finance (Feb. 28, 2024), https://finance.yahoo.com/news/google-loses-96b-value-gemini-233110640.html.

[16] RFC at 14060.

[17] RFC at 14062, question 1.

[18] RFC at 14062, question 3(a).

[19] Department of Commerce, Competition in the Mobile Application Ecosystem (2023), https://www.ntia.gov/report/2023/competition-mobile-app-ecosystem (“While retaining appropriate latitude for legitimate privacy, security, and safety measures, Congress should enact laws and relevant agencies should consider measures (such as rulemaking) designed to open up distribution of lawful apps, by prohibiting… barriers to the direct downloading of applications.”).

[20] RFC at 14061 (“‘openness’ or ‘wide availability’ of model weights are also terms without clear definition or consensus. There are gradients of ‘openness,’ ranging from fully ‘closed’ to fully ‘open’”).

[21] See Jonathan M. Barnett, The Host’s Dilemma: Strategic Forfeiture in Platform Markets for Informational Goods, 124 Harv. L. Rev. 1861, 1927 (2011).

[22] Id. at 2.

[23] Id. at 3.

[24]  Desktop Operating System Market Share Worldwide Feb 2023 – Feb 2024, statcounter, https://gs.statcounter.com/os-market-share/desktop/worldwide (last visited Mar. 27, 2024).

[25]  Andrei Hagiu, Proprietary vs. Open Two-Sided Platforms and Social Efficiency (Harv. Bus. Sch. Strategy Unit, Working Paper No. 09-113, 2006).

[26] Joey Sneddon, More Developers Use Linux than Mac, Report Shows, Omg Linux (Dec. 28, 2022), https://www.omglinux.com/devs-prefer-linux-to-mac-stackoverflow-survey.

[27] See Michael L. Katz & Carl Shapiro, Systems Competition and Network Effects, 8 J. Econ. Persp. 93, 110 (1994) (“[T]he primary cost of standardization is loss of variety: consumers have fewer differentiated products to pick from, especially if standardization prevents the development of promising but unique and incompatible new systems”).

[28] See, e.g., Nokia, Threat Intelligence Report 2020 (2020), https://www.nokia.com/networks/portfolio/cyber-security/threat-intelligence-report-2020; Randal C. Picker, Security Competition and App Stores, Network Law Review (Aug. 23, 2021), https://www.networklawreview.org/picker-app-stores.

[29] Collective Constitutional AI: Aligning a Language Model with Public Input, Anthropic (Oct. 17, 2023), https://www.anthropic.com/news/collective-constitutional-ai-aligning-a-language-model-with-public-input.

[30] RFC at 14061.

[31] Encryption and the “Going Dark” Debate, Congressional Research Service (2017), https://crsreports.congress.gov/product/pdf/R/R44481.

[32] EO at § 3(k)(iii).

[33] EO at § 3(b).

[34] EO at § 4.2 (requiring companies developing dual-use foundation models to provide ongoing reports to the federal government on their activities, security measures, model weights, and red-team testing results).

[35] An End-to-End Platform for Machine Learning, TensorFlow, https://www.tensorflow.org (last visited Mar. 27, 2024).

[36] Learn the Basics, PyTorch, https://pytorch.org/tutorials/beginner/basics/intro.html (last visited Mar. 27, 2024).

[37] Akmalbek Bobomirzaevich Abdusalomov, Mukhriddin Mukhiddinov, & Taeg Keun Whangbo, Brain Tumor Detection Based on Deep Learning Approaches and Magnetic Resonance Imaging, 15(16) Cancers (Basel) 4172 (2023), available at https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10453020.

[38] Keith T. Butler, et al., Machine Learning for Molecular and Materials Science, 559 Nature 547 (2018), available at https://www.nature.com/articles/s41586-018-0337-2.

[39] The Peltzman Effect, The Decision Lab, https://thedecisionlab.com/reference-guide/psychology/the-peltzman-effect (last visited Mar. 27, 2024).

[40] European Parliament, European Parliament Legislative Resolution of 13 March 2024 on the Proposal for a Regulation of the European Parliament and of the Council on Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts, COM/2021/206, available at https://www.europarl.europa.eu/doceo/document/TA-9-2024-0138_EN.html [hereinafter “EU AI Act”].

[41] Id. at Art. 3(65).

[42] See Stephen G. Gilles, On Determining Negligence: Hand Formula Balancing, the Reasonable Person Standard, and the Jury, 54 Vanderbilt L. Rev. 813, 842-49 (2001).

[43] See Open Loop’s First Policy Prototyping Program in the United States, Meta, https://www.usprogram.openloop.org (last visited Mar. 27, 2024).

[44] Id.

[45] Justin (Gus) Hurwitz & Geoffrey A. Manne, Pigou’s Plumber: Regulation as a Discovery Process, SSRN (2024), available at https://laweconcenter.org/resources/pigous-plumber.

[46] Id. at 32.

[47] Id. at 33.

[48] See id. at 28-29.

[49] Id. at 37.

[50] Id. at 37-38.

[51] Id.

Innovation & the New Economy

Spectrum Pipeline Act a Promising Start that Needs Balance

Popular Media Given how important digital connections are to Americans’ daily lives, it’s urgent that Congress move to renew the Federal Communications Commission’s authority to auction parts . . .

Given how important digital connections are to Americans’ daily lives, it’s urgent that Congress move to renew the Federal Communications Commission’s authority to auction parts of the public airwaves.

That authority lapsed a little over a year ago and efforts to reinstate it have been repeatedly stuck in partisan gridlock.

Read the full piece here.

Telecommunications & Regulated Utilities

Amicus of IP Law Experts to the 2nd Circuit in Hachette v Internet Archive

Amicus Brief INTEREST OF AMICI CURIAE Amici Curiae are 24 former government officials, former judges, and intellectual property scholars who have developed copyright law and policy, researched . . .

INTEREST OF AMICI CURIAE

Amici Curiae are 24 former government officials, former judges, and intellectual property scholars who have developed copyright law and policy, researched and written about copyright law, or both. They are concerned about ensuring that copyright law continues to secure both the rights of authors and publishers in creating and disseminating their works and the rights of the public in accessing these works. It is vital for this Court to maintain this balance between creators and the public set forth in the constitutional authorization to Congress to create the copyright laws. Amici have no stake in the parties or in the outcome of the case. The names and affiliations of the members of the Amici are set forth in Addendum A below.[1]

SUMMARY OF ARGUMENT

Copyright fulfills its constitutional purpose to incentivize the creation and dissemination of new works by securing to creators the exclusive rights of reproduction and distribution. 17 U.S.C. § 106. Congress narrowly tailored the exceptions to these rights to avoid undermining the balanced system envisioned by the Framers. See 17 U.S.C. §§ 107–22. As James Madison recognized, the “public good fully coincides . . . with the claims of individuals” in the protection of copyright. The Federalist No. 43, at 271–72 (James Madison) (Clinton Rossiter ed., 1961). Internet Archive (“IA”) and its amici wrongly frame copyright’s balance of interests as between the incentive to create, on the one hand, and the public good, on the other hand. That is not the balance that copyright envisions.

IA’s position also ignores the key role that publishers serve in the incentives copyright offers to authors and other creators. Few authors, no matter how intellectually driven, will continue to perfect their craft if the economic rewards are insufficient to meet their basic needs. As the Supreme Court observed, “copyright law celebrates the profit motive, recognizing that the incentive to profit from the exploitation of copyrights will redound to the public benefit by resulting in the proliferation of knowledge.” Eldred v. Ashcroft, 537 U.S. 186, 212 n.18 (2003) (quoting Am. Geophysical Union v. Texaco Inc., 802 F. Supp. 1, 27 (S.D.N.Y. 1992)). Accordingly, the Supreme Court and Congress have long recognized that copyright secures the fruits of intermediaries’ labors in their innovative development of distribution mechanisms of authors’ works. Copyright does not judge the value of a book by its cover price. Rather, core copyright policy recognizes that the profit motive drives the willingness ex ante to invest time and resources in creating both copyrighted works and the means to distribute them. In sum, commercialization is fundamental to a functioning copyright system that achieves its constitutional purpose.

IA’s unauthorized reproduction and duplication of complete works runs roughshod over this framework. Its concept of controlled digital lending (CDL) does not fall into any exception—certainly not any conception of fair use recognized by the courts or considered by Congress—and thus violates copyright owners’ exclusive rights. Expanding the fair use doctrine to immunize IA’s wholesale copying would upend Congress’s carefully-considered, repeated rejections of similar proposals.

Hoping to excuse its disregard for copyright law, IA and its amici attempt to turn the fair use analysis on its head. They acknowledge that the first sale exception does not permit CDL, as this Court made clear in Capitol Records, LLC v. ReDigi Inc., 910 F.3d 649 (2d Cir. 2018).[2] They also are aware that courts consistently have rejected variations on the argument that wholesale copying, despite a format shift, is permissible under fair use.[3] Nevertheless, IA and its amici ask this Court, for the first time in history, to create a first sale-style exemption within the fair use analysis. CDL is not the natural evolution of libraries in the digital age; rather, like Frankenstein’s monster, it is an abomination composed of disparate parts of copyright doctrine. If endorsed by this Court, it would undermine the constitutional foundation of copyright jurisprudence and the separation of powers.

The parties and other amici address the specific legal doctrines, as well as the technical and commercial context in which those doctrinal requirements apply in this case; Amici therefore provide additional information on the nature and function of copyright that should inform this Court’s analysis and decision.

First, although IA and its amici argue that there are public benefits to the copying in which IA has engaged that support a finding that CDL is fair use, their arguments ignore that copyright itself promotes the public good and the inevitable harms that would result if copyright owners were unable to enforce their rights against the wholesale, digital distribution of their works by IA.

Second, IA’s assertion of the existence of a so-called “digital first sale” doctrine—a principle that, unlike the actual first sale statute, would permit the reproduction, as well as the distribution, of copyrighted works—is in direct conflict with Congress’s and the Copyright Office’s repeated study (and rejection) of similar proposals. Physical and digital copies simply are different, and it is not an accident that first sale applies only to the distribution of physical copies. Ignoring decades of research and debate, IA pretends instead that Congress has somehow overlooked digital first sale, yet left it open to the courts to engage in policymaking by shoehorning it into the fair use doctrine. By doing so, IA seeks to thwart the democratic process to gain in the courts what CDL’s proponents have not been able to get from Congress.

Third, given that there is no statutory support for CDL, most libraries offer their patrons access to digital works by entering into licensing agreements with authors and their publishers. Although a minority of libraries have participated in IA’s CDL practice, and a few have filed amicus briefs in support of IA in this Court, the vast majority of libraries steer clear because they recognize that wholesale copying and distribution deters the creation of new works. As author Sandra Cisneros understands: “Real libraries do not do what Internet Archive does.” A-250 (Cisneros Decl.) ¶12. There are innumerable ways of accessing books, none of which require authors and publishers to live in a world where their books are illegally distributed for free.

No court has ever found that reproducing and giving away entire works—en masse, without permission, and without additional comment, criticism, or justification—constitutes fair use. IA’s CDL theory is a fantasy divorced from the Constitution, the laws enacted by Congress, and the longstanding policies that have informed copyright jurisprudence. This Court should reject IA’s effort to erase authors and publishers from the copyright system.

[1] The parties have consented to the filing of this brief. Amici Curiae and their counsel authored this brief. Neither a party, its counsel, nor any person other than Amici and their counsel contributed money that was intended to fund preparing or submitting this brief.

[2] See SPA-38 (“IA accepts that ReDigi forecloses any argument it might have under Section 109(a).”); Dkt. 60, Brief for Defendant-Appellant Internet Archive (hereinafter “IA Br.”) (appealing only the district court’s decision on fair use).

[3] See, e.g., ReDigi, 910 F.3d at 662; UMG Recordings, Inc. v. MP3.com, Inc., 92 F. Supp. 2d 349, 352 (S.D.N.Y. 2000); see also Disney Enters., Inc. v. VidAngel, Inc., 869 F.3d 848, 861–62 (9th Cir. 2017).

Intellectual Property & Licensing

Systemic Risk and Copyright in the EU AI Act

TOTM The European Parliament’s approval last week of the AI Act marked a significant milestone in the regulation of artificial intelligence. While the law’s final text . . .

The European Parliament’s approval last week of the AI Act marked a significant milestone in the regulation of artificial intelligence. While the law’s final text is less alarming than what was initially proposed, it nonetheless includes some ambiguities that could be exploited by regulators in ways that would hinder innovation in the EU.

Among the key features emerging from the legislation are its introduction of “general purpose AI” (GPAI) as a regulatory category and the ways that these GPAI might interact with copyright rules. Moving forward in what is rapidly becoming a global market for generative-AI services, it also bears reflecting on how the AI Act’s copyright provisions contrast with current U.S. copyright law. 

Read the full piece here.

Intellectual Property & Licensing

ICLE Comments to FTC on Children’s Online Privacy Protection Rule NPRM

Regulatory Comments Introduction We thank the Federal Trade Commission (FTC) for this opportunity to comment on the notice of proposed rulemaking (NPRM) to update the Children’s Online . . .

Introduction

We thank the Federal Trade Commission (FTC) for this opportunity to comment on the notice of proposed rulemaking (NPRM) to update the Children’s Online Privacy Protection Rule (“COPPA Rule”).

The International Center for Law and Economics (ICLE) is a nonprofit, nonpartisan research center whose work promotes the use of law & economics methodologies to inform public-policy debates. We believe that intellectually rigorous, data-driven analysis will lead to efficient policy solutions that promote consumer welfare and global economic growth.[1]

ICLE’s scholars have written extensively on privacy and data-security issues, including those related to children’s online safety and privacy. We also previously filed comments as part of the COPPA Rule Review and will make some of the same points below.[2]

The Children’s Online Privacy Protection Act (COPPA) sought to strike a balance in protecting children without harming the utility of the internet for children. As Sen. Richard Bryan (D-Nev.) put it when he laid out the purpose of COPPA:

The goals of this legislation are: (1) to enhance parental involvement in a child’s online activities in order to protect the privacy of children in the online environment; (2) to enhance parental involvement to help protect the safety of children in online fora such as chatrooms, home pages, and pen-pal services in which children may make public postings of identifying information; (3) to maintain the security of personally identifiable information of children collected online; and (4) to protect children’s privacy by limiting the collection of personal information from children without parental consent. The legislation accomplishes these goals in a manner that preserves the interactivity of children’s experience on the Internet and preserves children’s access to information in this rich and valuable medium.[3]

In other words, COPPA was designed to protect children from online threats by promoting parental involvement in a way that also preserves a rich and vibrant marketplace for children’s content online. Consequently, the pre-2013 COPPA Rule did not define personal information to include persistent identifiers standing alone. It is these persistent identifiers that are critical for the targeted advertising that funds the interactive online platforms and the creation of children’s content the legislation was designed to preserve.

COPPA applies to the “operator of any website or online service” that is either “directed to children that collects personal information from children” or that has “actual knowledge that it is collecting personal information from a child.”[4] These operators must “obtain verifiable parental consent for the collection, use, or disclosure of personal information.” The NPRM, following the mistaken 2013 amendments to the COPPA Rule, continues to define “personal information” to include persistent identifiers that are necessary for the targeted advertising undergirding the internet ecosystem.

Below, we argue that, before the FTC moves further toward restricting platform operators and content creators’ ability to monetize their work through targeted advertising, it must consider the economics of multisided platforms. The current path will lead to less available free content for children and more restrictions on their access to online platforms that depend on targeted advertising. Moreover, the proposed rules are inconsistent with the statutory text of COPPA, as persistent identifiers do not by themselves enable contacting specific individuals. Including them in the definition of “personal information” is also contrary to the statute’s purpose, as it will lead to a less vibrant internet ecosystem for children.

Finally, there are better ways to protect children online, including by promoting the use of available technological and practical solutions to avoid privacy harms. To comply with existing First Amendment jurisprudence regarding online speech, it is necessary to rely on these less-restrictive means to serve the goal of protecting children without unduly impinging on their speech interests online.

I. The Economics of Online Multisided Platforms

Most of the “operators of websites and online services” subject to the COPPA Rule are what economists call multisided markets, or platforms.[5] Such platforms derive their name from the fact that they serve at least two different types of customers and facilitate their interaction. Multisided platforms generate “indirect network effects,” described by one economist as a situation where “participants on one side value being able to interact with participants on the other side… lead[ing] to interdependent demand.”[6]

Online platforms provide content to one side and access to potential consumers on the other side. In order to keep demand high, online platforms often offer free access to users, whose participation is subsidized by those participants on the other side of the platform (such as advertisers) that wish to reach them.[7] This creates a positive feedback loop in which more participants on one side of the platform leads to more participants on the other.
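
A stylized numerical sketch (our own; the coefficients are invented) illustrates this interdependence: model each side’s participation as a simple response to the other’s, and the feedback loop converges to a mutually reinforcing equilibrium.

```python
# Hypothetical two-sided platform dynamics. Each side's participation
# depends on the other's; because the combined feedback (4.0 * 0.05 = 0.2)
# is below one, the loop converges to a stable equilibrium.
users, advertisers = 0.0, 0.0
for _ in range(50):
    users = 100.0 + 4.0 * advertisers  # users drawn by ad-subsidized free content
    advertisers = 0.05 * users         # advertisers drawn by reachable users
print(round(users, 1), round(advertisers, 2))  # -> 125.0 6.25
```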

This dynamic is also true of platforms with content “directed to children.” Revenue is collected not from those users, but primarily from the other side of the platform—i.e., advertisers who pay for access to the platform’s users. To be successful, online platforms must keep enough—and the right type of—users engaged to maintain demand for advertising.

Moreover, many “operators” under COPPA are platforms that rely on user-generated content. Thus, they must also consider how to attract and maintain high-demand content creators, often accomplished by sharing advertising revenue. If platforms fail to serve the interests of high-demand content creators, those creators may leave the platform, thus reducing its value.

Online platforms acting within the market process are usually going to be the parties best-positioned to make decisions on behalf of platform users. Operators with content directed to children may even compete on privacy policies and protections for children by providing tools to help users avoid what they (or, in this context, their parents and guardians) perceive to be harms, while keeping users on the platform and maintaining value for advertisers.[8]

There may, however, be examples where negative externalities[9] stemming from internet use are harmful to society more broadly. A market failure could result, for instance, if platforms’ incentives lead them to collect too much (or the wrong types of) information for targeted advertising, or to offer up content that is harmful to children or that keeps them hooked on using the platform.

In situations where there are negative externalities from internet use, there may be a case to regulate online platforms in various ways. Any case for regulation must, however, acknowledge potential transaction costs, as well as how platforms and users may respond to changes in those costs. To get regulation right, the burden of avoiding a negative externality should fall on the least-cost avoider.

The Coase Theorem, derived from the work of Nobel-winning economist Ronald Coase[10] and elaborated on in the subsequent literature,[11] helps to explain the issue at hand:

  1. The problem of externalities is bilateral;
  2. In the absence of transaction costs, resources will be allocated efficiently, as the parties bargain to solve the externality problem;
  3. In the presence of transaction costs, the initial allocation of rights does matter; and
  4. In such cases, the burden of avoiding the externality’s harm should be placed on the least-cost avoider, while taking into consideration the total social costs of the institutional framework.

In one of Coase’s examples, the noise from a confectioner using his candy-making machine is a potential cost to the doctor next door, who consequently cannot use his office to conduct certain testing. Simultaneously, the doctor moving his office next door to the confectioner is a potential cost to the confectioner’s ability to use his equipment.

In a world of well-defined property rights and low transaction costs, the initial allocation of rights would not matter, because the parties could bargain to overcome the harm in a mutually beneficial manner—i.e., the confectioner could pay the doctor for lost income or to set up sound-proof walls, or conversely, the doctor could pay the confectioner to reduce the sound of his machines.[12] But since there are transaction costs that prevent this sort of bargain, it is important whether the initial right is allocated to the doctor or the confectioner. To maximize societal welfare, the cost should be placed on the entity that can avoid the harm at the lowest cost.[13]
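
A stylized numerical example (ours, not Coase’s own figures) makes the point: suppose the noise causes the doctor $100 in lost income, the doctor could avoid the harm by relocating his testing room for $30, and the confectioner could avoid it by soundproofing for $70. Total social cost is minimized by placing the burden on the doctor, the least-cost avoider, regardless of which party is thought to have “caused” the harm.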

In the context of the COPPA Rule, website operators and online services create incredible value for their users, but they also can, at times, impose negative externalities relevant to children who use their services. In the absence of transaction costs, it would not matter whether operators must obtain verifiable parental consent before collecting, using, or disclosing personal information, or whether the initial burden is placed on parents and children to avoid the harms associated with such collection, use, or disclosure.

But given that there are transaction costs involved in obtaining (and giving) verifiable parental consent,[14] it matters how the law defines personal information (which serves as a proxy for a property right, in Coase’s framing). If personal information is defined too broadly and the transaction costs for providers to gain verifiable parental consent are too high, the result may be that the societal benefits of children’s internet use will be lost, as platform operators restrict access beyond the optimum level.

The threat of liability for platform operators under COPPA also risks excessive collateral censorship.[15] This arguably has already occurred, as operators like YouTube have restricted content creators’ ability to monetize their work through targeted advertising, leading on balance to less children’s content. By wrongly placing the burden on operators to avoid harms associated with targeted advertising, societal welfare is reduced, including the welfare of children who no longer get the benefits of that content.

On the other hand, there are situations where website operators and online services are the least-cost avoiders. For example, they may be the parties best-placed to monitor and control harms associated with internet use in cases where it is difficult or impossible to hold those using their platforms accountable for the harms they cause.[16] In other words, operators should still be held liable under COPPA when they facilitate adults’ ability to message children, or to identify a child’s location without parental consent, in ways that could endanger children.[17] Placing the burden on children or their parents to avoid such harms could allow operators to impose un- or undercompensated harms on society.

Thus, in order to get the COPPA Rule’s balance right, it is important to determine whether it is the operators or their users who are the least-cost avoiders. Placing the burden on the wrong parties would harm societal welfare, either by reducing the value that online platforms confer to their users, or in placing more uncompensated negative externalities on society.

II. Persistent Identifiers and ‘Personal Information’

As mentioned above, under COPPA, a website operator or online service that is either directed to children or that has actual knowledge that it collects personal information from a child must obtain “verifiable parental consent” for the “collection, use or disclosure” of that information.[18] But the NPRM continues to apply the expanded definition of “personal information” to include persistent identifiers from the 2013 amendments.

COPPA’s definition for personal information is “individually identifiable information” collected online.[19] The legislation included examples such as first and last name; home or other physical address; and email address, telephone number, or Social Security number.[20] These are all identifiers obviously connected to people’s real identities. COPPA does empower the FTC to determine whether other identifiers should be included, but any such identifier must “permit[] the physical or online contacting of a specific individual”[21] or must be “information concerning the child or the parents of that child that the website collects online from the child and combines with an identifier described in this paragraph.”[22]

In 2013, the FTC amended the definition of personal information to include:

A persistent identifier that can be used to recognize a user over time and across different Web sites or online services. Such persistent identifier includes, but is not limited to, a customer number held in a cookie, an Internet Protocol (IP) address, a processor or device serial number, or unique device identifier.[23]

The NPRM here continues this error.

Neither IP addresses nor device identifiers alone “permit the physical or online contacting of a specific individual,” as required by 15 U.S.C. § 6501(8)(F). A website or app could not determine a person’s identity, or whether that person is an adult or a child, from these pieces of information alone. In order for persistent identifiers, like those relied upon for targeted advertising, to be counted as personal information under 15 U.S.C. § 6501(8)(G), they need to be combined with other identifiers listed in the definitions. In other words, it is only when a persistent identifier is combined with a first and last name, an address, an email, a phone number, or a Social Security number that it should be considered personal information protected by the statute.

While administrative agencies receive Chevron deference in court challenges when definitions are ambiguous, this text, when illuminated by canons of statutory construction,[24] is clear. The canon of ejusdem generis applies when general words follow an enumeration of two or more things.[25] The general words are taken to apply only to persons or things of the same general kind or class as those mentioned specifically. Persistent identifiers, such as cookies, bear little resemblance to the other examples of “personally identifiable information” listed in the statute, such as first and last name, address, phone, email, or Social Security number. Only when combined with such information could a persistent identifier become personal information.

The NPRM states that the Commission is “not persuaded” by this line of argumentation, pointing back to the same reasoning offered in the 2013 amendments. It asserts that it is “the reality that at any given moment a specific individual is using that device,” which “underlies the very premise behind behavioral advertising.”[26] Moreover, the NPRM reasons that “while multiple people in a single home often use the same phone number, home address, and email address, Congress nevertheless defined these identifiers as ‘individually identifiable information’ in the COPPA statute.”[27] But this reasoning is flawed.

While multiple people regularly share an address, and sometimes even a phone number or email, each of these identifiers allows for contacting an individual person in a way that a persistent identifier simply does not. In each of those cases, bad actors can use such information to send direct messages to people (phone numbers and emails); find their physical location (address); and potentially to cause them harm.

A persistent identifier, on its own, is not the same. Without subpoenaing an internet service provider (ISP) or virtual private network (VPN) provider, a bad actor intent on harm could neither locate the person to whom the persistent identifier is assigned nor message them directly. Persistent identifiers are useful primarily to online platforms in supporting their internal operations (which the NPRM continues to allow) and in serving users targeted advertising.

Moreover, the fact that bills seeking to update COPPA—proposed but never passed by Congress—have proposed expanding the definition of personal information to include persistent identifiers suggests that the FTC has asserted authority that it does not have under the current statute.[28] Under Supreme Court precedent,[29] when considering whether an agency has the authority that it claims to pass rules, courts must consider whether Congress has rejected proposals to expand the agency’s jurisdiction in similar ways.

The NPRM also ignores the practical realities of the relationship between parents and children when it comes to devices and internet use. Parental oversight is already built into any type of advertisement (including targeted ads) that children see. Few children can view those advertisements without their parents providing them a device and the internet access to do so. Even fewer children can realistically make their own purchases. Consequently, the NPRM misunderstands targeted advertising in the context of children’s content, which is not based on any knowledge about the users as individuals, but on the browsing and search history of the device they happen to be using.

Children under age 13, in particular, are extremely unlikely to have purchased the devices they use; to have paid for the internet access to use those devices; or to have any disposable income or means to pay for goods and services online. Thus, contrary to the NPRM’s assumptions, the actual “targets” of this advertising—even on websites or online services that host children’s content—are the children’s parents.

This NPRM continues the 2013 amendments’ mistake and will continue to greatly reduce the ability of children’s content to generate revenue through the use of relatively anonymous persistent identifiers. As we describe in the next section, the damage done by the 2013 amendments is readily apparent, and the Commission should take this opportunity to rectify the problem.

III. More Parental Consent, Less Children’s Content

As outlined above, in a world without transaction costs—or, at least, one in which such costs are sufficiently low—verifiable parental consent would not matter, because it would be extremely easy for a bargain to be struck between operators and parents. In the real world, however, transaction costs exist. In fact, despite the FTC’s best efforts under the COPPA Rule, the transaction costs associated with obtaining verifiable parental consent continue to be sufficiently high as to prevent most operators from seeking that consent for persistent identifiers. As we stated in our previous comments, the economics are simple: if content creators lose access to revenue from targeted advertising, there will be less content created from which children can benefit.

FIGURE 1: Supply Curve for Children’s Online Content

The supply curve for children’s online content shifts left as the marginal cost of monetizing it increases. The marginal cost of monetizing such content is driven upward by the higher compliance costs of obtaining verifiable parental consent before serving targeted advertising. This supply shift means that less online content will be created for children.
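
To illustrate with invented numbers: if inverse supply for children’s content were P = 2 + 0.1Q and demand were P = 10 − 0.1Q, equilibrium output would be Q = 40. If compliance costs added 1 to the marginal cost of every unit, supply would shift to P = 3 + 0.1Q and equilibrium output would fall to Q = 35. The magnitudes are hypothetical, but the direction of the effect is not.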

These results are not speculative at this point. Scholars who have studied the issue have found that the YouTube settlement, made pursuant to the 2013 amendments, resulted in less child-directed online content, due to creators’ inability to monetize that content through targeted advertising. In their working paper “COPPAcalypse? The YouTube Settlement’s Impact on Kids Content,”[30] Garrett Johnson, Tesary Lin, James C. Cooper, & Liang Zhong summarized the issue as follows:

The Children’s Online Privacy Protection Act (COPPA), and its implementing regulations, broadly prohibit operators of online services directed at children under 13 from collecting personal information without providing notice of its data collection and use practices and obtaining verifiable parental consent. Because obtaining verifiable parental consent for free online services is difficult and rarely cost justified, COPPA essentially acts as a de facto ban on the collection of personal information by providers of free child-directed content. In 2013, the FTC amended the COPPA rules to include in the definition of personal information “persistent identifier that can be used to recognize a user over time and across different Web sites or online services,” such as a “customer number held in a cookie . . . or unique device identifier.” This regulatory change meant that, as a practical matter, online operators who provide child-directed content could no longer engage in personalized advertising.

On September 4, 2019, the FTC entered into a consent agreement with YouTube to settle charges that it had violated COPPA. The FTC’s allegations focused on YouTube’s practice of serving personalized advertising on child-directed content at children without obtaining verifiable parental consent. Although YouTube maintains it is a general audience website and users must be at least 13 years old to obtain a Google ID (which makes personalized advertising possible), the FTC complaint alleges that YouTube knew that many of its channels were popular with children under 13, citing YouTube’s own claims to advertisers. The settlement required YouTube to identify child-directed channels and videos and to stop collecting personal information from visitors to these channels. In response, YouTube required channel owners producing [“made-for-kids”] MFK content to designate either their entire channels or specific videos as MFK, beginning on January 1, 2020. YouTube supplemented these self-designations with an automated classifier designed to identify content that was likely directed at children younger than 13. In so doing, YouTube effectively shifted liability under COPPA to the channel owners, who could face up to $42,530 in fines per video if they fail to self-designate and are not detected by YouTube’s classifier.[31]

By requiring verifiable parental consent, the rule change and settlement increased the transaction costs imposed on online platforms that host content created by others. YouTube’s economically rational response was to restrict content creators’ ability to benefit from (considerably more lucrative) personalized advertising. The result was less content created for children, including by driving out less-profitable content creators:

Consistent with a loss in personalized ad revenue, we find that child-directed content creators produce 13% less content and pivot towards producing non-child-directed content. On the demand side, views of child-directed channels fall by 22%. Consistent with the platform’s degraded capacity to match viewers to content, we find that content creation and content views become more concentrated among top child-directed YouTube channels.[32]

This is not the only finding regarding COPPA’s role in reducing the production of content for children. Morgan Reed—president of the App Association, a global trade association for small and medium-sized technology companies—presented extensively at the FTC’s 2019 COPPA Workshop.[33] Reed’s testimony detailed that the transaction costs associated with obtaining verifiable parental consent did little to enhance parental control, but much to reduce the quality and quantity of content directed to children.

It is worth highlighting, in particular, Reed’s repeated use of the words “friction,” “restriction,” and “cost” to describe how COPPA’s institutional features affect the behavior of social-media platforms, parents, and children. While noting that general audience content is “unfettered, meaning that you do not feel restricted by what you can get to, how you do it. It’s easy, it’s low friction. Widely available. I can get it on any platform, in any case, in any context and I can get to it rapidly,” Reed said that COPPA-regulated apps and content are, by contrast, all about:

Friction, restriction, and cost. Every layer of friction you add alters parent behavior significantly. We jokingly refer to it as the over the shoulder factor. If a parent wants access to something and they have to pass it from the back seat to the front seat of the car more than one time, the parent moves on to the next thing. So the more friction you add to an application directed at children the less likely it is that the parent is going to take the steps necessary to get through it because the competition, of course, is as I said, free, unfettered, widely available. Restriction. Kids balk against some of the restrictions. I can’t get to this, I can’t do that. And they say that to the parent. And from the parent’s perspective, fine, I’ll just put in a different age date. They’re participating, they’re parenting but they’re not using the regulatory construction that we all understand.

The COPPA side, expensive, onerous or friction full. We have to find some way around that. Restrictive, fewer features, fewer capabilities, less known or available, and it’s entertaining-ish. …

Is COPPA the barrier? I thought this quote really summed it up. “Seamlessness is expected. But with COPPA, seamlessness is impossible.” And that has been one of the single largest areas of concern. Our folks are looking to provide a COPPA compliant environment. And they’re finding doing VPC is really hard. We want to make it this way, we just walked away. And why do they want to do it? We wanted to create a hub for kids to promote creativity. So these are not folks who are looking to take data and provide interest based advertising. They’re trying to figure out how to do it so they can build an engaging product. Parental consent makes the whole process very complicated. And this is the depressing part. …

We say that VPC is intentional friction. It’s clear from everything we’ve heard in the last two panels that the authors of COPPA, we don’t really want information collected on kids. So friction is intentional. And this is leading to the destruction of general audience applications basically wiping out COPPA apps off the face of the map.[34]

Reed’s use of the word “friction” is particularly enlightening. The economist Mike Munger of Duke University has often described transaction costs as frictions—explaining that, to consumers, all costs are transaction costs.[35] When higher transaction costs are imposed on social-media platforms, end users feel the impact. In this case, the result is that children and parents are left with fewer, and lower-quality, children’s apps and content.

Thus, when the NPRM states that “the Commission [doesn’t] find compelling the argument that the 2013 persistent identifier modification has caused harm by hindering the ability of operators to monetize online content through targeted advertising,”[36] in part because “the 2013 Amendments permit monetization… through providing notice and seeking parental consent for the use of personal information for targeted advertising,”[37] it misses how transaction costs make that form of monetization impractical. The FTC should not ignore the data provided by scholars who have researched the question, nor the direct testimony of app developers.

IV. Lower-Cost Ways to Avoid Harms to Children

Relative to verifiable-parental-consent laws, widely available practical and technological means are a lower-cost way to avoid the negative externalities associated with internet use. As NetChoice put it in the complaint the group filed against Arkansas’ social-media age-verification law, “[p]arents have myriad ways to restrict their children’s access to online services and to keep their children safe on such services.”[38]

NetChoice’s complaint recognized the subjective nature of negative externalities, stating:

Just as people inevitably have different opinions about what books, television shows, and video games are appropriate for minors, people inevitably have different views about whether and to what degree online services are appropriate for minors. While many minors use online services in wholesome and productive ways, online services, like many other technologies, can be abused in ways that may harm minors.[39]

The complaint proceeded to list the many ways that parents can take control and help their children avoid online harms, beginning with the decisions to buy devices for their children and to set terms for how and when they are permitted to use them.[40] Parents can also choose to use tools offered by cell-phone carriers and broadband providers to block certain apps and sites from their children’s devices, or to control with whom their children can communicate and for how long they can use the devices.[41]

NetChoice also pointed to wireless routers that allow parents to filter and monitor online content;[42] parental controls at the device level;[43] third-party filtering applications;[44] and numerous tools offered by NetChoice members that allow relatively low-cost monitoring and control by parents, or even by teen users acting on their own behalf.[45] Finally, the complaint noted that, in response to market demand,[46] NetChoice members expend significant resources curating content to ensure that it is appropriate.[47]

Similarly, parents can protect their children’s privacy simply by taking control of the devices they allow their children to use. Tech-savvy parents can, if they so choose, install software or use ad-blockers to prevent collection of persistent identifiers.[48] Even less tech-savvy parents can make sure that their children are not subject to ads and tracking simply by monitoring their device usage and ensuring they only use YouTube Kids or other platforms created explicitly for children. In fact, most devices and operating systems now have built-in, easy-to-use controls that enable both monitoring and blocking of children’s access to specific apps and websites.[49]

This litany of less-restrictive means to accomplish the goal of protecting children online bears repeating, because even children have some First Amendment interests in receiving online speech.[50] If a court were to examine the COPPA Rule as a speech regulation that forecloses children’s access to online content, the rule would be subject to strict scrutiny. That means it would need to employ the least restrictive means possible to fulfill the statute’s purpose. Educating parents and children on the available practical and technological means to avoid harms associated with internet use, including the collection of data for targeted advertising, would clearly be a less-restrictive alternative to a de facto ban on targeted advertising.

A less-restrictive COPPA rule could still enhance parental involvement and protect children from predators without significantly impairing the marketplace for children’s online content. Parents already have the ability to review their children’s content-viewing habits on devices they buy for them. A COPPA rule that enhances parental control by requiring verifiable parental consent when children would share personal information—like first and last name, address, phone number, email address, or Social Security number—obviously makes sense, along with additions like geolocation data. But it is equally obvious that the relatively anonymized collection of persistent identifiers used to support targeted ads can be addressed at lower cost through practical and technological means, without requiring costly verifiable parental consent.

V. Perils of Bringing More Entities Under the COPPA Rule

The costs of the COPPA Rule would be further exacerbated by the NPRM’s proposal to modify the criteria for determining whether a site or service is directed toward children.[51] These proposed changes, particularly the reliance on third-party reviews and comparisons with “similar websites or online services,” raise significant concerns about both their practical implementation and potential unintended consequences. The latter could include further losses of online content for both children and adults, as content creators drawn into COPPA’s orbit lose access to revenue from targeted advertising.

The FTC’s current practice employs a multi-factor test to ascertain whether a site or service is directed at children under 13. This comprehensive approach considers various elements, including subject matter, visual and audio content, and empirical evidence regarding audience composition.[52] The proposed amendments aim to expand this test by introducing such factors as marketing materials, representations to third parties and, notably, reviews by users or third parties and comparisons with similar websites or services.[53]

The inclusion of third-party reviews and comparisons with similar services as factors in determining a site’s target audience introduces a level of ambiguity and unreliability that would be counterproductive to COPPA’s goals. Without clear standards to evaluate their competence or authority, relying on third-party reviews would leave operators without a solid foundation upon which to assess compliance. This ambiguity could lead to overcompliance. In particular, online platforms that carry third-party content may err on the side of caution in order to align with the spirit of the rule. This threatens to stifle innovation and free expression by restricting creators’ ability to monetize content that has any chance to be considered “directed to children.” Moreover, to avoid this loss of revenue, content creators could shift their focus exclusively to content clearly aimed only at adults, rather than that which could be interesting to adults and children alike.

Similarly, the proposal to compare operators with “similar websites or online services” is fraught with challenges. The lack of guidance on how to evaluate similarity, or on which service sets the standard for compliance, would increase burdens on operators with little evidence of tangible benefits. It is also unclear who would make these determinations and how disputes would be resolved, inviting further compliance costs and potential litigation. Moreover, operators may find it impractical to accurately assess the audience of similar services, further complicating compliance efforts.

Given these considerations, the FTC should not include reliance on third-party reviews or comparisons with similar websites or online services in its criteria for determining whether content is directed at children under 13. These approaches introduce a level of uncertainty and unreliability that could lead to overcompliance, increased costs, and unintended negative impacts on online content and services, including further restrictions on creators who make content interesting to both adults and children. Instead, the FTC should focus on providing clear, direct guidelines that allow operators to assess their compliance with COPPA confidently, without the need to rely on potentially biased or manipulable third-party assessments. This approach would better serve the FTC’s goal of protecting children’s online privacy while ensuring a healthy, innovative online ecosystem.

Conclusion

The FTC should reconsider the inclusion of standalone persistent identifiers in the definition of “personal information.” The NPRM continues to enshrine the primary mistake of the 2013 amendments. This change was inconsistent with the purposes and text of the COPPA statute. It already has reduced, and will continue to reduce, the availability of children’s online content.

[1] ICLE has received financial support from numerous companies, organizations, and individuals, including firms with interests both supportive of and in opposition to the ideas expressed in this and other ICLE-supported works. Unless otherwise noted, all ICLE support is in the form of unrestricted, general support. The ideas expressed here are the authors’ own and do not necessarily reflect the views of ICLE’s advisors, affiliates, or supporters.

[2] Much of these comments are adapted from ICLE’s 2019 COPPA Rule Review Comments, available at https://laweconcenter.org/wp-content/uploads/2019/12/COPPA-Comments-2019.pdf; Ben Sperry, A Law & Economics Approach to Social-Media Regulation, CPI TechREG Chronicle (Feb. 29, 2022), https://laweconcenter.org/resources/a-law-economics-approach-to-social-media-regulation; Ben Sperry, A Coasean Analysis of Online Age-Verification and Parental-Consent Regimes (ICLE Issue Brief, Nov. 9, 2023), available at https://laweconcenter.org/wp-content/uploads/2023/11/Issue-Brief-Transaction-Costs-of-Protecting-Children-Under-the-First-Amendment-.pdf.

[3] 144 Cong. Rec. 11657 (1998) (Statement of Sen. Richard Bryan), available at https://www.congress.gov/crec/1998/10/07/CREC-1998-10-07.pdf#page=303.

[4] 15 U.S.C. § 6502(b)(1)(A).

[5] See, e.g., Jean-Charles Rochet & Jean Tirole, Platform Competition in Two-Sided Markets, 1 J. Euro. Econ. Ass’n 990 (2003).

[6] David S. Evans, Multisided Platforms in Antitrust Practice, at 3 (Oct. 17, 2023), forthcoming, Michael Noel, Ed., Elgar Encyclopedia on the Economics of Competition and Regulation, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4606511.

[7] For instance, many nightclubs hold “ladies’ night” events in which female patrons receive free admission or discounted drinks in order to attract more men, who pay full fare for both.

[8] See, e.g., Ben Sperry, Congress Should Focus on Protecting Teens from Real Harms, Not Targeted Ads, The Hill (Feb. 16, 2023), https://thehill.com/opinion/congress-blog/3862238-congress-should-focus-on-protecting-teens-from-real-harms-not-targeted-ads.

[9] An externality is a side effect of an activity that is not reflected in the cost of that activity—basically, what occurs when we do something whose consequences affect other people. A negative externality occurs when those consequences impose costs on a third party.

[10] See Ronald H. Coase, The Problem of Social Cost, 3 J. L. & Econ. 1 (1960).

[11] See Steven G. Medema, The Coase Theorem at Sixty, 58 J. Econ. Lit. 1045 (2020).

[12] See Coase, supra note 10, at 8-10.

[13] See id. at 34 (“When an economist is comparing alternative social arrangements, the proper procedure is to compare the total social product yielded by these different arrangements.”).

[14] See Part III below.

[15] See Felix T. Wu, Collateral Censorship and the Limits of Intermediary Liability, 87 Notre Dame L. Rev. 293, 295-96 (2011); Geoffrey A. Manne, Ben Sperry, & Kristian Stout, Who Moderates the Moderators: A Law & Economics Approach to Holding Online Platforms Accountable Without Destroying the Internet, 49 Rutgers Computer & Tech. L.J. 26, 39 (2022); Ben Sperry, The Law & Economics of Children’s Online Safety: The First Amendment and Online Intermediary Liability, Truth on the Market (May 12, 2023), https://truthonthemarket.com/2023/05/12/the-law-economics-of-childrens-online-safety-the-firstamendment-and-online-intermediary-liability.

[16] See Geoffrey A. Manne, Kristian Stout, & Ben Sperry, Twitter v. Taamneh and the Law & Economics of Intermediary Liability, Truth on the Market (Mar. 8, 2023), https://truthonthemarket.com/2023/03/08/twitter-v-taamneh-and-the-law-economics-of-intermediary-liability; Ben Sperry, Right to Anonymous Speech, Part 2: A Law & Economics Approach, Truth on the Market (Sept. 6, 2023), https://truthonthemarket.com/2023/09/06/right-to-anonymous-speech-part-2-a-law-economics-approach.

[17] See Statement of Commissioner Alvaro M. Bedoya On the Issuance of the Notice of Proposed Rulemaking to Update the Children’s Online Privacy Protection Rule (COPPA Rule), at 3-4 (Dec. 20, 2023), available at https://www.ftc.gov/system/files/ftc_gov/pdf/BedoyaStatementonCOPPARuleNPRMFINAL12.20.23.pdf (listing examples of these types of enforcement actions).

[18] 15 U.S.C. § 6502(b)(1)(A)(ii).

[19] 15 U.S.C. § 6501(8).

[20] 15 U.S.C. § 6501(8)(A)-(E).

[21] 15 U.S.C. § 6501(8)(F).

[22] 15 U.S.C. § 6501(8)(G).

[23] 16 CFR § 312.2 (Personal information)(7).

[24] See Chevron U.S.A. Inc. v. Natural Resources Defense Council, Inc., 467 U. S. 837, 843 n.9 (1984) (“If a court, employing traditional tools of statutory construction, ascertains that Congress had an intention on the precise question at issue, that intention is the law and must be given effect.”).

[25] What is EJUSDEM GENERIS?, The Law Dictionary: Featuring Black’s Law Dictionary Free Online Legal Dictionary 2nd Ed. (last accessed Dec. 9, 2019), https://thelawdictionary.org/ejusdem-generis.

[26] NPRM at 2043.

[27] Id.

[28] See, e.g., Children and Teens’ Online Privacy Protection Act, S. 1418, § 2(a)(3), 118th Cong. (2024).

[29] See FDA v. Brown & Williamson, 529 U.S. 120, 148-50 (2000).

[30] Garrett A. Johnson, Tesary Lin, James C. Cooper, & Liang Zhong, COPPAcalypse? The YouTube Settlement’s Impact on Kids Content, SSRN (Apr. 26, 2023), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4430334.

[31] Id. at 6-7 (emphasis added).

[32] Id. at 1.

[33] The Future of the COPPA Rule: An FTC Workshop Part 2, Federal Trade Commission (Oct. 7, 2019), available at https://www.ftc.gov/system/files/documents/public_events/1535372/transcript_of_coppa_workshop_part_2_1.pdf.

[34] Id. at 6 (emphasis added).

[35] See Michael Munger, To Consumers, All Costs are Transaction Costs, Am. Inst. Econ. Rsch. (June 13, 2023), https://www.aier.org/article/to-consumers-all-costs-are-transaction-costs.

[36] NPRM at 2043.

[37] Id. at 2034, n. 121.

[38] See NetChoice Complaint, NetChoice, LLC v. Griffin, No. 5:23-CV-05105, 2023 U.S. Dist. LEXIS 154571 (W.D. Ark. 2023), available at https://netchoice.org/wp-content/uploads/2023/06/NetChoice-v-Griffin_-Complaint_2023-06-29.pdf.

[39] Id. at para. 13.

[40] See id. at para. 14.

[41] See id.

[42] See id. at para. 15.

[43] See id. at para. 16.

[44] See id.

[45] See id. at paras. 17, 19-21.

[46] Sperry, supra note 8.

[47] See NetChoice Complaint, supra note 38, at para. 18.

[48] See, e.g., Mary James & Catherine McNally, The Best Ad Blockers 2024, all about cookies (last updated Feb. 29, 2024), https://allaboutcookies.org/best-ad-blockers.

[49] See, e.g., Parental Controls for Apple, Android, and Other Devices, internet matters (last accessed Mar. 7, 2024), https://www.internetmatters.org/parental-controls/smartphones-and-other-devices.

[50] See, e.g., Brown v. Ent. Merchants Ass’n, 564 U.S. 786, 794-95 (2011); NetChoice, LLC v. Griffin, 2023 WL 5660155, at *17 (W.D. Ark. Aug. 31, 2023) (finding Arkansas’s Act 689 “obviously burdens minors’ First Amendment rights” by “bar[ring] minors from opening accounts on a variety of social media platforms.”).

[51] See NPRM at 2047.

[52] See id. at 2046-47.

[53] Id. at 2047 (“Additionally, the Commission believes that other factors can help elucidate the intended or actual audience of a site or service, including user or third-party reviews and the age of users on similar websites or services.”).


ICLE Amicus in RE: Gilead Tenofovir Cases


Dear Justice Guerrero and Associate Justices,

In accordance with California Rule of Court 8.500(g), we are writing to urge the Court to grant the Petition for Review filed by Petitioner Gilead Sciences, Inc. (“Petitioner” or “Gilead”) on February 21, 2024, in the above-captioned matter.

We agree with Petitioner that the Court of Appeal’s finding of a duty of reasonable care in this case “is such a seismic change in the law and so fundamentally wrong, with such grave consequences, that this Court’s review is imperative.” (Pet. 6.) The unprecedented duty of care put forward by the Court of Appeal—requiring prescription drug manufacturers to exercise reasonable care toward users of a current drug when deciding when to bring a new drug to market (Op. 11)—would have far-reaching, harmful implications for innovation that the Court of Appeal failed properly to weigh.

If upheld, this new duty of care would significantly disincentivize pharmaceutical innovation by allowing juries to second-guess complex scientific and business decisions about which potential drugs to prioritize and when to bring them to market. The threat of massive liability simply for not developing a drug sooner would make companies reluctant to invest the immense resources needed to bring new treatments to patients. Perversely, this would deprive the public of lifesaving and less costly new medicines. And the prospective harm from the Court of Appeal’s decision is not limited to the pharmaceutical industry.

We urge the Court to grant the Petition for Review and to hold that innovative firms do not owe the users of current products a “duty to innovate” or a “duty to market”—that is, that firms cannot be held liable to users of a current product for development or commercialization decisions on the basis that those decisions could have facilitated the introduction of a less harmful, alternative product.

Interest of Amicus Curiae

The International Center for Law & Economics (“ICLE”) is a nonprofit, non-partisan global research and policy center aimed at building the intellectual foundations for sensible, economically grounded policy. ICLE promotes the use of law and economics methodologies and economic learning to inform policy debates. It also has longstanding expertise in evaluating law and policy relating to innovation and the legal environment facing commercial activity. In this letter, we wish to briefly highlight some of the crucial considerations concerning the effect on innovation incentives that we believe would arise from the Court of Appeal’s ruling in this case.[1]

The Court of Appeal’s Duty of Care Standard Would Impose Liability Without Requiring Actual “Harm”

The Court of Appeal’s ruling marks an unwarranted departure from decades of products-liability law requiring plaintiffs to prove that the product that injured them was defective. Expanding liability to products never even sold is an unprecedented, unprincipled, and dangerous approach to product liability. Plaintiffs’ lawyers may seek to apply this new theory to many other beneficial products, arguing manufacturers should have sold a superior alternative sooner. This would wreak havoc on innovation across industries.

California Civil Code § 1714 does not impose liability for “fail[ing] to take positive steps to benefit others,” (Brown v. USA Taekwondo (2021) 11 Cal.5th 204, 215), and Plaintiffs did not press a theory that the medicine they received was defective. Moreover, the product included all the warnings required by federal and state law. Thus, Plaintiffs’ case—as accepted by the Court of Appeal—is that they consumed a product authorized by the FDA and were fully aware of its potential side effects, but that they might have had fewer side effects had Gilead decided to accelerate (against some indefinite baseline) the development of an alternative medicine. To call this harm speculative is an understatement, and to dismiss Gilead’s conduct as unreasonable because it was motivated by a crass profit motive, (Op. at 32), elides many complicated facts that belie such a facile assertion.

A focus on the narrow question of profits for a particular drug misunderstands the inordinate complexity of pharmaceutical development and risks seriously impeding the rate of drug development overall. Doing so

[over-emphasizes] the recapture of “excess” profits on the relatively few highly profitable products without taking into account failures or limping successes experienced on the much larger number of other entries. If profits were held to “reasonable” levels on blockbuster drugs, aggregate profits would almost surely be insufficient to sustain a high rate of technological progress. . . . If in addition developing a blockbuster is riskier than augmenting the assortment of already known molecules, the rate at which important new drugs appear could be retarded significantly. Assuming that important new drugs yield substantial consumers’ surplus untapped by their developers, consumers would lose along with the drug companies. Should a tradeoff be required between modestly excessive prices and profits versus retarded technical progress, it would be better to err on the side of excessive profits. (F. M. Scherer, Pricing, Profits, and Technological Progress in the Pharmaceutical Industry, 7 J. Econ. Persp. 97, 113 (1993)).

Indeed, Plaintiffs’ claim on this ground is essentially self-refuting. If the “superior” product they claim was withheld for “profit” reasons was indeed superior, then Petitioner could have expected to earn a superior return on that product. Thus, Plaintiffs claim they were allegedly “harmed” by not having access to a product that Petitioner was not yet ready to market, even though Petitioner had every incentive to release a potentially successful alternative as soon as possible, subject to a complex host of scientific and business considerations affecting the timing of that decision.

Relatedly, the Court of Appeal’s decision rests on the unfounded assumption that Petitioner “knew” TAF was safer than TDF after completing Phase I trials. This ignores the realities of the drug development process and the inherent uncertainty of obtaining FDA approval, even after promising early results. Passing Phase I trials, which typically involve a small number of healthy volunteers, is a far cry from having a marketable drug. According to the Biotechnology Innovation Organization, only 7.9% of drugs that enter Phase I trials ultimately obtain FDA approval.[2] (Biotechnology Innovation Organization, Clinical Development Success Rates and Contributing Factors 2011-2020, Fig. 8b (2021), available at https://perma.cc/D7EY-P22Q.) Even after Phase II trials, which assess efficacy and side effects in a larger patient population, the success rate is only about 15.1%. (Id.) Thus, at the time Gilead decided to pause TAF development, it faced significant uncertainty about whether TAF would ever reach the market, let alone ultimately prove safer than TDF.

Moreover, the clock on Petitioner’s patent exclusivity for TAF was ticking throughout the development process. Had Petitioner “known” that TAF was a safer and more effective drug, it would have had every incentive to bring it to market as soon as possible to maximize the period of patent protection and the potential to recoup its investment. The fact that Petitioner instead chose to focus on TDF strongly suggests that it did not have the level of certainty the Court of Appeal attributed to it.

Although conventional wisdom has often held otherwise, economists generally dispute the notion that companies have an incentive to unilaterally suppress innovation for economic gain.

While rumors long have circulated about the suppression of a new technology capable of enabling automobiles to average 100 miles per gallon or some new device capable of generating electric power at a fraction of its current cost, it is rare to uncover cases where a worthwhile technology has been suppressed altogether. (John J. Flynn, Antitrust Policy, Innovation Efficiencies, and the Suppression of Technology, 66 Antitrust L.J. 487, 490 (1998)).

Calling such claims “folklore,” the economists Armen Alchian and William Allen note that, “if such a [technology] did exist, it could be made and sold at a price reflecting the value of [the new technology], a net profit to the owner.” (Armen A. Alchian & William R. Allen, Exchange & Production: Competition, Coordination, & Control (1983), at 292). Indeed, “even a monopolist typically will have an incentive to adopt an unambiguously superior technology.” (Joel M. Cohen and Arthur J. Burke, An Overview of the Antitrust Analysis of Suppression of Technology, 66 Antitrust L.J. 421, 429 n. 28 (1998)). While nominal suppression of technology can occur for a multitude of commercial and technological reasons, there is scant evidence that such suppression coincides with harm to consumers, except where it affirmatively interferes with market competition under the antitrust laws—a claim not advanced here.

One reason the tort system is inapt for second-guessing commercial development and marketing decisions is that those decisions may be made for myriad reasons that do not map onto the specific safety concern of a products-liability action. For example, in the 1930s, AT&T abandoned the commercial development of magnetic recording “for ideological reasons. . . . Management feared that availability of recording devices would make customers less willing to use the telephone system and so undermine the concept of universal service.” (Mark Clark, Suppressing Innovation: Bell Laboratories and Magnetic Recording, 34 Tech. & Culture 516, 520-24 (1993)). One could easily imagine arguments that coupling telephones and recording devices would promote safety. But the determination of whether safety or universal service (and the avoidance of privacy invasion) was a “better” basis for deciding whether to pursue the innovation is not within the ambit of tort law (nor the capability of a products-liability jury). And yet, it would necessarily become so if the Court of Appeal’s decision were to stand.

A Proper Assessment of Public Policy Would Cut Strongly Against Adoption of the Court of Appeal’s Holding

The Court of Appeal notes that “a duty that placed manufacturers ‘under an endless obligation to pursue ever-better new products or improvements to existing products’ would be unworkable and unwarranted,” (Op. 10), yet avers that “plaintiffs are not asking us to recognize such a duty” because “their negligence claim is premised on Gilead’s possession of such an alternative in TAF; they complain of Gilead’s knowing and intentionally withholding such a treatment….” (Id.)

From an economic standpoint, this is a distinction without a difference.

Both a “duty to invent” and a “duty to market” what is already invented would increase the cost of bringing any innovative product to market by saddling the developer with an expected additional (and unavoidable) obligation as a function of introducing the initial product, differing only perhaps by degree. Indeed, a “duty to invent” could conceivably be more socially desirable because in that case a firm could at least avoid liability by undertaking the process of discovering new products (a socially beneficial activity), whereas the “duty to market” espoused by the Court of Appeal would create only the opposite incentive—the incentive never to gain knowledge of a superior product on the basis of which liability might attach.[3]

And public policy is relevant. This Court in Brown v. Superior Court, (44 Cal. 3d 1049 (1988)), worried explicitly about the “[p]ublic policy” implications of excessive liability rules for the provision of lifesaving drugs. (Id. at 1063-65). As the Court in Brown explained, drug manufacturers “might be reluctant to undertake research programs to develop some pharmaceuticals that would prove beneficial or to distribute others that are available to be marketed, because of the fear of large adverse monetary judgments.” (Id. at 1063). The Court of Appeal agreed, noting that “the court’s decision [in Brown] was grounded in public policy concerns. Subjecting prescription drug manufacturers to strict liability for design defects, the court worried, might discourage drug development or inflate the cost of otherwise affordable drugs.” (Op. 29).

In rejecting the relevance of the argument here, however, the Court of Appeal (very briefly) argued a) that Brown espoused only a policy against burdening pharmaceutical companies with a duty stemming from unforeseeable harms, (Op. 49-50), and b) that the relevant cost here might be “some failed or wasted efforts,” but not a reduction in safety. (Op. 51).[4] Both of these claims are erroneous.

On the first, the legalistic distinction between foreseeable and unforeseeable harm was not, in fact, the determinative distinction in Brown. Rather, that distinction was relevant only because it maps onto the issue of incentives. In the face of unforeseeable, and thus unavoidable, harm, pharmaceutical companies would have severely diminished incentives to innovate. While foreseeable harms might also deter innovation by imposing some additional cost, these costs would be smaller, and avoidable or insurable, so that innovation could continue. To be sure, the Court wanted to ensure that the beneficial, risk-reduction effects of the tort system were not entirely removed from pharmaceutical companies. But that meant a policy decision that necessarily reduced the extent of tort-based risk optimization in favor of the manifest, countervailing benefit of relatively higher innovation incentives. That same calculus applies here, and it is this consideration, not the superficial question of foreseeability, that animated this Court in Brown.

On the second, the Court of Appeal inexplicably fails to acknowledge that the true cost of imposing excessive liability risk through a “duty to market” (or “duty to innovate”) is not limited to the expenditure of wasted resources; it extends to the non-expenditure of any resources at all. The court’s contention appears to contemplate that such a duty would not remove a firm’s incentive to innovate entirely, although it might deter it slightly by increasing its expected cost. But economic incentives operate at the margin. Even if there remains some profit incentive to continue to innovate, imposing liability risk simply for the act of innovating would necessarily reduce the amount of innovation (in some cases, and especially for smaller companies less able to bear the additional cost, to the point of deterring innovation entirely). Even this reduction in incentive is a harm. The fact that some innovation may still occur despite the imposition of considerable liability risk is not a defense of imposing that risk; rather, it is a reason to question its desirability, exactly as this Court did in Brown.

The Court of Appeal’s Decision Would Undermine Development of Lifesaving and Safer New Medicines

Innovation is a long-term, iterative process fraught with uncertainty. At the outset of research and development, it is impossible to know whether a potential new drug will ultimately prove superior to existing drugs. Most attempts at innovation fail to yield a marketable product, let alone one that is significantly safer or more effective than its predecessors. Deciding whether to pursue a particular line of research depends on weighing myriad factors, including the anticipated benefits of the new drug, the time and expense required to develop it, and its financial viability relative to existing products. Sometimes, potentially promising drug candidates are not pursued fully, even if theoretically “better” than existing drugs to some degree, because the expected benefits are not sufficient to justify the substantial costs and risks of development and commercialization.

If left to stand, the Court of Appeal’s decision would mean that whenever this stage of development is reached for a drug that may offer any safety improvement, the manufacturer will face potential liability for failing to bring that drug to market, regardless of the costs and risks involved in its development or the extent of the potential benefit. Such a rule would have severe unintended consequences that would stifle innovation.

First, by exposing manufacturers to liability on the basis of early-stage research that has not yet established a drug candidate’s safety and efficacy, the Court of Appeal’s rule would deter manufacturers from pursuing innovations in the first place. Drug research involves constant iteration, with most efforts failing and the potential benefits of success highly uncertain until late in the process. If any improvement, no matter how small or tentative, could trigger liability for failing to develop the new drug, manufacturers will be deterred from trying to innovate at all.

Second, such a rule would force manufacturers to direct scarce resources to developing and commercializing drugs that offer only small or incremental benefits because failing to do so would invite litigation. This would necessarily divert funds away from research into other potential drugs that could yield greater advancements. Further, as each small improvement is made, it reduces the relative potential benefit from, and therefore the incentive to undertake, further improvements. Rather than promoting innovation, the Court of Appeal’s decision would create incentives that favor small, incremental changes over larger, riskier leaps with the greatest potential to significantly advance patient welfare.

Third, and conversely, the Court of Appeal’s decision would set an unrealistic and dangerous standard of perfection for drug development. Pharmaceutical companies should not be expected to bring only the “safest” version of a drug to market, as this would drastically increase the time and cost of drug development and deprive patients of access to beneficial treatments in the meantime.

Fourth, the threat of liability would lead to inefficient and costly distortions in how businesses organize their research and development efforts. To minimize the risk of liability, manufacturers may avoid integrating ongoing research into existing product lines, instead keeping the processes separate unless and until a potential new technology is developed that offers benefits so substantial as to clearly warrant the costs and liability exposure of its development in the context of an existing drug line. Such an incentive would prevent potentially beneficial innovations from being pursued and would increase the costs of drug development.

Finally, the ruling would create perverse incentives that could actually discourage drug companies from developing and introducing safer alternative drugs. If bringing a safer drug to market later could be used as evidence that the first-generation drug was not safe enough, companies may choose not to invest in developing improved versions at all in order to avoid exposing themselves to liability. This would, of course, directly undermine the goal of increasing drug safety overall.

The Court of Appeal gave insufficient consideration to these severe policy consequences of the duty it recognized. A manufacturer’s decision when to bring a potentially safer drug to market involves complex trade-offs that courts are ill-equipped to second-guess—particularly in the limited context of a products-liability determination.

Conclusion

The Court of Appeal’s novel “duty to market” any known, less-harmful alternative to an existing product would deter innovation to the detriment of consumers. The Court of Appeal failed to consider how its decision would distort incentives in a way that harms the very patients the tort system is meant to protect. This Court should grant review to address these important legal and policy issues and to prevent this unprecedented expansion of tort liability from distorting manufacturers’ incentives to develop new and better products.

[1] No party or counsel for a party authored or paid for this amicus letter in whole or in part.

[2] It is important to note that this number varies with the kind of medicine involved, but across all categories of medicines there is a high likelihood of failure subsequent to Phase I trials.

[3] To the extent the concern is with disclosure of information regarding a potentially better product, that is properly a function of the patent system, which requires public disclosure of new ideas in exchange for the receipt of a patent. (See Brenner v. Manson, 383 U.S. 519, 533 (1966) (“one of the purposes of the patent system is to encourage dissemination of information concerning discoveries and inventions.”)). Of course, the patent system preserves innovation incentives despite the mandatory disclosure of information by conferring an exclusive right to the inventor to use the new knowledge. By contrast, using the tort system as an information-forcing device in this context would impose risks and costs on innovation without commensurate benefit, ensuring less, rather than more, innovation.

[4] The Court of Appeal makes a related argument when it claims that “the duty does not require manufacturers to perfect their drugs, but simply to act with reasonable care for the users of the existing drug when the manufacturer has developed an alternative that it knows is safer and at least equally efficacious. Manufacturers already engage in this type of innovation in the ordinary course of their business, and most plaintiffs would likely face a difficult road in establishing a breach of the duty of reasonable care.” (Op. at 52-53).
