
Kristian Stout on Title II Net Neutrality

Presentations & Interviews

ICLE Director of Innovation Policy Kristian Stout appeared as a guest on Minnesota Public Radio’s Marketplace in a segment on the Federal Communications Commission’s decision to reinstate so-called “net neutrality” for broadband providers.

But Kristian Stout, director of innovation policy at the International Center for Law and Economics, argues that we don’t need net neutrality as much as we once did because most of us are already online now. So how do we ensure access for every last American?

“You don’t do that by upending or frustrating the investment incentives that have made this work really well for 90 to 95% of the country. What you do is try to figure out targeted solutions,” Stout said.

Audio of the full segment is embedded below.

Telecommunications & Regulated Utilities

Competition in the Low-Earth-Orbit Satellite Industry

TOTM

Amazon on Friday launched its first two prototype satellites for its planned Project Kuiper internet-satellite network. It was the latest milestone in the rapid evolution of the low-Earth-orbit (LEO) satellite industry, with companies like SpaceX and OneWeb joining Project Kuiper in launching thousands of satellites to provide broadband internet access globally.

As this nascent industry takes shape, it is important that U.S. policymakers understand its competitive dynamics. With the number of LEO satellites set to increase in the coming years, establishing a regulatory framework that spurs innovation and investment while fostering a competitive marketplace will be essential to ensure the industry’s growth benefits consumers. In this post, we will examine some of the most urgent public-policy issues that directly impact competitiveness in the LEO industry.

Read the full piece here.

Telecommunications & Regulated Utilities

The Modern Video Marketplace Does Not Need Help From the FCC

TOTM

The Federal Communications Commission (FCC) is no stranger to undertaking controversial and potentially counterproductive regulatory projects. The commission’s digital-discrimination proceeding is expected to continue in November, and FCC Chair Jessica Rosenworcel just announced that the FCC will revive the warmed-over corpse of the 2015 Open Internet Order. This latter item highlights how the FCC’s Democratic majority has been emboldened to pursue risky regulatory adventures with the addition of recently confirmed Commissioner Anna Gomez.

But given that the FCC will already have a plate full of difficult docket items, it should continue to avoid a further landmine that some advocates have been pressing it to take up this year: reopening former Chair Tom Wheeler’s proceeding on multichannel video programming distributors (MVPDs). First proposed in late 2014 but ultimately not adopted by the commission, the Wheeler FCC’s notice of proposed rulemaking (NPRM) would bring over-the-top linear-video providers like YouTube TV and Hulu Live under the FCC’s program-access and carriage rules.

Read the full piece here.

Telecommunications & Regulated Utilities

The Federal Affordable Connectivity Program’s Funding Must Continue to Benefit NJ

Popular Media

Nearly 93% of American households have at-home internet access. Most have subscriptions through an internet service provider and some have access without having to pay for a subscription. Even so, more than nine million U.S. households remain unconnected to the internet at home.

To close this gap, the federal government created the Affordable Connectivity Program in 2021 to help many of these households get connected and stay connected throughout the COVID-19 pandemic. Congress appropriated more than $14 billion to fund the program, which provides eligible low-income families with a $30 monthly discount on internet service and a one-time $100 subsidy to purchase equipment necessary for an internet connection.

Read the full piece here.

Telecommunications & Regulated Utilities

States Must Overcome Numerous Hurdles Before BEAD Will Be Able to Succeed

Popular Media

As part of the $1.2 trillion Infrastructure Investment and Jobs Act that President Joe Biden signed in November 2021, Congress allocated $42.45 billion to create the Broadband Equity, Access, and Deployment (BEAD) program, a moonshot effort to close what has been called the “digital divide.” Alas, BEAD’s tumultuous kickoff is a vivid example of how federal plans can sometimes become a tangled web, impeding the very progress they set out to champion.

Read the full piece here.

Telecommunications & Regulated Utilities

Cultural Levies and the EU Audiovisual Market

ICLE Issue Brief

I. Introduction

In the ever-evolving landscape of digital entertainment, European consumers enjoy a broad variety of viewing options, including substantial availability of non-European content offered by large international streaming services. This availability has raised red flags for some EU policymakers, however, who are concerned that the supply of and demand for domestic cultural products might suffer. Prompted by these concerns, the European Union has opened the door for national policymakers to expand preexisting policies to support or favor domestic content by placing new obligations on foreign streaming providers to invest in EU member states’ domestic markets. The risk, however, is that member states have such broad latitude in implementing these provisions that they stoke inflationary pressures that distort local content markets.

Amended in 2018, the EU Audiovisual Media Services Directive (AVMSD)[1] has two relevant provisions: 1. Article 13(1) sets a requirement that 30% of the works that on-demand audiovisual media service (“VOD”) providers carry be European in origin, and that those works be given prominent placement; and 2. Article 13(2) provides that member states may impose additional financial obligations on VOD providers and broadcasters (“media service providers”) based on the revenues these services generate in, or that are targeted toward, the member state’s territory, with the proceeds used to support the production of European works.

The second set of obligations, which depend on a member state enacting enabling legislation, can be pursued either through direct investment in the production of European works (sometimes with very prescriptive local-language or independent-producer sub-quotas, among other limitations), or through contributions to a national fund. Providers with no significant presence in a local market (i.e., with low turnover or an exceedingly small audience) are not typically subject to these obligations. Member states also may waive such obligations where they would be impracticable or unjustified due to the nature or theme of the audiovisual media service in question.[2]

The AVMSD can thus be characterized as “a unique blend of the barrier lifting liberal market approach typical of the EU’s single market and classic protectionism stemming from a history of concern that American content and media services would dominate European screens, threatening its cultures and industries.”[3]

It is understandable, on many levels, why member states would want to ensure local production of cultural products.[4] The history of this sort of regulation in the EU and the basic economics underlying these schemes, however, both point to the risk of serious unintended consequences if lawmakers do not take market realities adequately into account.

II. Previous Attempts to Ensure Cultural Production in the EU Audiovisual Market

The AVMSD amendments are part of a long history in the EU of regulating media distribution, with at least a partial eye toward culture-specific measures.[5] Although the EU has more recently been concerned with foreign streaming services, the early history of these regulations focused on broadcast media. Under those earlier regulations, “EU institutions were required to take values such as cultural diversity into account. They also had to respect the fundamental contribution of public broadcasters to the ‘democratic, social and cultural needs of each society.’”[6]

Notably, pursuant to the Television without Frontiers Directive (TwFD) of 1989, member states were required to ensure that broadcasters reserve a minimum of 50% of television programming for European works and a minimum of 10% of either their transmission time or programming budgets for independent productions.[7]

Further, the previous version of the AVMSD (2010) imposed a general commitment for member states to ensure that VOD service providers promoted, “where practicable and by appropriate means,” the production of and the access to European works.[8] Such promotion could “relate, inter alia, to the financial contribution made by such services to the production and rights acquisition of European works or to the share and/or prominence of European works in the catalogue of programmes offered by the on-demand audiovisual media service.”[9]

Finally, member states are also permitted to sustain European audiovisual production through state aid (i.e., direct funding or tax incentives), which is considered an important tool in this regard by the European Commission. According to the Commission’s Communication on State Aid for Films and Audiovisual Works:

It is difficult for film producers to obtain a sufficient level of upfront commercial backing to put together a financial package so that production projects can proceed. The high risk associated with their businesses and projects, together with the perceived lack of profitability of the sector, make it dependent on State aid.[10]

Nonetheless, these efforts did not fully deliver the expected results. Notably, analysis of the European audiovisual market between 2011 and 2016 found that, while broadcasters met the AVMSD 2010 requirement to reserve a proportional majority of their transmission time for European works, European works were significantly less present in the catalogues of VOD service providers, and non-European audiovisual works dominated audience demand.[11] Against this background, the 2018 AVMSD provisions were introduced to better harmonize the treatment of traditional audiovisual players and VOD providers.[12]

Indeed, the European audiovisual market has been described as “a collection of diverse markets, with different languages, cultures and market sizes.”[13] In this sense, market factors (i.e., small market size and a limited number of companies) and linguistic and cultural differences make it more difficult to produce profitable audiovisual content in Europe. Given that reality, the revised AVMSD aimed to provide member states with new opportunities to support their local audiovisual markets.

Earlier regulations were also not without side effects. Quotas have proven ineffective at ensuring cultural diversity and encouraging the circulation of European works. They also risk diminishing the quality of works and undermining the creation of a pan-European audiovisual industry.[14] Moreover, although the ultimate goal of cultural diversity should be achieved through promoting the production and distribution of European works,[15] these regulations encouraged the production of local works without adequately addressing pan-European distribution. That is, while member states would pour resources into creating new local works, they remained insufficiently committed to distributing the works of other member states. This caused an oversaturation in local markets and dried up opportunities for creators to generate revenue for their work across the EU.

National implementation of AVMSD Article 13(2) may duplicate this problem, insofar as it involves approaches that can promote “continued fragmentation” among EU member states, and “reinforce [a] focus on production over circulation, and domestic over nonnational European works.”[16] Of course, the AVMSD does not aim to do this; it is explicitly designed to promote European works generally. The risk is, instead, implicit in the design of the AVMSD, insofar as it empowers member states to determine how to impose national sub-quotas. The history noted above suggests that member states will continue to interpret these provisions in ways that preference national content rather than pan-European content, thus exacerbating the fragmentation problem.

Indeed, an analysis of the member states that have decided to introduce such measures suggests that their varied implementations have contributed to a highly fragmented regulatory framework, as the obligations differ significantly both in form (i.e., levies, direct investments, or joint obligations for both levies and direct investment) and in amount, ranging from 0.5% to 25% of VOD services’ revenues.[17] Further, as national policymakers have been interested primarily in protecting domestic works, rather than supporting nonnational European content, some member states have mandated sub-quotas that direct the total share of revenues disproportionately toward the promotion of national works. These new provisions, moreover, threaten to drive up the cost of local production and ultimately to crowd out many smaller local producers.

III. The Economics of the AVMSD Financial Obligations

As reported by the European Audiovisual Observatory,[18] and recently corroborated by the European Commission,[19] the quota requirement under Article 13(1) AVMSD 2018 is already essentially met. Despite ongoing concerns regarding difficulties in monitoring prominent placement on VOD services,[20] the share of European works in VOD catalogues currently amounts to between 32% and 37%.[21] Further, in transactional VOD services, there is no significant gap between the share of European works in catalogues and their share of promotion.[22]

While quota obligations originated in an era dominated by broadcast television, they have been extended over time to nonlinear services, where they have encountered a different set of challenges in securing compliance. Since the concept of “prime time” loses its essential meaning in nonlinear services as a tool to secure visibility of certain works, nonlinear providers rely on other measures of prominence. For example, some have created distinct platform categories to group European or domestic works or tags to ease search for those works.[23]

Significant doubts arise, however, about the effectiveness of Article 13(1) quotas to ensure cultural diversity and encourage the circulation of European works. Further, as previously mentioned, quotas may have the unintended consequences of lowering the quality of works and undermining the creation of a pan-European audiovisual industry.

But given that more dramatic problems can accompany poor implementation of the optional Article 13(2) AVMSD 2018, the remainder of this paper will consider the economic features of the latter, and offer recommendations for how member states should weigh the risks and benefits of various strategies to implement this provision.

A. The Risks of Poor Article 13(2) Implementation

As noted above, Article 13(2) financial contribution requirements take several different forms. Member states can require direct investment in local markets by VOD providers, mandate contributions to national cultural funds via levies, or impose some mixture of the two. The former can take a number of forms, including co-production, direct development of content, or acquisition of existing rights.

It is useful to think of this scheme as a form of Pigouvian tax. Pigouvian taxes work by imposing a tax on activity that creates a negative externality.[24] The goal is to force producers to internalize the costs of the negative externality, rather than forcing society as a whole to bear those costs. Typically, a Pigouvian tax is levied directly on the externality itself.[25] A classic example is a tax imposed on the production of goods that create pollution or health harms, such as cigarettes. The goal of the tax is to increase the cost of producing harm such that, as a consequence, the final price of the goods will rise to a level that maximizes social benefits.
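For readers who want the textbook benchmark, the logic can be stated compactly. The following is a minimal sketch in our own notation (it appears nowhere in the AVMSD itself): the optimal Pigouvian tax equals the marginal external damage at the socially optimal level of the taxed activity, so that producers face the full social cost of what they supply.

t^{*} = MED(q^{*}), \qquad \text{so that} \qquad PMC(q^{*}) + t^{*} = SMC(q^{*})

Here q^{*} is the socially optimal activity level, MED is marginal external damage, PMC is the producer’s private marginal cost, and SMC is social marginal cost. Everything that follows turns on whether the analogue of MED, the claimed displacement of local content by foreign streamers, is real and, if so, how large it is.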

Here, the good in question is local-content production, and the users/consumers in question are the producers of said content. The underlying presumption of the AVMSD seems to be that the operation of foreign streaming services displaces the production and distribution of local content, and that this represents a negative externality for which foreign providers must account. In theory, at least, the financial obligations are intended to force VOD providers to internalize this cost.

Of course, this is not strictly a textbook case. Where member states require the tax to be directed into a national fund, it looks much more like a Pigouvian tax. Where providers are obligated to devote some percentage of their turnover directly to local production, it may look less so, depending on how those obligations are structured. Nonetheless, the basic dynamics of Article 13(2) are close enough for our purposes here.

To be clear, we do not believe that audiovisual products—whether local or foreign—should actually be regarded as harmful in the same ways that smoking or sugary foods are. But the utility of this example is to demonstrate the regulatory equivalence implicit in treating nonnational content as damaging to local cultures, particularly when local consumers have chosen to select that content.

Moreover, there is an obvious problem with the presumptions underlying the AVMSD that should serve as a limiting principle when considering possible implementation of Article 13(2). It should not be so readily assumed that foreign entities are actually or disproportionately displacing local content. VOD providers have every incentive to offer local audiences whatever they want to consume, and evidence suggests that audiences demand local content.[26]

Indeed, this underlying reality points to a very real distortion that exceedingly high financial obligations can produce. If local content production is overstimulated, as was the case under earlier versions of the legislation, member states may drive up the prices for local production, while at the same time oversaturating local markets and providing little avenue for local creators to distribute and market their works more broadly.

IV. Getting the Financial Obligations ‘Right’

Member states’ goal is to seek the best outcome for their audiovisual sectors. Even if we assume that a tax on VOD providers is necessary in some cases, that still leaves the questions of which cases warrant it and how large the tax should be. Without answers to those questions, there is little hope of achieving a socially beneficial tax assessment, or of doing more than, at best, distorting local market signals or, at worst, undermining local audiovisual production. Thus, the EU and member states need to both continue and deepen their examinations of the state of the sector, identify any market failures, and address these with the regulatory tools at their disposal. If, as a result of this analysis, any financial obligations are to be put in place—which Article 13(2) AVMSD 2018 grants them the option to do, although it does not require it—then member states should tailor any such taxes to tackle the identified problems.

Indeed, implicit in the idea of Pigouvian taxes is the notion that we do not seek a costless end: there are always tradeoffs among competing goals. That is the very essence of using levies to mitigate externalities: there is some benefit that society is reaping, and some harm for which it has incorrectly accounted. Accounting for the harm will necessarily reduce some of the good.

One of the main problems that can arise with taxes of this type is the introduction of perverse incentives. As William Baumol noted of Pigouvian taxes:

[T]he appropriate price (compensation) to a user of a public good (victim of a public externality) is zero except, of course, for lump sum payments. Thus, perhaps, rather than saying there is no price that will yield an optimal quantity of a public good (externality), it may be more illuminating to say that a double price is required: a nonzero price (tax) to the supplier of the good, and a zero price to the consumer.[27]

In essence, treating a Pigouvian tax as a sort of transfer payment creates a system that encourages overconsumption of the public good. Thus, to the extent that member states mandate that foreign VOD providers contribute directly to local content production—that is, via direct payments to local content producers to produce more local content—we would expect an overproduction of such content.

Even with levies to mandate contributions to national funds, there will be some of this dynamic, although national authorities may be positioned to moderate the effect. National authorities face tradeoffs, insofar as any investments they make are, to some degree, uncoupled from organic demand. Thus, these national investments will generate at least some inefficiencies, to the extent that they divert investment from opportunities that would have otherwise been realized in the marketplace.

National authorities may, for instance, determine that there is little harm in having too many locally produced movies and television shows, particularly when digital storage is next to costless. But content does not spring into existence ex nihilo. It depends on the use of a vast array of scarce local labor and resources. In short, that means that financial obligations to contribute to local production can bid up the price of every resource involved in production, leading to fewer local producers being able to afford to compete. Eventually, this will make local production relatively more dependent on a smaller number of firms that can absorb the higher costs.

More broadly, these sorts of interventions also risk distorting investment by nonlocal firms in a way that discourages entry and encourages exit, thus resulting in overall less production than would have otherwise occurred without an intervention. This is particularly true to the extent that national authorities fail to consider the profitability of their investments. Over time, funding unprofitable projects will exacerbate this dynamic by making local production more reliant on subsidies (which, in effect, means that consumers are insufficiently interested in the product). Decoupled from demand, there will be an ever greater need to demand payment from nonlocal firms to prop up relatively unsuccessful local productions.

When these financial obligations go too far, they can create inflationary pressures that may dry up local production altogether. A recent study for the European Commission identifies “[i]ncreasing costs across the board, and in particular for costs on technical crew and creative talent” as principal risk factors for European audiovisual producers.[28] Financial obligations force streamers to demand more production. As the study observes, the resulting cost increases are “no surprise,” since “increased demand would normally increase supply, which would explain the inflated costs upstream.”[29]

In a world of normal production incentives, if a particular market reaches capacity and becomes expensive, the production community will shift to a different market in a different country to avoid the higher prices. To the extent that the financial-contribution requirements keep production anchored in the local market, the cost of production will go up, but the actual volume of production might not increase very much.

In order to find the optimal level of contributions (that is, the level at which they minimally inflate local costs of production while maximally ensuring cultural production), authorities need to engage in an incremental learning process. In short, member states will need to discover a proper equilibrium that prevents the tax from instigating a cost spiral. This argues for regulatory caution. As Baumol further noted:

[S]uch a learning process always involves wastes and irreversibilities, just like the process of convergence of competitive prices to their equilibrium values in the absence of externalities. But if we follow the usual practice of assuming away these costs, one can show that the process may be expected to converge to the optimum, provided the equilibrium is unique and stable. That is, there is then nothing inherently different about gradually moving taxes and prices towards their equilibrium here, and the process of adjustment toward competitive equilibrium when there are no externalities.[30]

Thus, national authorities considering how to structure these obligations should bear in mind that: 1. There almost certainly will be some bidding up of prices; 2. At a certain point, the gains from trying to increase local content production will be swamped by these inflationary pressures; and 3. There is necessarily a learning process inherent in setting such financial obligations, owing to the serious danger of provoking a cost spiral.
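One way to picture that learning process is a simple trial-and-error adjustment rule. The sketch below uses our own illustrative notation; it is not a formula found in the AVMSD, in Baumol’s article, or in any member state’s implementing law.

t_{k+1} = t_{k} + \lambda \, (L^{*} - L_{k}), \qquad \lambda > 0 \text{ kept small}

Here t_{k} is the contribution rate in review period k, L_{k} is the locally produced output actually observed at that rate, L^{*} is the policy target, and \lambda is the step size. The small step size embodies the caution urged above: large jumps in the rate risk overshooting into the region where further increases mostly bid up input prices rather than adding output, and the resulting wastes and irreversibilities cannot simply be walked back.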

V. The Mirage of a ‘European Netflix’

Financial obligations imposed under Article 13(2) AVMSD 2018 may generate further unintended consequences.

As already illustrated, the extraordinary diversity of consumer preferences in, and resulting from, fragmentation of the European audiovisual market represents the main barrier to the circulation of European works. In particular, the significant linguistic and cultural differences that contribute to Europe’s celebrated cultural vibrancy also make it less feasible to treat Europe as a single audiovisual market and more challenging to produce profitable content in Europe. The hurdles represented by language and cultural specificities have been confirmed by a recent study reporting that Netflix users have a strong preference for domestic productions.[31]

From this perspective, it is worth acknowledging, as noted in the literature, that “it took a U.S. player to develop a service that increased the pan-European circulation of audiovisual content and gave European audiences increased access to nonnational EU content, in an accessible and user-friendly manner.”[32]

Against this backdrop, Article 13(2) AVMSD 2018 may serve to further increase fragmentation of the European audiovisual market. Indeed, its implementation by some member states places greater emphasis on supporting domestic works than on supporting (nonnational) European content more broadly.

As a result, the AVMSD financial obligations provision will also preserve “a varied fabric” of European producers, making the emergence of European VOD service providers able to compete against foreign players on a level playing field even more unlikely.

VI. Proceed with Care

Member states that have chosen to implement Article 13(2) have taken various approaches. Most of them have opted to introduce both direct investment obligations and levies to support a fund. Italy is the only country that has introduced a direct investment obligation as the sole option, while at least two member states (Germany and Poland) have introduced levies without any direct investment obligation thus far.[33]

Further fragmentation can be observed in disparities in the rates applied to turnover achieved in the respective member states. Even the base may sometimes differ. With regard to direct investment obligations, while some member states have employed fair measures, a handful have begun to impose steep obligations on VOD service providers.[34] On the more careful end are the Czech Republic, Netherlands, Portugal, Croatia, Spain, and Greece, which assess their direct investment obligations in the 1-5% range.[35] On the less careful end are countries like France (15-25%) and Italy (18-20%). With regard to indirect investment obligations, the rate is usually around 2%, with the exceptions of Denmark, Spain, Portugal, Romania, and France, where the rate is in the 4-6% range.[36]

The regulatory caution needed to avoid trapping local content-production industries in destructive cost spirals is embodied in the “proportionality principle,” which essentially requires that the costs of regulatory intervention not be disproportionate to the benefits sought.[37] Indeed, the risk of disproportionate implementation of Article 13(2) was so palpable to its drafters that they expressly mandated that any financial contribution required of a service provider “shall be proportionate.”

More data are needed to assess optimal financial contribution levels, but it appears highly risky to venture out as far on a limb as France and Italy have done. Assessing a total 20-25% financial obligation—whether in the form of a national fund levy or investment obligations on the turnover of multiple companies (some of them quite large)—in order to fund local production could easily have dramatic inflationary effects on local content markets.[38] Perhaps a large and wealthy country like France can absorb and offset some of these effects, but it would only be through heavy subsidization of the very industries the financial obligation otherwise threatens to destroy.

Moreover, this approach fails to deal with the distribution problems that these sorts of regulations have historically created in the EU. There is such a thing as too much content and too little distribution. Huge local catalogs can be generated and never adequately shared across member states. Indeed, as noted above, large VOD providers like Netflix have, to a large extent, actually solved this historical problem. Penalizing these providers for offering such solutions is a curious move.

An alternative approach, already pursued in some member states, is for local cultural authorities to use much more modest financial obligations to enhance cross-EU commercialization strategies for their local producers.

Of course, it should not be forgotten that member states are entirely at liberty not to implement Article 13(2) at all, a direction a number have taken.[39] This option is entirely consistent with preserving a vibrant audiovisual market based on the demand of local consumers, who are free to demand as much local content as they wish.

Ultimately, however, much care should be taken, particularly by member states with markets smaller and less subsidized than France’s.[40] As member states choose to experiment with these financial-contribution rates, they should start with impact assessments and proceed incrementally from there, consistent with the principle of proportionality.

[1] Directive (EU) 2018/1808 Amending Directive 2010/13/EU on the Coordination of Certain Provisions Laid Down by Law, Regulation or Administrative Action in Member States Concerning the Provision of Audiovisual Media Services (Audiovisual Media Services Directive) in View of Changing Market Realities, [2018] OJ L 303/69.

[2] Ibid. at Article 13(6).

[3] Sally Broughton Micova, The Audiovisual Media Services Directive: Balancing Liberalisation and Protection, in Research Handbook on EU Media Law and Policy (E. Brogi and P.L. Parcu, eds.), Cheltenham:Edward Elgar Publishing (2021) at 264.

[4] It is important to note a latent tension, however, between the AVMSD’s focus on European content, which suggests a pan-European preference, versus the practical reality that member states may choose to preference their own national content. The latter would actually frustrate the general goal of the AVMSD in some important respects.

[5] Joëlle Farchy, Grégoire Bideau, & Steven Tallec, Content Quotas and Prominence on VOD Services: New Challenges for European Audiovisual Regulators, 28 Int’l J. Cultural Pol’y 419 (2022).

[6] Catalina Iordache, Tim Raats, & Karen Donders, The “Netflix Tax”: An Analysis of Investment Obligations for On-Demand Audiovisual Services in the European Union, 16 Int’l J. Comm. 545, 548 (2022).

[7] Directive 89/552/EEC on the Coordination of Certain Provisions Laid Down by Law, Regulation or Administrative Action in Member States Concerning the Pursuit of Television Broadcasting Activities [1989] OJ L 298/23, Articles 4 and 5.

[8] Directive 2010/13/EU on the Coordination of Certain Provisions Laid Down by Law, Regulation or Administrative Action in Member States Concerning the Provision of Audiovisual Media Services (Audiovisual Media Services Directive), [2010] OJ L 95/1, Article 13(1).

[9] Ibid.

[10] European Commission, Communication on State Aid for Films and Other Audiovisual Works, (2013) OJ C 332/1, para. 4.

[11] Attentional, KEA European Affairs, and Valdani Vicari & Associati, supra note 3, at 17. It should be noted, further, that in this time period, providers were still early in their efforts to develop the VOD market. Thus, the relative immaturity of that market shaped these outcomes to some extent.

[12] Marlen Komorowski, Catalina Iordache, Ivana Kostovska, Stephanie Tintel, & Tim Raats, Investment Obligations for VOD Providers to Financially Contribute to the Production of European Works, a 2021 Update, Studies Media Innovation Technology (2021) at 31, available at https://smit.vub.ac.be/wp-content/uploads/2021/06/A-European-comparison-of-investment-obligations-on-VOD-providers-to-financially-contribute-to-the-production-of-European-works_Report-2021_FINAL.pdf.

[13] Ibid. at 7.

[14] See Piero Papp, The Promotion of European Works: An Analysis on Quotas for European Audiovisual Works and their Effect on Culture and Industry, Stanford-Vienna European Union Law Working Paper No. 50 (2020), available at https://law.stanford.edu/wp-content/uploads/2020/10/papp_eulawwp50.pdf; and Sally Broughton Micova, Content Quotas: What and Whom Are They Protecting?, in Private Television in Western Europe: Content, Markets, Policies (K. Donders, C. Pauwels, and J. Loisen, eds.), Hampshire: Palgrave (2013) at 245.

[15] AVMSD 2010, supra note 8, at Recital 69.

[16] Iordache, Raats, & Donders, supra note 6, at 551.

[17] Investing in European Works: The Obligations on VOD Providers, European Audiovisual Observatory (2022), available at https://rm.coe.int/iris-plus-2022en2-financial-obligations-for-vod-services/1680a6889c.

[18] Yearbook 2022/2023 – Key Trends, European Audiovisual Observatory (2023), available at https://rm.coe.int/yearbook-key-trends-2022-2023-en/1680aa9f02.

[19] The European Media Industry Outlook, European Commission (2023), available at https://digital-strategy.ec.europa.eu/en/library/european-media-industry-outlook.

[20] Daphne R. Idiz, Kristina Irion, Joris Ebbers, & Rens Vliegenthart, European Audiovisual Media Policy in the Age of Global Video on Demand Services: A Case Study of Netflix in the Netherlands, 12 J. Digital Media & Pol’y 425 (2021).

[21] European Audiovisual Observatory, supra note 17, (finding 32%). The more recent European Commission study, supra note 19, found that EU works alone constituted 28% of VOD catalogs (evenly divided between national and nonnational works), while UK works (qualifying as European for AVMSD purposes) constituted an additional 9%, for a total of 37%.

[22] The Visibility of Audiovisual Works on TVOD – Edition 2021, European Audiovisual Observatory (2021), available at https://rm.coe.int/visibility-of-av-works-on-tvod-2021-edition/1680a59bc2.

[23] But according to the European Media Industry Outlook of the European Commission, supra note 19, “Consumers are quite open to the country and language of origin.” And further: “Four out of five (80%) EU consumers say that they are likely to watch films or series from the US, followed by 76% that say they are likely to watch films or series from their home country. About seven in 10 (71%) EU consumers say that they are likely to watch films or series coming from other European countries.”

[24] See, e.g., William J. Baumol, On Taxation and the Control of Externalities, 62 Am. Econ. R. 307, 312 (1972).

[25] Ibid. at 307.

[26] For example, a recent report from the European Commission on the audiovisual market found that EU consumers expressed a roughly equal demand for both U.S. and national content. European Commission, supra note 19, at 23. U.S. works represent just less than half (47%) of VOD providers’ catalogs, while EU works (national and nonnational) comprise 28% and UK works comprise 9%. Id. at 26. The report does not indicate from whence the remaining 16% originate, but we can surmise that it is material sourced from around the world.

[27] William J. Baumol, supra note 24, at 312.

[28] European Commission, supra note 28, at 48.

[29] Ibid.

[30] Baumol, supra note 24, at 315.

[31] Annette Broocks & Zuzanna Studnicka, Gravity and Trade in Video on Demand Services, JRC Digital Economy Working Paper 2021-12 (2021), available at https://joint-research-centre.ec.europa.eu/publications/gravity-and-trade-video-demand-services_en.

[32] Iordache, Raats, & Donders, supra note 6, at 557.

[33] Svitlana Buriak & Dennis Weber, Investment Obligations and Levies on VOD Media Service Providers and Cultural Policies of Member States, 15 World Tax J. 2, 3-4 (2023), available at https://www.ibfd.org/shop/journal/investment-obligations-and-levies-vod-media-service-providers-and-cultural-policies.

[34] Ibid.

[35] Ibid. at 4.

[36] Ibid. at 28-30.

[37] The principle of proportionality requires that the legislator considering adoption of a new measure consider “the need for any burden” that that legislative act is likely to create “to be minimised and commensurate with the objective” pursued. Article 5, Protocol (No 2) on the application of the principles of subsidiarity and proportionality (OJ C 115), 9.5.2008, p. 206-209.

[38] See, e.g., Economic Analysis of the French Audiovisual Industry Main Trends and Focus on the Costs of High-End Fiction In France, Arcom (2023) at 13-18, available at https://www.arcom.fr/sites/default/files/2023-04/Presentation%20economic%20analysis%20of%20the%20french%20audiovisual%20industry_0.pdf.

[39] Buriak & Weber, supra note 33, at 4.

[40] In particular, smaller member states should take notice of the fact that France is pushing for aggressive obligations against the backdrop of a 2023 budget of 4.2 billion euros for the French Culture Ministry. See Ministry of Culture Budget 2023 – Finance Bill, Ministère de la Culture (Sep. 28, 2022), https://www.culture.gouv.fr/en/Presse/Dossiers-de-presse/Budget-2023-du-ministere-de-la-Culture-Projet-de-loi-de-finances#:~:text=In%202023%2C%20the%20Ministry%20of,(up%20€527%20million).

Innovation & the New Economy

Congress Should Pull the Brakes on Redefining Railroads’ Common Carrier Obligations

Popular Media

A longstanding principle of common law in both the United States and the United Kingdom recognizes the value of establishing “common carriers”—that is, entities that transport goods, people, or services for the benefit of the general public with an obligation not to discriminate among them. Unlike private or “contract” carriers, a common carrier operates under a license provided by a regulator, who retains authority to interpret the carrier’s obligations to the public.

Read the full piece here.

Telecommunications & Regulated Utilities

ICLE Comments to OSTP on National Priorities for Artificial Intelligence

Regulatory Comments

We thank the Office of Science and Technology Policy (OSTP) for this opportunity to provide regulatory commentary on the pivotal subject of artificial-intelligence (AI) regulation. AI technology, already a familiar part of American life, is poised to become among the most consequential technological advancements in the coming years. As the rate of innovation in AI technologies accelerates, there will be greater opportunity for an expanded spectrum of applications that increase social welfare. At the same time, we are cognizant of some potential risks that AI could pose.

The Biden administration has already taken commendable steps toward advancing innovation, safeguarding Americans’ rights and safety, and ensuring that the public can benefit from AI. The updated National AI R&D Strategic Plan,[1] the blueprint for an AI Bill of Rights,[2] and the AI Risk Management Framework,[3] among other initiatives, represent thoughtful efforts to grapple with the legal and social implications of AI technologies.

We firmly believe that the prime concern should be to avoid premature regulatory action. Each technology grouped under the broad umbrella of AI is unique and requires careful consideration and understanding on its own terms. It is crucial to take sufficient time to study these important distinctions and appreciate the specific challenges and opportunities inherent in each. Overarching or rushed regulations could stifle innovation, impede economic growth, and inadvertently undermine efforts to realize AI’s transformative potential.

Furthermore, when contemplating the adoption of a risk-based regulatory framework, we propose that the OSTP steer clear of overreliance on the precautionary principle. While intended to anticipate potential risks, the precautionary principle can over-index in the direction of caution and, due to its inherently conservative nature, serve as a barrier to innovation and progress. Instead, we recommend an approach that grounds any potential regulation in addressing real harms, with particular focus on preventing or minimizing harms that have a significant likelihood of occurring, that are comprehensively understood, and that are tangible rather than speculative or nebulous.

Developing a comprehensive national AI strategy is, indeed, a commendable undertaking and holds the promise of aligning various stakeholders’ interests and offering a holistic approach to address AI’s challenges. It is of paramount importance that this strategy remain responsive to the latest AI advances and global changes, considering the dynamic and evolving nature of AI technology. We are confident that the OSTP and the National AI Initiative Office will thoughtfully integrate the inputs provided through this Request for Information (RFI)[4] to inform the National AI Strategy’s development. We look forward to contributing our perspectives and suggestions to this critical dialogue.

Below, we answer select questions in the RFI. We also want to direct attention to a larger set of comments we submitted last month to the National Telecommunications and Information Administration’s separate inquiry on this topic.[5] Those comments are attached in full.

Understanding the Components of AI Must Come Before Regulation

  1. What specific measures – such as standards, regulations, investments, and improved trust and safety practices – are needed to ensure that AI systems are designed, developed, and deployed in a manner that protects people’s rights and safety? Which specific entities should develop and implement these measures?[6]

Before deciding what standards are necessary to regulate AI, it is necessary to develop some meaningful definition of what “AI” means. The present enthusiasm for AI has led to an oversimplification in the public discourse that can obscure how diverse the underlying technologies and their respective applications actually are. AI, in fact, covers a spectrum of technologies from large language models[7] to recommender systems[8] and beyond. These applications differ significantly from some of the more extravagant conceptions of AI, such as artificial general intelligence (AGI). A failure to distinguish among these technologies and their particular use cases can result in what we refer to as “regulatory overaggregation”—that is, a regulatory generalization that clouds the distinct aspects of each technology and may fail to address actual harms due to an inability to adequately address granular subjects.

The contemporary urge to overgeneralize the regulation of AI has parallels with the domains of “privacy rights” and “privacy regulation,” where sharply divergent potential harms are often conflated under the same broad topic. The concept of privacy often invokes an expectation of seclusion or allowing an individual to control their personal information.[9] This framing, however, is too general and cannot capture all actionable areas of law that implicate privacy, such as “revenge porn” or the unauthorized sale of cellphone location data. Overaggregating these distinct issues under a unified “law of privacy” may lead to regulations that fail to properly address each concern.

On the other hand, the domain of intellectual property (IP) demonstrates a more nuanced approach. Though it covers an array of legal constructs like copyright, patents, and trademarks, each area has specific legislation addressing unique rights, harms, and remedies. This approach fosters legislative richness and avoids the pitfall of overaggregation.

Lessons from both privacy law and intellectual property may be instructive for AI. Overly broad AI regulations risk stifling innovation and technological advancement, while potentially failing to address specific harms. Therefore, rather than a blanket regulatory approach, a detailed understanding of AI’s various subdomains is needed to target identifiable harms. This could be aided by OSTP facilitating the development of a comprehensive catalog of AI technologies and their potential risks, which could serve as a reference for regulators and courts.

Emphasize Harm-Based Approaches to AI Regulation and Require Cost-Benefit Analysis

Drawing upon the challenges associated with regulating emergent technologies such as AI, we could begin to explore this domain by considering an analogy to an older technology: photography. If camera technology were nascent, we might project myriad potential harms. But we can reflect from our position of having nearly two centuries of experience with the technology that a universal regulatory framework to manage all aspects of camera technology would be absurd. Instead, existing general laws more adequately address the specific harms that can be facilitated by camera technology, such as infringements on privacy rights arising from covert filming, use in the furtherance of criminal enterprises, or theft of trade secrets. In these instances, it is not the camera technology itself that forms the subject of legal concern, but the illicit actions carried out through its use.

Further, when assessing potential harms facilitated by new technology, a comprehensive analysis must consider the balance between the likelihood of harmful uses and the prospects of beneficial applications. Copyright law, as exemplified in the landmark Betamax case,[10] provides an insightful precedent. That case illustrated how law could adapt to new technology, in that instance underscoring the need for copyright law to accommodate “substantial noninfringing uses” of new technologies that could reproduce protected material.[11] The decision upheld that, while the technology may facilitate some infringement, it would be inappropriate to apply a broad presumption against its use.[12] Moreover, the case stressed the importance of examining each circumstance on a case-by-case basis.[13]

Regulation and accountability in the realm of AI should echo this approach, emerging organically through bottom-up, case-by-case processes that examine the relevant facts of any given situation and how they alter (or do not alter) our legal system’s baseline assumptions. New legislation, if required, should be incremental, guided by well-defined principles, and focused on identifiable harms, thus allowing law to fit specific circumstances without conflicting with established legal and regulatory principles.

AI, like any tool, can be misused, and any such misuse should incur legal consequences. Yet, the legal analysis should focus primarily not on the AI itself, but on the malefactors’ actions and the resulting harms. Attempting to construct a foolproof regulatory framework that precludes the misuse of AI may prove futile and could potentially stifle the development of socially beneficial tools.

Moreover, the fact that AI technology remains largely in the research and development phase complicates regulatory decisions. Proactive regulation based on the precautionary principle might thwart unforeseen benefits that could emerge as these technologies mature and find unique applications.[14] Even in high-risk industries like nuclear power, precautionary regulation often results in net social harms.[15]

When imagining the harms that could occur, it is crucial to distinguish two broad categories of AI-related concerns. First is the largely theoretical fear associated with AGI—the understandable apprehension many feel about inadvertently creating a superintelligence that could potentially extinguish human life.[16] If it is even possible to create AGI, about which there remains significant doubt, it is crucial to emphasize that current AI technologies are far from AGI. AI technologies today are essentially sophisticated prediction engines for dealing with text or pixels.[17] It is highly unlikely that we will accidentally stumble onto AGI by merely chaining thousands of these prediction engines together.

The second, more realistic set of concerns pertains to the misuse of AI technologies to perpetuate illicit activities. Specifically, these very impressive technologies might be misused to further discrimination and crime, or could have such a disruptive impact on areas like employment that they quickly generate tremendous harms. When contemplating harms that could occur, however, it is also necessary to acknowledge that many significant benefits could be generated as well. Moreover, as with earlier technologies, economic disruptions will provide both challenges and opportunities. It is easy to see the immediate effect that ChatGPT, for example, poses for the jobs of content writers, but it is less easy to measure the benefits that will be realized by firms that can deploy this technology to “in-source” tasks. Thus, static analyses of AI’s substitution power are likely to miss the bigger picture of social welfare that could be realized as organizations improve their efficiency through the adoption of AI tools.

Finally, it is important to remember that dynamic competition—where technology is continually evolving and firms are competing to provide consumers with innovative products and services—drives far more economic growth than static competition. As the economist Joseph Schumpeter noted, competition thrives not merely on price but on the advent of disruptive new commodities, technologies, and supply sources.[18]

Regulation of AI must be seen in the same light. To this end, we advocate a regulatory regime for AI that encourages sector-specific rules to emerge when regulators discover that their existing rules are inadequate for new AI-augmented technologies. This approach should be harm-based, rather than risk-based. In other words, regulations should focus on mitigating the known and likely harms caused by the misuse of AI rather than trying to predict and prevent every possible risk associated with it. A clear-eyed cost-benefit analysis should guide this process.

Rather than preemptively stifling innovation with burdensome regulations based on hypothetical risks, a more nuanced approach would be to respond to actual harms as they arise, carefully weighing the potential harms against the prospective benefits of AI technologies. Such a balanced approach would not only protect society from misuse of AI but would also allow for the continued development and beneficial application of these transformative technologies.

Adopting this approach will require an ongoing dialogue among all stakeholders and an openness to adjust our regulatory frameworks as our understanding of AI and its societal impact deepens. A harm-based, case-by-case approach to AI regulation is consistent with our common-law tradition and promises to be the most effective and flexible approach to guide the development and application of AI technologies.

The Implications of a Centralized Regulator for AI: Risks to Competition and Innovation

  1. … Which specific entities should develop and implement these measures?[19]

The prospect of creating a centralized regulator for emergent technologies like AI raises important concerns, particularly those relating to market competition. A central regulator may inadvertently favor established industry players like OpenAI, as new entrants might be hindered by regulations and compliance costs, which incumbents could manipulate to increase rivals’ costs.[20] The strategic promotion of a strong central regulator can thus serve to maintain or increase incumbents’ market dominance.

In recent U.S. Senate hearings, some witnesses and senators proposed a central regulator to create and administer a licensing regime for AI.[21] While licensing might be necessary for certain AI applications, such as military weaponry, it is broadly inadvisable due to the diverse nature of AI technologies. Developers of AI tools face numerous challenges, including assuring data collection and management, anticipating downstream usage of tools, and managing the complex chain of AI-system development and deployment. A centralized AI regulator would struggle to understand the nuances of each distinct industry, leading to ineffective or inappropriate licensing requirements.

Unlike such sectors as railroads and nuclear power, which have dedicated regulators, AI is more akin to a general-purpose tool, like chemicals or combustion engines. Different agencies regulate the use of these tools as appropriate for their context, without a central regulator overseeing every aspect of development and use. A licensing requirement could introduce undesirable delays into the process of commercializing AI technologies, significantly impeding technological progress and innovation, and potentially leaving the United States behind in the global AI race.

A more advisable approach would be to create product-centric and harm-centric frameworks that other sectoral regulators or competition authorities could incorporate into their rules for goods and services. For example, safety standards for medical devices should be maintained, whether or not AI is involved. But a thoughtful framework might raise questions that the Food and Drug Administration (FDA) finds it necessary to consider when implementing new regulations. This product-centric regulatory approach would ensure safety, quality, and effectiveness without stifling innovation. With their deep industry knowledge, sectoral regulators are better positioned to address the unique challenges posed by AI technology within their spheres of influence.

By contrast, there is a risk that a centralized regulator, operating with an overaggregated concept of AI, might design rules that slow or prevent AI-infused technologies from coming to market if they cannot navigate the complex tradeoffs among interested parties across all such technologies.[22] This could make society worse off and strengthen the position of global competitors. Therefore, it is crucial to approach the regulation of AI with careful consideration of its impacts on competition and innovation, advocating for a framework that encourages diversity and flexibility.

  1. What will the principal benefits of AI be for the people of the United States? How can the United States best capture the benefits of AI across the economy, in domains such as education, health, and transportation? How can AI be harnessed to improve consumer access to and reduce costs associated with products and services? How can AI be used to increase competition and lower barriers to entry across the economy?[23]

The advent of AI promises transformative potential across various domains, heralding numerous benefits for the people of the United States and beyond. Foremost, AI can drastically improve worker efficiency. Advanced AI algorithms could handle repetitive tasks swiftly and accurately, allowing employees to focus on more complex and strategic aspects of their jobs. In sectors ranging from manufacturing to health care to customer service, AI-driven automation can accelerate processes, minimize errors, and enhance productivity, ultimately leading to improved business performance and growth.

For instance, in health care, AI can help practitioners analyze complex medical data rapidly, improving diagnostic accuracy and speed. In manufacturing, AI-powered machines can manage labor-intensive tasks, reducing the possibility of human error and occupational injuries. These efficiencies can reduce costs, with the potential for savings to be passed on to consumers.

Furthermore, AI technology, like many disruptive technologies before it, may be capable not only of augmenting existing workforces but also of fostering new types of industries and opportunities. As AI becomes more sophisticated, we anticipate the emergence of entirely new job categories, similar to how the advent of the internet spurred professions in web design, digital marketing, and e-commerce.

AI can also improve consumer access to, and reduce costs associated with, various products and services. For instance, we have already seen AI-powered recommendation systems personalize the shopping experience, allowing consumers to find relevant products with ease. And in education, we’ve seen AI personalize learning for individual students, tailoring educational content to match each learner’s needs and pace and, in turn, improving educational outcomes and accessibility.

The promise of AI extends to increasing competition and lowering barriers to entry across the economy. By providing businesses with more information and greater efficiency, AI can give rise to more effective business strategies and models. It could level the playing field for small and medium-size enterprises, allowing them to compete with larger corporations by offering cost-effective solutions that previously required significant capital or resources.

  1. What specific measures – such as sector-specific policies, standards, and regulations – are needed to promote innovation, economic growth, competition, job creation, and a beneficial integration of advanced AI systems into everyday life for all Americans? Which specific entities should develop and implement these measures?[24]

As noted above, we believe that specific measures to promote innovation and the safety of advanced AI systems are best approached with a sector-specific focus. Because AI applications are diverse and their impacts vary across industries, sector-specific policies and standards will be more effective and beneficial than broad, sweeping regulations.

For instance, in the health-care sector, safety and privacy standards must be upheld when deploying AI tools for diagnosing diseases or managing patient data. In such cases, regulators like the FDA or the Department of Health and Human Services could leverage their expertise to develop and implement targeted regulations that ensure safety without stifling innovation.

Similarly, in the automotive sector, where AI is used for autonomous vehicles, transportation authorities could create guidelines and standards to ensure road safety, while also promoting innovation. In finance, where AI algorithms are used for trading, credit scoring, and risk management, the Securities and Exchange Commission (SEC) and other relevant financial regulators can establish rules to prevent unfair practices and ensure market stability.

Conclusion

We again thank the OSTP for initiating this important and timely inquiry into AI regulation. It is through dialogues like these that we can collectively explore AI's impacts on society. It is crucial to reiterate that regulation, while necessary, should be formulated with a nuanced understanding of the technology. Rushing to impose regulations prematurely could stifle the very innovation that we seek to cultivate and the potential benefits that we aim to harvest. AI has the potential to be a transformative force for the United States and the world, providing a multitude of benefits and empowering us with the tools to address some of the most pressing challenges of our time. A measured and informed approach to AI regulation would further reinforce our nation's position as a global leader in technological innovation.

[1] National Artificial Intelligence Research and Development Strategic Plan 2023 Update, Select Committee on Artificial Intelligence of the National Science and Technology Council (May 2023), available at https://www.whitehouse.gov/wp-content/uploads/2023/05/National-Artificial-Intelligence-Research-and-Development-Strategic-Plan-2023-Update.pdf.

[2] Blueprint for an AI Bill of Rights, White House Office of Science and Technology Policy (2023), available at https://www.whitehouse.gov/ostp/ai-bill-of-rights.

[3] Artificial Intelligence Risk Management Framework (AI RMF 1.0), National Institute of Standards and Technology (Jan. 2023), available at https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf.

[4] Request for Information National Priorities for Artificial Intelligence, 3270-F1, 88 FR 34194, White House Office of Science and Technology Policy (May 26, 2023) (“RFI”).

[5] Kristian Stout et al., ICLE Response to the AI Accountability Policy Request for Comment, International Center for Law & Economics (Jun. 2023), https://laweconcenter.org/resources/icle-response-to-the-ai-accountability-policy-request-for-comment (“ICLE NTIA Comments”).

[6] RFI at 34195.

[7] LLMs are a type of artificial-intelligence model designed to parse and generate human language at a highly sophisticated level. The deployment of LLMs has driven progress in fields such as conversational AI, automated content creation, and improved language understanding across a multitude of applications, even suggesting that these models might represent an initial step toward the achievement of artificial general intelligence (AGI). See Alejandro Peña et al., Leveraging Large Language Models for Topic Classification in the Domain of Public Affairs, arXiv (Jun. 5, 2023), https://arxiv.org/abs/2306.02864v1.

[8] Recommender systems are advanced tools currently used across a wide array of applications, including web services, books, e-learning, tourism, movies, music, e-commerce, news, and television programs, where they provide personalized recommendations to users. Despite recent advancements, there is a pressing need for further improvements and research in order to offer more efficient recommendations that can be applied across a broader range of applications. See Deepjyoti Roy & Mala Dutta, A Systematic Review and Research Perspective on Recommender Systems, 9 J. Big Data 59 (2022), available at https://journalofbigdata.springeropen.com/counter/pdf/10.1186/s40537-022-00592-5.pdf.

[9] The prototypical framing of this view is captured by the seminal work by Samuel D. Warren & Louis D. Brandeis, The Right to Privacy, 4 Harv. L. Rev. 193 (1890).

[10] Sony Corp. of Am. v. Universal City Studios, Inc., 464 U.S. 417, 439 (1984).

[11] Id. In this case, the Supreme Court imported the doctrine of “substantial noninfringing uses” into copyright law from patent law.

[12] Id.

[13] Id.

[14] See Adam Thierer, Permissionless Innovation: The Continuing Case For Comprehensive Technological Freedom (2016).

[15] See, e.g., Matthew J. Neidell, Shinsuke Uchida, & Marcella Veronesi, The Unintended Effects from Halting Nuclear Power Production: Evidence from Fukushima Daiichi Accident, NBER Working Paper 26395 (2022), https://www.nber.org/papers/w26395 (Japan abandoning nuclear energy in the wake of the Fukushima disaster led to decreased energy consumption, which in turn led to increased mortality).

[16] See, e.g., Eliezer Yudkowsky, Pausing AI Developments Isn’t Enough. We Need to Shut it All Down, Time (Mar. 29, 2023), https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough.

[17] See, e.g., Will Knight, Some Glimpse AGI in ChatGPT. Others Call It a Mirage, Wired (Apr. 10, 2023), https://www.wired.com/story/chatgpt-agi-intelligence (“GPT-4, like its predecessors, had been fed massive amounts of text and code and trained to use the statistical patterns in that corpus to predict the words that should be generated in reply to a piece of text input.”)

[18] Joseph A. Schumpeter, Capitalism, Socialism and Democracy 74 (1976).

[19] RFI at 34195.

[20] This competition concern is one that is widely shared across the political spectrum. See, e.g., Cristiano Lima, Biden’s Former Tech Adviser on What Washington Is Missing about AI, The Washington Post (May 30, 2023), https://www.washingtonpost.com/politics/2023/05/30/biden-former-tech-adviser-what-washington-is-missing-about-ai (Tim Wu noting that he’s “not in favor of an approach that would create heavy compliance costs for market entry and that would sort of regulate more abstract harms”).

[21] Oversight of A.I.: Rules for Artificial Intelligence: Hearing Before the Subcomm. on Privacy, Technology, and the Law of the S. Comm. on the Judiciary, 118th Cong. (2023) (statement of Sam Altman, at 11), https://www.judiciary.senate.gov/download/2023-05-16-testimony-altman.

[22] This is a well-known problem that occurs in numerous regulatory contexts. See, e.g., Raymond J. March, The FDA and the COVID-19: A Political Economy Perspective, 87(4) S. Econ. J. 1210, 1213-16 (2021), https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8012986 (discussing the political economy that drives bureaucratic agencies’ incentives in the context of the FDA’s drug-approval process).

[23] RFI at 34196.

[24] RFI at 34196.

Continue reading
Innovation & the New Economy

ICLE Response to the AI Accountability Policy Request for Comment

Regulatory Comments I. Introduction: How Do You Solve a Problem Like ‘AI’? On behalf of the International Center for Law & Economics (ICLE), we thank the National . . .

I. Introduction: How Do You Solve a Problem Like ‘AI’?

On behalf of the International Center for Law & Economics (ICLE), we thank the National Telecommunications and Information Administration (NTIA) for the opportunity to respond to this AI Accountability Policy Request for Comment (RFC).

A significant challenge that emerges in discussions concerning accountability and regulation for artificial intelligence is the broad and often ambiguous definition of “AI” itself. This is demonstrated in the RFC’s framing:

This Request for Comment uses the terms AI, algorithmic, and automated decision systems without specifying any particular technical tool or process. It incorporates NIST’s definition of an ‘‘AI system,’’ as ‘‘an engineered or machine-based system that can, for a given set of objectives, generate outputs such as predictions, recommendations, or decisions influencing real or virtual environments.’’  This Request’s scope and use of the term ‘‘AI’’ also encompasses the broader set of technologies covered by the Blueprint: ‘‘automated systems’’ with ‘‘the potential to meaningfully impact the American public’s rights, opportunities, or access to critical resources or services.’’[1]

As stated, the RFC’s scope could be read to cover virtually all software.[2] But it is essential to acknowledge that, for the purposes of considering potential regulation, we lack a definition of AI that is both sufficiently broad to cover all or even most areas of concern and sufficiently focused to serve as a useful lens for analysis. That is to say, what we think of as AI encompasses a significant diversity of discrete technologies that will be put to a huge number of potential uses.

One useful recent comparison is with the approach the Obama administration took in its deliberations over nanotechnology regulation in 2011.[3] Following years of consultation and debate, the administration opted for a parsimonious, context-specific approach precisely because “nanotechnology” is not really a single technology. In that proceeding, the administration ultimately recognized that it was not the general category of “nanotechnology” that was relevant, nor the fact that nanotechnologies are those that operate at very small scales, but rather the means by and degree to which certain tools grouped under the broad heading of “nanotechnology” could “alter the risks and benefits of a specific application.”[4] This calls to mind Judge Frank Easterbrook’s famous admonition that a “law of cyberspace” would be no more useful than a dedicated “law of the horse.”[5] Indeed, we believe Easterbrook’s observation applies equally to the creation of a circumscribed “law of AI.”

While there is nothing inherently wrong with creating a broad regulatory framework to address a collection of loosely related subjects, there is a danger that the very breadth of such a framework might over time serve to foreclose more fruitful and well-fitted forms of regulation.

A second concern in the matter immediately at hand is, as mentioned above, the potential for AI regulation to be formulated so broadly as to encompass essentially all software. Whether by design or accident, such breadth runs a number of risks. First, since the scope of the regulation would then cover a much broader subject, a discussion framed narrowly around “AI” will miss many important aspects of broader software regulation and, as a consequence, will create an ill-fitted legal regime. Second, sweeping a far wider range of tools into such a regulation than the drafters publicly acknowledge undermines the democratic legitimacy of the process.

A.      The Danger of Regulatory Overaggregation

The current hype surrounding AI has been driven by popular excitement, as well as incentives for media to capitalize on that excitement. While this is understandable, it arguably has led to oversimplification in public discussions about the underlying technologies. In reality, AI is an umbrella term that encompasses a diverse range of technologies, each with its own unique characteristics and applications.

For instance, relatively lower-level technologies like large language models (LLMs)[6] differ significantly from diffusion techniques.[7] At the level of applications, recommender systems can employ a wide variety of machine-learning (or even more basic statistical) techniques.[8] All of these techniques, collectively called “AI,” also differ from the wide variety of algorithms employed by search engines, social media, consumer software, video games, streaming services, and so forth, although each also contains software “smarts,” so to speak, that could theoretically be grouped under the large umbrella of “AI.”

And none of the foregoing bear much resemblance at all to what the popular imagination conjures when we speak of AI—that is, artificial general intelligence (AGI), which some experts argue may not even be achievable.[9]

Attempting to create a single AI regulatory scheme commits what we refer to as “regulatory overaggregation”—sweeping together a disparate set of more-or-less related potential regulatory subjects under a single category in a manner that overfocuses on the abstract term and obscures differences among the subjects. The domains of “privacy rights” and “privacy regulation” are illustrative of the dangers inherent in this approach. There are, indeed, many potential harms (both online and offline) that implicate the concept of “privacy,” but the differences among these recommend examining closely the various contexts that attend each.

Individuals often invoke their expectation of “privacy,” for example, in contexts where they want to avoid the public revelation of personal or financial information. This sometimes manifests as the assertion of a right to control data as a form of quasi-property, or as a form of a right to anti-publicity (that is, a right not to be embarrassed publicly). Indeed, writing in 1890 with his law partner Samuel D. Warren, future Supreme Court Justice Louis Brandeis posited a “right to privacy” as akin to a property right.[10] Warren & Brandeis argued that privacy is not merely a matter of seclusion, but extends to the individual’s control over their personal information.[11] This “right to be let alone” delineates a boundary against unwarranted intrusion, which can be seen as a form of intangible property right.[12]

This framing can be useful as an abstract description of a broad class of interests and concerns, but it fails to offer sufficient specificity to describe actionable areas of law. Brandeis & Warren were concerned primarily with publicity;[13] that is, with a property right to control one’s public identity as a public figure. This, in turn, implicates a wide range of concerns, from an individual’s interest in commercialization of their public image to their options for mitigating defamation, as well as technologies that range from photography to website logging to GPS positioning.

But there are clearly other significant public concerns that fall broadly under the heading of “privacy” that cannot be adequately captured by the notion of controlling a property right “to be let alone.” Consider, for example, the emerging issue of “revenge porn.” It is certainly a privacy harm in the Brandeisian sense that it implicates the property right not to have one’s private images distributed without consent. But that framing fails to capture the full extent of potential harms, such as emotional distress and reputational damage.[14] Similarly, cases in which an individual’s cellphone location data are sold to bounty hunters are not primarily about whether a property right has been violated, as they raise broader issues concerning potential abuses of power, stalking, and even physical safety.[15]

These examples highlight some of the ways that, in failing to take account of the distinct facts and contexts that can attend privacy harms, an overaggregated “law of privacy” may tend to produce regulations insufficiently tailored to address those diverse harms.

By contrast, the domain of intellectual property (IP) may serve as an instructive counterpoint to the overaggregated nature of privacy regulation. IP encompasses a vast array of distinct legal constructs, including copyright, patents, trade secrets, trademarks, and moral rights, among others. But in the United States—and indeed, in most jurisdictions around the world—there is no overarching “law of intellectual property” that gathers all of these distinct concerns under a singular regulatory umbrella. Instead, legislation is specific to each area, resulting in copyright-specific acts, patent-specific acts, and so forth. This approach acknowledges that, within IP law, each IP construct invokes unique rights, harms, and remedies that warrant a tailored legislative focus.

The similarity of some of these areas does lend itself to conceptual borrowing, which has tended to enrich the legislative landscape. For example, U.S. copyright law has imported doctrines from patent law.[16] Despite such cross-pollination, copyright law and patent law remain distinct. In this way, intellectual property demonstrates the advantages of focusing on specific harms and remedies. This could serve as a valuable model for AI, where the harms and remedies are equally diverse and context dependent.

If AI regulations are too broad, they may inadvertently encompass any algorithm used in commercially available software, effectively stifling innovation and hindering technological advancements. This is no less true of good-faith efforts to craft laws in any number of domains that nonetheless suffer from a host of unintended consequences.[17]

At the same time, for a regulatory regime covering such a broad array of varying technologies to be intelligible, it is likely inevitable that tradeoffs made to achieve administrative efficiency will cause at least some real harms to be missed. Indeed, NTIA acknowledges this in the RFC:

Commentators have raised concerns about the validity of certain accountability measures. Some audits and assessments, for example, may be scoped too narrowly, creating a ‘‘false sense’’ of assurance. Given this risk, it is imperative that those performing AI accountability tasks are sufficiently qualified to provide credible evidence that systems are trustworthy.[18]

To avoid these unintended consequences, it is crucial to develop a more precise understanding of AI and its various subdomains, and to focus any regulatory efforts toward addressing specific harms that would not otherwise be captured by existing laws. The RFC declares that its aim is “to provide assurance—that AI systems are legal, effective, ethical, safe, and otherwise trustworthy.”[19] As we discuss below, rather than promulgate a set of recommendations about the use of AI, NTIA should focus on cataloguing AI technologies and creating useful taxonomies that regulators and courts can use when they identify tangible harms.

II. AI Accountability and Cost-Benefit Analysis

The RFC states that:

The most useful audits and assessments of these systems, therefore, should extend beyond the technical to broader questions about governance and purpose. These might include whether the people affected by AI systems are meaningfully consulted in their design and whether the choice to use the technology in the first place was well-considered.[20]

It is unlikely that consulting all of the people potentially affected by a set of technological tools could fruitfully contribute to the design of any regulatory system other than one that simply bans those tools.[21] Any intelligible accountability framework must be dedicated to evaluating the technology’s real-world impacts, rather than positing thought experiments about speculative harms. Where tangible harms can be identified, such evaluations should encompass existing laws that focus on those harms and how various AI technologies might alter how existing law would apply. Only in cases where the impact of particular AI technologies represents a new kind of harm, or raises concerns that fall outside existing legal regimes, should new regulatory controls be contemplated.

AI technologies will have diverse applications and consequences, with the potential for both beneficial and harmful outcomes. Rather than focus on how to constrain either AI developers or the technology itself, the focus should be on how best to mitigate or eliminate any potential negative consequences to individuals or society.

NTIA asks:

AI accountability measures have been proposed in connection with many different goals, including those listed below. To what extent are there tradeoffs among these goals?[22]

This question acknowledges that, fundamentally, AI accountability comes down to cost-benefit analysis. In conducting such analysis, we urge NTIA and any other agencies not only to account for potential harms, but also to take very seriously the massive benefits these technologies might provide.

A.      The Law Should Identify and Address Tangible Harms, Incorporating Incremental Changes

To illustrate the challenges inherent to tailoring regulation of a new technology like AI to address the ways that it might generally create harm, it could be useful to analogize to a different existing technology: photography. If camera technology were brand new, we might imagine a vast array of harms that could arise from its use. But it should be obvious that creating an overarching accountability framework for all camera technology is absurd. Instead, laws of general applicability should address harmful uses of cameras, such as the invasion of privacy rights posed by surreptitious filming. Even where a camera is used in the commission of a crime—e.g., surveilling a location in preparation to commit a burglary—it is not typically the technology itself that is the subject of legal concern; rather, it is the acts of surveillance and burglary.

Even where we can identify a tangible harm that a new technology facilitates, the analysis is not complete. Instead, we need to balance the likelihood of harmful uses of that technology with the likelihood of nonharmful (or beneficial) uses of that technology. Copyright law provides an apt example.

Sony,[23] often referred to as the “Betamax case,” was a landmark U.S. Supreme Court case in 1984 that centered on Sony’s Betamax VCR—the first consumer device that could record television shows for later viewing, a concept now referred to as time-shifting.[24] Plaintiffs alleged that, by manufacturing and selling the Betamax VCRs, Sony was secondarily liable for copyright infringement carried out by its customers when they recorded television shows.[25] In a 5-4 decision, the Supreme Court ruled in favor of Sony, holding that the use of the Betamax VCR to record television shows for later personal viewing constituted “fair use” under U.S. copyright law.[26]

Critical for our purposes here was that the Court found that Sony could not be held liable for contributory infringement because the Betamax VCR was capable of “substantial noninfringing uses.”[27] This is to say that, faced with a new technology (recording relatively high-quality copies of television shows and movies at home), the Court recognized that, while the Betamax might facilitate some infringement, it would be inappropriate to apply a presumption against its use.

Sony and related holdings did not declare that using VCRs to infringe copyright was acceptable. Indeed, copyright enforcement for illegal reproduction has continued apace, even when using new technologies capable of noninfringing uses.[28] At the same time, the government did not create a new regulatory and licensing regime to govern the technology, despite the fact that it was a known vector for some illicit activity.

Note that the Sony case is also important for its fair-use analysis, and is widely cited for the proposition that so-called “time shifting” is permissible. That is not central to our point here, particularly as there is no analogue to fair use proposed in the AI context. But even here, it illustrates how the law adapts by developing doctrines that excuse conduct that would otherwise be a violation. In the case of copyright, unauthorized reproduction is infringement, period.[29] Fair use is raised as an affirmative defense[30] to excuse some unauthorized reproduction because courts have long recognized that, when viewed case by case, the application of legal rules needs to be tailored to make room for unexpected fact patterns in which acts that would otherwise be considered violations yield some larger social benefit.

We are not suggesting the development of a fair-use doctrine for AI, but are instead insisting that AI accountability and regulation must be consistent with the case-by-case approach that has characterized the common law for centuries. Toward that end, it would be best for law relevant to AI to emerge through that same bottom-up, case-by-case process. To the extent that any new legislation is passed, it should be incremental and principles-based, thereby permitting the emergence of law that best fits particular circumstances and does not conflict with other principles of common law.

By contrast, there are instances where the law has recognized that certain technologies are more likely to be used for criminal purposes and should be strictly regulated. For example, many jurisdictions have made possession of certain kinds of weapons—e.g., nunchaku, shuriken “throwing stars,” and switchblade knives—per se illegal, despite possible legal uses (such as martial-arts training).[31] Similarly, although there is strong Second Amendment protection for firearms in the United States, it is illegal for a felon to possess a firearm.[32] These prohibitions developed because possession of such devices was deemed, in most contexts, to have no plausible use other than violating the law. But these sorts of technologies are the exception, not the rule. Many chemicals that can easily be used as poisons are nonetheless available as, e.g., cleaning agents or fertilizers.

1.        The EU AI Act: An overly broad attempt to regulate AI

Nonetheless, some advocate regulating AI by placing new technologies into various broad categories of risk, each with their own attendant rules. For example, as proposed by the European Commission, the EU’s AI Act would regulate the use of AI systems that ostensibly pose risks to health, safety, and fundamental rights.[33] The proposal defines AI systems broadly to include essentially any software, and sorts them into three risk levels: unacceptable, high, and limited risk.[34] Unacceptable-risk systems are prohibited outright, while high-risk systems are subject to strict requirements, including mandatory conformity assessments.[35] Limited-risk systems face certain requirements related to adequate documentation and transparency.[36]

The AI Act defines AI so broadly that it would apply even to ordinary general-purpose software, as well as software that uses machine learning but does not pose significant risks.[37] The plain terms of the AI Act could be read to encompass common office applications, spam filters, and recommendation engines, thus potentially imposing considerable compliance burdens on businesses for their use of software that provides benefits dramatically greater than any expected costs.[38] A recently proposed amendment would “ban the use of facial recognition in public spaces, predictive policing tools, and to impose transparency measures on generative AI applications [like] OpenAI’s ChatGPT.”[39]

This approach constitutes a hodge-podge of top-down tech policing and one-off regulations. The AI Act starts with the presumption that regulators can design an abstract, high-level set of categories that capture the risk from “AI,” and then proceeds to force arbitrary definitions of particular “AI” implementations into those categories. This approach may get some things right and some things wrong, but whatever it gets right will not be the product of principled consistency. For example, it might be the case that “predictive policing” is a problem that merits per se prohibition, but is it really an AI problem? What happens if the police get exceptionally good at using publicly available data and spreadsheets to approximate 80% of what they are able to do with AI? Or even just 50% efficacy? Is it the use of AI that is the harm, or is it the practice itself?

Similarly, a requirement that firms expose the sources on which they train their algorithms might be good in some contexts, but useless or harmful in others.[40] Certainly, it can make sense when thinking about current publicly available generative tools that create images and video, and have no ability to point to a license or permission for their training data. Such cases have a high likelihood of copyright infringement. But should every firm be expected to do this? Surely there will be many cases where firms use their own internal data, or data not subject to property-rights protection at all, but where exposing those sources reveals sensitive internal information, like know-how or other trade secrets. In those cases, a transparency obligation could have a chilling effect.

By contrast, it seems hard to believe that every use of public facial recognition should be banned. For instance, what if local authorities had limited access to facial recognition to find lost children or victims of trafficking?

More broadly, a strict transparency requirement could essentially make advanced machine-learning techniques illegal. By their nature, machine-learning systems and applications that employ LLMs make inferences and predictions that are, very often, not replicable.[41] That is, by their very nature they are not reviewable in a way that would be easily explained to a human in a transparency review. This means that strong transparency obligations could make it legally untenable to employ those techniques.

The broad risk-based approach taken by the AI Act faces difficult enforcement hurdles as well, as demonstrated by the EU’s proposal to essentially ban the open-source community from providing access to generative models.[42] In other words, not only do the proposed amendments seek to prohibit large companies such as OpenAI, Google, Anthropic, Amazon, Microsoft, and IBM from offering API access to generative AI models, but they would also prohibit open-source developers and distributors such as GitHub from doing the same.[43] Moreover, the prohibitions have extraterritorial effects; for example, the EU might seek to impose large fines on U.S. companies for permitting access to their models in the United States, on grounds that those models could be imported into the EU by third parties.[44] These provisions reflect not only an attempt to control the distribution of AI technology, but also the wider implication that such attempts would essentially steer worldwide innovation down a narrow, heavily regulated path.

2.        Focus on the harm and the wrongdoers, not the innovators

None of the foregoing is to suggest that it is impossible for AI to be misused. Where it is misused, there should be actionable legal consequences. For example, if a real-estate developer intentionally used AI tools to screen out individuals on the basis of protected characteristics from purchasing homes, that should be actionable. If a criminal found a novel way to use ChatGPT to commit fraud, that should be actionable. If generative AI is used to create “deepfakes” that further some criminal plot, that should be actionable. But in all those cases, it is not the AI itself that is the relevant unit of legal analysis, but the action of the criminal and the harm he causes.

To try to build a regulatory framework that makes it impossible for bad actors to misuse AI will be ultimately fruitless. Bad actors will always find ways to misuse tools, and heavy-handed regulatory requirements (or even strong suggestions of such) might chill the development of useful tools that could generate an enormous amount of social welfare.

B.      Do Not Neglect the Benefits

A major complication in parsing the wisdom of potential AI regulation is that the technology remains largely in development. Indeed, this is the impetus for many of the calls to “do something” before it is “too late.”[45] The fear that some express is that, unless a wise regulator intervenes in the development process, the technology will inevitably develop in ways that yield more harm than good.[46]

But trying to regulate AI in accordance with the precautionary principle would almost certainly stifle development and dampen the tremendous, but unknowable, good that would emerge as these technologies mature and we find unique uses for them. Moreover, precautionary regulation, even in high-risk industries like nuclear power, can lead to net harms to social welfare.[47]

It is important here to distinguish two broad categories of concern about AI. First, there is the generalized concern about AGI, expressed as fear that we are inadvertently creating a super intelligence with the power to snuff out human life at its whim. We reject this fear as a legitimate basis for new regulatory frameworks, although we concede that it is theoretically possible that this presumption may need to be revisited as AI technologies progress. None of the technologies currently under consideration are anywhere close to AGI. They are essentially just advanced prediction engines, whether the predictions concern text or pixels.[48] It seems highly unlikely that we will accidentally stumble onto AGI by plugging a few thousand prediction engines into one another.

There are more realistic concerns that these very impressive technologies will be misused to further discrimination and crime, or will have such a disruptive impact on areas like employment that they will quickly generate tremendous harms. When contemplating harms that could occur, however, it is also necessary to recognize that many significant benefits could also be generated. Moreover, as with earlier technologies, economic disruptions will provide both challenges and opportunities. It is easy to see the immediate effect on the jobs of content writers, for instance, posed by ChatGPT, but less easy to measure the benefits that will be realized by firms that can deploy this technology to “in-source” tasks.

Firms often face what is called the “make-or-buy” decision. A firm that decides to purchase the services of an outside designer or copywriter has determined that doing so is more efficient than developing that talent in-house. But the fact that many firms employ a particular mix of outsourced and in-house talent to fulfill their business needs does not suggest a universally optimal solution to the make-or-buy problem. All we can do is describe how, under current conditions, firms solve this problem.

AI will surely augment the basis on which firms deal with the make-or-buy decision. Pre-AI, it might have made sense to outsource a good deal of work that was not core to a firm’s mission. Post-AI, it might be the case that the firm can afford to hire additional workers who can utilize AI tools to more quickly and affordably manage the work that had been previously outsourced. Thus, the ability of AI tools to shift the make-or-buy decision, in itself, says nothing about the net welfare effects to society. Arguments could very well be made for either side. If history is any guide, however, it appears likely that AI tools will allow firms to do more with less, while also enabling more individuals to start new businesses with less upfront expense.

Moreover, by freeing capital from easily automated tasks, existing firms and new entrepreneurs could better focus on their core business missions. Excess investments previously made in supporting, for example, the creation of marketing content could be repurposed into R&D-intensive work. Simplistic static analyses of the substitution power of AI tools will almost surely mislead us, and make us neglect the larger social welfare that could be gained from organizations improving their efficiency with AI tools.

Economists have consistently found that dynamic competition—characterized by firms vying to deliver novel and enhanced products and services to consumers—contributes significantly more to economic growth than static competition, where technology is held constant, and firms essentially compete solely on price. As Joseph Schumpeter noted:

[I]t is not [price] competition which counts but the competition from the new commodity, the new technology, the new source of supply, the new type of organization…. This kind of competition is as much more effective than the other as a bombardment is in comparison with forcing a door, and so much more important that it becomes a matter of comparative indifference whether competition in the ordinary sense functions more or less promptly; the powerful lever that in the long run expands output and brings down prices is in any case made of other stuff.[49]

Technological advancements yield substantial welfare benefits for consumers, and there is a comprehensive body of scholarly work substantiating the contributions of technological innovation to economic growth and societal welfare.[50] There is also compelling evidence that technological progress engenders extensive spillovers not fully appropriated by the innovators.[51] Business-model innovations—such as advancements in organization, production, marketing, or distribution—can similarly result in extensive welfare gains.[52]

AI tools obviously are delivering a new kind of technological capability for firms and individuals. The disruptions they will bring will similarly spur business-model innovation as firms scramble to find innovative ways to capitalize on the technology. The potential economic dislocations can, in many cases, amount to reconstitution: a person who was a freelance content writer can be shifted to a different position that manages the output of generative AI and provides human edits to ensure that content makes sense and is based in fact. In many other cases, the dislocations will likely lead to increased opportunities for workers of all sorts.

With this in mind, policymakers need to consider how to identify those laws and regulations that are most likely to foster this innovation, while also enabling courts and regulators to adequately deal with potential harms. Although it is difficult to prescribe particular policies to boost innovation, there is strong evidence about what sorts of policies should be avoided. Most importantly, regulation of AI should avoid inadvertently destroying those technologies.[53] As Adam Thierer has argued, “if public policy is guided at every turn by the fear of hypothetical worst-case scenarios and the precautionary mindset, then innovation becomes less likely.”[54]

Thus, policymakers must be cautious to avoid unduly restricting the range of AI tools that compete for consumer acceptance. Key to fostering investment and innovation is not merely the endorsement of technological advancement, but advocacy for policies that empower innovators to execute and commercialize their technology.

By contrast, consider again the way that some EU lawmakers want to treat “high risk” algorithms under the AI Act. According to recently proposed amendments, if a “high risk” algorithm learns something beyond what its developers expect it to learn, the algorithm would need to undergo a conformity assessment.[55]

One of the prime strengths of AI tools is their capacity for unexpected discoveries, offering potential insights and solutions that might not have been anticipated by human developers. As the Royal Society has observed:

Machine learning is a branch of AI that enables computer systems to perform specific tasks intelligently. Traditional approaches to programming rely on hardcoded rules, which set out how to solve a problem, step-by-step. In contrast, machine learning systems are set a task, and given a large amount of data to use as examples (and non-examples) of how this task can be achieved, or from which to detect patterns. The system then learns how best to achieve the desired output.[56]

By labeling unexpected behavior as inherently risky and necessitating regulatory review, we risk stifling this serendipitous aspect of AI technologies, potentially curtailing their capacity for innovation. It could contribute to a climate of regulatory caution that hampers swift progress in discovering the full potential and utility of AI tools.
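To make concrete the distinction the Royal Society draws, consider the following short sketch. It is purely illustrative and assumes Python with the scikit-learn library; the task, data, and thresholds are invented for the example and do not come from the RFC or any real deployment. The hardcoded function embodies a rule a developer wrote explicitly, while the fitted model derives its own rule from labeled examples; that learned rule is precisely the sort of behavior a developer may not have fully anticipated in advance.

# Illustrative only: contrasting hardcoded rules with a learned model.
# The data and task are hypothetical; scikit-learn is assumed to be available.
from sklearn.tree import DecisionTreeClassifier

# Traditional programming: the developer writes the decision rule step by step.
def flag_transaction_hardcoded(amount, prior_purchases):
    return amount > 5_000 and prior_purchases < 3

# Machine learning: the system is given labeled examples and learns its own
# rule, which may reflect patterns the developer never explicitly specified.
examples = [[6_000, 1], [200, 10], [7_500, 8], [4_000, 0], [300, 2]]
labels = [1, 0, 0, 1, 0]  # hypothetical past outcomes: 1 = flagged as fraudulent

model = DecisionTreeClassifier(random_state=0).fit(examples, labels)

# The learned rule, not a hand-written one, now makes the prediction.
print(model.predict([[5_500, 2]]))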

C.     AI Regulation Should Follow the Model of Common Law

In a recent hearing of the U.S. Senate Judiciary Committee, OpenAI CEO Sam Altman suggested that the United States needs a central “AI regulator.”[57] As a general matter, we expect this would be unnecessarily duplicative. As we have repeatedly emphasized, the right approach to regulating AI is not the establishment of an overarching regulatory framework, but a careful examination of how AI technologies will variously interact with different parts of the existing legal system. We are not alone in this; former Special Assistant to the President for Technology and Competition Policy Tim Wu recently opined that federal agencies would be well-advised to rely on existing law and enhance that law where necessary in order to catch unexpected situations that may arise from the use of AI tools.[58]

As Judge Easterbrook famously wrote in the context of what was then called “cyberspace,” we do not need a special law for AI any more than we need a “law of the horse.”[59]

1.        An AI regulator’s potential effects on competition

More broadly, there are risks to competition that attend creating a centralized regulator for a new technology like AI. As an established player in the AI market, OpenAI might favor a strong central regulator because of the potential that such an agency could act in ways that hinder the viability of new entrants.[60] In short, an incumbent often can gain by raising its rivals’ regulatory costs, or by manipulating the relationship between its industry’s average and marginal costs. This dynamic can create strong strategic incentives for industry incumbents to promote regulation.

Economists and courts have long studied actions that generate or amplify market dominance by placing competitors at a disadvantage, especially by raising rivals’ costs.[61] There exist numerous strategies to put competitors at a disadvantage or push them out of the market without needing to compete on price. While antitrust action focuses on private actors and their ability to raise rivals’ costs, it is well-accepted that “lobbying legislatures or regulatory agencies to create regulations that disadvantage rivals” has similar effects.[62]

Suppose a new regulation imposes $1 million in annual compliance costs. Only companies that are sufficiently large and profitable will be able to cover those costs, which keeps out newcomers and smaller competitors. This effect of excluding smaller competitors by raising their costs may more than offset the regulatory burden on the incumbent. New entrants typically produce on a smaller scale, and therefore find it more difficult to spread increased costs over a large number of units. This makes it harder for them to compete with established firms like OpenAI, which can absorb these costs more easily due to their larger scale of production.

This type of cost increase can often look benign. In United Mine Workers v. Pennington,[63] a coal corporation was alleged to have conspired with the union representing its workforce to establish higher wage rates. How could higher wages be anticompetitive? This seemingly contradictory conclusion came from University of California at Berkeley economist Oliver Williamson, who interpreted the action as an effort to maximize profits by raising entry barriers.[64] Using a model with a dominant incumbent and a fringe of other competitors, he demonstrated that wage-rate increases could lead to profit maximization if they escalated the fringe’s costs more than they did the dominant firm’s costs. Intuitively, even though one firm is dominant, the market price is determined by the marginal producers, so the dominant company’s price is constrained by the prices of its competitors. If a regulation raises the competitors’ per-unit costs by $2, the dominant company will be able to raise its price by as much as $2 per unit. Even if the regulation hurts the dominant firm, so long as its price increase exceeds its additional cost, the dominant firm can profit from the regulation.
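The following minimal numerical sketch illustrates this logic. Every figure in it (the $1 million compliance cost, the output levels, and the resulting per-unit burdens) is a hypothetical assumption chosen for arithmetic clarity, not an estimate drawn from Pennington, Williamson’s model, or any empirical study; it also assumes, for simplicity, that fringe producers remain the marginal price-setters after the regulation takes effect.

# Hypothetical illustration of raising rivals' costs via a fixed compliance cost.
# All numbers are assumptions made for the sake of the example.

FIXED_COMPLIANCE_COST = 1_000_000   # assumed annual compliance cost per firm

incumbent_output = 2_000_000        # units produced by the large incumbent
entrant_output = 100_000            # units produced by a small-scale entrant

# A fixed compliance cost imposes a much larger per-unit burden on the
# small-scale producer than on the incumbent.
incumbent_per_unit = FIXED_COMPLIANCE_COST / incumbent_output   # $0.50 per unit
entrant_per_unit = FIXED_COMPLIANCE_COST / entrant_output       # $10.00 per unit

# If the fringe remains the marginal producer, the market price can rise by
# roughly the fringe's per-unit cost increase.
price_increase = entrant_per_unit

# The incumbent gains whenever the price increase exceeds its own cost increase.
incumbent_gain = (price_increase - incumbent_per_unit) * incumbent_output

print(f"Entrant's per-unit cost increase:   ${entrant_per_unit:.2f}")
print(f"Incumbent's per-unit cost increase: ${incumbent_per_unit:.2f}")
print(f"Incumbent's net gain from the rule: ${incumbent_gain:,.0f}")

On these stylized assumptions, the regulation leaves the incumbent better off even though it, too, must comply; this is the sense in which compliance burdens can function as entry barriers.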

As a result, while regulations might increase costs for OpenAI, they also serve to protect it from potential competition by raising the barriers to entry. In this sense, regulation can be seen as a strategic tool for incumbent firms to maintain or strengthen their market position. None of this analysis rests on OpenAI explicitly wanting to raise its rivals’ costs. That is just the competitive implication of such regulations. Thus, while there may be many benign reasons for a firm like OpenAI to call for regulation in good faith, the ultimate lesson presented by the economics of regulation should counsel caution when imposing strong centralized regulations on a nascent industry.

2.        A central licensing regulator for AI would be a mistake

NTIA asks:

Are there ways in which accountability mechanisms are unlikely to further, and might even frustrate, the development of trustworthy AI? Are there accountability mechanisms that unduly impact AI innovation and the competitiveness of U.S. developers?[65]

We are not alone in the belief that imposing a licensing regime would present just such a barrier to innovation.[66] In the recent Senate hearings, the idea of a central regulator was endorsed as a means to create and administer a licensing regime.[67] Perhaps in some narrow applications of particular AI technologies, there could be specific contexts in which licensing is appropriate (e.g., in providing military weapons), but broadly speaking, we believe this is inadvisable. Owing to the highly diverse nature of AI technologies, trying to license AI development is a fraught exercise, as NTIA itself acknowledges:

A developer training an AI tool on a customer’s data may not be able to tell how that data was collected or organized, making it difficult for the developer to assure the AI system. Alternatively, the customer may use the tool in ways the developer did not foresee or intend, creating risks for the developer wanting to manage downstream use of the tool. When responsibility along this chain of AI system development and deployment is fractured, auditors must decide whose data and which relevant models to analyze, whose decisions to examine, how nested actions fit together, and what is within the audit’s frame.[68]

Rather than design a single regulation to cover AI, ostensibly administered through a single licensing regime, NTIA should acknowledge the broad set of industries currently seeking to employ a diverse range of AI products that differ in fundamental ways. The implications of AI deployment in health care, for instance, vastly differ from those in transportation. A centralized AI regulator might struggle to comprehend the nuances and intricacies of each distinct industry, thus potentially leading to ineffective or inappropriate licensing requirements.

Analogies have been drawn between AI and sectors like railroads and nuclear power, which have dedicated regulators.[69] These sectors, however, are more homogenous and discrete than the AI industry (if such an industry even exists, apart from the software industry more generally). AI is much closer to a general-purpose tool, like chemicals or combustion engines. We do not enact central regulators to license every aspect of the development and use of chemicals, but instead allow different agencies to treat their use differently as is appropriate for the context. For example, the Occupational Safety and Health Administration (OSHA) will regulate employee exposure to dangerous substances encountered in the workplace, while various consumer-protection boards will regulate the adulteration of goods.

The notion of licensing implies that companies would need to obtain permission prior to commercializing a particular piece of code. This could introduce undesirable latency into the process of bringing AI technologies to market (or, indeed, even of correcting errors in already-deployed products). Given the expansive potential to integrate AI technologies into diverse products and services, this delay could significantly impede technological progress and innovation. Given the strong global interest in the subject, such delays threaten to leave the United States behind its more energetic competitors in the race for AI innovation.

As in other consumer-protection regimes, a better approach would be to eschew licensing and instead create product-centric and harm-centric frameworks that other sectoral regulators or competition authorities could incorporate into their tailored rules for goods and services.

For instance, safety standards for medical devices should be upheld, irrespective of whether AI is involved. This product-centric regulatory approach would ensure that the desired outcomes of safety, quality, and effectiveness are achieved without stymieing innovation. With their deep industry knowledge and experience, sectoral regulators will generally be better positioned to address the unique challenges and considerations posed by AI technology deployed within their spheres of influence.

NTIA alludes to one of the risks of an overaggregated regulator when it notes that:

For some trustworthy AI goals, it will be difficult to harmonize standards across jurisdictions or within a standard-setting body, particularly if the goal involves contested moral and ethical judgements. In some contexts, not deploying AI systems at all will be the means to achieve the stated goals.[70]

Indeed, the institutional incentives that drive bureaucratic decision making often converge on this solution of preventing unexpected behavior by regulated entities.[71] But at what cost? If a regulator is unable to imagine how to negotiate the complicated tradeoffs among interested parties across all AI-infused technologies, it will act to slow or prevent the technology from coming to market. This will make us all worse off, and will only strengthen the position of our competitors on the world stage.

D.      The Impossibility of Explaining Complexity

NTIA notes that:

According to NIST, ‘‘trustworthy AI’’ systems are, among other things, ‘‘valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy-enhanced, and fair with their harmful bias managed.’’[72]

And in the section titled “Accountability Inputs and Transparency,” NTIA asks a series of questions designed to probe what can be considered a realistic transparency obligation for developers and deployers of AI systems. We urge NTIA to resist the idea that AI systems must be “explainable,” for the reasons set forth herein.

One of the significant challenges in AI accountability is making AI systems explainable to users. It is crucial to acknowledge that providing a clear explanation of how an AI model—such as an LLM or a diffusion model—arrives at a specific output is an inherently complex task, and may not be possible at all. As the UK Royal Society has noted in its paper on AI explainability:

Much of the recent excitement about advances in AI has come as a result of advances in statistical techniques. These approaches – including machine learning – often leverage vast amounts of data and complex algorithms to identify patterns and make predictions. This complexity, coupled with the statistical nature of the relationships between inputs that the system constructs, renders them difficult to understand, even for expert users, including the system developers.[73]

These models are designed with intricate architectures and often rely on vast troves of data to arrive at outputs, which can make it nearly impossible to reverse-engineer the process. Due to these complexities, it may be unfeasible to make AI fully explainable to users. Moreover, users themselves often do not value explainability, and may be largely content with a “black box” system when it consistently provides accurate results.[74]
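A rough sketch helps convey why output-by-output explanation is so difficult. The example below assumes Python with NumPy and uses an arbitrarily sized toy network far smaller than any production system; even so, its single numerical output is a joint function of more than 130,000 learned parameters, and modern LLMs and diffusion models involve billions.

# Toy illustration of output complexity; network size and weights are arbitrary.
import numpy as np

rng = np.random.default_rng(0)

# A small two-layer network: 512 inputs -> 256 hidden units -> 1 output.
W1 = rng.standard_normal((256, 512))
W2 = rng.standard_normal((1, 256))

def predict(x):
    hidden = np.tanh(W1 @ x)        # every hidden unit mixes all 512 inputs
    return (W2 @ hidden).item()     # the single output mixes all 256 hidden units

x = rng.standard_normal(512)
print("Output:", predict(x))
print("Learned parameters behind this one output:", W1.size + W2.size)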

Instead, to the extent that regulators demand visibility into AIs, the focus should be on the transparency of the AI-development process, system inputs, and the general guidelines for AI that developers use in preparing their models. Ultimately, we suspect that, even here, such measures will do little to resolve the inherent complexity in understanding how AI tools produce their outputs.

In a more limited sense, we should consider the utility of transparency about AI-infused technology for most products and consumers. NTIA asks:

Given the likely integration of generative AI tools such as large language models (e.g., ChatGPT) or other general-purpose AI or foundational models into downstream products, how can AI accountability mechanisms inform people about how such tools are operating and/or whether the tools comply with standards for trustworthy AI?[75]

As we note above, the proper level of analysis for AI technologies is the product into which they are incorporated. But even there, we need to ask whether it matters to an end user whether a product they are using relies on ChatGPT or a different algorithm for predictively generating text. If the product malfunctions, what matters is the malfunction and the accountability for the product. Most users do not really care whether a developer writes a program using C++ or Java, and neither should they particularly care whether the developer incorporates a generative AI algorithm to predict text or uses some other method of statistical analysis. The presence of an AI component becomes analytically necessary when diagnosing how something went wrong, but ex ante, it is likely irrelevant from a consumer’s perspective.

Thus, a more fruitful avenue for NTIA may be to examine how a strict-liability or product-liability regime could be developed for AI. Such frameworks put the onus on AI developers to ensure that their products behave appropriately, while assuring consumers that they have recourse if and when they are harmed by a product that incorporates AI technology. Indeed, overemphasizing “trust” in AI systems could end up misleading users in important contexts,[76] which further strengthens the case for a predictable liability regime.

1.        The deepfakes problem demonstrates that we do not need a new body of law

The phenomenon of generating false depictions of individuals using advanced AI techniques—commonly called “deepfakes”—is undeniably concerning, particularly when it can be used to create detrimental false public statements,[77] facilitate fraud,[78] or create nonconsensual pornography.[79] But while deepfakes use modern technological tools, they are merely the most recent iteration of the age-old problem of forgery. Importantly, existing law already equips us with the tools needed to address the challenges posed by deepfakes, rendering many recent legislative proposals at the state level both unnecessary and potentially counterproductive. Consider one of the leading proposals offered by New York State.[80]

Existing laws in New York and at the federal level provide remedies for individuals aggrieved by deepfakes, and they do so within a legal system that has already worked to incorporate the context of these harms, as well as the restrictions of the First Amendment and related defenses. For example, defamation laws can be applied where a deepfake falsely suggests an individual has posed for an explicit photograph or video.[81] New York law also acknowledges the tort of intentional infliction of emotional distress, which likely could be applied to the unauthorized use of a person’s likeness in explicit content.[82] In addition, the tort of unjust enrichment can be brought to bear where appropriate, as can the Lanham Act §43(a), which prohibits false advertising and implied false endorsements.[83] Furthermore, victims may hold copyright in the photograph or video used in a deepfake, presenting grounds for an infringement action.[84]

Thus, while advanced deepfakes are new, the harms they can cause, and the law’s ability to address those harms, are not. Legislation that attempts to carve out new categories of harms in these situations is, at best, reinventing the wheel and, at worst, risks creating confusing tensions within the existing legal system.

III.      The Role of NTIA in AI Accountability

NTIA asks if “the lack of a federal law focused on AI systems [is] a barrier to effective AI accountability?”[85] In short, no, this is not a barrier, so long as the legal system is allowed to evolve to incorporate the novel challenges raised by AI technologies.

As noted in the previous section, there is a need to develop standards, both legal and technical. As we are in the early days of AI technology, the exact contours of the various legal changes that might be needed to incorporate AI tools into existing law remain unclear. At this point, we would urge NTIA—to the extent that it wants to pursue regulatory, licensing, transparency, and other similar obligations—to develop a series of workshops through which leading technology and legal experts could confer on developing a vision for how such legal changes would work in practice.

By gathering stakeholders and fostering an ongoing dialogue, NTIA can help to create a collaborative environment in which organizations can share knowledge, experiences, and innovations to address AI accountability and its associated challenges. By promoting industry collaboration, NTIA could also help build a foundation of trust and cooperation among organizations involved in AI development and deployment. This, in turn, will facilitate the establishment of standards and best practices that address specific concerns, while mitigating the risk of overregulation that could stifle innovation and progress. In this capacity, NTIA should focus on encouraging the development of context-specific best practices that prioritize the containment of identifiable harms. By fostering a collaborative atmosphere, the agency can support a dynamic and adaptive AI ecosystem that is capable of addressing evolving challenges while safeguarding the societal benefits of AI advancements.

In addressing AI accountability, it is essential for NTIA to adopt a harm-focused framework that targets the negative impacts of AI systems rather than the technology itself. This approach would recognize that AI technology can have diverse applications, with consequences that will depend on the context in which they are used. By prioritizing the mitigation of specific harms, NTIA can ensure that regulations are tailored to address real-world outcomes and provide a more targeted and effective regulatory response.

A harm-focused framework also acknowledges that different AI technologies pose differing levels of risk and potential for misuse. NTIA can play a proactive role in guiding the creation of policies that reflect these nuances, striking a balance between encouraging innovation and ensuring the responsible development and use of AI. By centering the discussion on actual harms and their causes, NTIA can foster meaningful dialogue among stakeholders and facilitate the development of industry best practices designed to minimize negative consequences.

Moreover, this approach ensures that AI accountability policies are consistent with existing laws and regulations, as it emphasizes the need to assess AI-related harms within the context of the broader legal landscape. By aligning AI accountability measures with other established regulatory frameworks, NTIA can provide clear guidance to AI developers and users, while avoiding redundant and conflicting regulations. Ultimately, a harm-focused framework allows NTIA to better address the unique challenges posed by AI technology and to foster an assurance ecosystem that prioritizes safety, ethics, and legal compliance without stifling innovation.

IV.    Conclusion

Another risk of the current AI hysteria is that fatigue will set in, and the public will become numbed to potential harms. Overall, this may shrink the public’s appetite for the kinds of legal changes that will be needed to address those actual harms that do emerge. News headlines that push doomsday rhetoric and a community of experts all too eager to respond to the market incentives for apocalyptic projections only exacerbate the risk of that outcome. A recent one-line letter, signed by AI scientists and other notable figures, highlights the problem:

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.[86]

Novel harms absolutely will emerge from products that employ AI, as has been the case for every new technology. The introduction of automobiles, for example, created new risks of death from high-speed collisions. But rhetoric casting AI as an existential risk on the level of a pandemic or nuclear war is irresponsible.

Perhaps one of the most important positions NTIA can assume, therefore, is that of a calm, collected expert agency that helps restrain the worst impulses to regulate AI out of existence due to blind fear.

In essence, the key challenge confronting policymakers is to mitigate the actual risks AI presents while safeguarding the substantial benefits it offers. The evolution of AI will undeniably bring disruption and may provide a conduit for malevolent actors, just as technologies like the printing press and the internet have done in the past. That does not, however, justify an overly cautious stance that would suppress AI’s potential benefits.

As we formulate policy, it is crucial to eschew dystopian science-fiction narratives and instead ground our approach in realistic scenarios. The proposition that computer systems, even those as advanced as AI tools, could spell the end of humanity lacks substantial grounding.

The current state of affairs represents a geo-economic competition to harness the benefits of AI in myriad domains. Contrary to fears that AI poses an existential risk, the real danger may well lie in attempts to overly regulate and stifle the technology’s potential. The indiscriminate imposition of regulations could inadvertently thwart AI advancements, resulting in a loss of potential benefits that could be far more detrimental to social welfare.

[1] AI Accountability Policy Request for Comment, Docket No. 230407-0093, 88 FR 22433, National Telecommunications and Information Administration (Apr. 14, 2023) (“RFC”).

[2] Indeed, this approach appears to be the default position of many policymakers around the world. See, e.g., Mikolaj Barczentewicz, EU’s Compromise AI Legislation Remains Fundamentally Flawed, Truth on the Market (Feb. 8, 2022), https://truthonthemarket.com/2022/02/08/eus-compromise-ai-legislation-remains-fundamentally-flawed. The fundamental flaw of this approach is that, while AI techniques use statistics, “statistics also includes areas of study which are not concerned with creating algorithms that can learn from data to make predictions or decisions. While many core concepts in machine learning have their roots in data science and statistics, some of its advanced analytical capabilities do not naturally overlap with these disciplines.” See Explainable AI: The Basics, The Royal Society (2019) at 7, available at https://royalsociety.org/-/media/policy/projects/explainable-ai/AI-and-interpretability-policy-briefing.pdf (“Royal Society Briefing”).

[3] John P. Holdren, Cass R. Sunstein, & Islam A. Siddiqui, Memorandum for the Heads of Executive Departments and Agencies, Executive Office of the White House (Jun. 9, 2011), available at https://obamawhitehouse.archives.gov/sites/default/files/omb/inforeg/for-agencies/nanotechnology-regulation-and-oversight-principles.pdf.

[4] Id.

[5] Frank H. Easterbrook, Cyberspace and the Law of the Horse, 1996 U. Chi. L. Forum 207 (1996).

[6] LLMs are a type of artificial-intelligence model designed to parse and generate human language at a highly sophisticated level. The deployment of LLMs has driven progress in fields such as conversational AI, automated content creation, and improved language understanding across a multitude of applications, even suggesting that these models might represent an initial step toward the achievement of artificial general intelligence (AGI). See Alejandro Peña et al., Leveraging Large Language Models for Topic Classification in the Domain of Public Affairs, arXiv (Jun. 5, 2023), https://arxiv.org/abs/2306.02864v1.

[7] Diffusion models are a type of generative AI built from a hierarchy of denoising autoencoders, which can achieve state-of-the-art results in such tasks as class-conditional image synthesis, super-resolution, inpainting, colorization, and stroke-based synthesis. Unlike other generative models, these likelihood-based models do not exhibit mode collapse and training instabilities. By leveraging parameter sharing, they can model extraordinarily complex distributions of natural images without necessitating billions of parameters, as in autoregressive models. See Robin Rombach et al., High-Resolution Image Synthesis with Latent Diffusion Models, arXiv (Dec. 20, 2021), https://arxiv.org/abs/2112.10752.

[8] Recommender systems are advanced tools currently used across a wide array of applications, including web services, books, e-learning, tourism, movies, music, e-commerce, news, and television programs, where they provide personalized recommendations to users. Despite recent advancements, there is a pressing need for further improvements and research in order to offer more efficient recommendations that can be applied across a broader range of applications. See Deepjyoti Roy & Mala Dutta, A Systematic Review and Research Perspective on Recommender Systems, 9 J. Big Data 59 (2022), available at https://journalofbigdata.springeropen.com/counter/pdf/10.1186/s40537-022-00592-5.pdf.

[9] AGI refers to hypothetical future AI systems that possess the ability to understand or learn any intellectual task that a human being can do. While the realization of AGI remains uncertain, it is distinct from the more specialized AI systems currently in use. For a skeptical take on the possibility of AGI, see Roger Penrose, The Emperor’s New Mind (Oxford Univ. Press 1989).

[10] Samuel D. Warren & Louis D. Brandeis, The Right to Privacy, 4 Harv. L. Rev. 193 (1890).

[11] Id. at 200.

[12] Id. at 193.

[13] Id. at 196-97.

[14] Notably, courts do try to place a value on emotional distress and related harms. But because these sorts of violations are deeply personal, attempts to quantify such harms in monetary terms are rarely satisfactory to the parties involved.

[15] Martin Giles, Bounty Hunters Tracked People Secretly Using US Phone Giants’ Location Data, MIT Tech. Rev. (Feb. 7, 2019), https://www.technologyreview.com/2019/02/07/137550/bounty-hunters-tracked-people-secretly-using-us-phone-giants-location-data.

[16] See, e.g., Sony Corp. of Am. v. Universal City Studios, Inc., 464 U.S. 417, 439 (1984) (The Supreme Court imported the doctrine of “substantial noninfringing uses” into copyright law from patent law).

[17] A notable example is how the Patriot Act, written to combat terrorism, was ultimately used to take down a sitting governor in a prostitution scandal. See Noam Biale, Eliot Spitzer: From Steamroller to Steamrolled, ACLU, Oct. 29, 2007, https://www.aclu.org/news/national-security/eliot-spitzer-steamroller-steamrolled.

[18] RFC at 22437.

[19] Id. at 22433.

[20] Id. at 22436.

[21] Indeed, the RFC acknowledges that, even as some groups are developing techniques to evaluate AI systems for bias or disparate impact, “It should be recognized that for some features of trustworthy AI, consensus standards may be difficult or impossible to create.” RFC at 22437. Arguably, this problem is inherent to constructing an overaggregated regulator, particularly one that will be asked to consult a broad public on standards and rulemaking.

[22] Id. at 22439.

[23] Sony Corp. of Am. v. Universal City Studios, Inc., 464 U.S. at 417.

[24] Id.

[25] Id.

[26] Id. at 456.

[27] Id.

[28] See, e.g., Defendant Indicted for Camcording Films in Movie Theaters and for Distributing the Films on Computer Networks; First Prosecution Under Newly-Enacted Family Entertainment Copyright Act, U.S. Dept of Justice (Aug. 4, 2005), available at https://www.justice.gov/archive/criminal/cybercrime/press-releases/2005/salisburyCharge.htm.

[29] 17 U.S.C. 106.

[30] See 17 U.S.C. 107; Campbell v. Acuff-Rose Music, Inc., 510 U.S. 569, 590 (1994) (“Since fair use is an affirmative defense, its proponent would have difficulty carrying the burden of demonstrating fair use without favorable evidence about relevant markets.”).

[31] See, e.g., N.Y. Penal Law § 265.01; Wash. Rev. Code Ann. § 9.41.250; Mass. Gen. Laws Ann. ch. 269, § 10(b).

[32] See, e.g., 18 U.S.C.A. § 922(g).

[33] Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts, COM/2021/206 final. The latest proposed text of the AI Act is available at https://www.europarl.europa.eu/doceo/document/A-9-2023-0188_EN.html.

[34] Id. at amendment 36 recital 14.

[35] Id.

[36] Id.

[37] See e.g., Mikolaj Barczentewicz, supra note 2.

[38] Id.

[39] Foo Yun Chee, Martin Coulter & Supantha Mukherjee, EU Lawmakers’ Committees Agree Tougher Draft AI Rules, Reuters (May 11, 2023), https://www.reuters.com/technology/eu-lawmakers-committees-agree-tougher-draft-ai-rules-2023-05-11.

[40] See infra at notes 71-77 and accompanying text.

[41] Explainable AI: The Basics, supra note 2 at 8.

[42] See e.g., Delos Prime, EU AI Act to Target US Open Source Software, Technomancers.ai (May 13, 2023), https://technomancers.ai/eu-ai-act-to-target-us-open-source-software.

[43] Id.

[44] To be clear, it is not certain how such an extraterritorial effect will be obtained, and this is just a proposed amendment to the law. Likely, there will need to be some form of jurisdictional hook, i.e., that this applies only to firms with an EU presence.

[45]  Eliezer Yudkowsky, Pausing AI Developments Isn’t Enough. We Need to Shut it All Down, Time (Mar. 29, 2023), https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough.

[46] See, e.g., Kiran Stacey, UK Should Play Leading Role on Global AI Guidelines, Sunak to Tell Biden, The Guardian (May 31, 2023), https://www.theguardian.com/technology/2023/may/31/uk-should-play-leading-role-in-developing-ai-global-guidelines-sunak-to-tell-biden.

[47] See, e.g., Matthew J. Neidell, Shinsuke Uchida & Marcella Veronesi, The Unintended Effects from Halting Nuclear Power Production: Evidence from Fukushima Daiichi Accident, NBER Working Paper 26395 (2022), https://www.nber.org/papers/w26395 (Japan abandoning nuclear energy in the wake of the Fukushima disaster led to decreased energy consumption, which in turn led to increased mortality).

[48] See, e.g., Will Knight, Some Glimpse AGI in ChatGPT. Others Call It a Mirage, Wired (Apr. 10, 2023), https://www.wired.com/story/chatgpt-agi-intelligence (“GPT-4, like its predecessors, had been fed massive amounts of text and code and trained to use the statistical patterns in that corpus to predict the words that should be generated in reply to a piece of text input.”)

[49] Joseph A. Schumpeter, Capitalism, Socialism And Democracy 74 (1976).

[50] See, e.g., Jerry Hausman, Valuation of New Goods Under Perfect and Imperfect Competition, in The Economics Of New Goods 209–67 (Bresnahan & Gordon eds., 1997).

[51] William D. Nordhaus, Schumpeterian Profits in the American Economy: Theory and Measurement, NBER Working Paper No. 10433 (Apr. 2004) at 1, http://www.nber.org/papers/w10433 (“We conclude that only a miniscule fraction of the social returns from technological advances over the 1948-2001 period was captured by producers, indicating that most of the benefits of technological change are passed on to consumers rather than captured by producers.”).

[52] See generally Oliver E. Williamson, Markets And Hierarchies, Analysis And Antitrust Implications: A Study In The Economics Of Internal Organization (1975).

[53] See, e.g., Nassim Nicholas Taleb, Antifragile: Things That Gain From Disorder (2012) (“In action, [via negativa] is a recipe for what to avoid, what not to do.”).

[54] Adam Thierer, Permissionless Innovation: The Continuing Case For Comprehensive Technological Freedom (2016).

[55] See, e.g., Artificial Intelligence Act, supra note 33, at amendment 112 recital 66.

[56] Explainable AI: The Basics, supra note 2 at 6.

[57] Cecilia Kang, OpenAI’s Sam Altman Urges A.I. Regulation in Senate Hearing, NY Times (May 16, 2023), https://www.nytimes.com/2023/05/16/technology/openai-altman-artificial-intelligence-regulation.html; see also Mike Solana & Nick Russo, Regulate Me, Daddy, Pirate Wires (May 23, 2023), https://www.piratewires.com/p/regulate-me-daddy.

[58] Cristiano Lima, Biden’s Former Tech Adviser on What Washington is Missing about AI, The Washington Post (May 30, 2023), https://www.washingtonpost.com/politics/2023/05/30/biden-former-tech-adviser-what-washington-is-missing-about-ai.

[59] Frank H. Easterbrook, supra note 5.

[60]  See Lima, supra note 58 (“I’m not in favor of an approach that would create heavy compliance costs for market entry and that would sort of regulate more abstract harms.”)

[61] Steven C. Salop & David T. Scheffman, Raising Rivals’ Costs, 73:2 Am. Econ. R. 267, 267–71 (1983), http://www.jstor.org/stable/1816853.

[62] Steven C. Salop & David T. Scheffman, Cost-Raising Strategies, 36:1 J. Indus. Econ. 19 (1987), https://doi.org/10.2307/2098594.

[63] United Mine Workers of Am. v. Pennington, 381 U.S. 657, 661 (1965).

[64] Oliver E. Williamson, Wage Rates as a Barrier to Entry: The Pennington Case in Perspective, 82:1 Q. J. Econ. 85 (1968), https://doi.org/10.2307/1882246.

[65] RFC at 22439.

[66] See, e.g., Lima, supra note 58 (“Licensing regimes are the death of competition in most places they operate”).

[67] Kang, supra note 57; Oversight of A.I.: Rules for Artificial Intelligence: Hearing Before the Subcomm. on Privacy, Technology, and the Law of the S. Comm. on the Judiciary, 118th Cong. (2023) (statement of Sam Altman, at 11), available at https://www.judiciary.senate.gov/download/2023-05-16-testimony-altman.

[68] RFC at 22437.

[69] See, e.g., Transcript: Senate Judiciary Subcommittee Hearing on Oversight of AI, Tech Policy Press (May 16, 2023), https://techpolicy.press/transcript-senate-judiciary-subcommittee-hearing-on-oversight-of-ai (“So what I’m trying to do is make sure that you just can’t go build a nuclear power plant. Hey Bob, what would you like to do today? Let’s go build a nuclear power plant. You have a nuclear regulatory commission that governs how you build a plant and is licensed.”)

[70] RFC at 22438.

[71] See, e.g., Raymond J. March, The FDA and the COVID-19: A Political Economy Perspective, 87(4) S. Econ. J. 1210, 1213-16 (2021), https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8012986 (discussing the political economy that drives incentives of bureaucratic agencies in the context of the FDA’s drug-approval process).

[72] RFC at 22434.

[73] Explainable AI: The Basics, supra, note 2 at 12.

[74] Id. at 20.

[75] Id. at 22439.

[76] Explainable AI: The Basics, supra note 2 at 22. (“Not only is the link between explanations and trust complex, but trust in a system may not always be a desirable outcome. There is a risk that, if a system produces convincing but misleading explanations, users might develop a false sense of confidence or understanding, mistakenly believing it is trustworthy as a result.”)

[77] Kate Conger, Hackers’ Fake Claims of Ukrainian Surrender Aren’t Fooling Anyone. So What’s Their Goal?, NY Times (Apr. 5, 2022), https://www.nytimes.com/2022/04/05/us/politics/ukraine-russia-hackers.html.

[78] Pranshu Verma, They Thought Loved Ones Were Calling for Help. It Was an AI Scam, The Washington Post (Mar. 5, 2023), https://www.washingtonpost.com/technology/2023/03/05/ai-voice-scam.

[79] Video: Deepfake Porn Booms in the Age of A.I., NBC News (Apr. 28, 2023), https://www.nbcnews.com/now/video/deepfake-porn-booms-in-the-age-of-a-i-171726917562.

[80] S5857B, NY State Senate (2018), https://www.nysenate.gov/legislation/bills/2017/s5857/amendment/b.

[81] See, e.g., Rejent v. Liberation Publications, Inc., 197 A.D.2d 240, 244–45 (1994); see also, Leser v. Penido, 62 A.D.3d 510, 510–11 (2009).

[82] See, e.g., Howell v. New York Post Co., 612 N.E.2d 699 (1993).

[83] See, e.g., Mandarin Trading Ltd. v. Wildenstein, 944 N.E.2d 1104 (2011); 15 U.S.C. §1125(a).

[84] 17 U.S.C. 106.

[85] RFC at 22440.

[86] Statement on AI Risk, Center for AI Safety, https://www.safe.ai/statement-on-ai-risk (last visited Jun. 7, 2023).
