Showing 9 of 176 Publications in Data Security & Privacy

Lessons from GDPR for AI Policymaking

Scholarship

Abstract

The ChatGPT chatbot has not just caught the public imagination; it is also amplifying concern across industry, academia, and government policymakers interested in the regulation of Artificial Intelligence (AI) about how to understand the risks and threats associated with AI applications. Following the release of ChatGPT, some EU regulators proposed changes to the EU AI Act to classify AI systems like ChatGPT that generate complex texts without any human oversight as “high-risk” AI systems that would fall under the law’s requirements. That classification was controversial, with other regulators arguing that technologies like ChatGPT, which merely generate text, are “not risky at all.” This controversy risks disrupting coherent discussion and progress toward formulating sound regulations for Large Language Models (LLMs), AI, or information and communication technologies (ICTs) more generally. In spite of nascent efforts by OECD.AI and the EU, it remains unclear where ChatGPT fits within AI and where AI fits within the larger context of digital policy and the regulation of ICTs.

This paper aims to address two research questions around AI policy: (1) How are LLMs like ChatGPT shifting the policy discussions around AI regulations? (2) What lessons can regulators learn from the EU’s General Data Protection Regulation (GDPR) and other data protection policymaking efforts that can be applied to AI policymaking?

The first part of the paper addresses the question of how ChatGPT and other LLMs have changed the policy discourse in the EU and other regions around regulating AI, and what the broader implications of these shifts may be for AI regulation more widely. This section reviews the existing proposal for an EU AI Act and its accompanying classification of high-risk AI systems, considers the changes prompted by the release of ChatGPT, and examines how LLMs appear to have altered policymakers’ conceptions of the risks presented by AI. Finally, we present a framework for understanding how the security and safety risks posed by LLMs fit within the larger context of risks presented by AI and current efforts to formulate a regulatory framework for AI.

The second part of the paper considers the similarities and differences between the proposed AI Act and GDPR in terms of (1) the organizations being regulated, or scope; (2) reliance on organizations’ self-assessment of potential risks, or degree of self-regulation; (3) penalties; and (4) the technical knowledge required for effective enforcement, or complexity. For each of these areas, we consider how regulators scoped or implemented GDPR to make it manageable, enforceable, meaningful, and consistent across a wide range of organizations handling many different kinds of data, as well as the extent to which they were successful in doing so. We then examine the ways in which those same approaches may or may not be applicable to the AI Act, and the ways in which AI may prove more difficult to regulate than the issues of data protection and privacy covered by GDPR. We also look at the ways in which AI may make it more difficult to enforce and comply with GDPR, since the continued evolution of AI technologies may create cybersecurity tools and threats that will affect the efficacy of GDPR and privacy policies. This section argues that the extent to which the proposed AI Act relies on self-regulation, together with the technical complexity of its enforcement, is likely to pose significant enforcement challenges, judging by the implementation of the most technology- and self-regulation-focused elements of GDPR.

Innovation & the New Economy

Gus Hurwitz on AI Regulation

Presentations & Interviews

ICLE Director of Law & Economics Programs Gus Hurwitz was a guest on The Cyberlaw Podcast to discuss commitments made last week by leaders of the artificial-intelligence (AI) industry to political leaders in Washington, as well as the European Commission’s struggles to get other jurisdictions to adopt the EU’s regulatory framework for AI.

Other topics included the Federal Communications Commission’s new cybersecurity label for IoT devices, the Environmental Protection Agency’s regulations for water-system cybersecurity, and the latest U.S. Justice Department/Federal Trade Commission draft merger-review guidelines.

The full episode is embedded below.

Data Security & Privacy

Norwegian Decision Banning Behavioral Advertising on Facebook and Instagram

TOTM

The Norwegian Data Protection Authority (DPA) on July 14 imposed a temporary three-month ban on “behavioural advertising” on Facebook and Instagram to users based in Norway. The decision relied on the “urgency procedure” under the General Data Protection Regulation (GDPR), which exceptionally allows direct regulatory interventions by national authorities other than the authority of the country where the business is registered (here: Ireland).

My initial view is that the decision both misuses the urgency procedure and mischaracterizes the leading judgment from the EU Court of Justice (CJEU) on which it purports to rely (see my analysis of that judgment: part 1 and part 2). The decision overlooks the critical legal issue that it is unclear to what extent the CJEU’s analysis applies to first-party personal data (collected by Facebook and Instagram), as the Court’s judgment expressly covered third-party data (collected “off-platform”).

Read the full piece here.

Data Security & Privacy

How the New Interoperability Mandate Could Violate the EU Charter

Popular Media

Among the regulatory tools created by the European Union’s Digital Markets Act (DMA)—landmark competition legislation that took effect across the EU last November—is a mandate that the largest digital-messaging services must be made interoperable. In the name of promoting fairness in digital markets, these gatekeeper services are required to allow external services to connect with them, enabling new and smaller players to compete.

Read the full piece here.

Data Security & Privacy

The CJEU’s Decision in Meta’s Competition Case: Sensitive Data and Privacy Enforcement by Competition Authorities (Part 2)

TOTM

Yesterday, I delved into the recent judgment in the Meta case (Case C-252/21) from the Court of Justice of the European Union (CJEU). I gave a preliminary analysis of the court’s view on some of the complexities surrounding the processing of personal data for personalized advertising under the GDPR, focusing on three lawful bases for data processing: contractual necessity, legitimate interests, and consent. I emphasized the importance of a nuanced understanding of the CJEU decision and pointed out that the decision does not determine definitively whether Meta can rely on legitimate interests or fall back on user consent for personalized advertising.

Read the full piece here.

Data Security & Privacy

The CJEU’s Decision in Meta’s Competition Case: Consequences for Personalized Advertising Under the GDPR (Part 1)

TOTM

Today’s judgment from the Court of Justice of the European Union (CJEU) in Meta’s case (Case C-252/21) offers new insights into the complexities surrounding personalized advertising under the EU General Data Protection Regulation (GDPR). In the decision, which gave the green light to the German competition authority’s (FCO) attempt to rely on the GDPR, the court also explored the lawful bases for data processing under the regulation, notably for personalized advertising.

Read the full piece here.

Data Security & Privacy

Antitrust at the Agencies Roundup: You Will Absolutely Work in This Town Again Edition

TOTM

Readers might recall my recent discussion of the Federal Trade Commission’s (FTC) new Bureau of Let’s Sue Meta, in which I covered, among other things, the commission’s proposal to modify its 2020 Decision and Order In the Matter of Facebook Inc. (now Meta). The 2020 order included complex behavioral requirements, in addition to a record-setting $5 billion penalty. One supposes that the consumer harm had been inestimable, given that the commission never did estimate it.

Read the full piece here.

Antitrust & Consumer Protection

Even Meta Deserves the Rule of Law

Popular Media

In Robert Bolt’s play “A Man for All Seasons,” the character of Sir Thomas More argues at one point that he would “give the Devil benefit of law, for my own safety’s sake!” Defending the right to due process for a broadly disliked company is similarly not the most popular position, but nonetheless, even Meta deserves the rule of law.

Read the full piece here.

Data Security & Privacy

Mikołaj Barczentewicz on Ireland’s Meta Fine

Presentations & Interviews

ICLE Senior Scholar Mikołaj Barczentewicz joined the Mobile Dev Memo podcast to discuss the Irish Data Protection Commission’s recent $1.3 billion fine levied against Meta over its transfers of EU residents’ data to the United States, and what the case means for the future of U.S.-EU data flows. The full episode is embedded below.

Data Security & Privacy