Showing 9 of 32 Publications in GDPR

Confronting the DMA’s Shaky Suppositions

TOTM

It’s easy for politicians to make unrealistic promises. Indeed, without a healthy skepticism on the part of the public, they can grow like weeds. In the world of digital policy, the European Union’s Digital Markets Act (DMA) has proven fertile ground for just such promises. We’ve been told that large digital platforms are the source of many economic and social ills, and that handing more discretionary power to the government can solve these problems with no apparent side effects or costs.

Read the full piece here.

Antitrust & Consumer Protection

EU Authorities on ‘Pay or Consent’: Mid-April 2024 Update

Popular Media

With Meta’s adoption of a “pay or consent” model for Facebook and Instagram, the model has become a key issue not only under EU privacy law but also under the new digital regulations: the Digital Services Act (DSA) and the Digital Markets Act (DMA). Given the barrage of pay-or-consent-related news in recent months, I thought it would be a good idea to take stock of where we are now.

Read the full piece here.

Data Security & Privacy

A Report Card on the Impact of Europe’s Privacy Regulation (GDPR) on Digital Markets

Scholarship

Abstract

This Article will evaluate the consequences of the General Data Protection Regulation (“GDPR”) implemented by the European Union (“EU”) in 2018. Despite its aim to bolster user privacy, empirical evidence from thirty-one studies suggests a nuanced impact on market dynamics. While GDPR modestly enhanced user data protection, it also triggered adverse effects, including diminished startup activity and innovation, as well as increased market concentration. Further, this Article will uncover a complex narrative in which gains in privacy are offset by compliance costs that disproportionately affect smaller firms. This Article will also highlight the challenges of regulating highly innovative markets, which is particularly important given subsequent EU regulations, such as the Digital Markets Act (“DMA”) and Digital Services Act (“DSA”). As other jurisdictions consider similar privacy regulations, the GDPR experience serves as a cautionary tale.

Read at SSRN.

Data Security & Privacy

Does the DMA Let Gatekeepers Protect Data Privacy and Security?

TOTM

It’s been an eventful two weeks for those following the story of the European Union’s implementation of the Digital Markets Act. On April 18, the European Commission began a series of workshops with the companies designated as “gatekeepers” under the DMA: Apple, Meta, Alphabet, Amazon, ByteDance, and Microsoft. And even as those workshops were still ongoing, the Commission announced noncompliance investigations against Alphabet, Apple, and Meta. Finally, the European Parliament’s Internal Market and Consumer Protection Committee (IMCO) held its own session on DMA implementation.

Many aspects of those developments are worth commenting on, and you can expect more competition-related analysis on Truth on the Market soon. Here, I will focus on what these developments mean for data privacy and security.

Read the full piece here.

Data Security & Privacy

Lessons from GDPR for AI Policymaking

Scholarship

Abstract

The ChatGPT chatbot has not just caught the public imagination; it is also amplifying concern across industry, academia, and government policymakers interested in the regulation of Artificial Intelligence (AI) about how to understand the risks and threats associated with AI applications. Following the release of ChatGPT, some EU regulators proposed changes to the EU AI Act to classify AI systems like ChatGPT that generate complex texts without any human oversight as “high-risk” AI systems that would fall under the law’s requirements. That classification was controversial, with other regulators arguing that technologies like ChatGPT, which merely generate text, are “not risky at all.” This controversy risks disrupting coherent discussion and progress toward formulating sound AI regulations for Large Language Models (LLMs), AI, or ICTs more generally. Despite nascent efforts by OECD.AI and the EU, it remains unclear where ChatGPT fits within AI, and where AI fits within the larger context of digital policy and the regulation of ICTs.

This paper aims to address two research questions around AI policy: (1) How are LLMs like ChatGPT shifting the policy discussions around AI regulations? (2) What lessons can regulators learn from the EU’s General Data Protection Regulation (GDPR) and other data protection policymaking efforts that can be applied to AI policymaking?

The first part of the paper addresses the question of how ChatGPT and other LLMs have changed the policy discourse in the EU and other regions around regulating AI, and what the broader implications of these shifts may be for AI regulation more widely. This section reviews the existing proposal for an EU AI Act and its accompanying classification of high-risk AI systems, considers the changes prompted by the release of ChatGPT, and examines how LLMs appear to have altered policymakers’ conceptions of the risks presented by AI. Finally, we present a framework for understanding how the security and safety risks posed by LLMs fit within the larger context of risks presented by AI and current efforts to formulate a regulatory framework for AI.

The second part of the paper considers the similarities and differences between the proposed AI Act and GDPR in terms of (1) the organizations being regulated, or scope; (2) reliance on organizations’ self-assessment of potential risks, or degree of self-regulation; (3) penalties; and (4) the technical knowledge required for effective enforcement, or complexity. For each of these areas, we consider how regulators scoped or implemented GDPR to make it manageable, enforceable, meaningful, and consistent across a wide range of organizations handling many different kinds of data, as well as the extent to which they were successful in doing so. We then examine different ways in which those same approaches may or may not be applicable to the AI Act, and the ways in which AI may prove more difficult to regulate than the issues of data protection and privacy covered by GDPR. We also look at the ways in which AI may make it more difficult to enforce and comply with GDPR, since the continued evolution of AI technologies may create cybersecurity tools and threats that will impact the efficacy of GDPR and privacy policies. This section argues that the proposed AI Act’s reliance on self-regulation, and the technical complexity of its enforcement, are likely to pose significant challenges, judging by the implementation of the most technologically complex and self-regulation-focused elements of GDPR.

Innovation & the New Economy

From Europe, with Love: Lessons in Regulatory Humility Following the DMA Implementation

TOTM

The European Union’s implementation of the Digital Markets Act (DMA), whose stated goal is to bring more “fairness” and “contestability” to digital markets, could offer some important regulatory lessons to those countries around the world that have been rushing to emulate the Old Continent.

Read the full piece here.

Antitrust & Consumer Protection

Consent for Everything? EDPB Guidelines on URL, Pixel, IP Tracking

Popular Media

You may know that the culprit behind cookie consent banners is not the GDPR but the older ePrivacy Directive, specifically its Article 5(3). The EDPB, a representative body of EU national data protection authorities, has just issued new Guidelines on this law. Setting aside that the EDPB arguably lacked the authority to issue the Guidelines, their new interpretation is very expansive: they would expect consent for e-mail pixel tracking, URL tracking, and IP tracking. In their view, consent would generally be required for all Internet communication unless very limited exceptions apply (exceptions even more restrictive than those under the GDPR).

Read the full piece here.

Data Security & Privacy

Netflix, Disney+, and Meta: What’s an ‘Appropriate Fee’ for a Subscription?

Popular Media

“What is an appropriate fee?” is among the key questions in the current conversation around Meta’s move to introduce paid subscription options with no ads on Facebook and Instagram. As I discussed previously, the EU’s highest court suggested that businesses may be allowed under the GDPR to offer their users a choice between (1) agreeing to personalised advertising and (2) “if necessary” paying “an appropriate fee” for an alternative service tier. In that text, I also raised some of the legal and economic difficulties in determining an appropriate fee. Eric Seufert followed with a thoughtful analysis. (By the way, don’t miss the next episode of Eric’s podcast in which we’ll discuss this and related issues.) Eric proposed two alternative “conditions for calculating whether a ‘pay-or-okay’ price point represents an ‘appropriate fee’”:

  1. The price achieves, at most, overall ARPU parity between the pre-subscription and post-subscription periods; and
  2. The fee doesn’t materially exceed those charged by comparable services.
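The first condition, overall ARPU parity, reduces to simple arithmetic: the fee ceiling is whatever price leaves blended per-user revenue no higher than it was before subscriptions were introduced. A minimal sketch of that calculation, using invented illustrative figures (none of these numbers come from Meta or from Seufert’s piece):

```python
def max_fee_for_arpu_parity(arpu_pre, total_users, subscribers, ad_arpu_nonsub):
    """Highest monthly fee consistent with overall ARPU parity.

    Post-subscription overall ARPU is
        (subscribers * fee + non_subscribers * ad_arpu_nonsub) / total_users.
    Setting that equal to the pre-subscription ARPU and solving for the fee
    gives the parity ceiling. All inputs here are hypothetical.
    """
    non_subscribers = total_users - subscribers
    return (arpu_pre * total_users - non_subscribers * ad_arpu_nonsub) / subscribers

# Hypothetical example: 100 users, pre-subscription ARPU of EUR 10/month;
# 5 users subscribe, and the remaining 95 still yield EUR 10 each from ads.
print(max_fee_for_arpu_parity(10.0, 100, 5, 10.0))  # → 10.0
```

Note that if ad revenue per remaining free user falls (say, because subscribers were the most valuable ad targets), the parity ceiling rises, which is one reason determining an “appropriate fee” this way is harder than it first appears.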

Read the full piece here.

Data Security & Privacy

Meta’s Paid Subscriptions: Are They Legal? What Will EU Authorities Do?

Popular Media

Meta gave European users of Facebook and Instagram a choice between paying for a no-ads experience or keeping the services free of charge but with ads. As I discussed previously (Facebook, Instagram, “pay or consent” and necessity to fund a service and EDPB: Meta violates GDPR by personalised advertising. A “ban” or not a “ban”?), the legal reality behind that choice is more complex. Users who continue without paying are asked to consent to the processing of their data for personalized advertising. In other words, this is a “pay or consent” framework for processing first-party data.

I was asked by IAPP, “the largest privacy association in the world and a leader in the privacy industry,” to discuss this. I also thought that the text I wrote for them could use some additional explanations for this Substack’s audience. What follows is an expanded version of the text published by IAPP. (If this text is too long, I suggest reading just the next section.)

Read the full piece here.

Data Security & Privacy