
Showing 9 of 274 Results in EU

The Missing Element in the Google Case

TOTM

Through laudable competition on the merits, Google achieved a usage share of nearly 90% in “general search services.” About a decade later, the government alleged that Google had maintained its dominant share through exclusionary practices violating Section 2 of the Sherman Antitrust Act. The case was tried in U.S. District Court in Washington, D.C. last fall, and the parties made post-trial filings this year.

Read the full piece here.

Antitrust & Consumer Protection

Lazar Radic on DMA Implementation

Presentations & Interviews

ICLE Senior Scholar Lazar Radic joined Associazione Copernicani and moderator Carlo Alberto Carnevale Maffè for a discussion (in Italian) of the European Union’s Digital Markets Act and the European Commission’s noncompliance investigations of Alphabet, Apple, and Meta. Video of the full discussion is embedded below.

Antitrust & Consumer Protection

India Should Question Europe’s Digital-Regulation Strategy

TOTM

A year after it was created by the Government of India’s Ministry of Corporate Affairs to examine the need for a separate law on competition in digital markets, India’s Committee on Digital Competition Law (CDCL) in February both published its report recommending adoption of such rules and submitted the draft Digital Competition Act (DCA), which is virtually identical to the European Union’s Digital Markets Act (DMA).

Read the full piece here.

Antitrust & Consumer Protection

The View from Australia: A TOTM Q&A with Allan Fels

TOTM

Our latest guest in Truth on the Market’s “Global Voices Forum” series is Professor Allan Fels, AO, of the University of Melbourne Law School. Allan is the retired foundation dean of the Australia and New Zealand School of Government (ANZSOG). Perhaps more famously, he was the chair of the Australian Competition & Consumer Commission (ACCC) from its inception in 1995 until June 2003.

Read the full piece here.

Antitrust & Consumer Protection

A Report Card on the Impact of Europe’s Privacy Regulation (GDPR) on Digital Markets

Scholarship

Abstract

This Article will evaluate the consequences of the General Data Protection Regulation (“GDPR”) implemented by the European Union (“EU”) in 2018. Despite its aim to bolster user privacy, empirical evidence from thirty-one studies suggests a nuanced impact on market dynamics. While the GDPR modestly enhanced user data protection, it also triggered adverse effects, including diminished startup activity and innovation, as well as increased market concentration. Further, this Article will uncover a complex narrative in which gains in privacy are offset by compliance costs that disproportionately affect smaller firms. This Article will also highlight the challenges of regulating highly innovative markets, which is particularly important given subsequent EU regulations, such as the Digital Markets Act (“DMA”) and Digital Services Act (“DSA”). As other jurisdictions consider similar privacy regulations, the GDPR experience serves as a cautionary tale.

Read at SSRN.

Data Security & Privacy

The Future of the DMA: Judge Dredd or Juror 8?

TOTM

When it was passed into law, the European Union’s Digital Markets Act (DMA) was heralded by supporters as a key step toward fairness and contestability in online markets. It has unfortunately become increasingly clear that reality might not live up to those expectations. Indeed, there is mounting evidence that European consumers’ online experiences have been degraded following the DMA’s entry into force.

The perception that the DMA has been a failure is beginning to motivate a not-insignificant amount of finger-pointing in Brussels. So-called “gatekeeper” firms have blamed heavy-handed regulation for their degraded services, while smaller rivals point to “malicious compliance.”

Read the full piece here.

Antitrust & Consumer Protection

Does the DMA Let Gatekeepers Protect Data Privacy and Security?

TOTM

It’s been an eventful two weeks for those following the story of the European Union’s implementation of the Digital Markets Act. On April 18, the European Commission began a series of workshops with the companies designated as “gatekeepers” under the DMA: Apple, Meta, Alphabet, Amazon, ByteDance, and Microsoft. And even as those workshops were still ongoing, the Commission announced noncompliance investigations against Alphabet, Apple, and Meta. Finally, the European Parliament’s Internal Market and Consumer Protection Committee (IMCO) held its own session on DMA implementation.

Many aspects of those developments are worth commenting on, and you can expect more competition-related analysis on Truth on the Market soon. Here, I will focus on what these developments mean for data privacy and security.

Read the full piece here.

Data Security & Privacy

Lessons from GDPR for AI Policymaking

Scholarship

Abstract

The ChatGPT chatbot has not just caught the public imagination; it is also amplifying concern across industry, academia, and government policymakers interested in the regulation of Artificial Intelligence (AI) about how to understand the risks and threats associated with AI applications. Following the release of ChatGPT, some EU regulators proposed changes to the EU AI Act to classify AI systems like ChatGPT that generate complex texts without any human oversight as “high-risk” AI systems that would fall under the law’s requirements. That classification was a controversial one, with other regulators arguing that technologies like ChatGPT, which merely generate text, are “not risky at all.” This controversy risks disrupting coherent discussion and progress toward formulating sound AI regulations for Large Language Models (LLMs), AI, or ICTs more generally. It remains unclear where ChatGPT fits within AI and where AI fits within the larger context of digital policy and the regulation of ICTs in spite of nascent efforts by OECD.AI and the EU.

This paper aims to address two research questions around AI policy: (1) How are LLMs like ChatGPT shifting the policy discussions around AI regulations? (2) What lessons can regulators learn from the EU’s General Data Protection Regulation (GDPR) and other data protection policymaking efforts that can be applied to AI policymaking?

The first part of the paper addresses the question of how ChatGPT and other LLMs have changed the policy discourse in the EU and other regions around regulating AI, and what the broader implications of these shifts may be for AI regulation more widely. This section reviews the existing proposal for an EU AI Act and its accompanying classification of high-risk AI systems, considers the changes prompted by the release of ChatGPT, and examines how LLMs appear to have altered policymakers’ conceptions of the risks presented by AI. Finally, we present a framework for understanding how the security and safety risks posed by LLMs fit within the larger context of risks presented by AI and current efforts to formulate a regulatory framework for AI.

The second part of the paper considers the similarities and differences between the proposed AI Act and GDPR in terms of (1) organizations being regulated, or scope, (2) reliance on organizations’ self-assessment of potential risks, or degree of self-regulation, (3) penalties, and (4) technical knowledge required for effective enforcement, or complexity. For each of these areas, we consider how regulators scoped or implemented GDPR to make it manageable, enforceable, meaningful, and consistent across a wide range of organizations handling many different kinds of data as well as the extent to which they were successful in doing so. We then examine different ways in which those same approaches may or may not be applicable to the AI Act and the ways in which AI may prove more difficult to regulate than issues of data protection and privacy covered by GDPR. We also look at the ways in which AI may make it more difficult to enforce and comply with GDPR since the continued evolution of AI technologies may create cybersecurity tools and threats that will impact the efficacy of GDPR and privacy policies. This section argues that the extent to which the proposed AI Act relies on self-regulation and the technical complexity of enforcement are likely to pose significant challenges to enforcement based on the implementation of the most technologically and self-regulation-focused elements of GDPR.

Innovation & the New Economy

The Apple Music Streaming Case: The Good, The Bad, and The Ugly

Popular Media

On March 4, 2024, the European Commission fined Apple €1.84 billion “over abusive App store rules for music streaming providers.” In particular, the Commission was concerned about the anti-steering provisions that Apple imposed on these providers. Although the full decision has not yet been published (I am told it could be a matter of months), the public information underpinning this decision is already interesting on several levels. In the following, I explore the good (1.), the bad (2.), and the ugly (3.) of the “App Store Practices (music streaming)” decision, based on the information available as of this writing.

Read the full piece here.

Antitrust & Consumer Protection