ICLE Brief Examines the EU’s Overly Broad Artificial Intelligence Act

LONDON (March 1, 2022) — A proposal currently under consideration to regulate artificial intelligence (AI) systems in the European Union (EU) employs definitions that threaten to turn the regulation into a law governing all software, according to a new issue brief published by the International Center for Law & Economics (ICLE).

Authored by ICLE Senior Scholar Mikołaj Barczentewicz, the brief examines Articles 3 and 5 of the Artificial Intelligence Act (AIA), both in the original version of the legislation published by the European Commission in April 2021 and in the compromise text promulgated by the European Council’s Presidency in November 2021. He notes that the overly broad definitions employed in the text differ widely from how the AIA has been presented to the public and to legislators.

“The general definition should be much more aligned with people’s intuitive understanding of what an AI system is,” Barczentewicz writes. “This would help to avoid the outcome of the AIA having significant unexpected effects on EU businesses and citizens, thereby offending the basic principles of the rule of law.”

According to Barczentewicz, the AIA text also includes poorly drafted and inadequately justified prohibitions on AI-based “social scoring” that could be interpreted to outlaw the kinds of risk-scoring systems currently employed by lenders and insurers. He adds that the prohibition on the use of “subliminal techniques” is similarly ill-founded and suggests that “the drafters may be conflating science and science fiction.”

Full text of the issue brief is available here. Mikołaj can be reached via email at [email protected]; he tweets at @MBarczentewicz.