Computational Antitrust Within Agencies: 3rd Annual Report


In the first quarter of 2024, the Stanford Computational Antitrust project team invited its partnering antitrust agencies to share their advances in implementing computational tools. The 16 contributions received offer broad geographical representation and an overview of global developments. On the substantive side, three main trends can be discerned: i) an emphasis on detecting bid-rigging practices; ii) a focus on practices that harm end consumers (e.g., price increases for fuel and airline tickets; detection of dark patterns); and iii) significant investment in measuring public perception of market competition. On the computational side, the key developments are: i) the gradual integration of large language models (LLMs) into agencies' daily operations; ii) increasing reliance on machine learning (ML) tools for analyzing large volumes of textual data; and iii) the development of proof-of-concept Application Programming Interfaces (APIs).
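To make the first trend concrete, a common statistical screen for bid rigging flags auctions where supposedly independent bids cluster unusually tightly. The sketch below is purely illustrative and not drawn from the report: the coefficient-of-variation measure is a standard screen in the literature, but the 0.05 threshold and the sample data are arbitrary placeholders.

```python
from statistics import mean, pstdev

def coefficient_of_variation(bids):
    """Ratio of the standard deviation to the mean of one auction's bids."""
    m = mean(bids)
    return pstdev(bids) / m if m else 0.0

def flag_suspicious_auctions(auctions, cv_threshold=0.05):
    """Return IDs of auctions whose bids cluster unusually tightly.

    Low dispersion among competing bids is a classic red flag for
    coordination. The threshold is an arbitrary placeholder, not a
    value taken from any agency's methodology.
    """
    return [auction_id for auction_id, bids in auctions.items()
            if len(bids) >= 3 and coefficient_of_variation(bids) < cv_threshold]

# Illustrative data: auction "A" has implausibly tight bids.
auctions = {
    "A": [100.0, 100.5, 101.0],  # CV ~ 0.004 -> flagged
    "B": [80.0, 104.0, 131.0],   # CV ~ 0.20  -> not flagged
}
print(flag_suspicious_auctions(auctions))  # → ['A']
```

In practice, agencies combine screens like this with richer ML models over bidding histories; the point here is only to show the shape of a simple detection rule.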

Read at SSRN.