Section 2 Symposium: David Evans–An Economist’s View
David Evans is Head, Global Competition Policy Practice, LECG; Executive Director, Jevons Institute for Competition Law and Economics, and Visiting Professor, University College London; and Lecturer, University of Chicago.
The treatment of unilateral conduct remains an intellectual and policy mess as we finish out the first decade of the 21st century. There were signs of hope a few years ago. The European Commission embarked on an effort to adopt an effects-based approach to unilateral conduct and to move away from the analytically empty, object-based approach developed by the European Courts. Meanwhile, the Federal Trade Commission and the U.S. Department of Justice embarked on a series of hearings on unilateral conduct that brought the best thinkers together in the hope of achieving some consensus. Those hopes were dashed in 2008. The Justice Department and the FTC splintered. The DOJ issued a lengthy report that, for all intents and purposes, argued for significantly limiting the circumstances under which a business practice could be found to constitute anticompetitive unilateral conduct. Three of the four sitting Federal Trade Commissioners quickly asserted their fundamental disagreement. Towards the end of the year the European Commission finally issued a document that adopted an effects-based approach, sort of, but only as guidance for its prosecutorial discretion over which cases to focus its resources on. I say “sort of” because, although much of the framework it adopts is quite sensible, the Commission erects virtually insurmountable obstacles to the consideration of efficiencies. (For a comparative review of the EC and DOJ reports, see my article here.)
The incoherence and discord in this area of antitrust law fundamentally result from a failure on the part of economists and other antitrust scholars to roll up their sleeves and do significant empirical work. Instead, two kinds of flags get waved to urge the respective followers on. The first involves the famous economic possibility theorem: “it could, therefore it will.” That is a statement based on a seemingly scientific economic model that says something might happen under some conditions. Oftentimes those conditions are unstated, buried in footnotes, or hidden in mathematics that only the careful reader uncovers. (See my article with Jorge Padilla here.) Many are guilty of waving possibility theorems around. But in my experience the pro-intervention crowd, including the competition authorities, is the most enamored of untested, assumption-driven economic theory.
The second involves error costs. These costs can go in either direction, but it is the non-intervention crowd that has cited them the most. The argument is that courts will mistakenly condemn some pro-competitive practices in the course of condemning anti-competitive ones, and that these mistakes will tend to discourage firms from engaging in pro-competitive behavior, such as offering really low prices. This error cost framework, together with the notion that false positives are highly likely and costly, forms the spine of the Justice Department report. Unfortunately, while one can debate the likelihood of errors and their costs (and that is a worthwhile exercise), there is essentially no empirical evidence on either. Error costs are easy to invoke, hard to demonstrate.
If the debate on unilateral practices continues in its current vein, it will be resolved by whoever can scream the loudest, get elected, or appoint the judiciary. That really isn’t a very satisfactory outcome for two disciplines, law and economics, that pride themselves, in different ways, on uncovering truths. To make any progress, the antitrust profession and its industrial economics handmaiden need to place greater value on empirical work and get on with developing fact-based analyses. As Michael Salinger and I have observed, one can learn a lot about business practices by understanding whether, and to what extent, they are used by competitive firms. There are many other avenues for empirical research. The United States provides a useful laboratory: has anticompetitive predation increased significantly since the courts made it almost impossible for a plaintiff to win a predation case? At the same time, divergent antitrust rules (and private enforcement) in the 50 U.S. states provide another remarkable natural laboratory in which to test the efficiency of various practices and the efficacy of various enforcement decisions. Meanwhile, the antitrust profession needs to impose analytical and empirical rigor on the error cost framework. There is virtually no empirical work on the frequency of errors in antitrust matters or on their costs. At least part of that empirical work needs to come in the form of retrospectives on cases, in the United States and elsewhere, in which plaintiffs or defendants have won.
We have made great progress over the last 50 years in turning antitrust into a rigorous discipline based on theory and evidence. There’s no reason that progress can’t be extended to unilateral conduct, despite the hiccups of 2008.