
Microsoft undermines its own case

Popular Media

One of my favorite stories in the ongoing saga over the regulation (and thus the future) of Internet search emerged earlier this week with claims by Google that Microsoft has been copying its answers–using Google search results to bolster the relevance of its own results for certain search terms.  The full story from Internet search journalist extraordinaire, Danny Sullivan, is here, with a follow-up discussing Microsoft’s response here.  The New York Times is also on the case, with some interesting comments from a former Googler that feed nicely into the Schumpeterian competition angle (discussed below).  And Microsoft consultant (“though on matters unrelated to issues discussed here”) and Harvard Business prof Ben Edelman coincidentally echoes precisely Microsoft’s response in a blog post here.

What I find so great about this story is how it seems to resolve one of the most significant strands of the ongoing debate–although, from Microsoft’s point of view, it does so unintentionally, to be sure.

Here’s what I mean.  Back when Microsoft first started being publicly identified as a significant instigator of regulatory and antitrust attention paid to Google, the company, via its chief competition counsel, Dave Heiner, defended its stance in large part on the following ground:

All of this is quite important because search is so central to how people navigate the Internet, and because advertising is the main monetization mechanism for a wide range of Web sites and Web services. Both search and online advertising are increasingly controlled by a single firm, Google. That can be a problem because Google’s business is helped along by significant network effects (just like the PC operating system business). Search engine algorithms “learn” by observing how users interact with search results. Google’s algorithms learn less common search terms better than others because many more people are conducting searches on these terms on Google.

These and other network effects make it hard for competing search engines to catch up. Microsoft’s well-received Bing search engine is addressing this challenge by offering innovations in areas that are less dependent on volume. But Bing needs to gain volume too, in order to increase the relevance of search results for less common search terms. That is why Microsoft and Yahoo! are combining their search volumes. And that is why we are concerned about Google business practices that tend to lock in publishers and advertisers and make it harder for Microsoft to gain search volume. (emphasis added).

Claims of “network effects,” “increasing returns to scale,” and the absence of “minimum viable scale” for competitors run rampant (and unsupported) in the various cases against Google.  The TradeComet complaint, for example, claims that

[t]he primary barrier to entry facing vertical search websites is the inability to draw enough search traffic to reach the critical mass necessary to become independently sustainable.

But now we discover (what we should have known all along) that “learning by doing” is not the only way to obtain the data necessary to generate relevant search results: “Learning by copying” works, as well.  And there’s nothing wrong with it–in fact, the very process of Schumpeterian creative destruction assumes imitation.

As Armen Alchian notes in describing his evolutionary process of competition,

Neither perfect knowledge of the past nor complete awareness of the current state of the arts gives sufficient foresight to indicate profitable action . . . [and] the pervasive effects of uncertainty prevent the ascertainment of actions which are supposed to be optimal in achieving profits.  Now the consequence of this is that modes of behavior replace optimum equilibrium conditions as guiding rules of action. First, wherever successful enterprises are observed, the elements common to these observable successes will be associated with success and copied by others in their pursuit of profits or success. “Nothing succeeds like success.”

So on the one hand, I find the hand-wringing about Microsoft’s “copying” of Google’s results to be completely misplaced–just as the pejorative connotations of “embrace and extend,” deployed against Microsoft itself when it was the target of this sort of scrutiny, were bogus.  But, at the same time, I see this dynamic essentially decimating Microsoft’s (and others’) claims that Google has an unassailable position because no competitor can ever hope to match its size, and thus its access to information essential to the quality of search results, particularly when it comes to so-called “long-tail” search terms.

Long-tail search terms are queries that are extremely rare and, thus, for which there is little user history (information about which results searchers found relevant and clicked on) to guide future search results.  As Ben Edelman writes in his blog post (linked above) on this issue (trotting out, even while implicitly undercutting, the “minimum viable scale” canard):

Of course the reality is that Google’s high market share means Google gets far more searches than any other search engine. And Google’s popularity gives it a real advantage: For an obscure search term that gets 100 searches per month at Google, Bing might get just five or 10. Also, for more popular terms, Google can slice its data into smaller groups — which results are most useful to people from Boston versus New York, which results are best during the day versus at night, and so forth. So Google is far better equipped to figure out what results users favor and to tailor its listings accordingly. Meanwhile, Microsoft needs additional data, such as Toolbar and Related Sites data, to attempt to improve its results in a similar way.

But of course the “additional data” to which Microsoft has access here is, to a large extent, the same data that Google has.  Danny Sullivan’s follow-up story (also linked above) suggests that Bing doesn’t do all it could to make use of Google’s data: it does not, it seems, copy Google search results wholesale, nor does it exploit user behavior as fully as it could (by, for example, observing searches on Google and logging the next page visited, which would give Bing a pretty good idea of which sites in Google’s results users found most relevant).  But that doesn’t change the fundamental fact that Microsoft and other search engines can overcome much of the so-called barrier to entry afforded by Google’s impressive scale simply by imitating much of what Google does (and, one hopes, also innovating enough to offer something better).

Perhaps Google is “better equipped to figure out what users favor.”  But it seems to me that only a trivial amount of this advantage is plausibly attributable to Google’s scale rather than to its engineering and innovation.  The fact that Microsoft can (because of its own impressive scale in various markets) and does take advantage of accessible data to benefit indirectly from Google’s own prowess in search is a testament to the irrelevance of these unfortunately pervasive scale and network-effect arguments.

Filed under: antitrust, armen alchian, business, google, markets, monopolization, technology Tagged: antitrust, Armen Alchian, Bing, Danny Sullivan, economies of scale, google, Google Search, Internet search, microsoft, minimum viable scale, network effects

Antitrust & Consumer Protection

Is Antitrust Too Complicated for Generalist Judges? The Impact of Economic Complexity & Judicial Training on Appeals

Scholarship

Abstract

The recent increase in the demand for expert economic analysis in antitrust litigation has improved the welfare of economists; however, the law and economics literature is silent on the effects of economic complexity or judges’ economic training on judicial decision-making. We use a unique data set on antitrust litigation in federal district and administrative courts during 1996-2006 to examine whether economic complexity impacts antitrust decisions, and provide a novel test of the hypothesis that antitrust analysis has become too complex for generalist judges. We also examine the impact of basic economic training by judges. We find that decisions involving the evaluation of complex economic evidence are significantly more likely to be appealed, and decisions of judges trained in basic economics are significantly less likely to be appealed than are decisions by their untrained counterparts. Our analysis supports the hypothesis that some antitrust cases are too complicated for generalist judges.

Antitrust & Consumer Protection

Epstein on Obama at U of C

Popular Media

It’s pretty hard to cycle through the University of Chicago Law School (or at least it used to be back when I was a student) without gaining an appreciation for the extent to which markets, while subject to occasional failures, enhance human welfare by channeling resources to their highest and best ends. It’s also hard to spend much time at Chicago without coming to understand that government interventions, despite the best intentions of the planners, often fail, given planners’ limited knowledge (see, e.g., Hayek) and bureaucrats’ tendencies to act, like the rest of us, in a self-interested fashion (see, e.g., the public choice literature). Indeed, even left-leaning Chicagoans like Cass Sunstein, for whom I have tremendous respect, appreciate these ideas and therefore tend to advocate (somewhat) limited government interventions that are targeted at real market failures and that preserve space for private ordering. (See, e.g., Sunstein’s “libertarian paternalism,” which has occasionally been derided on this blog but is a far cry from the “paternalist paternalism” we’ve been seeing from the current Administration.)

When President Obama was elected, I hoped and expected that his time at Chicago would influence his policy prescriptions. It hasn’t done so. Not only did he push through two of the most market-insensitive and government-confident pieces of legislation in modern history (the stimulus and the health care law), but even his “move to the center” speech following a mid-term shellacking advocated central planning in the form of pick-the-winner “investments” in green technologies, high-speed rail, etc. His answer to economic stagnation isn’t sound money and the creation of institutions that permit entrepreneurs to flourish without fear of excessive regulation and confiscation. Instead, he wants government to step up and push America forward, as it did in the space race with the Russians: “This is our generation’s Sputnik moment!”

I’ve often wondered how Mr. Obama managed to spend so many years at Chicago without absorbing the ideas that seem to saturate the place. Richard Epstein offers an answer in today’s Wall Street Journal (which quotes an interview Epstein gave to Reason TV):

Reason: The economy has lost 3.3 million jobs, consumer confidence is half its historical average, and unemployment is 9 percent. To what extent is Obama responsible for this?

Richard Epstein: He’s not largely or exclusively responsible, but he’s certainly added another nail into the coffin. The early George Bush—I think he got a little bit better through his term—and Obama have a lot in common. Bush wanted a pint-sized stimulus program that failed and Obama wanted a giant-sized stimulus program that failed. Neither of them is a strong believer in laissez-faire principles. The difference between them, which is why Obama is the more dangerous man ultimately, is he has very little by way of a skill set to understand the complex problems he wants to address, but he has this unbounded confidence in himself.

Reason: So he’s the perfect Chicago faculty member.

Epstein: He was actually a bad Chicago faculty member in this sense: He was an adjunct, and we always hoped he’d participate in the general intellectual discourse, but he was always so busy with collateral adventures that he essentially kept to himself. The problem when you keep to yourself is you don’t get to hear strong ideas articulated by people who disagree with you. So he passed through Chicago without absorbing much of the internal culture.

So that’s it….

Filed under: markets, musings, politics


Pro-Business v. Pro-Growth

Popular Media

Don Boudreaux explains the distinction with reference to President Obama’s State of the Union address.

Filed under: business, economics, politics

Financial Regulation & Corporate Governance

GMU Law & Economics Center Workshop on Empirical and Experimental Methods for Law Professors

Popular Media

Details are available here.  It should be an excellent program and I’m very pleased to be a part of it.  If you are a law professor and interested, but have questions, please don’t hesitate to contact me.   The link for applications is below.
Location: George Mason University School of Law | Event Date: Monday, May 23 to Thursday, May 26, 2011

The Workshop on Empirical and Experimental Methods for Law Professors is designed to teach law professors the conceptual and practical skills required to (1) understand and evaluate others’ empirical studies, and (2) design and implement their own empirical studies. Participants are not expected to have a background in statistics or empirical methods prior to enrollment. Instructors have been selected in part to demonstrate the development of empirical studies in a wide range of legal and institutional settings, including: antitrust, business law, bankruptcy, class actions, contracts, criminal law and sentencing, federalism, finance, intellectual property, and securities regulation. Class sessions will provide participants opportunities to learn through faculty lectures drawing upon data and examples from cutting-edge empirical legal studies, and through participation in experiments. There will be numerous opportunities for participants to discuss their own works-in-progress or project ideas with the instructors.

WORKSHOP FACULTY:

David Abrams, Ph.D., University of Pennsylvania School of Law, http://www.law.upenn.edu/cf/faculty/dabrams/

Eric Helland, Ph.D., Claremont-McKenna College, http://www.cmc.edu/academic/faculty/profile.asp?Fac=159

Jonathan Klick, J.D., Ph.D., University of Pennsylvania School of Law, http://www.law.upenn.edu/cf/faculty/jklick/

Bruce Kobayashi, Ph.D., George Mason University School of Law, http://www.law.gmu.edu/faculty/directory/fulltime/kobayashi_bruce

Kevin McCabe, Ph.D., George Mason University School of Economics and Law, http://www.law.gmu.edu/faculty/directory/fulltime/mccabe_kevin

Joshua Wright, J.D., Ph.D., George Mason University School of Law, http://www.law.gmu.edu/faculty/directory/fulltime/wright_joshua

SCHEDULE:

The Workshop will take place at:
George Mason University School of Law
3301 N. Fairfax Drive
Arlington, VA 22201
http://law.gmu.edu

The Workshop will begin on Monday, May 23, at 8:30 am and conclude on Thursday, May 26, at 12:00 pm. Classes on May 23, 24, and 25 will run from 8:30 am to 5:00 pm, and include lectures, group sessions, and opportunities for participants to present their own empirical projects or “works in progress.”

Topics covered include:

• Research Design
• Finding Data
• Basic Probability Theory
• Descriptive Statistics
• Formulating Testable Hypotheses
• Specification
• Statistical Inference
• Cross-Sectional Regression
• Time Series Regression
• Panel Data Techniques
• Sensitivity Analysis
• Experimental Methods

REGISTRATION AND TUITION:

Tuition for the Workshop on Empirical and Experimental Methods is $850 for the first professor from a law school and $500 for additional registrants from the same school.

Tuition includes all session materials, access to statistical software (STATA), three lunches, four continental breakfasts, and one evening reception. You will need a laptop for this workshop. A check for $850 made payable to George Mason University Foundation (please note on the check that it is for “Empirical Workshop Tuition”) must be included with the registration form. Registration and payment should be received by May 13, 2011. Space is limited and will be allocated on a first-come, first-accepted basis.

CANCELLATION POLICY:

Full refunds for cancellation of attendance to the Workshop on Empirical and Experimental Methods will be made for all written cancellations received before 5:00 p.m. on Monday, May 16th. No refunds will be given for any cancellations received after Monday, May 16th.

ACCOMMODATIONS:

GMU School of Law is conveniently located across the Potomac from Washington, DC with easy access to the Metro’s Orange Line (Virginia Square Station).

Special hotel rates for workshop participants are available at two hotels within walking distance of the GMU School of Law: the Comfort Inn Ballston, (1211 N. Glebe Rd) at a group rate of $149 per night, and the AKA Virginia Square (3409 Wilson Blvd) at a group rate of $211 per night.

To make a reservation at the Comfort Inn call (703) 247-3399. The hotel’s website can be viewed at:
http://www.comfortinn.com/hotel-arlington-virginia-VA417

To make a reservation at the AKA Virginia Square contact Scott Foster at (202) 904-2505, or by email at [email protected]. The hotel’s website can be viewed at: http://www.hotelaka.com/locations/virginia_square/default.aspx

To obtain the workshop rates, please mention the George Mason School of Law Workshop on Empirical and Experimental Methods. In order to receive these special rates you must book your room by April 23, 2011. There are limited rooms available so please make your reservations as soon as possible.

LEC CONTACT:

Jeff Smith
703-993-8101
[email protected]

Click Here to Apply

Filed under: announcements, economics, george mason university school of law, law and economics, law school, legal scholarship, scholarship


No, Nudge Was Not on Trial

Popular Media

Slate’s David Weigel ran an otherwise informative piece on Cass Sunstein’s testimony, as head of OIRA, at a recent House Energy and Commerce Committee hearing.  The headline?  Nudge on Trial: Cass Sunstein Defends the White House Against a Republican Attack.  From Weigel’s description of the hearing, there was some general hand-wringing about whether there is too much or too little regulation, whether the number of regulations is higher or lower under the Obama administration than during the George W. Bush administration, and some run-of-the-mill posturing from questioners.  Weigel concludes the hearing ended “with Sunstein having done no obvious harm to his mission.”

All fine.  And like I said — the article was informative with regard to the hearing.  But it’s a pretty sloppy and misleading headline.   There are, I think, some interesting issues concerning whether the Obama administration has retreated from behavioral economics à la Nudge, whether (as Ezra Klein contends) Republicans hate behavioral economics, and the role of behavioral economics in administrative agencies and regulation.   Unfortunately, the article really wasn’t about Nudging or behavioral-economics-based regulation at all.  If Nudge is going to have its trial, it will be on another day.

In the meantime, you can start here for some good background reading on the topic.

Filed under: behavioral economics, free to choose symposium, journalism, regulation

Financial Regulation & Corporate Governance

Does the Voluntary Industry “Agreement” to Ban Phosphates in Dishwasher Detergents Violate Section 1?

Popular Media

Apparently, the detergent industry has entered into what has been described as a “voluntary agreement” to reduce the use of phosphates in detergents (HT: Ted Frank).  A press release from Clean Water Action describes the agreement as follows:

On July 1, 2010 a voluntary ban on phosphates in dishwasher detergents will be implemented by many members of the American Cleaning Council (formerly the Soap and Detergent Association), a manufacturer’s trade group representing most detergent companies. “Industry’s announcement on phosphates in dishwasher detergents is welcome news, indeed, if somewhat overdue,” said Jonathan Scott, a spokesman for Clean Water Action, founded in the early 1970’s to fight for clean, safe water. “Even small amounts of phosphates can wreak havoc when they get into our water,” Scott says, “so it’s the last thing you want as an ingredient in detergents, which are specifically designed to end up in the water by way of household appliances and drain pipes.”

It is also apparent that some are none too pleased with the effects of reducing phosphate levels in detergents — with the primary downside being that the new product doesn’t work too well.  An article in the Weekly Standard describes the impact of the reduction:

The result is detergents that don’t work very well. There have been a handful of stories in the media about consumer complaints. The New York Times noted that on the review section of the website for Cascade—Procter & Gamble’s market-leading brand—ratings plummeted after the switch, with only 11 percent of consumers saying they would recommend the product. One woman in Florida told National Public Radio that she called Procter & Gamble to complain about how its detergent no longer worked. The customer rep told her to consider handwashing the dishes instead.

Some NPR commenters agreed. “Like so many others, I had disassembled my dishwasher, run multiple empty ‘cleaning cycles’ using all kinds of various chemical treatments, all trying to get my dishwasher ‘fixed,’ ” said one. “We assumed that something was wrong with the machine, that it was limed up, and we tried vinegar and other remedies with limited success,” wrote another. “We do wash some dishes by hand now, using more hot water than before, and also have simply lowered our standards for what constitutes ‘clean.’ ” Another commenter complained: “I live in AZ and had the same thing happen last year when it was introduced out here. I thought it was a reaction between the ‘Green’ soap and the hard water. I wrote to the company and they sent me about $30 in coupons—for other items and for their non-green soap. I dumped the 3 unopened bottles plus the one I was using.”

The detergents were so problematic that they caused environmental delinquency even among NPR listeners. One disappointed commenter rationalized his backsliding:

We first heard about the new phosphate-free detergent formulations almost a year ago. Wanting to do the Right Thing we rushed out and bought some and immediately began using it. The results, although not as bad as reported by some, were still pretty underwhelming. Our dishes and glassware were covered by a gritty film and so was the inside of the dishwasher. We are in Southern California and have very hard water. Adding vinegar to the rinse cycle helped *some* but still we found excessive buildup on our dishes. Disgusted with the new detergent, we decided to go back to something with phosphate. We were not able to find phosphate detergent at the supermarket, but some local discount stores sell supplies that are apparently remaindered by the manufacturers. We bought six boxes of old Cascade with phosphate—about a year’s supply. We figured someone would buy it—might as well be us.

When Consumer Reports did laboratory testing on the new nil-phosphate detergents, they concluded that none of them “equaled the excellent (but now discontinued) product that topped our Ratings in August 2009.”

There is, of course, an interesting antitrust angle here.  Thom has posted previously on another voluntary industry agreement, in the soda industry, to refrain from selling high-calorie soda (and to limit the size of even healthy drinks) in schools.  In the comments to that post I suggest that one important issue is whether the soda players actually reached an agreement:

The passing or collective endorsement of a set of “best practices” to which members of the industry can voluntarily choose to adhere (or not) is not necessarily an actionable antitrust conspiracy. Of course, calling something “voluntary guidelines” won’t immunize an actual agreement if it is there. But it seems that the parties were pretty careful — EXCEPT for in their commercials and in print!!! — to make sure to emphasize that the antitrust-relevant choices were made unilaterally. But I can’t imagine antitrust counsel would have given the thumbs up to the commercials…

So it is here.  I’ve no doubt that such an agreement, if it exists, is reachable under Section 1 of the Sherman Act.  The question is whether the detergent industry has taken some steps to protect itself from an “agreement” finding under Section 1.  I don’t have enough detail to know whether that is the case — but if anybody does, please send it along.

Filed under: antitrust, cartels, economics, environment

Antitrust & Consumer Protection

The Sound of One Hand Clapping: The 2010 Merger Guidelines and the Challenge of Judicial Adoption

Popular Media

Along with co-author Judd Stone, I’ve posted to SSRN our contribution to the Review of Industrial Organization’s symposium on the 2010 Horizontal Merger Guidelines — The Sound of One Hand Clapping: The 2010 Horizontal Merger Guidelines and the Challenge of Judicial Adoption.

The paper focuses on the Guidelines’ efficiencies analysis.  We argue that while the 2010 HMGs “update” the Guidelines’ analytical approach in generally desirable ways, these updates are largely asymmetrical in nature: while the new Guidelines update economic thinking on one “side” of the ledger (changes that make the plaintiff’s prima facie burden easier to satisfy, ceteris paribus), they do not do so with respect to efficiencies analysis on the other side of the ledger.  These asymmetrical changes thereby undermine the new Guidelines’ institutional credibility.

In particular, we focus on the Guidelines’ treatment of so-called “out-of-market” efficiencies as well as fixed cost savings.  In both cases we argue that updates were appropriate and consistent with the Agencies’ expressed preference to more accurately reflect economic thinking and to shift from proxies to direct assessment of competitive effects.   If anything, the Guidelines appear to be more skeptical of efficiencies arguments than the previous version, adding “the Agencies are mindful that the antitrust laws give competition, not internal operational efficiency, primacy in protecting customers.”  We then turn to discussing the implications of this “asymmetrical update” for judicial adoption of the Guidelines.  Some have discussed the possibility that these Guidelines will be less successful with federal courts because they downplay market definition.  As I’ve said here many times, I do not think the Agencies (if only out of self-interest) will avoid market definition.  However, we argue that the asymmetrical updating problem is a more serious one, and that widespread and wholesale adoption of the HMGs should not be taken for granted.

Here is the abstract:

There is ample justification for the consensus view that the Horizontal Merger Guidelines have proven one of antitrust law’s great successes in the grounding of antitrust doctrine within economic learning. The foundation of the Guidelines’ success has been its widespread adoption by federal courts, which have embraced its rigorous underlying economic logic and analytical approach to merger analysis under the Clayton Act. While some have suggested that the Guidelines’ most recent iteration might jeopardize this record of judicial adoption by downplaying the role of market definition and updating its unilateral effects analysis, we believe these updates are generally beneficial and include long-overdue shifts away from antiquated structural presumptions in favor of analyzing competitive effects directly where possible. However, this article explores a different reason to be concerned that the 2010 Guidelines may not enjoy widespread judicial adoption: the 2010 Guidelines asymmetrically update economic insights underlying merger analysis. While the 2010 Guidelines’ updated economic thinking on market definition and unilateral effects will likely render the prima facie burden facing plaintiffs easier to satisfy in merger analysis moving forward, and thus have significant practical impact, the Guidelines do not correspondingly update efficiencies analysis, leaving it largely as it first appeared 13 years earlier. We discuss two well-qualified candidates for “economic updates” of efficiencies analysis under the Guidelines: (1) out-of-market efficiencies and (2) fixed cost savings. We conclude with some thoughts about the implications of the asymmetric updates for judicial adoption of the 2010 Guidelines.

Download and read the whole thing.

Filed under: antitrust, business, economics, legal scholarship, merger guidelines, mergers & acquisitions, scholarship

Antitrust & Consumer Protection

Intel Case the Model for the FTC?

Popular Media

So says this BusinessWeek headline, based on an interview with Federal Trade Commission Chairman Jon Leibowitz.   However, most of the article appears to be about the Commission seeking to advance the proposition that the FTC Act extends beyond the scope of the antitrust laws.   For example, the Chairman is quoted as saying “We would like to see it tested by appellate courts because we think the legislation, the plain language of the legislation, makes it crystal clear that our jurisdiction goes beyond the antitrust laws,” rather than anything specific about the Intel investigation.  I take this to mean, more accurately, that the FTC views Section 5 as a core statute in enforcing its competition mission.  For better or worse, that is a different statement from the one in the headline.
