
Microsoft comes full circle


I am disappointed but not surprised to see that my former employer filed an official antitrust complaint against Google in the EU.  The blog post by Microsoft’s GC, Brad Smith, summarizing its complaint is here.

Most obviously, there is a tragic irony to the most antitrust-beleaguered company ever filing an antitrust complaint against its successful competitor.  Of course the specifics are not identical, but all of the atmospheric and general points that Microsoft itself made in response to the claims against it are applicable here.  It smacks of competitors competing not in the marketplace but in the regulators’ offices.  It promotes a kind of weird protectionism, directing the EU’s enforcement powers against a successful US company . . . at the behest of another US competitor.  Regulators will always be fighting last year’s battles to the great detriment of the industry.  Competition and potential competition abound, even where it may not be obvious (Linux for Microsoft; Facebook for Google, for example).  Etc.  Microsoft was once the world’s most powerful advocate for more sensible, restrained, error-cost-based competition policy.  That it now finds itself on the opposite side of this debate is unfortunate for all of us.

Brad’s blog post is eloquent (as he always is) and forceful, and he acknowledges the irony.  And of course he may be right on the facts.  Unfortunately, we’ll have to resort to a terribly costly, irretrievably flawed and error-prone process to find out–not that the process is likely to result in a very reliable answer anyway.  Where I think he is most off base is where he draws–and asks regulators to draw–conclusions about the competitive effects of the actions he describes.  It is certain that Google has another story and will dispute most or all of the facts.  But even without that information we can dispute the conclusion that Google’s actions, if true, are necessarily anticompetitive.  In fact, as Josh and I have detailed at length here and here, these sorts of actions–necessitated by the realities of complex, innovative and vulnerable markets, and in many cases undertaken by the largest and the smallest competitors alike–are more likely pro-competitive.  More important, efforts to ferret out the anticompetitive among them will almost certainly harm welfare rather than help it–particularly when competitors are welcomed into the regulators’ and politicians’ offices in the process.

As I said, disappointing.  It is not inherently inappropriate for Microsoft to resort to this simply because it has been the victim of such unfortunate “competition” in the past, nor is Microsoft obligated or expected to demonstrate intellectual or any other sort of consistency.  But knowing what it does about the irretrievable defects of the process and the inevitable costliness of its consequences, it is disingenuous or naive (the Nirvana fallacy) for it to claim that it is simply engaging in a reliable effort to smooth over a bumpy competitive landscape.  That may be the ideal of antitrust enforcement, but no one knows as well as Microsoft that the reality is far from that ideal.  To claim implicitly that, in this case, things will be different is, as I said, disingenuous.  And likely really costly in the end for all of us.

Filed under: antitrust, business, exclusionary conduct, law and economics, markets, monopolization, politics, regulation, technology Tagged: Brad Smith, Competition law, European Commission, google, microsoft, politics, regulation


Antitrust in Tech Industries


Two stories about Google indicate the dangers of antitrust in fast-moving tech industries.  Microsoft is urging the EU antitrust authorities to sue Google.  (Microsoft was itself the victim of a massive antitrust action.  I guess it is true that abusers are likely to have been themselves abused.)  At the same time, Google is rolling out a new social tweak to its search function because it is losing business to Facebook.  Let’s hope Google doesn’t sue Facebook.  The Web was better when it was the Wild West.

Filed under: antitrust, technology


The Industrial Organization of Food Carts


As Harold Demsetz notes, “the problem of defining ownership is precisely that of creating properly scaled legal barriers to entry.”   Taxi medallions, meet food cart permits.  From the WSJ:

The city’s competitive street food culture has created a thriving black market for mobile food vending permits issued by the Department of Health and Mental Hygiene. The city charges a mere $200 for most food-cart permits, which must be paid every two years when they are renewed. But it only issues 3,100 year-round permits plus an additional 1,000 seasonal permits—not enough to satisfy demand. Transferring or renting these permits to another vendor is illegal but everyone, including the city’s Health Department, acknowledges that it happens.

Meanwhile, demand for permits and their black-market prices continue to climb as street food’s popularity soars with blogs like Midtown Lunch chronicling vendors’ moves and some gourmet food trucks developing cult-like followings. Some permits fetch as much as $20,000 for two years, vendors say. In the case of Ms. Sultana, the Bronx food vendor, she says the permit holder told her someone else was willing to pay $15,000 for the permit she previously paid $7,000 for two years ago.

Mohammed Rahman, who has operated the popular Kwik Meal cart in midtown for 11 years, says he pays $15,000 every two years for his permit. “The city charges only $200, why should I have to pay $15,000? All the profits go to someone else.”

Obtaining a food cart or truck permit in one’s own name can take a decade or more, according to vendors. There are 2,080 people currently on the citywide waiting list for a two-year permit. The list is compiled of license holders and it’s not uncommon for families to get licenses for every member of their family—even if they don’t work at a cart—to increase their chances of obtaining a permit.
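A quick back-of-the-envelope calculation makes the size of the entry barrier concrete. The sketch below uses only the figures reported in the excerpt above (the $200 official fee and the $7,000 to $20,000 black-market prices); the per-year annualization is my own illustration, not a number from the article.

```python
# Back-of-the-envelope scarcity rent created by the permit cap, using the
# figures reported in the WSJ excerpt above. The per-year figure assumes the
# full two-year term is used; it is illustrative only.

OFFICIAL_FEE = 200  # city charge per two-year permit

reported_black_market_prices = [7_000, 15_000, 20_000]  # two-year prices vendors report

for price in reported_black_market_prices:
    rent = price - OFFICIAL_FEE   # premium attributable to the cap on permits
    per_year = rent / 2           # permits run two years
    print(f"${price:>6,} permit -> scarcity rent ${rent:>6,} (~${per_year:,.0f} per year)")
```

Nearly all of the permit's market value, on these numbers, is rent flowing to whoever holds the legal right rather than to the vendor doing the work, which is Demsetz's point about legal barriers to entry.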

In a related story, the food carts in New York City now have a trade association (website here).

Even more closely related is the battle over green cart permits in NYC, and the competitive response from supermarkets:

The city started the green cart program almost three years ago to bring more fruits and vegetables to “underserved” neighborhoods with high rates of diet-related illnesses. Today, the city has issued about 450 permits to operate green carts in large swaths of the Bronx and upper Manhattan, as well as parts of Brooklyn, Queens and Staten Island. While most normal food carts can operate anywhere and tend to congregate in high-traffic neighborhoods like midtown Manhattan, green carts can sell only in designated zones.

Some lawmakers, like Peter Koo, a City Council member who represents Flushing, think green carts shouldn’t be allowed within a certain distance from supermarkets.

The green carts have their fans in the community. George Wright recently walked up to the two carts a block from Ms. Kim’s store. “It’s cheaper than other stores,” he said, “and the fruit is very good.”

Ms. Kim says the monthly revenue in her store has dropped to $5,000 a month from $10,000 a month because of the carts, the tough economy and nearby construction. That’s before she shells out $3,500 for rent, buys the produce and pays an employee. The result is that she’s losing money each month.

She figures it’s going to get worse. Most of her customers pay with electronic food stamps and recently some green carts got portable devices so they can accept them as well.

“I have to go bankrupt,” she says.

Here is Yglesias on DC food cart deregulation.

Filed under: business, economics, food, markets, regulation


The Antitrust (and Business) Risk of a Concerted Response to the U.S. News Rankings


I’ve been in a blue funk since last Tuesday, when my home institution, the University of Missouri Law School, fell into the third tier in the U.S. News & World Report annual ranking of law schools. Since the rankings began, Missouri has pretty consistently ranked in the 50s and 60s. Last year, we fell to 93. This year, to 107. That’s pretty demoralizing.

It’s completely ridiculous, of course. On the metrics that really matter (academic reputation, student quality, bar passage, etc.), we do pretty well — near the top of tier 2 (schools 50-100). With respect to scholarly productivity, our faculty ranks sixth among law schools outside the top fifty. We do less well with employment, but that’s largely because (1) we don’t manipulate the numbers, as many schools do, and (2) many of our graduates go into prosecution and public defense, where hiring decisions are not made until after the bar examination. Where we really get beat up is on expenditures per “full-time equivalent” student. Last year, we ranked 173 out of 190 on that measure. In my view, that means we’re efficient — we get a heck of a lot out of our financial resources. According to U.S. News, though, the fact that we spend less money educating our students means that the quality of our educational offering must be sub-par. Non sequitur, anyone?

Despite the stupidity of the U.S. News rankings, they matter. We will have a harder time attracting top students next year. In the past, we’ve been able to attract sharp students who were accepted at, say, Iowa, Illinois, or Washington University because our tuition (especially in-state tuition) is much, much lower. Given all this talk of higher education bubbles and the widespread questioning of whether law school is really worth the steep price, this should be an ideal time for Missouri to exploit its low tuition. Unfortunately, that’s tougher to do when you’ve fallen into the U.S. News third tier and prospective students, who don’t yet realize the insanity of the rankings metrics, wrongly perceive that you’re selling a shoddy product. We may also have a harder time attracting high-quality faculty, though this fall’s outstanding class of entrants (two John Roberts clerks, a Jose Cabranes clerk, and an outstanding Virginia J.D./Ph.D.) will surely help on that front. We Missouri professors may even have a harder time placing our scholarship, given that the third-year law students who select articles for publication tend to evaluate scholarship, in part, on the basis of the author’s “prestige” as measured by the ranking of her home institution.

So what should we do? If I were dean, I believe I would simply opt out of U.S. News. I’m serious. We know the rankings are a joke, and they’re actually hurting us. I would simply refuse to fill out the magazine’s survey form and then take out explanatory ads, on the day the 2012 rankings were released, in the New York Times and Wall Street Journal. Reed College has taken this sort of principled stand in the U.S. News college rankings and has gotten loads of favorable media attention.  I believe its stance has actually boosted its excellent reputation.

Of course, if a school fails to fill out the U.S. News form, the magazine will simply incorporate a somewhat punitive “estimate” of the uncooperative school’s data, so its ranking may be artificially depressed. But at this point, what do we at Missouri have to lose?  We’re already down to 107!  Anyone who does the slightest bit of investigation will see that Missouri Law — one of the oldest law schools west of the Mississippi River, the flagship public law school in a fairly populous state with two significant legal markets, the home of a productive faculty that also cares deeply about teaching — is not what participants on the Princeton Review’s old message board used to call a “Third Tier Toilet.” If we opt out of the rankings (a decision U.S. News will have to note), readers will surmise that our low ranking results from our decision not to play with U.S. News. Right now, they think there’s something wrong with Missouri, not with the screwy rankings system. Our opt-out would at least draw attention to the stupidity of the ranking metrics.

Of course, this move would entail significant risk. As it did with Reed College, U.S. News would likely adopt punitive estimates of the data we refused to provide, causing us to fall further in the rankings. Readers might not notice the disclaimer that we refused to return our survey and that our ranking is therefore based on estimated data. The media (mainstream and other) might not draw as much attention to our bold stand as I expect they would. While I think it would take a perfect storm for an opt-out strategy to tarnish our reputation even further, such storms do occasionally occur.

We could reduce the riskiness of our strategy if we could persuade some other law schools — perhaps other low-tuition, efficient schools that find themselves similarly disadvantaged by the rankings’ inapposite focus on expenditures per student — to withhold data from U.S. News. This would require U.S. News to include more “based on estimated data” asterisks, which would reveal the punitive nature of the magazine’s estimates and undermine confidence in the flawed ranking system.

But would this sort of concerted strategy run afoul of the antitrust laws? Initially, I thought it might. After all, what I’m contemplating is essentially an agreement among competitors to withhold information from a publication that tends to enhance competition among those very rivals. Moreover, the cooperating rivals would be withholding this information precisely because they think the competition stimulated by the publication is, to use the old-fashioned term, “ruinous.”  It smells pretty fishy.

The more I think about it, though, the less troubling I find this strategy. The fact is, the methodology underlying the U.S. News rankings is so unsound that the rankings themselves are misleading.  And the misrepresentations they convey actually hurt a number of schools like Missouri.  I believe we who are unfairly disadvantaged by the U.S. News methodology could, with impunity, band together in an attempt to undermine the flawed rankings.  Indeed, it is in our individual competitive interests to do so.

So how would a court evaluate a boycott of U.S. News by a group of law schools that perceive themselves to be disadvantaged by the magazine’s ranking methodology (say, less expensive, more efficient law schools with low per-student expenditures)?

First, the court would likely determine that the agreement not to participate in the ranking survey is ancillary, not naked.  As Herb Hovenkamp has explained, “[a] serviceable definition of a naked restraint is one whose profitability depends on the exercise of market power” (i.e., on a constriction of output aimed at artificially raising prices so as to enhance profits).  The agreement I’m contemplating makes perfect business sense apart from any exercise of market power. Each law school that would participate in the agreement is personally injured by the screwy rankings scheme, and each has an independent incentive — regardless of what other schools do — to refrain from participation. The participating law schools, it is true, would prefer to have others join them, but that is not because they are seeking to exercise market power; rather, they realize that the message their non-participation will convey (i.e., that U.S. News’s rankings methodology is nonsense) will be stronger if more schools join the boycott.

Since the restraint I contemplate is ancillary, not naked, it would be evaluated under the rule of reason. Indeed, any court that sought to utilize a less probing analysis (per se or quick look) would have to confront the Supreme Court’s California Dental decision, which held that a pretty doggone naked restraint among competing dentists was entitled to a full rule of reason analysis because it could enhance competition by reducing fraudulent advertising.

Under the rule of reason, the arrangement I’m contemplating would likely pass muster. Because widespread misinformation among consumers reduces the competitiveness of a market, an effort to reduce such misinformation, even a concerted effort, is pro-, not anti-, competitive.  Because the “agreement” aspect of my contemplated restraint increases the degree to which the arrangement undermines the misleading, competition-impairing U.S. News rankings, it enhances the restraint’s procompetitive effect. 

So what do others think?  Am I underestimating the antitrust risk of this strategy?  The business risk?  My TOTM colleagues from Illinois and George Mason, both of which do quite well under the U.S. News formula, probably have little personal interest in these musings.  But I suspect others do.  What do you think?

Filed under: antitrust, cartels, Education, law school, universities


Google Book Project


Google’s efforts to make out-of-print books available online have run into a major stumbling block. Judge Chin ordered that books can be digitized by Google only if the author opts in; the agreement he threw out called for opt-out.  This is a shame and a highly inefficient result.  As reported, the intricacies of copyright law and the unavailability of many rights holders mean that opt-in is not feasible in many cases.  As a result, thousands of books will not be digitized at all.  Instead of transferring rights to authors (which was apparently Judge Chin’s intent), he has simply destroyed valuable property rights.  This case was argued as an issue of the distribution of rights, but it is really about the creation of rights — or, as it turns out, their non-creation.

Filed under: copyright, google, litigation Tagged: property rights


Search Bias and Antitrust


There is an antitrust debate brewing concerning Google and “search bias,” a term used to describe search engine results that preference the content of the search provider.  For example, Google might list Google Maps prominently if one searches “maps” or Microsoft’s Bing might prominently place Microsoft affiliated content or products.

Apparently both antitrust investigations and Congressional hearings are in the works; regulators and commentators appear poised to attempt to impose “search neutrality” through antitrust or other regulatory means to limit or prohibit the ability of search engines (or perhaps just Google) to favor their own content.  At least one proposal goes so far as to advocate a new government agency to regulate search.  Of course, when I read proposals like this, I wonder where Google’s share of the “search market” will be by the time the new agency is built.

As with the net neutrality debate, I understand that some of the push for search neutrality involves an intense effort to discard the traditional, economically grounded antitrust framework.  The logic for this push is simple.  The economic literature on vertical restraints and vertical integration provides no support for ex ante regulation arising out of the concern that a vertically integrating firm will harm competition through favoring its own content and discriminating against rivals.  Economic theory suggests that such arrangements may be anticompetitive in some instances, but also provides a plethora of pro-competitive explanations.  Lafontaine & Slade explain the state of the evidence in their recent survey paper in the Journal of Economic Literature:

We are therefore somewhat surprised at what the weight of the evidence is telling us. It says that, under most circumstances, profit-maximizing vertical-integration decisions are efficient, not just from the firms’ but also from the consumers’ points of view. Although there are isolated studies that contradict this claim, the vast majority support it. Moreover, even in industries that are highly concentrated so that horizontal considerations assume substantial importance, the net effect of vertical integration appears to be positive in many instances. We therefore conclude that, faced with a vertical arrangement, the burden of evidence should be placed on competition authorities to demonstrate that that arrangement is harmful before the practice is attacked. Furthermore, we have found clear evidence that restrictions on vertical integration that are imposed, often by local authorities, on owners of retail networks are usually detrimental to consumers. Given the weight of the evidence, it behooves government agencies to reconsider the validity of such restrictions.

Of course, this does not bless all instances of vertical contracts or integration as pro-competitive.  The antitrust approach appropriately eschews ex ante regulation in favor of a fact-specific rule of reason analysis that requires plaintiffs to demonstrate competitive harm in a particular instance.  Again, given the strength of the empirical evidence, it is no surprise that advocates of search neutrality, like advocates of net neutrality before them, either do not rely on consumer welfare arguments or are willing to sacrifice consumer welfare for other objectives.

I wish to focus on the antitrust arguments for a moment.  In an interview with the San Francisco Gate, Harvard’s Ben Edelman sketches out an antitrust claim against Google based upon search bias; and to his credit, Edelman provides some evidence in support of his claim.

I’m not convinced.  Edelman’s interpretation of evidence of search bias is detached from antitrust economics.  The evidence is all about identifying whether or not there is bias.  That, however, is not the relevant antitrust inquiry; instead, the question is whether such vertical arrangements, including preferential treatment of one’s own downstream products, are generally procompetitive or anticompetitive.  Examples from other contexts illustrate this point.

Grocery product manufacturers contract for “bias” with supermarkets through slotting contracts and other shelf space payments.  The bulk of economic theory and evidence on these contracts suggests that they are generally efficient and a normal part of the competitive process.  Vertically integrated firms may “bias” their own content in ways that increase output.  Whether bias occurs within the firm (as is the case with Google favoring its own products) or by contract (the shelf space example) should make no difference to Edelman and those making search bias antitrust arguments.  Economists have known since Coase — and have been reminded by Klein, Alchian, Williamson and others — that firms may achieve by contract anything they could do within the boundaries of the firm.  The point is that, in the economics literature, it is well known that content self-promoting incentives in a vertical relationship can be either efficient or anticompetitive depending on the circumstances.

Edelman is an economist, and so I find it a bit odd that he has framed the “bias” debate without reference to any of this literature.  Instead, his approach appears to be that bias generates harm to rivals and that this harm is a serious antitrust problem.  (Or in other places, that the problem is that Google exhibits bias but its employees may have claimed otherwise at various points; this is also antitrust-irrelevant.)  For example, Edelman writes:

Search bias is a mechanism whereby Google can leverage its dominance in search, in order to achieve dominance in other sectors.  So for example, if Google wants to be dominant in restaurant reviews, Google can adjust search results, so whenever you search for restaurants, you get a Google reviews page, instead of a Chowhound or Yelp page. That’s good for Google, but it might not be in users’ best interests, particularly if the other services have better information, since they’ve specialized in exactly this area and have been doing it for years.

“Leveraging” one’s dominance in search, of course, takes a bit more than bias.  But I was quite curious about Edelman’s evidence and so I went and looked at Edelman and Lockwood.  Here is how they characterize their research question: “Whether search engines’ algorithmic results favor their own services, and if so, which search engines do so most, to what extent, and in what substantive areas.”  Here is how the authors describe what they did to test the hypothesis that Google engages in more search bias than other search engines:

To formalize our analysis, we formed a list of 32 search terms for services commonly provided by search engines, such as “email”, “calendar”, and “maps”. We searched for each term using the top 5 search engines: Google, Yahoo, Bing, Ask, and AOL. We collected this data in August 2010.

We preserved and analyzed the first page of results from each search. Most results came from sources independent of search engines, such as blogs, private web sites, and Wikipedia. However, a significant fraction – 19% – came from pages that were obviously affiliated with one of the five search engines. (For example, we classified results from youtube.com and gmail.com as Google, while Microsoft results included msn.com, hotmail.com, live.com, and Bing.)

Here is the underlying data for all 32 terms; so far, so good.  A small pilot study examining whether and to what extent search engines favor their own content is an interesting project — though, again, I’m not sure it says anything about the antitrust issues.  No surprise: they find some evidence that search engines exhibit some bias in favor of affiliated sites.  You can see all of the evidence at Edelman’s site (again, to his credit).  Interpretations of these results vary dramatically.  Edelman sees a serious problem.  Danny Sullivan begs to differ (“Google only favors itself 19 percent of the time”), and also makes the important point that the study took place before Yahoo searches were powered by Bing.
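For readers who want to see the shape of the exercise, the counting behind that 19 percent figure is easy to sketch. The snippet below is a toy reconstruction under my own assumptions, not the authors' code or data; the result list is hypothetical and the domain-to-engine map simply mirrors the examples they give.

```python
# Illustrative reconstruction of the affiliation count described above: classify
# each first-page result as affiliated with one of the five search engines or as
# independent, then compute the affiliated share. Toy data only; not the study's.

AFFILIATION = {
    "youtube.com": "Google", "gmail.com": "Google", "maps.google.com": "Google",
    "msn.com": "Microsoft", "hotmail.com": "Microsoft",
    "live.com": "Microsoft", "bing.com": "Microsoft",
    "yahoo.com": "Yahoo", "ask.com": "Ask", "aol.com": "AOL",
}

def affiliated_share(results):
    """results: list of (engine_searched, result_domain) pairs from first pages."""
    hits = sum(1 for _, domain in results if domain in AFFILIATION)
    return hits / len(results)

# Hypothetical sample of first-page results for the query "maps".
sample = [
    ("Google", "maps.google.com"), ("Google", "wikipedia.org"), ("Google", "mapquest.com"),
    ("Bing", "bing.com"), ("Bing", "mapquest.com"), ("Bing", "wikipedia.org"),
    ("Yahoo", "yahoo.com"), ("Yahoo", "openstreetmap.org"), ("Yahoo", "mapquest.com"),
]
print(f"Affiliated share in toy sample: {affiliated_share(sample):.0%}")
```

Counting affiliated results is the easy part; as the next passages make clear, the hard part is saying anything about welfare once you have the count.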

In their study, Edelman and Lockwood appear at least somewhat aware that bias and vertical integration can be efficient although they do not frame it in those terms.  They concede, for example, that “in principle, a search engine might feature its own services because its users prefer these links.”  To distinguish between these two possibilities, they conceive of the following test:

To test the user preference and bias hypotheses, we use data from two different sources on click-through-rate (CTR) for searches at Google, Yahoo, and Bing. Using CTR data from comScore and another service that (with users’ permission) tracks users’ searches and clicks (a service which prefers not to be listed by name), we analyze the frequency with which users click on search results for selected terms. The data span a four-week period, centered around the time of our automated searches.  In click-through data, the most striking pattern is that the first few search results receive the vast majority of users’ clicks. Across all search engines and search terms, the first result received, on average, 72% of users’ clicks, while the second and third results received 13% and 8% of clicks, respectively.

So far, no surprises.  The first listing generates greater incremental click-through than the second or third listing.  Similarly, the eye-level shelf space generates more sales than less prominent shelf space.  The authors have a difficult time distinguishing user preference from bias:

This concentration of users’ clicks makes it difficult to disprove the user preference hypothesis. For example, as shown in Table 1, Google and Yahoo each list their own maps service as the first result for the query “maps”. Our CTR data indicates that Google Maps receives 86% of user clicks when the search is performed on Google, and Yahoo Maps receives 72% of clicks when the search is performed on Yahoo. One might think that this concentration is evidence of users’ preference for the service affiliated with their search engine. On the other hand, since clicks are usually highly concentrated on the first result, it is possible that users have no such preference, and that they are simply clicking on the first result because it appears first. Moreover, since the advantage conferred by a result’s rank likely differs across different search queries, we do not believe it is appropriate to try to control for ranking in a regression.
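One way to see the identification problem is to line up the quoted numbers side by side: the affiliated result's observed click share against the average click share of any first-ranked result. A minimal sketch using only the figures quoted above; the "excess over baseline" comparison is my own framing, not the authors' test, and, as they note, it cannot cleanly separate preference from position effects.

```python
# Compare the observed click share of an affiliated result in position 1 with
# the average position-1 click share reported above. Any "excess" could reflect
# user preference for the affiliated service, or simply query-specific ranking
# effects -- exactly the confound the authors describe.

POSITION_BASELINE = {1: 0.72, 2: 0.13, 3: 0.08}  # average CTR by rank (quoted above)

observed_first_position = {
    ("Google", "maps", "Google Maps"): 0.86,  # quoted: 86% of clicks on Google
    ("Yahoo", "maps", "Yahoo Maps"): 0.72,    # quoted: 72% of clicks on Yahoo
}

for (engine, query, result), ctr in observed_first_position.items():
    excess = ctr - POSITION_BASELINE[1]
    print(f"{engine}, query '{query}': {result} CTR {ctr:.0%} "
          f"({excess:+.0%} vs. the average first result)")
```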

The interesting question from a consumer welfare perspective is not what happens to the users without a strong preference for Google Maps or Yahoo Maps.  Users without a strong preference are likely to click through on whatever service is offered on their search engine of choice.  There is no significant welfare loss when a consumer who is indifferent between Google Maps and Yahoo Maps chooses one over the other.

The more interesting question is whether search bias forecloses rivals from reaching users who have a strong preference for a non-Google product.  When Google ranks its Maps above others, but a user with a strong preference for Yahoo Maps finds it listed second, is the user able to find his product of choice?  Probably, if it is listed second.  Probably not, if it is delisted or something more severe.  Edelman reports some data on this issue:

Nevertheless, there is one CTR pattern that would be highly suggestive of bias. Suppose we see a case in which a search engine ranks its affiliated result highly, yet that result receives fewer clicks than lower results. This would suggest that users strongly prefer the lower result — enough to overcome the effect of the affiliated result’s higher ranking.

Of course this is consistent with bias; however, to repeat the critical point, this bias does not inexorably lead to — or even suggest — an antitrust problem.  Let’s recall the shelf space analogy.  Consider a supermarket where Pepsi is able to gain access to the premium eye-level shelf space but consumers have a strong preference for Coke.  Whether or not the promotional efforts of Pepsi will have an impact on competition depends on whether Coke is able to get access to consumers.  In that case, it may involve reaching down to the second or third shelf.  There might be some incremental search costs involved.  And even if one could show that Coke sales declined dramatically in response to Pepsi’s successful execution of its contractual shelf-space bias strategy, that merely shows harm to rivals rather than harm to competition.  If Coke-loving consumers can access their desired product, Coke isn’t harmed, and there is certainly no competitive risk.

So what do we make of evidence that, in the face of search engine bias, click-through data suggest consumers will still pick lower listings?  One inference is that consumers with strong preferences for content other than the biased result nonetheless access their preferred content.  It is difficult to see a competitive problem arising in such an environment.  Edelman anticipates this point somewhat when he observes in his interview:

The thing about the effect I’ve just described is you don’t see it very often. Usually the No. 1 link gets twice as many clicks as the second result. So the bias takes some of the clicks that should have gone to the right result. It seems most users are influenced by the positioning.

This fails to justify Edelman’s position.  First off, in a limited sample of terms, it’s unclear what it means for these reversals not to happen “very often.”  More importantly, so what if the top link gets twice as many clicks as the second link?  The cases where the second link gets the dominant share of click-throughs might well be those where users have a strong preference for the second-listed site.  Even if they are not, the antitrust question is whether search bias is efficient or poses a competitive threat.  Most users might be influenced by the positioning because they lack a strong preference or even any preference at all.  That search engines compete for the attention of those consumers, including through search bias, should not be surprising.  But it does not make out a coherent claim of consumer harm.

The ‘compared to what’ question looms large here.  One cannot begin to answer the search bias question — if it is a problem at all — from a consumer welfare perspective until one pins down the appropriate counterfactual.  Edelman appears to assume — when he observes that “bias takes some of the clicks that should have gone to the right result” — that the benchmark “right result” is the one that would prevail if listings were correlated perfectly with aggregate consumer preference.  My point here is simple: that comparison is not the one that is relevant to antitrust.  An antitrust inquiry would distinguish harm to competitors from harm to competition; it would focus on whether bias impaired the competitive process by foreclosing rivals from access to consumers, not merely on whether various listings would be improved but for Google’s bias.  The answer to the latter question is clearly yes.  The relevant question, however, is whether that bias is efficient.  Evidence that other search engines with much smaller market shares, and certainly without any market power, exhibit similar bias would suggest to most economists that the practice certainly has some efficiency justifications.  Edelman ignores that possibility and, by doing so, ignores decades of economic theory and empirical evidence.  This is a serious error, as the overwhelming lesson of that literature is that restrictions on vertical contracting and integration are a serious threat to consumer welfare.

I do not know what answer the appropriate empirical analysis would reveal.  As Geoff and I argue in this paper, however, I suspect a monopolization case against Google on these grounds would face substantial obstacles.  A deeper understanding of the competitive effects of search engine bias is a worthy project.  Edelman should also be applauded for providing some data that are interesting fodder for discussion.  But my sense of the economic arguments and existing data is that they do not provide support for an antitrust attack on Google’s search bias specifically, nor a basis for a consumer-welfare-grounded search neutrality regime.

Filed under: advertising, antitrust, armen alchian, business, economics, exclusionary conduct, google, monopolization, net neutrality, technology


No Facts, No Problem?


There has been, as is to be expected, plenty of casual analysis of the AT&T / T-Mobile merger to go around.  As I mentioned, I think there are a number of interesting issues to be resolved in an investigation with access to the facts necessary to conduct the appropriate analysis.  Annie Lowery’s piece in Slate is one of the more egregious offenders, liberally applying “folk economics” to the merger while reaching some very confident conclusions concerning its competitive effects:

Merging AT&T and T-Mobile would reduce competition further, creating a wireless behemoth with more than 125 million customers and nudging the existing oligopoly closer to a duopoly. The new company would have more customers than Verizon, and three times as many as Sprint Nextel. It would control about 42 percent of the U.S. cell-phone market.

That means higher prices, full stop. The proposed deal is, in finance-speak, a “horizontal acquisition.” AT&T is not attempting to buy a company that makes software or runs network improvements or streamlines back-end systems. AT&T is buying a company that has the broadband it needs and cutting out a competitor to boot—a competitor that had, of late, pushed hard to compete on price. Perhaps it’s telling that AT&T has made no indications as of yet that it will keep T-Mobile’s lower rates.

Full stop?  I don’t think so.  Nothing in economic theory says so.  And by the way, 42 percent simply isn’t high enough to tell a merger-to-monopoly story here; and Lowery concedes some efficiencies from the merger (“buying a company that has the broadband it needs” is an efficiency!).  To be clear, the merger may or may not pose competitive problems as a matter of fact.  The point is that serious analysis must be done in order to evaluate its likely competitive effects.  And of course, Lowery (HT: Yglesias) has no obligation to conduct serious analysis in a column — nor do I in a blog post.  But the idea that market concentration is an incredibly useful — and, in her case, perfectly accurate — predictor of price effects is devoid of analytical content and misleads on the relevant economics.  Quite the contrary: confidence in the traditional concentration-price approach to horizontal merger analysis has been so undermined that the antitrust agencies’ 2010 Horizontal Merger Guidelines are premised in large part on the notion that market shares are an inherently unreliable predictor of competitive effects.  (For what it’s worth, a recent Wall Street Journal column discussing merger analysis makes the same mistake — that is, it suggests that merger analysis comes down to shares and HHIs.  It doesn’t.)
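To see what the naive screen actually computes, here is a minimal sketch of the HHI arithmetic. The 42 percent combined share is the figure from Lowery's column; every other share below is a hypothetical round number used purely for exposition, not actual market data.

```python
# Naive concentration screen, for illustration only. The 42% combined share is
# the figure quoted in the column; the remaining shares are hypothetical.

def hhi(shares_pct):
    """Herfindahl-Hirschman Index: sum of squared market shares (in percent)."""
    return sum(s ** 2 for s in shares_pct)

pre_merger = {"AT&T": 27, "T-Mobile": 15, "Verizon": 33, "Sprint": 17, "Others": 8}
post_merger = {"AT&T/T-Mobile": 42, "Verizon": 33, "Sprint": 17, "Others": 8}

print(f"Pre-merger HHI:  {hhi(pre_merger.values()):,.0f}")
print(f"Post-merger HHI: {hhi(post_merger.values()):,.0f}")
print(f"Change (delta):  {hhi(post_merger.values()) - hhi(pre_merger.values()):,.0f}")
```

Numbers like these would look alarming under a pure concentration screen, which is exactly the point above: even the 2010 Guidelines treat shares and HHI as an unreliable predictor standing alone, so the real work is the fact-specific effects analysis discussed next.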

To be sure, the merger of large firms with relatively large shares may attract significant attention, may mean that the analysis drags on for a longer period of time, and likely will provide an opportunity for the FCC to extract some concessions.  But what I’m talking about here is the antitrust economics, not the political economy.  That is, will the merger increase prices and harm consumers?  With respect to the substantive merits, there is a fact-intensive economic analysis that must be done before anybody makes strong predictions about competitive effects.  The antitrust agencies will conduct that analysis.  So will the parties.  Indeed, the reported $3 billion termination fee suggests that AT&T is fairly confident it will get this through; it clearly thought about this in advance.  It is not as if the parties’ efficiencies contentions are facially implausible.  The claims that the merger could alleviate spectrum exhaustion, that there are efficiencies in spectrum holdings, and that the deal will facilitate expansion of LTE are worth investigating on the facts, just as the potentially anticompetitive theories are.  I don’t have strong opinions on how that analysis will come out without doing it myself or at least having access to more data.

I’m only reacting to, and rejecting, the idea that we should simplify merger analysis to two propositions: (1) an increase in concentration leads to higher prices, and (2) when the data don’t comport with (1), we can dismiss them by asserting, without evidence, that prices would have fallen even more.  This approach is, let’s just say, problematic.

In the meantime, the Sprint CEO has publicly criticized the deal.  As I’ve discussed previously, economic theory and evidence suggest that when rivals complain about a merger, it is likely to increase competition rather than reduce it.  This is, of course, a rule of thumb.  But it is one that generates much more reliable inferences than the simple view — rejected by both theory and evidence — that a reduction in the number of firms leads to higher prices.  Yglesias points out, on the other hand, that rival Verizon’s stock price increased on the merger announcement (but did it experience abnormal returns?  What about other rivals?), suggesting the market expects the merger to create market power.  At least there we are in the world of casual empiricism rather than misusing theory.

Adam Thierer at Tech Liberation Front provides some insightful analysis as to the political economy of deal approval.   Karl Smith makes a similar point here.

Filed under: antitrust, economics, merger guidelines, mergers & acquisitions, technology


The AT&T and T-Mobile Merger


The big merger news is that AT&T is planning to acquire T-Mobile.  From the AT&T press release:

AT&T Inc. (NYSE: T) and Deutsche Telekom AG (FWB: DTE) today announced that they have entered into a definitive agreement under which AT&T will acquire T-Mobile USA from Deutsche Telekom in a cash-and-stock transaction currently valued at approximately $39 billion. The agreement has been approved by the Boards of Directors of both companies.

AT&T’s acquisition of T-Mobile USA provides an optimal combination of network assets to add capacity sooner than any alternative, and it provides an opportunity to improve network quality in the near term for both companies’ customers. In addition, it provides a fast, efficient and certain solution to the impending exhaustion of wireless spectrum in some markets, which limits both companies’ ability to meet the ongoing explosive demand for mobile broadband.

With this transaction, AT&T commits to a significant expansion of robust 4G LTE (Long Term Evolution) deployment to 95 percent of the U.S. population to reach an additional 46.5 million Americans beyond current plans – including rural communities and small towns.  This helps achieve the Federal Communications Commission (FCC) and President Obama’s goals to connect “every part of America to the digital age.” T-Mobile USA does not have a clear path to delivering LTE.

As the press release suggests, the potential efficiencies of the deal lie in relieving spectrum exhaustion in some markets as well as 4G LTE.  AT&T President Ralph De La Vega, in an interview, described the potential gains as follows:

The first thing is, this deal alleviates the impending spectrum exhaust challenges that both companies face. By combining the spectrum holdings that we have, which are complementary, it really helps both companies.  Second, just like we did with the old AT&T Wireless merger, when we combine both networks what we are going to have is more network capacity and better quality as the density of the network grid increases.

In major urban areas, whether Washington, D.C., New York or San Francisco, by combining the networks we actually have a denser grid. We have more cell sites per grid, which allows us to have a better capacity in the network and better quality. It’s really going to be something that customers in both networks are going to notice.

The third point is that AT&T is going to commit to expand LTE to cover 95 percent of the U.S. population.

T-Mobile didn’t have a clear path to LTE, so their 34 million customers now get the advantage of having the greatest and latest technology available to them, whereas before that wasn’t clear. It also allows us to deliver that to 46.5 million more Americans than we have in our current plans. This is going to take LTE not just to major cities but to rural America.

At least some of the need for more spectrum is attributable to the success of the iPhone:

This transaction quickly provides the spectrum and network efficiencies necessary for AT&T to address impending spectrum exhaust in key markets driven by the exponential growth in mobile broadband traffic on its network. AT&T’s mobile data traffic grew 8,000 percent over the past four years and by 2015 it is expected to be eight to 10 times what it was in 2010. Put another way, all of the mobile traffic volume AT&T carried during 2010 is estimated to be carried in just the first six to seven weeks of 2015. Because AT&T has led the U.S. in smartphones, tablets and e-readers – and as a result, mobile broadband – it requires additional spectrum before new spectrum will become available.
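The "six to seven weeks" figure follows directly from the growth multiple; here is a quick arithmetic check, assuming for simplicity that traffic is spread evenly over the year.

```python
# Sanity check of the claim quoted above: if annual traffic in 2015 is 8 to 10
# times the 2010 level, the entire 2010 volume corresponds to roughly the first
# several weeks of 2015 (assuming traffic is spread evenly across the year).

WEEKS_PER_YEAR = 52

for multiple in (8, 10):
    weeks = WEEKS_PER_YEAR / multiple
    print(f"{multiple}x growth -> 2010 volume carried in about {weeks:.1f} weeks of 2015")
```

At 8x the result is about 6.5 weeks, which matches the press release's lower-end claim; at 10x it is closer to five weeks.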

On regulatory concerns, De La Vega observes:

We are very respectful of the processes the Department of Justice and (other regulators) use.  The criteria that has been used in the past for mergers of this type is that the merger is looked at (for) the benefits it brings on a market-by-market basis and how it impacts competition.

Today, when you look across the top 20 markets in the country, 18 of those markets have five or more competitors, and when you look across the entire country, the majority of the country’s markets have five or more competitors. I think if the criteria that has been used in the past is used against this merger, I think the appropriate authorities will find there will still be plenty of competition left.

If you look at pricing as a key barometer of the competition in an industry, our industry despite all of the mergers that have taken place in the past, (has) actually reduced prices to customers 50 percent since 1999. Even when these mergers have been done in the past they have always benefited the customers and we think they will benefit again.

Obviously, the deal is expected to generate significant regulatory scrutiny and will trigger a lot of interesting discussion and analysis of the state of wireless competition in the U.S.  With the forthcoming FCC Wireless Competition Report likely to signal the FCC’s position on the issue, and approval authority split between the FCC and the conventional antitrust agencies, there appears to be significant potential for inter-agency conflict.

Greg Stirling at Searchengineland notes that “AT&T and T-Mobile said that they expect regulatory review to take up to 12 months.”   I’ll take the “over.”  Stirling also notes, in an interesting post, the $3 billion termination fee owed to T-Mobile if the deal gets blocked.  How’s that for a confidence signal?  In any event, it will be interesting to watch this unfold.

Here is an interesting preview, from AT&T executives this morning, of some of the arguments AT&T will be advancing in the coming months to achieve regulatory approval.  Some of the most critical issues to parse will be historical experience with cellular mergers and whether, in fact, the merger is likely to bring about substantial efficiencies and facilitate bringing LTE to new markets.  The preview includes a chart suggesting that, based upon past experience, significant price increases are not likely as a result of the merger.

No doubt there will be further opportunity to comment upon developments here over the next 12-18 months.

Filed under: antitrust, merger guidelines, mergers & acquisitions


Proposed Privacy Legislation


The Obama Administration is advocating a privacy bill.  One provision will limit the use of data to the purpose for which it was collected unless a consumer gives permission for additional uses; another will give consumers increased rights to access information about themselves.

Both of these provisions may actually reduce the safety of data online.  One additional purpose for which data can be used is to verify identity in cases where there is some doubt.  Many of us have had the experience of having a merchant call a credit card company and ask a series of questions to verify our identity.  This bill would apparently make that process more difficult, leading either to increased inconvenience or increased risk.  A similar provision is enforced in Europe, and there is some evidence that identity theft is more common there.

There is also a danger in allowing increased access to information.  A thief who obtains some information about a consumer may be able to use it to spoof the system and obtain access to much more information, which would facilitate more harmful forms of theft.

The more fundamental issue is that there is no cost-benefit analysis showing that any regulation is justified, as I showed in my previous post on this issue.

Filed under: privacy
