
Google Book Project


Google’s efforts to make out-of-print books available online have run into a major stumbling block. Judge Chin ordered that books can only be digitized by Google if the author opts in; the agreement which he threw out called for opt-out.  This is a shame and a highly inefficient result.  As reported, the intricacies of copyright law and the unavailability of many rights holders mean that opt-in is not feasible in many cases.  As a result, thousands of books will not be digitized at all.  Instead of transferring rights to authors (which was apparently Judge Chin’s intent), he has simply destroyed valuable property rights.  This case was argued as an issue of the distribution of rights, but it is really about the creation of rights — or, as it turns out, their non-creation.

Filed under: copyright, google, litigation Tagged: property rights


Search Bias and Antitrust


There is an antitrust debate brewing concerning Google and “search bias,” a term used to describe search engine results that favor the search provider’s own content.  For example, Google might list Google Maps prominently if one searches “maps,” or Microsoft’s Bing might prominently place Microsoft-affiliated content or products.

Apparently both antitrust investigations and Congressional hearings are in the works; regulators and commentators appear poised to attempt to impose “search neutrality” through antitrust or other regulatory means to limit or prohibit the ability of search engines (or perhaps just Google) to favor their own content.  At least one proposal goes so far as to advocate a new government agency to regulate search.  Of course, when I read proposals like this, I wonder where Google’s share of the “search market” will be by the time the new agency is built.

As with the net neutrality debate, I understand that much of the push for search neutrality involves an intense effort to discard the traditional, economically grounded antitrust framework.  The logic for this push is simple.  The economic literature on vertical restraints and vertical integration provides no support for ex ante regulation arising out of the concern that a vertically integrating firm will harm competition by favoring its own content and discriminating against rivals.  Economic theory suggests that such arrangements may be anticompetitive in some instances, but it also provides a plethora of pro-competitive explanations.  Lafontaine & Slade explain the state of the evidence in their recent survey paper in the Journal of Economic Literature:

We are therefore somewhat surprised at what the weight of the evidence is telling us. It says that, under most circumstances, profit-maximizing vertical-integration decisions are efficient, not just from the firms’ but also from the consumers’ points of view. Although there are isolated studies that contradict this claim, the vast majority support it. Moreover, even in industries that are highly concentrated so that horizontal considerations assume substantial importance, the net effect of vertical integration appears to be positive in many instances. We therefore conclude that, faced with a vertical arrangement, the burden of evidence should be placed on competition authorities to demonstrate that that arrangement is harmful before the practice is attacked. Furthermore, we have found clear evidence that restrictions on vertical integration that are imposed, often by local authorities, on owners of retail networks are usually detrimental to consumers. Given the weight of the evidence, it behooves government agencies to reconsider the validity of such restrictions.

Of course, this does not bless all instances of vertical contracts or integration as pro-competitive.  The antitrust approach appropriately eschews ex ante regulation in favor of a fact-specific rule of reason analysis that requires plaintiffs to demonstrate competitive harm in a particular instance. Again, given the strength of the empirical evidence, it is no surprise that advocates of search neutrality, as net neutrality before it, either do not rely on consumer welfare arguments or are willing to sacrifice consumer welfare for other objectives.

I wish to focus on the antitrust arguments for a moment.  In an interview with SFGate, Harvard’s Ben Edelman sketches out an antitrust claim against Google based upon search bias; and to his credit, Edelman provides some evidence in support of his claim.

I’m not convinced.  Edelman’s interpretation of evidence of search bias is detached from antitrust economics.  The evidence is all about identifying whether or not there is bias.  That, however, is not the relevant antitrust inquiry; instead, the question is whether such vertical arrangements, including preferential treatment of one’s own downstream products, are generally procompetitive or anticompetitive.  Examples from other contexts illustrate this point.

Grocery product manufacturers contract for “bias” with supermarkets through slotting contracts and other shelf space payments.  The bulk of economic theory and evidence on these contracts suggests that they are generally efficient and a normal part of the competitive process.  Vertically integrated firms may “bias” their own content in ways that increase output.  Whether the bias occurs within the firm (as is the case with Google favoring its own products) or by contract (the shelf space example) should be of no concern to Edelman and those making search bias antitrust arguments.  Economists have known since Coase — and have been reminded by Klein, Alchian, Williamson and others — that firms may achieve by contract anything they could do within the boundaries of the firm.  The point is that, in the economics literature, it is well known that content self-promoting incentives in a vertical relationship can be either efficient or anticompetitive depending on the circumstances.  The empirical literature suggests that such relationships are mostly pro-competitive and that restrictions upon the ability of firms to enter them generally reduce consumer welfare.

Edelman is an economist, and so I find it a bit odd that he has framed the “bias” debate without reference to any of this literature.  Instead, his approach appears to be that bias generates harm to rivals and that this harm is a serious antitrust problem.  (Or in other places, that the problem is that Google exhibits bias but its employees may have claimed otherwise at various points; this is also antitrust-irrelevant.)  For example, Edelman writes:

Search bias is a mechanism whereby Google can leverage its dominance in search, in order to achieve dominance in other sectors.  So for example, if Google wants to be dominant in restaurant reviews, Google can adjust search results, so whenever you search for restaurants, you get a Google reviews page, instead of a Chowhound or Yelp page. That’s good for Google, but it might not be in users’ best interests, particularly if the other services have better information, since they’ve specialized in exactly this area and have been doing it for years.

“Leveraging” one’s dominance in search, of course, takes a bit more than bias.  But I was quite curious about Edelman’s evidence and so I went and looked at Edelman and Lockwood.  Here is how they characterize their research question: “Whether search engines’ algorithmic results favor their own services, and if so, which search engines do so most, to what extent, and in what substantive areas.”  Here is how the authors describe what they did to test the hypothesis that Google engages in more search bias than other search engines:

To formalize our analysis, we formed a list of 32 search terms for services commonly provided by search engines, such as “email”, “calendar”, and “maps”. We searched for each term using the top 5 search engines: Google, Yahoo, Bing, Ask, and AOL. We collected this data in August 2010.

We preserved and analyzed the first page of results from each search. Most results came from sources independent of search engines, such as blogs, private web sites, and Wikipedia. However, a significant fraction – 19% – came from pages that were obviously affiliated with one of the five search engines. (For example, we classified results from youtube.com and gmail.com as Google, while Microsoft results included msn.com, hotmail.com, live.com, and Bing.)

Here is the underlying data for all 32 terms; so far, so good.  A small pilot study examining whether and to what extent search engines favor their own content is an interesting project — though, again, I’m not sure it says anything about the antitrust issues.  No surprise: they find some evidence that search engines exhibit some bias in favor of affiliated sites.  You can see all of the evidence at Edelman’s site (again, to his credit).  Interpretations of these results vary dramatically.  Edelman sees a serious problem.  Danny Sullivan begs to differ (“Google only favors itself 19 percent of the time”), and also makes the important point that the study took place before Yahoo searches were powered by Bing.

In their study, Edelman and Lockwood appear at least somewhat aware that bias and vertical integration can be efficient although they do not frame it in those terms.  They concede, for example, that “in principle, a search engine might feature its own services because its users prefer these links.”  To distinguish between these two possibilities, they conceive of the following test:

To test the user preference and bias hypotheses, we use data from two different sources on click-through-rate (CTR) for searches at Google, Yahoo, and Bing. Using CTR data from comScore and another service that (with users’ permission) tracks users’ searches and clicks (a service which prefers not to be listed by name), we analyze the frequency with which users click on search results for selected terms. The data span a four-week period, centered around the time of our automated searches.  In click-through data, the most striking pattern is that the first few search results receive the vast majority of users’ clicks. Across all search engines and search terms, the first result received, on average, 72% of users’ clicks, while the second and third results received 13% and 8% of clicks, respectively.

So far, no surprises.  The first listing generates greater incremental click-through than the second or third listing.  Similarly, the eye-level shelf space generates more sales than less prominent shelf space.  The authors have a difficult time distinguishing user preference from bias:

This concentration of users’ clicks makes it difficult to disprove the user preference hypothesis. For example, as shown in Table 1, Google and Yahoo each list their own maps service as the first result for the query “maps”. Our CTR data indicates that Google Maps receives 86% of user clicks when the search is performed on Google, and Yahoo Maps receives 72% of clicks when the search is performed on Yahoo. One might think that this concentration is evidence of users’ preference for the service affiliated with their search engine. On the other hand, since clicks are usually highly concentrated on the first result, it is possible that users have no such preference, and that they are simply clicking on the first result because it appears first. Moreover, since the advantage conferred by a result’s rank likely differs across different search queries, we do not believe it is appropriate to try to control for ranking in a regression.

The interesting question from a consumer welfare perspective is not what happens to the users without a strong preference for Google Maps or Yahoo Maps.  Users without a strong preference are likely to click through on whatever service is offered by their search engine of choice.  There is no significant welfare loss when a consumer who is indifferent between Google Maps and Yahoo Maps chooses one over the other.

The more interesting question is whether users with a strong preference for a non-Google product are foreclosed from access to consumers by search bias.  When Google ranks its Maps above others, but a user with a strong preference for Yahoo Maps finds it listed second, is the user able to find his product of choice?  Probably, if it is listed second.  Probably not, if it is delisted or something more severe.  Edelman reports some data on this issue:

Nevertheless, there is one CTR pattern that would be highly suggestive of bias. Suppose we see a case in which a search engine ranks its affiliated result highly, yet that result receives fewer clicks than lower results. This would suggest that users strongly prefer the lower result — enough to overcome the effect of the affiliated result’s higher ranking.

Of course this is consistent with bias; however, to repeat the critical point, this bias does not inexorably lead to — or even suggest — an antitrust problem.  Let’s recall the shelf space analogy.  Consider a supermarket where Pepsi is able to gain access to the premium eye-level shelf space but consumers have a strong preference for Coke.  Whether or not Pepsi’s promotional efforts will have an impact on competition depends on whether Coke is able to get access to consumers.  In that case, it may involve reaching down to the second or third shelf.  There might be some incremental search costs involved.  And even if one could show that Coke sales declined dramatically in response to Pepsi’s successful execution of its contractual shelf-space bias strategy, that merely shows harm to rivals rather than harm to competition.  If Coke-loving consumers can access their desired product, Coke isn’t harmed, and there is certainly no competitive risk.

So what do we make of evidence that, in the face of search engine bias, click-through data suggest consumers will still pick lower listings?  One inference is that consumers with strong preferences for content other than the biased result nonetheless access their preferred content.  It is difficult to see a competitive problem arising in such an environment.  Edelman anticipates this point somewhat when he observes during his interview:

The thing about the effect I’ve just described is you don’t see it very often. Usually the No. 1 link gets twice as many clicks as the second result. So the bias takes some of the clicks that should have gone to the right result. It seems most users are influenced by the positioning.

This fails to justify Edelman’s position.  First off, in a limited sample of terms, it’s unclear what it means for these reversals not to happen “very often.”  More importantly, so what if the top link gets twice as many clicks as the second link?  The cases where the second link gets the dominant share of click-throughs might well be those where users have a strong preference for the second-listed site.  Even if they are not, the antitrust question is whether search bias is efficient or poses a competitive threat.  Most users might be influenced by the positioning because they lack a strong preference or even any preference at all.  That search engines compete for the attention of those consumers, including through search bias, should not be surprising.  But it does not make out a coherent claim of consumer harm.

The ‘compared to what’ question looms large here.  One cannot begin to answer the search bias problem — if it is a problem at all — from a consumer welfare perspective until one pins down the appropriate counterfactual.  Edelman appears to assume — when he observes that “bias takes some of the clicks that should have gone to the right result” — that the benchmark “right result” is the one that would prevail if listings were correlated perfectly with aggregate consumer preference.  My point here is simple: that comparison is not the one that is relevant to antitrust.  An antitrust inquiry would distinguish harm to competitors from harm to competition; it would focus on whether bias impaired the competitive process by foreclosing rivals from access to consumers, not merely on whether various listings would be improved but for Google’s bias.  The answer to the latter question is clearly yes.  The relevant question, however, is whether that bias is efficient.  Evidence that other search engines with much smaller market shares, and certainly without any market power, exhibit similar bias would suggest to most economists that the practice has some efficiency justifications.  Edelman ignores that possibility, and by doing so ignores decades of economic theory and empirical evidence.  This is a serious error, as the overwhelming lesson of that literature is that restrictions on vertical contracting and integration are a serious threat to consumer welfare.

I do not know what answer the appropriate empirical analysis would reveal.  As Geoff and I argue in this paper, however, I suspect a monopolization case against Google on these grounds would face substantial obstacles.  A deeper understanding of the competitive effects of search engine bias is a worthy project, and Edelman should be applauded for providing some data that make for interesting fodder for discussion.  But my sense is that the economic arguments and existing data do not provide support for an antitrust attack on search bias by Google specifically, nor the basis for a consumer-welfare-grounded search neutrality regime.

Filed under: advertising, antitrust, armen alchian, business, economics, exclusionary conduct, google, monopolization, net neutrality, technology


No Facts, No Problem?


There has been, as is to be expected, plenty of casual analysis of the AT&T / T-Mobile merger to go around.  As I mentioned, I think there are a number of interesting issues to be resolved in an investigation with access to the facts necessary to conduct the appropriate analysis.  Annie Lowrey’s piece in Slate is one of the more egregious offenders, liberally applying “folk economics” to the merger while reaching some very confident conclusions concerning its competitive effects:

Merging AT&T and T-Mobile would reduce competition further, creating a wireless behemoth with more than 125 million customers and nudging the existing oligopoly closer to a duopoly. The new company would have more customers than Verizon, and three times as many as Sprint Nextel. It would control about 42 percent of the U.S. cell-phone market.

That means higher prices, full stop. The proposed deal is, in finance-speak, a “horizontal acquisition.” AT&T is not attempting to buy a company that makes software or runs network improvements or streamlines back-end systems. AT&T is buying a company that has the broadband it needs and cutting out a competitor to boot—a competitor that had, of late, pushed hard to compete on price. Perhaps it’s telling that AT&T has made no indications as of yet that it will keep T-Mobile’s lower rates.

Full stop?  I don’t think so.  Nothing in economic theory says so.  And by the way, 42 percent simply isn’t high enough to tell a merger-to-monopoly story here; and Lowrey concedes some efficiencies from the merger (“buying a company that has the broadband it needs” is an efficiency!).  To be clear, the merger may or may not pose competitive problems as a matter of fact.  The point is that serious analysis must be done in order to evaluate its likely competitive effects.  And of course, Lowrey (HT: Yglesias) has no obligation to conduct serious analysis in a column — nor do I in a blog post.  But the idea that market concentration is an incredibly useful and, in her case, perfectly accurate predictor of price effects is devoid of analytical content and misleads on the relevant economics.  Quite the contrary: so undermined has been the confidence in traditional concentration-price notions of horizontal merger analysis that the antitrust agencies’ 2010 Horizontal Merger Guidelines are premised in large part upon the notion that modern merger analysis considers shares to be an inherently unreliable predictor of competitive effects!  (For what it’s worth, a recent Wall Street Journal column discussing merger analysis makes the same mistake — that is, it suggests that merger analysis comes down to shares and HHIs.  It doesn’t.)
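To see what shares and HHIs do and do not tell you, it helps to have the arithmetic in view.  The sketch below is my own illustration: only the combined firm’s roughly 42 percent share comes from the figures quoted above; the individual carrier shares are hypothetical placeholders, not real data.  It computes the Herfindahl-Hirschman Index before and after a merger, along with the “delta” the Guidelines use as a screen.

```python
# Hypothetical pre-merger national shares, in percentage points. Only the
# combined ~42% figure is from the post; the rest are illustrative.
pre_merger = {"AT&T": 32, "T-Mobile": 10, "Verizon": 31, "Sprint": 12, "Other": 15}

def hhi(shares):
    """HHI = sum of squared market shares (in percentage points)."""
    return sum(s ** 2 for s in shares.values())

# Merge the two carriers into a single combined firm.
post_merger = dict(pre_merger)
post_merger["AT&T/T-Mobile"] = post_merger.pop("AT&T") + post_merger.pop("T-Mobile")

print(hhi(pre_merger))                      # pre-merger HHI: 2454
print(hhi(post_merger))                     # post-merger HHI: 3094
print(hhi(post_merger) - hhi(pre_merger))   # delta: 640 (= 2 x 32 x 10)
```

Under the 2010 Guidelines, a post-merger HHI above 2,500 combined with an increase of more than 200 points triggers a presumption of enhanced market power, but that presumption is rebuttable, and the Guidelines themselves treat concentration as a starting point for analysis rather than a predictor of price effects, which is exactly the point above.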

To be sure, the merger of large firms with relatively large shares may attract significant attention, may suggest that the analysis drags on for a longer period of time, and likely will provide an opportunity for the FCC to extract some concessions.  But what I’m talking about is the antitrust economics here, not the political economy.  That is, will the merger increase prices and harm consumers?  With respect to the substantive merits, there is a fact-intensive economic analysis that must be done before anybody makes strong predictions about competitive effects.  The antitrust agencies will conduct that analysis.  So will the parties.  Indeed, the reported $3 billion termination fee suggests that AT&T is fairly confident it will get this through; and it clearly thought of this in advance.  It is not as if the parties’ efficiencies contentions are facially implausible.  The idea that the merger could alleviate spectrum exhaustion, that there are efficiencies in spectrum holdings, and that this will facilitate expansion of LTE are worth investigating on the facts; just as the potentially anticompetitive theories are.   I don’t have strong opinions on the way that analysis will come out without doing it myself or at least having access to more data.

I’m only reacting to, and rejecting, the idea that we should simplify merger analysis to two propositions: (1) an increase in concentration leads to higher prices, and (2) when the data don’t comport with (1), we can dismiss them by asserting without evidence that prices would have fallen even more.  This approach is, let’s just say, problematic.

In the meantime, the Sprint CEO has publicly criticized the deal.  As I’ve discussed previously, economic theory and evidence suggest that when rivals complain about a merger, it is likely to increase competition rather than reduce it.  This is, of course, a rule of thumb.  But it is one that generates much more reliable inferences than the simple view — rejected by both theory and evidence — that a reduction in the number of firms leads to higher prices.  Yglesias points out, on the other hand, that rival Verizon’s prices increased post-merger (but did it experience abnormal returns?  What about other rivals?), suggesting the market expects the merger to create market power.  At least there we are in the world of casual empiricism rather than misusing theory.

Adam Thierer at Tech Liberation Front provides some insightful analysis as to the political economy of deal approval.   Karl Smith makes a similar point here.

Filed under: antitrust, economics, merger guidelines, mergers & acquisitions, technology


The AT&T and T-Mobile Merger


The big merger news is that AT&T is planning to acquire T-Mobile.  From the AT&T press release:

AT&T Inc. (NYSE: T) and Deutsche Telekom AG (FWB: DTE) today announced that they have entered into a definitive agreement under which AT&T will acquire T-Mobile USA from Deutsche Telekom in a cash-and-stock transaction currently valued at approximately $39 billion. The agreement has been approved by the Boards of Directors of both companies.

AT&T’s acquisition of T-Mobile USA provides an optimal combination of network assets to add capacity sooner than any alternative, and it provides an opportunity to improve network quality in the near term for both companies’ customers. In addition, it provides a fast, efficient and certain solution to the impending exhaustion of wireless spectrum in some markets, which limits both companies’ ability to meet the ongoing explosive demand for mobile broadband.

With this transaction, AT&T commits to a significant expansion of robust 4G LTE (Long Term Evolution) deployment to 95 percent of the U.S. population to reach an additional 46.5 million Americans beyond current plans – including rural communities and small towns.  This helps achieve the Federal Communications Commission (FCC) and President Obama’s goals to connect “every part of America to the digital age.” T-Mobile USA does not have a clear path to delivering LTE.

As the press release suggests, the potential efficiencies of the deal lie in relieving spectrum exhaustion in some markets as well as 4G LTE.  AT&T President Ralph De La Vega, in an interview, described the potential gains as follows:

The first thing is, this deal alleviates the impending spectrum exhaust challenges that both companies face. By combining the spectrum holdings that we have, which are complementary, it really helps both companies.  Second, just like we did with the old AT&T Wireless merger, when we combine both networks what we are going to have is more network capacity and better quality as the density of the network grid increases. In major urban areas, whether Washington, D.C., New York or San Francisco, by combining the networks we actually have a denser grid. We have more cell sites per grid, which allows us to have a better capacity in the network and better quality. It’s really going to be something that customers in both networks are going to notice.

The third point is that AT&T is going to commit to expand LTE to cover 95 percent of the U.S. population.

T-Mobile didn’t have a clear path to LTE, so their 34 million customers now get the advantage of having the greatest and latest technology available to them, whereas before that wasn’t clear. It also allows us to deliver that to 46.5 million more Americans than we have in our current plans. This is going to take LTE not just to major cities but to rural America.

At least some of the need for more spectrum is attributable to the success of the iPhone:

This transaction quickly provides the spectrum and network efficiencies necessary for AT&T to address impending spectrum exhaust in key markets driven by the exponential growth in mobile broadband traffic on its network. AT&T’s mobile data traffic grew 8,000 percent over the past four years and by 2015 it is expected to be eight to 10 times what it was in 2010. Put another way, all of the mobile traffic volume AT&T carried during 2010 is estimated to be carried in just the first six to seven weeks of 2015. Because AT&T has led the U.S. in smartphones, tablets and e-readers – and as a result, mobile broadband – it requires additional spectrum before new spectrum will become available.
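The press release’s arithmetic is internally consistent: if 2015 annual traffic is eight to ten times the 2010 volume, then all of 2010’s traffic fits into the first several weeks of 2015.  A quick back-of-the-envelope check (my own sketch, using only the figures quoted above):

```python
# Sanity-check the quoted claims: 8,000% growth over four years, and 2015
# traffic projected at 8x to 10x the 2010 annual volume.
growth_pct = 8000
multiple_4yr = 1 + growth_pct / 100   # 8,000% growth means traffic is 81x its starting level

# Weeks of 2015 needed to carry all of 2010's annual traffic = 52 / multiple.
for multiple in (8, 10):
    weeks = 52 / multiple
    print(f"{multiple}x: 2010's traffic carried in {weeks:.1f} weeks of 2015")
# 8x gives 6.5 weeks, matching the quoted "first six to seven weeks of 2015"
# at the low end of the projected range; 10x gives 5.2 weeks.
```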

On regulatory concerns, De La Vega observes:

We are very respectful of the processes the Department of Justice and (other regulators) use.  The criteria that has been used in the past for mergers of this type is that the merger is looked at (for) the benefits it brings on a market-by-market basis and how it impacts competition.

Today, when you look across the top 20 markets in the country, 18 of those markets have five or more competitors, and when you look across the entire country, the majority of the country’s markets have five or more competitors. I think if the criteria that has been used in the past is used against this merger, I think the appropriate authorities will find there will still be plenty of competition left.

If you look at pricing as a key barometer of the competition in an industry, our industry despite all of the mergers that have taken place in the past, (has) actually reduced prices to customers 50 percent since 1999. Even when these mergers have been done in the past they have always benefited the customers and we think they will benefit again.

Obviously, the deal is expected to generate significant regulatory scrutiny and will trigger a lot of interesting discussion and analysis of the state of wireless competition in the U.S.  With the forthcoming FCC Wireless Competition Report likely to signal the FCC’s position on the issue, and approval authority split between the FCC and the conventional antitrust agencies, there appears to be significant potential for inter-agency conflict.

Greg Sterling at Search Engine Land notes that “AT&T and T-Mobile said that they expect regulatory review to take up to 12 months.”  I’ll take the “over.”  Sterling also notes, in an interesting post, the $3 billion termination fee owed to T-Mobile if the deal gets blocked.  How’s that for a confidence signal?  In any event, it will be interesting to watch this unfold.

Here is an interesting preview, from AT&T executives this morning, of some of the arguments AT&T will be advancing in the coming months to achieve regulatory approval.  Among the most critical issues to parse will be prior experience with cellular mergers and whether, in fact, the merger is likely to bring about substantial efficiencies and facilitate bringing LTE to new markets.  The preview includes a chart suggesting, based upon past experience, that significant price increases are not likely as a result of the merger.

No doubt there will be further opportunity to comment upon developments here over the next 12-18 months.

Filed under: antitrust, merger guidelines, mergers & acquisitions


Proposed Privacy Legislation


The Obama Administration is advocating a privacy bill.  One provision will limit the use of data to the purpose for which it was collected unless a consumer gives permission for additional uses; another will give consumers increased rights to access information about themselves.

Both of these provisions may actually reduce the safety of data online.  One additional purpose for which data can be used is to verify identity in cases where there is some doubt.  Many of us have had the experience of a merchant calling a credit card company, which then asks a series of questions to verify our identity.  This bill would apparently make that process more difficult, leading either to increased inconvenience or increased risk.  A similar provision is enforced in Europe, and there is some evidence that identity theft is more common there.

There is also a danger in allowing increased access to information.  A thief who obtains some information about a consumer may be able to use it to spoof the system and obtain access to much more information, facilitating more harmful forms of theft.

The more fundamental issue is that there is no cost benefit analysis showing that any regulation is justified, as I showed in my previous post on this issue.

Filed under: privacy


March 15: Kick-Off for The Law School Hiring Cartel


If you’re currently a law professor and you’re thinking you might want to change schools (because, for example, your school continued its precipitous slide in the law school rankings . . . more about that later), you’d better hop on the phone. Today is your last day to snag a visiting offer from another law school. (You’ve already missed the deadline for procuring a permanent offer. That was March 1.)

The competing law schools, you see, have agreed to limit competition amongst themselves for law professor talent. Pursuant to the Association of American Law Schools’ Statement of Good Practices for the Recruitment of and Resignation by Full-Time Faculty Members, the law schools have pledged to “make an[y] offer of an indefinite appointment as a teacher during the following academic year no later than March 1 and of a visiting appointment no later than March 15.”

If this arrangement strikes you as legally suspect, congratulations. You know more about antitrust law than does the Executive Committee of the AALS. This arrangement is, quite simply, an unreasonable horizontal restraint of trade — not unlike an agreement among Ford, Chrysler, and GM that they will not poach engineers and designers from one another during the six-month period preceding the debut of new models. They’d no doubt love to adopt an agreement like that, but their lawyers would wisely counsel against doing so.

Members of the AALS, of course, contend that their little arrangement is fine. First, they insist it’s merely a “statement of good practices,” not an actual agreement, which is necessary to satisfy the “contract, combination, or conspiracy” element of Sherman Act Section 1. In addition, they maintain, it’s not an “unreasonable” restraint of trade. They’re wrong on both points.

As anyone who’s spent time teaching in a law school knows, law school administrators treat the hiring arrangement as though it is an actual agreement that binds them. They speed up recruiting to ensure that they make lateral offers before the cut-off dates. They talk to each other about the arrangement as though they recognize it as a common commitment.  Moreover, even if the arrangement were not the product of an express agreement, a reasonable factfinder would infer agreement from the fact that the law schools, in collectively adhering to the “good practice,” are engaging in consciously parallel behavior that would make no sense to adopt unilaterally (i.e., any law school that unilaterally adopted a policy of refusing to poach from its rivals after a certain date would hurt itself without procuring any economic benefit).  Thus, there is no question that the arrangement represents an antitrust “agreement” under prevailing legal standards.

The restraint is also unreasonable.  Most likely, the agreement would be deemed a “naked” restraint of trade.  Herbert Hovenkamp has written that “a serviceable definition of a naked restraint is one whose profitability depends on the exercise of market power.”  (The Antitrust Enterprise: Principle and Execution 112 (2005).)  If the participants in this agreement didn’t have market power — e.g., if only 20 of the nearly 200 law schools in the AALS adopted this policy — the policy wouldn’t really work.  An exercise of market power seems to be required for the arrangement to have efficacy, so the restraint is probably naked.  And if it’s naked, then it’s per se unreasonable and thus illegal.

But even if the arrangement is not naked, it would be condemned under a more probing analysis — either a “quick look” or a full-on rule of reason analysis.  Because it precludes law professors from procuring other offers when they’re most valuable to their existing employers (and thus in the best position to extract salaries approaching their actual worth), the arrangement systematically drives faculty salaries below the levels that persist in free competition.  This is especially galling because most law professors do not learn what they will be paid the following year until after the deadline for procuring another offer has passed.

The law schools would contend, of course, that this anticompetitive effect is outweighed by an important benefit: avoidance of the disruption that inevitably occurs when a professor resigns late in the spring term, after the fall schedule has been completed and students have selected courses.  There are at least two problems with that argument.

First, the argument really amounts to an assertion that vigorous competition among schools for professor talent is itself unreasonable because it leads to messy results.  The Supreme Court has rejected that line of argument in no uncertain terms.  In the Professional Engineers case, the Court condemned an agreement among engineers not to discuss price with potential clients, despite the engineers’ insistence that the agreement was necessary to prevent the shoddy design work that would result from low engineering prices.  The engineers’ public safety argument would not fly, the Court concluded, because “the Rule of Reason does not support a defense based on the assumption that competition itself is unreasonable.”  The Court explained that the Sherman Act is premised on “[t]he assumption that competition is the best method of allocating resources in a free market,” and it insisted that “[e]ven assuming occasional exceptions to the presumed [good] consequences of competition, the statutory policy precludes inquiry into the question whether competition is good or bad.”  Thus, the competing law schools can’t agree not to compete for labor just because doing so is hard.

Second, even if the policy at issue creates a good effect — reduced disruption from untimely resignations — there are less restrictive means of securing that end.  Each individual law school could negotiate resignation rules with its own professors.  For example, a law school could agree with its professors that they may not resign after Date X, and it could even bargain for a liquidated damages provision that would compensate it for any disruption occurring from breach.  Law schools would then compete with each other on this contract term — some would allow later resignations than others.  They could even have different resignation dates for different professors.  This sort of unilateral, non-collusive solution to the problem of untimely resignations would preserve competition for labor resources, causing them to be allocated more efficiently.

Of course, the members of the AALS know that they’re unlikely to be sued over this policy.  After all, any disgruntled law professors who brought suit over the policy would signal that they are prone to litigate and would thereby reduce their attractiveness as lateral candidates.  You’d think, though, that the AALS would show a little more respect for the law.

Filed under: antitrust, cartels, law school

Continue reading
Antitrust & Consumer Protection

The NFL Lawyers Up

Popular Media For a possible antitrust suit following from the players’ decertification: The National Football League geared up for its antitrust battle against players Saturday by hiring . . .

For a possible antitrust suit following the players’ decertification:

The National Football League geared up for its antitrust battle against players Saturday by hiring two prominent attorneys for its legal team.

David Boies, who represented Al Gore in the Bush vs. Gore case following the 2000 election, and who last year won a $1.3 billion copyright infringement verdict for Oracle, will represent the N.F.L. in the suit brought by players against the league after the dissolution of the players union Friday. He is considered one of the country’s leading trial lawyers.

Also joining Gregg Levy, the longtime outside counsel for the N.F.L., will be Paul Clement, who served for three years as the U.S. Solicitor General for President George W. Bush and who has argued more than 50 cases before the U.S. Supreme Court.

Hearings on the injunction the players are seeking, which would lift the lockout planned by team owners, could begin as early as next week.

NYT

Michael McCann provides a nice review of the state of play and where things are likely to go from here.


Filed under: antitrust, sports

Continue reading
Antitrust & Consumer Protection

Privacy Cost-Benefit Analysis

Popular Media As I mentioned in my previous post, there is a strong effort to regulate the use of information on the web in the name of “privacy.” The basic tradeoff . . .

As I mentioned in my previous post, there is a strong effort to regulate the use of information on the web in the name of “privacy.” The basic tradeoff that drives the web is that firms use information for advertising and other purposes, and in return consumers get lots of things free.  Google alone offers about 40 free services, including the original search engine, Gmail, Maps, and the increasingly popular Android operating system for mobile devices. Facebook is another set of free services. There are hundreds of others, all ultimately funded by advertising and the use of information.  Any effort to regulate information is going to change the terms on which these services are offered.

To justify regulation, two conditions must be met.  First, there must be some market failure.  Second, there must be at least an expectation that the benefits of the proposed regulation will outweigh the costs.  In a market economy, we generally put the burden of proof on those proposing regulation, since the default assumption is that markets provide net benefits.  Proponents of regulating the use of information on the internet have met neither of these burdens.

One main justification for regulation is that people do not want to be tracked. I discussed this issue in my previous post.  Let me just add that, while people express a desire not to be tracked, in practice they seem quite willing to trade information for other services.  The other issue is identity theft — the possibility that information will be misused for illegitimate purposes.  Tom Lenard and I have written extensively about this issue. The bottom line, however, is that consumers are not liable for much if any of the costs of identity theft, and since firms must bear these costs there is no obvious market failure.

With respect to the second issue, there has been virtually no effort to undertake any cost-benefit analysis of the proposed regulations.  However, if there were such an analysis, it is unlikely that regulation would be cost justified, since the benefits of the free stuff are huge and the costs are small at best.  While it is conceivable that some tweaking would pass a cost-benefit test, it is very unlikely that any regulation which could get through the political process and then be administered by an agency such as the FTC would in fact pass this test.  Moreover, the proposed regulations, such as a “do not track” list or shifting from opt out to opt in, are well beyond “tweaking” and might fundamentally change the terms of the tradeoff.

The bottom line is this:  Privacy advocates act as if privacy is free.  But increased privacy means reduced use of information, and no one has shown that altering the terms of this tradeoff would be beneficial to consumers.

Filed under: cost-benefit analysis, privacy Tagged: cost-benefit analysis, privacy

Continue reading
Data Security & Privacy

Privacy and Tracking

Popular Media First I would like to thank Geoff Manne for inviting me to join this blog.  I know most of my fellow bloggers and it is . . .

First I would like to thank Geoff Manne for inviting me to join this blog.  I know most of my fellow bloggers and it is a group I am proud to be associated with.

For my first few posts I am going to write about privacy.  This is a hot topic.  Senators McCain and Kerry are floating a privacy bill, and the FTC is also looking at privacy. I have written a lot about privacy (mostly with Tom Lenard of the Technology Policy Institute, where I am a senior fellow).

The issue of the day is “tracking.”  There are several proposals for “do not track” legislation and polls show that consumers do not want to be tracked.

The entire fear of being tracked is based on an illusion.  It is a deep illusion, and difficult or impossible to eliminate, but still an illusion.   People are uncomfortable with the idea that someone knows what they are doing.  (It is “creepy.”)  But in fact no person knows what you are doing, even if you are being tracked. Only a machine knows.

As humans, we have difficulty understanding that something can be “known” but nonetheless not known by anyone.   We do not understand that we can be “tracked” but that no one is tracking us.  That is, data on our searches may exist on a server somewhere so that the server “knows” it, but no human knows it.  We don’t intuitively grasp this concept because it is entirely alien to our evolved intelligence.

In my most recent paper (with Michael Hammock, coming out in Competition Policy International) we cite two books by Clifford Nass (C. Nass & C. Yen, The Man Who Lied to His Laptop: What Machines Teach Us About Human Relationships (2010), and B. Reeves & C. Nass, The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places (1996, 2002)).  Nass and his coauthors show that people automatically treat intelligent machines like other people.  For example, when asked to fill out a questionnaire about the quality of a computer, people rate the machine higher if they fill out the form on the computer being rated than if they do so on another computer — they don’t want to hurt the computer’s feelings.  Privacy is like that — people can’t adapt to the notion that a machine knows something. They assume (probably unconsciously) that if something is known then a person knows it, and this is why they do not like being tracked.

One final point about tracking.  Even if you are tracked, the purpose is to find out what you want and sell it to you.  Selling people things they want is the essence of the market economy, and if tracking does a better job of this, then it is helping the market function better, and also helping consumers get products that are a better fit.  Why should this make anyone mad?

Filed under: advertising, consumer protection, privacy, regulation, truth on the market Tagged: “do not track”, privacy, tracking

Continue reading
Antitrust & Consumer Protection