
Is Google Search Bias Consistent with Anticompetitive Foreclosure?

Popular Media In my series of three posts (here, here and here) drawn from my empirical study on search bias I have examined whether search bias exists, . . .

In my series of three posts (here, here and here) drawn from my empirical study on search bias I have examined whether search bias exists, and, if so, how frequently it occurs.  This, the final post in the series, assesses the results of the study (as well as the Edelman & Lockwood (E&L) study to which it responds) to determine whether the own-content bias I’ve identified is in fact consistent with anticompetitive foreclosure or is otherwise sufficient to warrant antitrust intervention.

As I’ve repeatedly emphasized, while I refer to differences among search engines’ rankings of their own or affiliated content as “bias,” without more these differences do not imply anticompetitive conduct.  It is wholly unsurprising and indeed consistent with vigorous competition among engines that differentiation emerges with respect to algorithms.  However, it is especially important to note that the theories of anticompetitive foreclosure raised by Google’s rivals involve very specific claims about these differences.  Properly articulated vertical foreclosure theories proffer both that bias is (1) sufficient in magnitude to exclude Google’s rivals from achieving efficient scale, and (2) actually directed at Google’s rivals.  Unfortunately for search engine critics, their theories fail on both counts.  The observed own-content bias appears neither to be extensive enough to prevent rivals from gaining access to distribution nor does it appear to target Google’s rivals; rather, it seems to be a natural result of intense competition between search engines and of significant benefit to consumers.

Vertical foreclosure arguments are premised upon the notion that rivals are excluded with sufficient frequency and intensity as to render their efforts to compete for distribution uneconomical.  Yet the empirical results simply do not indicate that market conditions are in fact conducive to the types of harmful exclusion contemplated by application of the antitrust laws.  Rather, the evidence indicates that (1) the absolute level of search engine “bias” is extremely low, and (2) “bias” is not a function of market power, but an effective strategy that has arisen as a result of serious competition and innovation between and by search engines.  The first finding undermines competitive foreclosure arguments on their own terms, that is, even if there were no pro-consumer justifications for the integration of Google content with Google search results.  The second finding, even more importantly, reveals that the evolution of consumer preferences for more sophisticated and useful search results has driven rival search engines to satisfy that demand.  Both Bing and Google have shifted toward these results, rendering the complained-of conduct equivalent to satisfying the standard of care in the industry–not restraining competition.

A significant lack of search bias emerges in the representative sample of queries.  This result is entirely unsurprising, given that bias is relatively infrequent even in E&L’s sample of queries specifically designed to identify maximum bias.  In the representative sample, the total percentage of queries for which Google references its own content when rivals do not is even lower—only about 8%—meaning that Google favors its own content far less often than critics have suggested.  This fact is crucial and highly problematic for search engine critics, as their burden in articulating a cognizable antitrust harm includes not only demonstrating that bias exists, but further that it is actually competitively harmful.  As I’ve discussed, bias alone is simply not sufficient to demonstrate any prima facie anticompetitive harm as it is far more often procompetitive or competitively neutral than actively harmful.  Moreover, given that bias occurs in less than 10% of queries run on Google, anticompetitive exclusion arguments appear unsustainable.

Indeed, theories of vertical foreclosure find virtually zero empirical support in the data.  Moreover, it appears that, rather than being a function of monopolistic abuse of power, search bias has emerged as an efficient competitive strategy, allowing search engines to differentiate their products in ways that benefit consumers.  I find that when search engines do reference their own content on their search results pages, it is generally unlikely that another engine will reference this same content.  However, the fact that both this percentage and the absolute level of own-content inclusion are similar across engines indicates that this practice is not a function of market power (or its abuse), but is rather an industry standard.  In fact, despite conducting a much smaller percentage of total consumer searches, Bing is consistently more biased than Google, illustrating that the benefits search engines enjoy from integrating their own content into results are not necessarily a function of search engine size or volume of queries.  These results are consistent with an efficient business practice and stand in significant tension with arguments that such integration is designed to facilitate competitive foreclosure.

Inclusion of own content accordingly appears to be just one dimension upon which search engines have endeavored to satisfy and anticipate heterogeneous and dynamic consumer preferences.  Consumers today likely make strategic decisions as to which engine to run their searches on, and certainly expect engines to return far more complex results than were available just a few years ago. For example, over the last few years, search engines have begun “personalizing” search results, tailoring results pages to individual searchers, and allowing users’ preferences to be reflected over time.  While the traditional “10 blue links” results page is simply not an effective competitive strategy today, it appears that own-content inclusion is.  By developing and offering their own products in search results, engines are better able to directly satisfy consumer desires.

Moreover, the purported bias does not involve attempts to prominently display Google’s own general or vertical search content over that of rivals.  Consider the few queries in Edelman & Lockwood’s small sample of terms for which Google returned Google content within the top three results but neither Bing nor Blekko referenced the same content anywhere on their first page of results.  For the query “voicemail,” for example, Google refers to both Google Voice and Google Talk; both instances appear unrelated to the grievances of general and vertical search rivals.  The query “movie” results in a OneBox with the next 3 organic results including movie.com, fandango.com, and yahoo.movies.com.  The single instance in Edelman & Lockwood’s sample for which Google ranks its own content in the Top 3 positions but this content is not referred to at all on Bing’s first page of results is a link to blogger.com in response to the query “blog.”  It is difficult to construct a story whereby this result impedes Bing’s competitive position.  In fact, none of these examples suggests that efforts to anticompetitively foreclose rivals are in play.  To the contrary, each seems to be a result of simple and expected procompetitive product differentiation.

Overall, the evidence reveals very little search engine bias, and no overwhelming or systematic biasing by Google against  search competitors.  Indeed, the data simply do not support claims that own-content bias is of the nature, quality, or magnitude to generate plausible antitrust concerns.  To the contrary, the results strongly suggest that own-content bias fosters natural and procompetitive product differentiation.  Accordingly, search bias is likely beneficial to consumers—and is clearly not indicative of harm to consumer welfare.

Antitrust regulators should proceed with caution when evaluating such claims given the overwhelmingly consistent economic learning concerning the competitive benefits that vertical integration generally provides to consumers.  Serious care must be taken not to deter vigorous competition between search engines and the natural competitive process of rivals constantly vying to best one another in serving consumers.

Filed under: advertising, antitrust, business, economics, exclusionary conduct, google, Internet search, law and economics, monopolization, technology Tagged: Bias, Bing, Blekko, Competition law, Edelman, google, microsoft, Web search engine

Continue reading
Antitrust & Consumer Protection

A Quick Assessment of the FCC’s Appalling Staff Report on the AT&T Merger

Popular Media As everyone knows by now, AT&T’s proposed merger with T-Mobile has hit a bureaucratic snag at the FCC.  The remarkable decision to refer the merger . . .

As everyone knows by now, AT&T’s proposed merger with T-Mobile has hit a bureaucratic snag at the FCC.  The remarkable decision to refer the merger to the Commission’s Administrative Law Judge (in an effort to derail the deal) and the public release of the FCC staff’s internal, draft report are problematic and poorly considered.  But far worse is the content of the report on which the decision to attempt to kill the deal was based.

With this report the FCC staff joins the exalted company of AT&T’s complaining competitors (surely the least reliable judges of the desirability of the proposed merger if ever there were any) and the antitrust policy scolds and consumer “advocates” who, quite literally, have never met a merger of which they approved.

In this post I’m going to hit a few of the most glaring problems in the staff’s report, and I hope to return again soon with further analysis.

As it happens, AT&T’s own response to the report is actually very good and it effectively highlights many of the key problems with the staff’s report.  While it might make sense to take AT&T’s own reply with a grain of salt, in this case the reply is, if anything, too tame.  No doubt the company wants to keep in the Commission’s good graces (it is the very definition of a repeat player at the agency, after all).  But I am not so constrained.  Using the company’s reply as a jumping off point, let me discuss a few of the problems with the staff report.

First, as the blog post (written by Jim Cicconi, Senior Vice President of External & Legislative Affairs) notes,

We expected that the AT&T-T-Mobile transaction would receive careful, considered, and fair analysis.   Unfortunately, the preliminary FCC Staff Analysis offers none of that.  The document is so obviously one-sided that any fair-minded person reading it is left with the clear impression that it is an advocacy piece, and not a considered analysis.

In our view, the report raises questions as to whether its authors were predisposed.  The report cherry-picks facts to support its views, and ignores facts that don’t.  Where facts were lacking, the report speculates, with no basis, and then treats its own speculations as if they were fact.  This is clearly not the fair and objective analysis to which any party is entitled, and which we have every right to expect.

OK, maybe they aren’t pulling punches.  That this reply was written with such scathing language, despite AT&T’s expectation that it will have to go right back to the FCC to get approval for this deal in some form or another, itself speaks volumes about the undeniable shoddiness of the report.

Cicconi goes on to detail five areas where AT&T thinks the report went seriously awry:  “Expanding LTE to 97% of the U.S. Population,” “Job Gains Versus Losses,” “Deutsche Telekom, T-Mobile’s Parent, Has Serious Investment Constraints,” “Spectrum” and “Competition.”  I have dealt with a few of these issues at some length elsewhere, including most notably here (noting how the FCC’s own wireless competition report “supports what everyone already knows: falling prices, improved quality, dynamic competition and unflagging innovation have led to a golden age of mobile services”), and here (“It is troubling that critics–particularly those with little if any business experience–are so certain that even with no obvious source of additional spectrum suitable for LTE coming from the government any time soon, and even with exponential growth in broadband (including mobile) data use, AT&T’s current spectrum holdings are sufficient to satisfy its business plans”).

What is really galling about the staff report—and, frankly, the basic posture of the agency—is that its criticisms really boil down to one thing:  “We believe there is another way to accomplish (something like) what AT&T wants to do here, and we’d just prefer they do it that way.”  This is central planning at its most repugnant.  Both what is assumed and what is lacking in this basic posture are beyond the pale for an allegedly independent government agency—and as Larry Downes notes in the linked article, the agency’s hubris and its politics may have real, costly consequences for all of us.

Competition

But procedure must be followed, and the staff thus musters a technical defense to support its basic position, starting with the claim that the merger will result in too much concentration.  Blinded by its new-found love for HHIs, the staff commits a few blunders.  First, it claims that concentration levels like those in this case “trigger a presumption of harm” to competition, citing the DOJ/FTC Merger Guidelines.  Alas, as even the report’s own footnotes reveal, the Merger Guidelines actually say that highly concentrated markets with HHI increases of 200 or more trigger a presumption that the merger will “enhance market power.”  This is not, in fact, the same thing as harm to competition.  Elsewhere the staff calls this—a merger that increases concentration and gives one firm an “undue” share of the market—“presumptively illegal.”  Perhaps the staff could use an antitrust refresher course.  I’d be happy to come teach it.
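
For readers who want to see the mechanics, here is a minimal sketch of the HHI screen the staff invokes, written in Python.  The market shares are hypothetical placeholders chosen purely for illustration; they are not figures from the staff report.

  # Herfindahl-Hirschman Index: the sum of squared market shares (in percent).
  def hhi(shares_pct):
      return sum(s ** 2 for s in shares_pct)

  # Hypothetical five-firm market; the acquirer absorbs the 12% firm.
  pre_merger = [32, 31, 17, 12, 8]
  post_merger = [32 + 12, 31, 17, 8]

  pre, post = hhi(pre_merger), hhi(post_merger)
  delta = post - pre                      # 2482 -> 3250, an increase of 768

  # Under the Guidelines, an HHI increase of 200 or more in a highly
  # concentrated market (post-merger HHI above 2,500) triggers a presumption
  # that the merger is likely to "enhance market power" -- which, as noted
  # above, is not the same thing as demonstrated harm to competition.
  presumptively_enhances_market_power = post > 2500 and delta >= 200

The point of the sketch is only that the Guidelines’ presumption runs to enhanced market power; it is not, by itself, a finding of harm to competition.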

Not only is there no actual evidence of consumer harm resulting from the sort of increases in concentration that might result from the merger, but the staff seems to derive its negative conclusions despite the damning fact that the data shows that wireless markets have seen considerable increases in concentration along with considerable decreases in prices, rather than harm to competition, over the last decade.  While high and increasing HHIs might indicate a need for further investigation, when actual evidence refutes the connection between concentration and price, they simply lose their relevance.  Someone should tell the FCC staff.

This is a different Wireless Bureau than the one that wrote so much sensible material in the 15th Annual Wireless Competition Report.  That Bureau described a complex, dynamic, robust mobile “ecosystem” driven not by carrier market power and industrial structure, but by rapid evolution and technological disruptors.  The analysis here wishes away every important factor that every consumer knows to be the real drivers of price and innovation in the mobile marketplace, including, among other things:

  1. Local markets, where there are five, six, or more carriers to choose from;
  2. Non-contract/pre-paid providers, whose strength is rapidly growing;
  3. Technology that is making more bands of available spectrum useful for competitive offerings;
  4. The reality that LTE will make inter-modal competition a reality; and
  5. The reality that churn is rampant and consumer decision-making is driven today by devices, operating systems, applications and content – not networks.

The resulting analysis is stilted and stale, and describes a wireless industry that exists only in the agency’s collective imagination.

There is considerably more to say about the report’s tortured unilateral effects analysis, but it will have to wait for my next post.  Here I want to quickly touch on two of the other issues called out by Cicconi’s blog post.

Jobs

First, although it’s not really in my bailiwick to comment on the job claims that have been such an important aspect of the public conversations surrounding this merger, some things are simple logic, and the staff’s contrary claims here are inscrutable.  As Cicconi suggests, it is hard to understand how the $8 billion investment and build-out required to capitalize on AT&T’s T-Mobile purchase will fail to produce a host of jobs, how the creation of a more-robust, faster broadband network will fail to ignite even further growth in this growing sector of the economy, and, finally, how all this can fail to happen while the FCC’s own (relatively) paltry $4.5 billion broadband fund will somehow nevertheless create approximately 500,000 (!!!) jobs.  Even Paul Krugman knows that private investment is better than government investment in generating stimulus – the claim is that there’s not enough of it, not that it doesn’t work as well.  Here, however, the fiscal experts on the FCC’s staff have determined that massive private funding won’t create even 96,000 jobs, although the same agency claims that government funding only one half as large will create five times that many jobs.  Um, really?
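
To make the arithmetic concrete, here is the back-of-the-envelope comparison implied by those round figures, sketched in Python; the calculation is mine, using the numbers cited above rather than anything in the staff’s own model.

  # Jobs-per-dollar figures implied by the two claims discussed above.
  fcc_fund_billions, fcc_fund_jobs = 4.5, 500_000            # FCC broadband fund claim
  att_investment_billions, staff_jobs_ceiling = 8.0, 96_000  # staff's ceiling for the merger

  jobs_per_billion_fund = fcc_fund_jobs / fcc_fund_billions               # ~111,000
  jobs_per_billion_private = staff_jobs_ceiling / att_investment_billions # ~12,000

  # On the staff's own numbers, a dollar of private investment is credited
  # with at most about one-ninth of the job creation the agency attributes
  # to a dollar of its own, much smaller fund.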

Meanwhile the agency simply dismisses AT&T’s job preservation commitments.  Now, I would also normally disregard such unenforceable pronouncements as cheap talk – except given the frequency and the volume with which AT&T has made them, they would suffer pretty mightily for failing to follow through on them now.  Even more important perhaps, I have to believe (again, given the vehemence with which they have made the statements and the reality of de facto, reputational enforcement) they are willing to agree to whatever is in their control in a consent decree, thus making them, in fact, legally enforceable.  For the staff to so blithely disregard AT&T’s claims on jobs is unintelligible except as farce—or venality.

Spectrum

Although the report rarely misses an opportunity to fail to mention the spectrum crisis that has been at the center of the Administration’s telecom agenda and the focus of the National Broadband Plan, coincidentally authored by the FCC’s staff, the crux of the report seems to come down to a stark denial that such a spectrum crunch even exists.  As I noted, much of the staff report amounts to an extended meditation on why the parties can and should run their businesses as the staff say they can and should.  The report’s section assessing the parties’ claims regarding the transition to LTE (para 210, ff.) is remarkable.  It begins thus:

One of the Applicants’ primary justifications for the necessity of this transaction is that, as standalone firms, AT&T and T-Mobile are, and will continue to be, spectrum and capacity constrained. Due to these constraints, we find it more plausible that a spectrum constrained firm would maximize deployment of more spectrally efficient LTE, rather than limit it. Transitioning to LTE is primarily a function of only two factors: (1) the extent of LTE capable equipment deployed on the network and (2) the penetration of LTE compatible devices in the subscriber base. Although it may make it more economical, the transition does not require “spectrum headroom” as the Applicants claim. Increased deployment could be achieved by both of the Applicants on a standalone basis by adding the more spectrally efficient LTE-capable radios and equipment to the network and then providing customers with dual mode HSPA/LTE devices. . . .

Forget the spectrum crunch!  It is the very absence of spectrum that will give firms the incentive and the ability to transition to more-efficient technology.  And all they have to do is run duplicate equipment on their networks and give all their customers new devices overnight.  And, well, the whole business model fits in a few paragraphs, entails no new spectrum, actually creates spectrum, and meets all foreseeable demand (as long as demand never increases, which, of course, the report conveniently fails to assess).

Moreover, claims the report, AT&T’s transition to LTE flows inevitably from its competition with Verizon.  But, as Cicconi points out, the staff is unprincipled in its disparate treatment of the industry’s competitive conditions.  Somehow, without T-Mobile in the mix, prices will skyrocket and quality will be degraded—let’s say, just for example, by not upgrading to LTE (my interpretation, not the staff’s).  But 100 pages later, it turns out that AT&T doesn’t need to merge with T-Mobile to expand its LTE network because it will have to do so in response to competition from Verizon anyway.  It would appear, however, that Verizon’s power over AT&T operates only if T-Mobile exists separately and AT&T has a harder time competing.  Remove T-Mobile and expand AT&T’s ability to compete and, apparently, the market collapses.  Such is the logic of the report.

There is much more to criticize in the report, and I hope to have a chance to do so in the next few days.

Filed under: antitrust, business, law and economics, merger guidelines, regulation, technology, telecommunications Tagged: at&t, FCC, merger, t-mobile

Continue reading
Antitrust & Consumer Protection

Kahneman’s Time Interview Fails to Allay Concerns About Behavioral Law and Economics

Popular Media TOTM alumnus Todd Henderson recently pointed me to a short, ten-question interview Time Magazine conducted with Nobel prize-winning economist Daniel Kahneman.  Prof. Kahneman is a founding . . .

TOTM alumnus Todd Henderson recently pointed me to a short, ten-question interview Time Magazine conducted with Nobel prize-winning economist Daniel Kahneman.  Prof. Kahneman is a founding father of behavioral economics, which rejects the rational choice model of human behavior (i.e., humans are rational self-interest maximizers) in favor of a more complicated model that incorporates a number of systematic irrationalities (e.g., the so-called endowment effect, under which people value items they own more than they’d be willing to pay to acquire those same items if they didn’t own them). 

 I’ve been interested in behavioral economics since I took Cass Sunstein’s “Elements of the Law” course as a first-year law student.  Prof. Sunstein is a leading figure in the “behavioral law and economics” movement, which advocates structuring laws and regulations to account for the various irrationalities purportedly revealed by behavioral economics.  Most famously, behavioral L&E calls for the imposition of default rules that “nudge” humans toward outcomes they’d likely choose but for the irrationalities and myopia with which they are beset.

 I’ve long been somewhat suspicious of the behavioral L&E project.  As I once explained in a short response essay entitled Two Mistakes Behavioralists Make,  I suspect that behavioral L&E types are too quick to reject rational explanations for observed human behavior and that they too hastily advocate a governmental fix for irrational behavior.  Time’s interview with Prof. Kahneman did little to allay those two concerns.

Asked to identify his “favorite experiment that demonstrates our blindness to our own blindness,” Prof. Kahneman responded:

It’s one someone else did.  During [the ’90s] when there was terrorist activity in Thailand, people were asked how much they’d pay for a travel-insurance policy that pays $100,000 in case of death for any reason.  Others were asked how much they’d pay for a policy that pays $100,000 for death in a terrorist act.  And people will pay more for the second, even though it’s less likely.

 This answer pattern is admittedly strange.  Since death from a terrorist attack is, a fortiori, less likely than death from any cause, it makes no sense to pay the same amount for the two insurance policies; the “regardless of cause” life insurance policy should command a far higher price.  So maybe people are wildly irrational in comparing risks and the value of risk mitigation measures.

 Or maybe, as boundedly rational (but not systematically irrational) beings, they just don’t want to waste effort answering silly, hypothetical questions about the maximum amount they’d pay for stuff.  I remember exercises in Prof. Sunstein’s class in which we were split into groups and asked to state either how much we’d pay to obtain a certain object or, assuming we owned the object, how much we’d demand as a sales price.  I distinctly recall thinking how artificial the question was.  Given the low stakes of the exercise, I quickly wrote down some number and returned to thinking about what I would have for lunch, what was going to be on Sunstein’s exam, and whether I had adequately prepared for my next class.  I suspect my classmates did as well.  Was it not fully rational for us to conserve our limited mental resources by giving quick, thoughtless answers to wholly hypothetical, zero-stakes questions?

If so, then there are two possible reasons for subjects’ strange answers to the terrorism insurance questions Kahneman cites:  Subjects could be wildly irrational with respect to risk assessment and the value of protective measures, or they might rationally choose to give hasty answers to silly questions that don’t matter.  What we need is some way to choose between these irrational and rational accounts of the answer pattern.

Perhaps the best thing to do would be to examine people’s revealed preferences by looking at what they actually do when they’re spending money to protect against risk.  If Kahneman’s explanation for subjects’ strange answers were sound, we’d see people paying hefty premiums for terrorism insurance.  Profit-seeking insurance companies, in turn, would scramble to create and market such risk protection, realizing that they could charge irrational consumers far more than their expected liabilities.  But we don’t see this sort of thing.

That suggests that the alternative, “rational” (or at least not systematically irrational) account is the more compelling story:  Subjects pestered with questions about how much hypothetical money they’d spend on hypothetical insurance products decide not to invest too much in the decision and just spit out an answer.  As we all learn as kids, you ask a silly question, you get a silly answer.

So again we see the behavioralist tendency to discount the rational account too quickly.  But what about the second common behavioralist mistake (i.e., hastily jumping from an observation about human irrationality to the conclusion that a governmental fix is warranted)?  On that issue, consider this portion of the interview:

Time:  You endorse a kind of libertarian paternalism that gives people freedom of choice but frames the choice so they are nudged toward the option that’s better for them.  Are you worried that experts will misuse that?

Kahneman:  What psychology and behavioral economics have shown is that people don’t think very carefully.  They’re influenced by all sorts of superficial things in their decisionmaking, and they procrastinate and don’t read the small print.  You’ve got to create situations so they’ll make better decisions for themselves.

Could Prof. Kahneman have been more evasive?  The question was about an obvious downside of governmental intervention to correct for systematic irrationalities, but Prof. Kahneman, channeling Herman “9-9-9” Cain, just ignored it and repeated his affirmative case.  This is a serious problem for the behavioral L&E crowd:  They think they’re done once they convince you that humans exhibit some irrationalities.  But they’re not.  Just as one may believe in anthropogenic global warming and still oppose efforts to combat it on cost-benefit grounds, one may be skeptical of a nudge strategy even if one believes that humans may, in fact, exhibit some systematic irrationalities.  Individual free choice may have its limits, but governmental decisionmaking (executed by self-serving humans whose own rationality is limited) may amount to a cure that’s worse than the disease.

Readers interested in the promise and limitations of behavioral law and economics should check out TOTM’s all-star Free to Choose Symposium.

 

Filed under: behavioral economics, economics, free to choose symposium, law and economics, nobel prize, regulation

Continue reading
Financial Regulation & Corporate Governance

Holdup Problem, Airline Edition

Popular Media Economists are all quite familiar with the “holdup problem,” i.e. one contracting partner exploiting the other after asset specific investments have been made.  One classic . . .

Economists are all quite familiar with the “holdup problem,” i.e., one contracting partner exploiting the other after asset-specific investments have been made.  One classic law school textbook example is Alaska Packers v. Domenico, in which the Alaska Packers’ Association hired Domenico for the salmon season for $50 plus 2 cents per salmon caught, but after leaving the dock and arriving in Alaskan waters for the short salmon season, the workers demanded an increase in their pay.  The defendant agreed, but upon return to San Francisco, refused to pay.  The seamen sued and lost on the theory that the exchange did not involve fresh consideration.  This, Judge Posner has argued, was the right economic result on the grounds that it discourages holdup.  Many of our readers will also be familiar with the famous Fisher Body / GM example of vertical integration solving the holdup problem, and the subsequent debate between Benjamin Klein and Ronald Coase over that particular example.

Now comes another example of the holdup problem at work.  In fact, it is difficult to imagine a better one.  Apparently, halfway through a flight from India to Birmingham, England, an airline took advantage of the asset-specific investments made by its passengers to alter the terms of the deal:

Passengers aboard two chartered jetliners from India to Britain were hit up for about $200 each, in cash, to continue their trip this week in what one flier compared to a hostage situation.  The charter company, Austria-based Comtel Air, and the Spanish company that owns the planes pointed fingers at each other over the situation Thursday. But Lal Dadrah, a passenger on one of the flights who recorded the crew passing the hat, called the situation “a complete, utter sham.”

Comtel Air passengers on a Tuesday flight to Birmingham, England, from the Indian city of Amritsar were hit up for 130 pounds — about $200 each — during a layover in Vienna. They were allowed off the aircraft to take the money from teller machines, a process that took about seven hours. There were varying accounts of what the money was to pay for, ranging from fuel to fees.

The NY Times story provides a few more details:

Britain’s Channel 4 news broadcast video showing a Comtel cabin crew member telling passengers: “We need some money to pay the fuel, to pay the airport, to pay everything we need. If you want to go to Birmingham, you have to pay.”

Some passengers said they were sent off the plane to cash machines in Vienna to raise the money.

“We all got together, took our money out of purses — 130 pounds ($205),” said Reena Rindi, who was aboard with her daughter. “Children under two went free, my little one went free because she’s under two. If we didn’t have the money, they were making us go one by one outside, in Vienna, to get the cash out.”

The economics don’t stop there.  There is potential for an agency problem as well:

Bhupinder Kandra, the airline’s majority shareholder, told the Associated Press from Vienna that travel agents had taken the passengers’ money before the planes left but had not passed it on to the airline.  “This is not my problem,” he said. “The problem is with the agents.”

A great example for the classroom.

 

Filed under: contracts

Continue reading
Financial Regulation & Corporate Governance

The Influence of Prospect Theory

Popular Media Source. Filed under: behavioral economics

Source.

Filed under: behavioral economics

Continue reading
Financial Regulation & Corporate Governance

Extending & Rebutting Edelman & Lockwood on Search Bias

Popular Media In my last post, I discussed Edelman & Lockwood’s (E&L’s) attempt to catch search engines in the act of biasing their results—as well as their . . .

In my last post, I discussed Edelman & Lockwood’s (E&L’s) attempt to catch search engines in the act of biasing their results—as well as their failure to actually do so.  In this post, I present my own results from replicating their study.  Unlike E&L, I find that Bing is consistently more biased than Google, for reasons discussed further below, although neither engine references its own content as frequently as E&L suggest.

I ran searches for E&L’s original 32 non-random queries using three different search engines—Google, Bing, and Blekko—between June 23 and July 5 of this year.  This replication is useful, as search technology has changed dramatically since E&L recorded their results in August 2010.  Bing now powers Yahoo, and Blekko has had more time to mature and enhance its results.  Blekko serves as a helpful “control” engine in my study, as it is totally independent of Google and Microsoft, and so has no incentive to refer to Google or Microsoft content unless it is actually relevant to users.  In addition, because Blekko’s model is significantly different from Google’s and Microsoft’s, if results on all three engines agree that specific content is highly relevant to the user query, it lends significant credibility to the notion that the content ranks well on the merits rather than because of bias or other factors.

How Do Search Engines Rank Their Own Content?

Focusing solely upon the first position, I find that Google refers to its own products or services when no other search engine does in 21.9% of queries; in another 21.9% of queries, both Google and at least one rival engine (i.e., Bing or Blekko) refer to the same Google content with their first links.

But restricting focus upon the first position is too narrow.  It would be a mistake to treat as bias every instance in which Google or Bing ranks its own content first and rivals do not; such a restrictive definition would count cases in which all three search engines rank the same content prominently—agreeing that it is highly relevant—just not all in the first position.

The entire first page of results provides a more informative comparison.  I find that Google and at least one other engine return Google content on the first page of results in 7% of the queries.  Google refers to its own content on the first page of results without agreement from either rival search engine in only 7.9% of the queries.  Meanwhile, Bing and at least one other engine refer to Microsoft content in 3.2% of the queries.  Bing references Microsoft content without agreement from either Google or Blekko in 13.2% of the queries:

This evidence indicates that Google’s ranking of its own content differs significantly from its rivals’ in only 7.9% of queries, and that when Google ranks its own content prominently, its rivals generally agree that the content is relevant.  Further, these results suggest that Bing’s organic search results are significantly more biased in favor of Microsoft content than Google’s search results are in favor of Google’s content.
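
To make the comparison concrete, here is a minimal sketch, in Python, of the kind of first-page tabulation described above.  The recorded result pages below are invented placeholders; the actual study used the pages returned by Google, Bing, and Blekko for each sampled query.

  # Hypothetical first-page results recorded for each query on each engine.
  first_page = {
      "maps":  {"google": ["maps.google.com", "mapquest.com"],
                "bing":   ["bing.com/maps", "maps.google.com"],
                "blekko": ["mapquest.com", "openstreetmap.org"]},
      "email": {"google": ["mail.google.com", "mail.yahoo.com"],
                "bing":   ["mail.yahoo.com", "hotmail.com"],
                "blekko": ["mail.yahoo.com", "inbox.com"]},
  }

  def refers_to(results, content_fragment):
      # Does any first-page result point at the content in question?
      return any(content_fragment in url for url in results)

  def own_content_without_agreement(pages, engine, own_fragment, rivals):
      # Share of queries where `engine` lists its own content on page one but
      # no rival lists that same content anywhere on its page one.
      hits = sum(
          1 for by_engine in pages.values()
          if refers_to(by_engine[engine], own_fragment)
          and not any(refers_to(by_engine[r], own_fragment) for r in rivals))
      return hits / len(pages)

  print(own_content_without_agreement(first_page, "google", "google.com",
                                      ["bing", "blekko"]))   # 0.5 on this toy data

The percentages reported in this post are, in essence, this kind of share computed over the full sample of queries, for Google content on Google and for Microsoft content on Bing.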

Examining Search Engine “Bias” on Google

The following table presents the percentages of queries for which Google’s ranking of its own content differs significantly from its rivals’ ranking of that same content.

Note that percentages below 50 in this table indicate that rival search engines generally see the referenced Google content as relevant and independently believe that it should be ranked similarly.

So when Google ranks its own content highly, at least one rival engine typically agrees with this ranking; for example, when Google places its own content in its Top 3 results, at least one rival agrees with this ranking in over 70% of queries.  Bing especially agrees with Google’s rankings of Google content within its Top 3 and 5 results, failing to include Google content that Google ranks similarly in only a little more than a third of queries.

Examining Search Engine “Bias” on Bing

Bing refers to Microsoft content in its search results far more frequently than its rivals reference the same Microsoft content.  For example, Bing’s top result references Microsoft content for 5 queries, while neither Google nor Blekko ever rank Microsoft content in the first position:

This table illustrates the significant discrepancies between Bing’s treatment of its own Microsoft content and Google’s and Blekko’s treatment of that same content.  Neither rival engine refers to Microsoft content Bing ranks within its Top 3 results; in nearly 80% of queries, Google and Blekko do not include any of the Microsoft content Bing refers to on its first page of results.

Moreover, Bing frequently ranks Microsoft content highly even when rival engines do not refer to the same content at all in the first page of results.  For example, of the 5 queries for which Bing ranks Microsoft content in its top result, Google refers to only one of these 5 within its first page of results, while Blekko refers to none.  Even when comparing results across each engine’s full page of results, Google and Blekko only agree with Bing’s referral of Microsoft content in 20.4% of queries.

Although there are not enough Bing data to test results in the first position in E&L’s sample, Microsoft content appears as results on the first page of a Bing search about 7 times more often than Microsoft content appears on the first page of rival engines.  Also, Google is much more likely to refer to Microsoft content than Blekko, though both refer to significantly less Microsoft content than Bing.

A Closer Look at Google v. Bing

On E&L’s own terms, Bing results are more biased than Google results; rivals are more likely to agree with Google’s algorithmic assessment (than with Bing’s) that its own content is relevant to user queries.  Bing refers to Microsoft content that other engines do not rank at all more often than Google refers to its own content without any agreement from rivals.  Figures 1 and 2 display the same data presented above in order to facilitate direct comparisons between Google and Bing.

As Figures 1 and 2 illustrate, Bing search results for these 32 queries are more frequently “biased” in favor of its own content than are Google’s.  The bias is greatest for the Top 1 and Top 3 search results.

My study finds that Bing exhibits far more “bias” than E&L identify in their earlier analysis.  For example, in E&L’s study, Bing does not refer to Microsoft content at all in its Top 1 or Top 3 results; moreover, Bing refers to Microsoft content within its entire first page 11 times, while Google and Yahoo refer to Microsoft content 8 and 9 times, respectively.  Most likely, the significant increase in Bing’s “bias” differential is largely a function of Bing’s introduction of localized and personalized search results, and it represents serious competitive efforts on Bing’s part.

Again, it’s important to stress E&L’s limited and non-random sample, and to emphasize the danger of making strong inferences about the general nature or magnitude of search bias based upon these data alone.  However, the data indicate that Google’s own-content bias is relatively small even in a sample collected precisely to focus upon the queries most likely to generate it.  In fact—as I’ll discuss in my next post—own-content bias occurs even less often in a more representative sample of queries, strongly suggesting that such bias does not raise the competitive concerns attributed to it.

Filed under: antitrust, business, economics, google, Internet search, law and economics, monopolization, technology Tagged: antitrust, Bias, Bing, Blekko, google, microsoft, search, Web search engine, Yahoo

Continue reading
Antitrust & Consumer Protection

Investigating Search Bias: Measuring Edelman & Lockwood’s Failure to Measure Bias in Search

Popular Media Last week I linked to my new study on “search bias.”  At the time I noted I would have a few blog posts in the . . .

Last week I linked to my new study on “search bias.”  At the time I noted I would have a few blog posts in the coming days discussing the study.  This is the first of those posts.

A lot of the frenzy around Google turns on “search bias,” that is, instances when Google references its own links or its own content (such as Google Maps or YouTube) in its search results pages.  Some search engine critics condemn such references as inherently suspect and almost by their very nature harmful to consumers.  Yet these allegations suffer from several crucial shortcomings.  As I’ve noted (see, e.g., here and here), these naked assertions of discrimination are insufficient to state a cognizable antitrust claim, divorced as they are from consumer welfare analysis.  Indeed, such “discrimination” (some would call it “vertical integration”) has a well-recognized propensity to yield either pro-competitive or competitively neutral outcomes, rather than concrete consumer welfare losses.  Moreover, because search engines exist in an incredibly dynamic environment, marked by constant innovation and fierce competition, we would expect different engines, utilizing different algorithms and appealing to different consumer preferences, to emerge.  So when search engines engage in product differentiation of this sort, there is no reason to be immediately suspicious of these business decisions.

No reason to be immediately suspicious – but there could, conceivably, be a problem.  If there is, we would want to see empirical evidence of it—of both the existence of bias, as well as the consumer harm emanating from it.  But one of the most notable features of this debate is the striking lack of empirical data.  Surprisingly little research has been done in this area, despite frequent assertions that own-content bias is commonly practiced and poses a significant threat to consumers (see, e.g., here).

My paper is an attempt to rectify this.  In the paper, I investigate the available data to determine whether and to what extent own-content bias actually occurs, by analyzing and replicating a study by Ben Edelman and Ben Lockwood (E&L) and conducting my own study of a larger, randomized set of search queries.

In this post I discuss my analysis and critique of E&L; in future posts I’ll present my own replication of their study, as well as the results of my larger study of 1,000 random search queries.  Finally, I’ll analyze whether any of these findings support anticompetitive foreclosure theories or are otherwise sufficient to warrant antitrust intervention.

E&L “investigate . . . [w]hether search engines’ algorithmic results favor their own services, and if so, which search engines do most, to what extent, and in what substantive areas.”  Their approach is to measure the difference in how frequently search engines refer to their own content relative to how often their rivals do so.

One note at the outset:  While this approach provides useful descriptive facts about the differences between how search engines link to their own content, it does little to inform antitrust analysis because Edelman and Lockwood begin with the rather odd claim that competition among differentiated search engines for consumers is a puzzle that creates an air of suspicion around the practice—in fact, they claim that “it is hard to see why results would vary . . . across search engines.”  This assertion, of course, is simply absurd.  Indeed, Danny Sullivan provides a nice critique of this claim:

It’s not hard to see why search engine results differ at all.  Search engines each use their own “algorithm” to cull through the pages they’ve collected from across the web, to decide which pages to rank first . . . . Google has a different algorithm than Bing.  In short, Google will have a different opinion than Bing.  Opinions in the search world, as with the real world, don’t always agree.

Moreover, this assertion completely discounts both the vigorous competitive product differentiation that occurs in nearly all modern product markets and the obvious selection effects at work in own-content bias (Google users likely prefer Google content).  This combination detaches E&L’s analysis from the consumer welfare perspective, and thus from antitrust policy relevance, despite their claims to the contrary (and the fact that their results actually exhibit very little bias).

Several methodological issues undermine the policy relevance of E&L’s analysis.  First, they hand select 32 search queries and execute searches on Google, Bing, Yahoo, AOL and Ask.  This hand-selected non-random sample of 32 search queries cannot generate reliable inferences regarding the frequency of bias—a critical ingredient to understanding its potential competitive effects.  Indeed, E&L acknowledge their queries are chosen precisely because they are likely to return results including Google content (e.g., email, images, maps, video, etc.).

E&L analyze the top three organic search results for each query on each engine.  They find that 19% of all results across all five search engines refer to content affiliated with one of them.  They focus upon the first three organic results and report that Google refers to its own content in the first (“top”) position about twice as often as Yahoo and Bing refer to Google content in this position.  Additionally, they note that Yahoo is more biased than Google when evaluating the first page rather than only the first organic search result.

E&L also offer a strained attempt to deal with the possibility of competitive product differentiation among search engines.  They examine differences among search engines’ references to their own content by “compar[ing] the frequency with which a search engine links to its own pages, relative to the frequency with which other search engines link to that search engine’s pages.”  However, their evidence undermines claims that Google’s own-content bias is significant and systematic relative to its rivals’.  In fact, almost zero evidence of statistically significant own-content bias by Google emerges.

E&L find that, in general, Google is no more likely to refer to its own content than other search engines are to refer to that same content; across the vast majority of their results, Google’s search results are not statistically more likely to refer to Google content than rivals’ search results are.

The same data can be examined to test the likelihood that a search engine will refer to content affiliated with a rival search engine.  Rather than exhibiting bias in favor of an engine’s own content, a “biased” search engine might conceivably be less likely to refer to content affiliated with its rivals.  The table below reports the likelihood (in odds ratios) that a search engine’s content appears in a rival engine’s results.

The first two columns of the table demonstrate that both Google and Yahoo content are referred to in the first search result less frequently in rivals’ search results than in their own.  Although Bing does not have enough data for robust analysis of results in the first position in E&L’s original analysis, the next three columns in Table 1 illustrate that all three engines’ (Google, Yahoo, and Bing) content appears less often on the first page of rivals’ search results than on their own search engine.  However, only Yahoo’s results differ significantly from 1.  As between Google and Bing, the results are notably similar.
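
For readers unfamiliar with the metric, here is a minimal sketch, in Python, of the odds-ratio comparison this kind of table reports; the counts below are invented for illustration and are not E&L’s data.

  def odds_ratio(rival_hits, rival_pages, own_hits, own_pages):
      # Odds that a rival's first page refers to the content, divided by the
      # odds that the content owner's own first page refers to it.  Values
      # below 1 mean rivals are less likely to show the content than the
      # owner is; a value of 1 means no difference.
      rival_odds = rival_hits / (rival_pages - rival_hits)
      own_odds = own_hits / (own_pages - own_hits)
      return rival_odds / own_odds

  # Hypothetical counts: rivals show Google content on 10 of 64 first pages,
  # while Google shows its own content on 14 of 32 first pages.
  print(odds_ratio(10, 64, 14, 32))   # ~0.24, i.e. well below 1

Whether such a ratio differs from 1 by more than chance is a separate statistical question, which is why only Yahoo’s results are described above as differing significantly from 1.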

E&L also make a limited attempt to consider the possibility that favorable placement of a search engine’s own content is a response to user preferences rather than anticompetitive motives.  Using click-through data, they find, unsurprisingly, that the first search result tends to receive the most clicks (72%, on average).  They then identify one search term for which they believe bias plays an important role in driving user traffic.  For the search query “email,” Google ranks its own Gmail first and Yahoo Mail second; however, E&L also find that Gmail receives only 29% of clicks while Yahoo Mail receives 54%.  E&L claim that this finding strongly indicates that Google is engaging in conduct that harms users and undermines their search experience.

However, from a competition analysis perspective, that inference is not sound.  Indeed, the fact that the second-listed Yahoo Mail link received the majority of clicks demonstrates precisely that Yahoo was not competitively foreclosed from access to users.  Taken collectively, E&L are not able to muster evidence of potential competitive foreclosure.

While it’s important to have an evidence-based discussion surrounding search engine results and their competitive implications, it’s also critical to recognize that bias alone is not evidence of competitive harm.  Indeed, any identified bias must be evaluated in the appropriate antitrust economic context of competition and consumers, rather than individual competitors and websites.  E&L’s analysis provides a useful starting point for describing how search engines differ in their referrals to their own content.  But, taken at face value, their results actually demonstrate little or no evidence of bias—let alone that the little bias they do find is causing any consumer harm.

As I’ll discuss in coming posts, evidence gathered since E&L conducted their study further suggests their claims that bias is prevalent, inherently harmful, and sufficient to warrant antitrust intervention are overstated and misguided.

Filed under: antitrust, business, economics, google, Internet search, law and economics, monopolization, technology Tagged: antitrust, Bing, google, search, search bias, Search Engines, search neutrality, Web search engine, Yahoo

Continue reading
Antitrust & Consumer Protection

My New Empirical Study on Defining and Measuring Search Bias

Popular Media Tomorrow is the deadline for Eric Schmidt to send his replies to the Senate Judiciary Committee’s follow up questions from his appearance at a hearing . . .

Tomorrow is the deadline for Eric Schmidt to send his replies to the Senate Judiciary Committee’s follow-up questions from his appearance at a hearing on Google antitrust issues last month.  At the hearing, not surprisingly, search neutrality was a hot topic, with representatives from the likes of Yelp and Nextag, as well as Expedia’s lawyer, Tom Barnett (that’s Tom Barnett (2011), not Tom Barnett (2006-08)), weighing in on Google’s purported bias.  One serious problem with the search neutrality/search bias discussions to date has been the dearth of empirical evidence concerning so-called search bias and its likely impact upon consumers.  Hoping to remedy this, I posted a study this morning at the ICLE website both critiquing one of the few existing pieces of empirical work on the topic (by Ben Edelman, Harvard economist) and offering up my own, more expansive empirical analysis.  Chris Sherman at Search Engine Land has a great post covering the study.  The title of his article pretty much says it all:  “Bing More Biased Than Google; Google Not Behaving Anti-competitively.”

Read the full piece here.

Continue reading
Antitrust & Consumer Protection

The Bulldozer Solution to the Housing Crisis

TOTM My inaugural blog on two-sided markets did not elicit much reaction from TOTM readers. Perhaps it was too boring. In a desperate attempt to generate . . .

My inaugural blog on two-sided markets did not elicit much reaction from TOTM readers. Perhaps it was too boring. In a desperate attempt to generate a hostile comment from at least one housing advocate, I have decided to advocate bulldozing homes in foreclosure as one (of several) means to relieve the housing crisis. Not with families inside them, of course. In my mind, the central problem of U.S. housing markets is the misallocation of land: Thanks to the housing boom, there are too many houses and not enough greenery. And bulldozers are the fastest way to convert unwanted homes into parks.

Read the full piece here.

Continue reading
Financial Regulation & Corporate Governance