Defining and Measuring Search Bias: Some Preliminary Evidence

ICLE White Paper Summary


Search engines produce immense value by identifying, organizing, and presenting the Internet’s information in response to users’ queries.1 Search engines efficiently provide better and faster answers to users’ questions than alternatives.

Recently, critics have taken issue with the various methods search engines use to identify relevant content and rank search results for users. Google, in particular, has been the subject of much of this criticism on the grounds that its organic search results—those generated algorithmically—favor its own products and services at the expense of those of its rivals. It is widely understood that search engines’ algorithms for ranking various web pages naturally differ. Likewise, there is widespread recognition that competition among search engines is vigorous, and that differentiation between engines’ ranking functions is not only desirable, but a natural byproduct of competition, necessary to survival, and beneficial to consumers.2 Nonetheless, despite widespread recognition of the consumer benefits of such differentiation, complaints from rival search engines have persisted and succeeded in attracting attention from a number of state, federal, and international regulatory agencies. Unfortunately, much of this attention has focused on the impact upon individual websites of differences among search engines’ algorithmic methods of identifying and ranking relevant content, rather than analyzing these differences through the lens of conventional consumer-welfare-driven antitrust analysis.

For example, when considering the competitive implications of search rankings, many of these complaints ignore the fact that search engine users self-select into different engines or use multiple engines for different types of searches. Rather than focus upon competition among search engines in how results are identified and presented to users, critics and complainants craft their arguments around alleged search engine “discrimination” or “bias.”4 The complainants must have in mind something other than competitive decisions to rank content that differ from the decisions made by rivals; bias in this sense is both necessary to and inherent within any useful indexing tool. Yet critics have generally avoided a precise definition of the allegedly troublesome conduct. Indeed, the term “bias” is used colloquially and is frequently invoked in the search engine debate to encompass a wide array of behavior—generally suggesting a latent malignancy within search engine conduct—with some critics citing mere differences in results across engines as evidence of harmful conduct.5

The more useful attempts to define “bias,” however, focus upon differences in organic rankings attributable to the search engine ranking its own content (“own-content bias”); that is, a sufficient condition for own-content bias is that a search engine ranks its own content more prominently than its rivals do. To be even more precise about the nature of the alleged “own-content bias,” it should be clear that this form of bias refers exclusively to organic results, i.e., those results the search engine produces algorithmically, as distinguished from the paid advertisements that might appear at the top, bottom, or right-hand side of a search result page.6 Critics at the Senate’s recent hearing on the “Power of Google” were particularly vociferous on this front, accusing Google of having “cooked”7 its algorithm and of “rig[ging] its results, biasing in favor of Google.”8

Competition economists and regulatory agencies are familiar with business arrangements which give rise to concerns of own-content bias.9 Complaints and economic theories of harm assert that a vertically integrated firm (in this case, Google offers search results as well as products like YouTube and Google Maps) might discriminate against its rivals by “foreclosing” them from access to a critical input. Here, the critical input necessary for rivals’ success is alleged to be prominent placement in Google’s search results. The economics of the potential anticompetitive exclusion of rivals by vertically integrated firms is well understood in antitrust, as are the conditions that must be satisfied for these concerns to generate real risk to consumers. Over a century of antitrust jurisprudence, economic study, and enforcement agency practice have produced a well-understood economic analysis of the competitive effects of a vertically integrated firm’s “discrimination” in favor of its own products or services, including widespread recognition that such arrangements generally produce significant benefits for consumers. Modern competition policy recognizes that vertical integration and contractual arrangements are generally procompetitive; it also understands that discrimination of this sort may create the potential for competitive harm under some conditions. Sensible competition policy involving vertical integration and contractual arrangements requires one to be sensitive to the consumer welfare-enhancing potential of such arrangements while also taking seriously the possibility that a firm might successfully harm competition itself (and not merely a rival).

In addition to the failure to distinguish procompetitive conduct from anticompetitive behavior, critics’ allegations of own-content bias suffer deeper conceptual ambiguities. The perceived issue for Google’s rivals is not merely that Google links to a map when responding to search queries, suggesting one might be relevant for the user; indeed, rival search engines frequently respond to similar user queries with their own or other map products. Rather, critics find it problematic that Google responds to user queries with a Google Map. This is a critical distinction because it concedes that rivals’ complaints are not satisfied by the response that consumers are better off with the map; nor do critics pause to consider that perhaps the Google search user prefers the Google Map to rival products.10 Thus, critics brazenly take issue with the relationship between Google and the search result even where they concede Google produces more relevant results for consumers.11 Rather than focusing upon consumers, critics argue that the fact that Google is affiliated with the referred search result is itself prima facie evidence of competitively harmful bias.12 On its face, this argument turns conventional antitrust wisdom on its head. Conduct that harms rivals merely because it attracts consumers from rivals is the essence of competition and the logical core of the maxim that antitrust protects “competition, not competitors.”13

Critics’ failure to account for the potential consumer benefits from “own-content bias” extends beyond ignoring the fact that users might prefer Google’s products to rivals’. Most critics simply ignore the myriad procompetitive explanations for vertical integration in the economics literature. This omission by critics, and especially by economist critics, is mystifying given that economists have documented not only a plethora of procompetitive justifications for such integration, but also that such vertical relationships are much more likely to be competitively beneficial or benign than to raise serious threats of foreclosure.14

The critical antitrust question is always whether the underlying conduct creates or maintains monopoly power and thus reduces competition and consumer welfare, or is more likely efficient and procompetitive. To be clear, documenting the mere existence of own-content bias itself does little to answer this question. Bias is not a sufficient condition for competitive harm as a matter of economics because it can increase, decrease, or have no impact at all upon consumer welfare; neither is bias, without more, sufficient to state a cognizable antitrust claim.15

Nonetheless, documenting whether and how much of the alleged bias exists in Google’s and its rivals’ search results can improve our understanding of its competitive implications—that is, whether the evidence of discrimination in favor of one’s own content across search engines is more consistent with anticompetitive foreclosure or with competitive differentiation.

Critically, in order to generate plausible competitive concerns, search bias must, at a minimum, be sufficient in magnitude to foreclose rivals from achieving minimum efficient scale (otherwise, if it merely represents effective competition that makes life harder for competitors, it is not an antitrust concern at all). It follows from this necessary condition that not all evidence of “bias” is relevant to this competitive concern; in particular, Google referring to its own products and services more prominently than its rivals rank those same services has little to do with critics’ complaints unless it implicates general or vertical search.

Despite widespread discussion of search engine bias, virtually no evidence exists indicating that bias abounds—and very little that it exists at all. Edelman & Lockwood recently addressed this dearth of evidence by conducting a small study focused upon own-content bias in 32 search queries. They contend that their results are indicative of systemic and significant bias demanding antitrust intervention.16 The authors define and measure “bias” as the extent to which a search engine’s ranking of its own content differs from how its rivals rank the same content. This approach provides some useful information concerning differences among search engine rankings. However, the study should not be relied upon to support sweeping antitrust policy concerns with Google.
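To make the definition concrete, a measure of this sort might be sketched roughly as follows. This is our own hypothetical illustration of the general idea—the function name and the example data are invented for exposition, not drawn from Edelman & Lockwood’s actual methodology or data:

```python
# Illustrative sketch only: one way to operationalize "own-content bias"
# as defined above -- the gap between where an engine ranks its own
# property and where rival engines rank that same property for the
# identical query. Rank 1 is the most prominent position.

def own_content_bias(own_rank, rival_ranks):
    """Return average rival rank minus the engine's own rank.

    A positive value means the engine places its own content more
    prominently (a numerically lower position) than rivals do, on
    average; zero means no measured difference.
    """
    avg_rival_rank = sum(rival_ranks) / len(rival_ranks)
    return avg_rival_rank - own_rank

# Hypothetical query: engine A places its own property at position 1,
# while two rival engines place that same property at positions 3 and 4.
bias = own_content_bias(own_rank=1, rival_ranks=[3, 4])
print(bias)  # 2.5
```

Even on this simple sketch, a positive number is only a difference in rankings; whether that difference reflects foreclosure or ordinary product differentiation is precisely the question the measure cannot answer on its own.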

The small sample of search queries provides one reason for caution. Perhaps more importantly, the non-random sample of search queries undermines its utility for addressing the critical antitrust policy questions focusing upon the magnitude of search bias, both generally and as it relates to whether the degree and nature of observed bias satisfies the well-known conditions required for competitive foreclosure. Further, evaluating their evidence at face value, Edelman & Lockwood misinterpret its relevance (Edelman & Lockwood in fact find almost no evidence of bias) and, most problematically, simply assume that own-content bias is inherently suspect from a consumer welfare perspective rather than considering the well-known consumer benefits of vertical integration. Despite these shortcomings, Edelman & Lockwood’s study has received considerable attention, both in the press and from Google’s critics, who cite it as evidence of harmful and anticompetitive search engine behavior.17 In the present analysis, as a starting point, we first “replicate” and analyze Edelman & Lockwood’s earlier study of a small, non-random sample of search queries in the modern search market. We then extend this methodology to a larger random sample of search queries in order to draw more reliable inferences concerning the answers to crucial questions for the competition policy debate surrounding search engine bias, including: (1) what precisely is search engine bias?; (2) what are its competitive implications?; (3) how common is it?; (4) what explains its existence and relative frequency across search engines?; and, most importantly, (5) does observed search engine bias pose a competitive threat, or is it a feature of competition between search engines?

Part I of this paper articulates an antitrust-appropriate framework for analyzing claims of “own-content bias” and delineates its utility and shortcomings as a theory of antitrust harm; it further evaluates Edelman & Lockwood’s study, methodology, and analysis using this framework. Part II lays out the methodology employed in our own studies. Part III presents the results of our replication of Edelman & Lockwood and analyzes antitrust implications for the search engine bias debate; Part IV does the same for our larger, random sample of search queries. Part V concludes.


Amazon and Internet Commerce


Stewart Baker at the Volokh Conspiracy has a very interesting post on the new Amazon browser.  He thinks it might revolutionize doing business on the Web, with a tremendous increase in security.  This increase in security will entail a loss in privacy, so let’s hope the privacy guys don’t stop it.



Searching for Antitrust Remedies, Part II


In the last post, I discussed possible characterizations of Google’s conduct for purposes of antitrust analysis.  A firm grasp of the economic implications of the different conceptualizations of Google’s conduct is a necessary – but not sufficient – precondition for appreciating the inconsistencies underlying the proposed remedies for Google’s alleged competitive harms.  In this post, I want to turn to a different question: assuming arguendo a competitive problem associated with Google’s algorithmic rankings – an assumption I do not think is warranted, supported by the evidence, or even consistent with the relevant literature on vertical contractual relationships – how might antitrust enforcers conceive of an appropriate and consumer-welfare-conscious remedy?  Antitrust agencies, economists, and competition policy scholars have all appropriately stressed the importance of considering a potential remedy prior to, rather than following, an antitrust investigation; this is good advice not only because of the benefits of thinking rigorously and realistically about remedial design, but also because clear thinking about remedies upfront might illuminate something about the competitive nature of the conduct at issue.

Somewhat ironically, former DOJ Antitrust Division Assistant Attorney General Tom Barnett – now counsel for Expedia, one of the most prominent would-be antitrust plaintiffs against Google – warned (in his prior, rather than his present, role) that “[i]mplementing a remedy that is too broad runs the risk of distorting markets, impairing competition, and prohibiting perfectly legal and efficient conduct,” and that “[by] forcing a firm to share the benefits of its investments and relieving its rivals of the incentive to develop comparable assets of their own, access remedies can reduce the competitive vitality of an industry.”  Barnett also noted that “[t]here seems to be consensus that we should prohibit unilateral conduct only where it is demonstrated through rigorous economic analysis to harm competition and thereby harm consumer welfare.”  Well said.  With these warnings well in hand, we must turn to two inter-related concerns necessary to appreciating the potential consequences of a remedy for Google’s conduct: (1) the menu of potential remedies available for an antitrust suit against Google, and (2) the efficacy of these potential remedies from a consumer-welfare, rather than firm-welfare, perspective.

What are the potential remedies?

The burgeoning search neutrality crowd presents no lack of proposed remedies; indeed, if there is one segment in which Google’s critics have proven themselves prolific, it is in their constant ingenuity conceiving ways to bring governmental intervention to bear upon Google.  Professor Ben Edelman has usefully aggregated and discussed several of the alternatives, four of which bear mention:  (1) à la Frank Pasquale and Oren Bracha, the creation of a “Federal Search Commission,” (2) à la the regulations surrounding the Customer Reservation Systems (CRS) in the 1990s, a prohibition on rankings that order listings “us[ing] any factors directly or indirectly relating to” whether the search engine is affiliated with the link, (3) mandatory disclosure of all manual adjustments to algorithmic search, and (4) transfer of the “browser choice” menu of the EC Microsoft litigation to the Google search context, requiring Google to offer users a choice of five or so rivals whenever a user enters particular queries.

Geoff and I discuss several of these potential remedies in our paper, If Search Neutrality is the Answer, What’s the Question?  It suffices to say that we find significant consumer welfare threats from the creation of a new regulatory agency designed to impose “neutral” search results.  For now, I prefer to focus here on the second of these remedies – the one analogized to CRS regulation in the 1990s; Professor Edelman not only explains the proposed CRS-inspired regulation, but does so in effusive terms:

A first insight comes from recognizing that regulators have already – successfully! – addressed the problem of bias in information services. One key area of intervention was customer reservation systems (CRS’s), the computer networks that let travel agents see flight availability and pricing for various major airlines. Three decades ago, when CRS’s were largely owned by the various airlines, some airlines favored their own flights. For example, when a travel agent searched for flights through Apollo, a CRS then owned by United Airlines, United flights would come up first – even if other carriers offered lower prices or nonstop service. The Department of Justice intervened, culminating in rules prohibiting any CRS owned by an airline from ordering listings “us[ing] any factors directly or indirectly relating to carrier identity” (14 CFR 255.4). Certainly one could argue that these rules were an undue intrusion: A travel agent was always free to find a different CRS, and further additional searches could have uncovered alternative flights. Yet most travel agents hesitated to switch CRS’s, and extra searches would be both time-consuming and error-prone. Prohibiting biased listings was the better approach.

The same principle applies in the context of web search. On this theory, Google ought not rank results by any metric that distinctively favors Google. I credit that web search considers myriad web sites – far more than the number of airlines, flights, or fares. And I credit that web search considers more attributes of each web page – not just airfare price, transit time, and number of stops. But these differences only grant a search engine more room to innovate. These differences don’t change the underlying reasoning, so compelling in the CRS context, that a system provider must not design its rules to systematically put itself first.

The analogy is a superficially attractive one, and we’re tempted to entertain it, so far as it goes.  Organizational questions inhere in both settings, and similarly so: both flights and search results must be ordinally ranked, and before CRS regulation, a host airline’s flights often appeared before those of rival airlines.  Indeed, we will take Edelman’s analogy at face value.  Problematically for Professor Edelman and others pushing the CRS-style remedy, a fuller exploration of CRS regulation reveals that this market intervention – well, put simply – wasn’t so successful after all.  Not for consumers, anyway.  It did, however, generate (economically) predictable consequences: reduced consumer welfare through reduced innovation.  Let’s explore the consequences of Edelman’s analogy further below the fold.

History of CRS Antitrust Suits and Regulation

Early air travel primarily consisted of “interline” flights – flights on more than one carrier to reach a final destination.  CRSs arose to enable airlines to coordinate these trips for their customers across multiple airlines, which necessitated compiling information about rival airlines, their routes, fares, and other price- and quality-relevant information.  Major airlines predominantly owned CRSs at this time, which served both competitive and cooperative ends; this combination of economic forces naturally drew antitrust advocates’ attention.

CRS regulation proponents proffered numerous arguments as to the potentially anticompetitive nature and behavior of CRS-owning airlines.  For example, they claimed that CRS-owning airlines engaged in “dirty tricks,” such as using their CRSs to terminate passengers’ reservations on smaller, rival airlines and to rebook customers on their own flights, and refusing to allow smaller airlines to become CRS co-hosts, thereby preventing these smaller airlines from being listed in search results.  CRS-owning airlines faced further allegations of excluding rivals through contractual provisions, such as long-term commitments from travel agents.  Proponents of antitrust enforcement alleged that the nature of the CRS market created significant barriers to entry and provided CRS-owning airlines with significant cost advantages to selling their own flights.  These cost advantages purportedly derived from two main sources: (1) quality advantages that airline-owned CRSs enjoyed, as they could commit to providing comprehensive and accurate information about the owner airline’s flight schedule, and (2) joint ownership of CRSs, which facilitated coordination between airlines and CRSs, thereby decreasing the distribution and information costs.

These claims suffered from serious shortcomings including both a failure to demonstrate harm to competition rather than injury to specific rivals as well as insufficient appreciation for the value of dynamic efficiency and innovation to consumer welfare.  These latter concerns were especially pertinent in the CRS context, as CRSs arose at a time of incredible change – the deregulated airline industry, joined with novel computer technology, necessitated significant and constant innovation.  Courts accordingly generally denied antitrust remedies in these cases – rejecting claims that CRSs imposed unreasonable restraints on competition, denied access to an essential facility, or facilitated monopoly leverage.

Yet, particularly relevant for present purposes, one of the most popular anticompetitive stories was that CRSs practiced “display bias,” defined as ranking the owner airline’s flights above those of all other airlines.  Proponents claimed display bias was particularly harmful in the CRS setting, because only the travel agent, and not the customer, could see the search results, and travel agents might have incentives to book passengers on more expensive flights for which they receive more commission.  Fred Smith describes the investigations surrounding this claim:

These initial CRS services were used mostly by sophisticated travel agents, who could quickly scroll down to a customer’s preferred airline.  But this extra “effort” was considered discriminatory by some at the DOJ and the DOT, and hearings were held to investigate this threat to competition.  Great attention was paid to the “time” required to execute only a few keystrokes, to the “complexity” of re-designing first screens by computer-proficient travel agents, and to the “barriers” placed on such practices by the host CRS provider.

CRS Rules

While courts declined to intervene in the CRS market, the Department of Transportation (DOT) eagerly crafted rules to govern CRS operations.  The DOT’s two primary goals in enacting the 1984 CRS regulations were (1) to incentivize entry into the CRS market and (2) to prevent airline ownership of CRSs from decreasing competition in the downstream passenger air travel market.  One of the most notable rules introduced in the 1984 CRS regulations prohibited display bias.  The DOT changed both this rule and CRS rules as a whole significantly, and by 1997, the DOT required each CRS “(i) to offer at least one integrated display that uses the same criteria for both online and interline connections and (ii) to use elapsed time or non-stop itinerary as a significant factor in selecting the flight options from the database” (Alexander, 2004).  However, the DOT did not categorically forbid display bias; rather, it created several exceptions to this rule – and even allowed airlines to disseminate software that introduced bias into displays.  Additionally, the DOT expressly refused to enforce its anti-bias rules against travel agent displays.

Other CRS rules attempted to reinforce these two goals of additional market entry and preservation of downstream competition.  CRS rules specifically focused on mitigating travel agent switching costs between CRS vendors and reducing any quality advantage incumbent CRSs allegedly had.  Rules prohibited discriminatory booking fees and the tying of travel agent commissions to CRS use, limited contract lengths, prohibited minimum uses and rollover clauses, and required CRSs to give all participating carriers equal service upgrades.

Evidence of CRS Regulation “Success”?

The CRS regulatory experiment had years to run its course; despite the extent and commitment of its regulatory sweep, these rules failed to improve consumer outcomes in any meaningful way.  CRS regulations precipitated neither innovation nor entry, and likely incurred serious allocative efficiency and consumer welfare losses by attempting to prohibit display bias.

First, CRS regulations unambiguously failed in their goal of increasing ease of entry:

Only six CRS vendors offered their services to domestic airlines and travel agents in the mid-1980s. . . If the rules had actually facilitated entry, the number of CRS vendors should have grown or some new entrants should have been seen during the past twenty years.  The evidence, however, is to the contrary.  It remains that ‘[s]ince the [CAB] first adopted CRS rules, no firm has entered the CRS business.’  Meanwhile, there has been a series of mergers coupled with introduction of multinational CRS; the cumulative effect was to reduce the number of CRSs. . . Even if a regulation could successfully facilitate entry by a supplier of CRS services, the gain from such entry would at this point be relatively small, and possibly negative. (Alexander and Lee, 2004) (emphasis added).

As such, CRS regulations did not achieve one of their primary objectives – a fact which stands in stark contrast to Edelman’s declaration that CRS rules represent an unequivocal regulatory success.

Most relevant to the search engine bias analogy, the CRS regulations prohibiting bias did not positively affect consumer welfare.  To the contrary, by ignoring the reality that most travel agents took consumer interests into account in their initial choice of CRS operator (even if they did so to a lesser extent in each individual search they conducted for consumers), and that even if residual bias remained, consumers were “informed and repeat players who have their own preferences,” CRS regulations imposed unjustified costs.  As Alexander and Lee describe it:

[T]he social value of prohibiting display . . . bias solely to improve the quality of information that consumers receive about travel options appears to be low and may be negative.  Travel agents have strong incentives to protect consumers from poor information, through how they customize their internal display screens, and in their choices of CRS vendors.

Moreover, and predictably, CRS regulations appear to have caused serious harm to the competitive process:

The major competitive advantage of the pre-regulation CRS was that it permitted the leading airlines to slightly disadvantage their leading competitors by placing them a bit farther down on the list of available flights.  United would place American slightly farther down the list, and American would return the favor for United flights.  The result, of course, was that the other airlines received slightly higher ranks than they would have otherwise.  When “bias” was eliminated, United moved up on the American system and vice versa, while all other airlines moved down somewhat.  The antitrust restriction on competitive use of the CRS, then, actually reduced competition.  Moreover, the rules ensured that the United/American market leadership would endure fewer challenges from creative newcomers, since any changes to the system would have to undergo DOT oversight, thus making “sneak attacks” impossible.  The resulting slowdown of CRS technology damaged the competitiveness of these systems.  Much of the innovative lead that these systems had enjoyed slowly eroded as the internet evolved.  Today, much of the air travel business has moved to the internet (as have the airlines themselves) (Smith, 1999).

These competitive losses occurred despite evidence suggesting that CRSs themselves enhanced competition and thus had the predictable positive impact for consumers.  For example, one study found that CRS usage increased travel agents’ productivity by an average of 41% and that in the early 1990s over 95% of travel agents used a CRS – indicating that travel agents were able to assist consumers far more effectively once CRSs became available (Ellig, 1991).  The rules governing contractual terms fared no better; indeed, these also likely reduced consumer welfare:

The prohibited contract practices–long-term contracting and exclusive dealing–that had been regarded as exclusionary might not have proved to be such a critical barrier to entry: entry did not occur, independently of those practices.  Evidence on the dealings between travel agents and CRS vendors, post-regulation, suggests that these practices may have enhanced overall allocative efficiency.  Travel agents appear to have agreed to some, if not all, restrictive contracts with CRS vendors as a means of providing those vendors with assurance that they would be repaid gradually, over time, for their up-front investments in the travel agent, such as investments in equipment or training (Alexander and Lee, 2004).

Accordingly, CRS regulations seem to have threatened innovation by decreasing the likelihood that CRS vendors would recover research and development expenditures without providing a commensurate consumer benefit.

Termination of Rules

The DOT terminated CRS regulations in 2004 in light of their failure to improve competitive outcomes in the CRS market and a growing sense that they were making things worse, not better – which Edelman fails to acknowledge and which certainly undermines his claim that regulators addressed this problem “successfully.”  From the time CRS regulations were first adopted in 1984 until 2004, the CRS market and the associated technology changed significantly, rapidly becoming more complex.  As the market increased in complexity, it became increasingly difficult for the DOT to regulate effectively.  Two occurrences in particular precipitated deregulation: (1) the major airlines divested themselves of CRS ownership (despite the absence of any CRS regulations requiring or encouraging divestiture!), and (2) the commercialization of the internet introduced novel substitutes for the CRS system that the CRS regulations did not govern.  Online direct-to-traveler services, such as Travelocity, Expedia, and Orbitz, provide consumers with a method to choose their own flights, entirely absent travel agent assistance.  More importantly, Expedia and Orbitz each developed direct connection technologies that allow them to make reservations directly with an airline’s internal reservation system – bypassing CRS systems almost completely.  Moreover, Travelocity, Expedia, and Orbitz were never forced to comply with CRS regulations, which allowed them to adopt more consumer-friendly products and innovate in meaningful ways, rendering traditional CRSs obsolete.  It is unsurprising that Expedia has warned against overly broad regulations in the search engine bias debate – it has first-hand knowledge of how crucial the ability to innovate is.

These developments, taken together, mean that in order to cause any antitrust harm in the first instance, a hypothetical CRS monopolist would have to have been interacting with (1) airlines, (2) travel agents, and (3) consumers, all of whom had insufficient incentive to switch to an alternative in the face of a significant price increase.  Given this nearly insurmountable burden, and the failure of CRS regulations to improve consumer welfare in even the earlier and simpler state of the world, Alexander and Lee find that, by the time CRS regulations were terminated in 2004, they failed to pass a cost-benefit analysis.

Overall, CRS regulations imposed significant consumer welfare losses and rendered the entire CRS system nearly obsolete by stifling its ability to compete with dynamic and innovative online services.  As Ellig notes, “[t]he legal and economic debate over CRS. . . frequently overlooked the peculiar economics of innovation and entrepreneurship.”  Those who claim search engine bias exists (as distinct from valuable product differentiation between engines) and can be meaningfully regulated rely upon this same flawed analysis and expect the same flawed regulatory approach to “fix” whatever issues they perceive as ailing the search engine market.  Search engine regulation will make consumers worse off.  In the meantime, proponents of so-called search neutrality and heavy-handed regulation of organic search results battle over which of a menu of cumbersome and costly regulatory schemes should be adopted – in the face of evidence that these approaches are more likely to harm consumers than help them, and even stronger evidence that there is no competitive problem with search in the first place.

Indeed, one benefit of thinking hard about remedies in the first instance is that it may illuminate something about the competitive nature of the conduct one seeks to regulate.  I defer to former AAG Barnett in explaining this point:

Put another way, a bad section 2 remedy risks hurting consumers and competition and thus is worse than no remedy at all. That is why it is important to consider remedies at the outset, before deciding whether a tiger needs catching. Doing so has a number of benefits.  …

Furthermore, contemplation of the remedy may reveal that there is no competitive harm in the first place.  Judge Posner has noted that “[t]he nature of the remedy sought in an antitrust case is often . . . an important clue to the soundness of the antitrust claim.”(4) The classic non-section 2 example is Pueblo Bowl-O-Mat, where plaintiffs claimed that the antitrust laws prohibited a firm from buying and reinvigorating failing bowling alleys and prayed for an award of the “profits that would have been earned had the acquired centers closed.”(5) The Supreme Court correctly noted that condemning conduct that increased competition “is inimical to the purposes of [the antitrust] laws”(6)–more competition is not a competitive harm to be remedied. In the section 2 context, one might wish that the Supreme Court had focused on the injunctive relief issued in Aspen Skiing–a compelled joint venture whose ability to enhance competition among ski resorts was not discussed(7)–in assessing whether discontinuing a similar joint venture harmed competition in the first place.(8)

A review of my paper with Geoff reveals several common themes among proposed remedies intimated by the above discussion of CRS regulations.  The proposed remedies consistently: (1) disadvantage Google, (2) advantage its rivals, and (3) have little if anything to do with consumers.  Neither economics nor antitrust history supports such a regulatory scheme; unfortunately, it is consumers that might again ultimately pay the inevitable tax for clumsy regulatory tinkering with product design and competition.


TechFreedom Search Engine Regulation Event today


Today at 12:30 at the Capitol Visitor Center, TechFreedom is hosting a discussion on the regulation of search engines:  “Search Engine Regulation: A Solution in Search of a Problem?”

The basics:

Allegations of “search bias” have led to increased scrutiny of Google, including active investigations in the European Union and Texas, a possible FTC investigation, and sharply-worded inquiries from members of Congress. But what does “search bias” really mean? Does it demand preemptive “search neutrality” regulation, requiring government oversight of how search results are ranked? Is antitrust intervention required to protect competition? Or can market forces deal with these concerns?

A panel of leading thinkers on Internet law will explore these questions at a luncheon hosted by TechFreedom, a new digital policy think tank. The event will take place at the Capitol Visitor Center, room SVC-210/212, on Tuesday, June 14 from 12:30 to 2:30pm, and include a complimentary lunch. CNET’s Declan McCullagh, a veteran tech policy journalist, will moderate a panel of four legal experts.

More details are here, and the event will be streaming live from that link as well.


Manne and Wright on Search Neutrality


Josh and I have just completed a white paper on search neutrality/search bias and the regulation of search engines.  The paper is this year’s first in the ICLE Antitrust & Consumer Protection White Paper Series:

If Search Neutrality Is the Answer, What’s the Question?


Geoffrey A. Manne

(Lewis & Clark Law School and ICLE)


Joshua D. Wright

(George Mason Law School & Department of Economics and ICLE)

In this paper we evaluate both the economic and non-economic costs and benefits of search bias. In Part I we define search bias and search neutrality, terms that have taken on any number of meanings in the literature, and survey recent regulatory concerns surrounding search bias. In Part II we discuss the economics and technology of search. In Part III we evaluate the economic costs and benefits of search bias. We demonstrate that search bias is the product of the competitive process and link the search bias debate to the economic and empirical literature on vertical integration and the generally efficient and pro-competitive incentives for a vertically integrated firm to discriminate in favor of its own content. Building upon this literature and its application to the search engine market, we conclude that neither an ex ante regulatory restriction on search engine bias nor the imposition of an antitrust duty to deal upon Google would benefit consumers. In Part V we evaluate the frequent claim that search engine bias causes other serious, though less tangible, social and cultural harms. As with the economic case for search neutrality, we find these non-economic justifications for restricting search engine bias unconvincing, and particularly susceptible to the well-known Nirvana Fallacy of comparing imperfect real world institutions with romanticized and unrealistic alternatives.

Search bias is not a function of Google’s large share of overall searches. Rather, it is a feature of competition in the search engine market, as evidenced by the fact that its rivals also exercise editorial and algorithmic control over what information is provided to consumers and in what manner. Consumers rightly value competition between search engine providers on this margin; this fact alone suggests caution in regulating search bias at all, much less with an ex ante regulatory schema which defines the margins upon which search providers can compete. The strength of economic theory and evidence demonstrating that regulatory restrictions on vertical integration are costly to consumers, impede innovation, and discourage experimentation in a dynamic marketplace supports the conclusion that neither regulation of search bias nor antitrust intervention can be justified on economic terms. Search neutrality advocates touting the non-economic virtues of their proposed regime should bear the burden of demonstrating that they exist beyond the Nirvana Fallacy of comparing an imperfect private actor to a perfect government decision-maker, and further, that any such benefits outweigh the economic costs.




Microsoft comes full circle


I am disappointed but not surprised to see that my former employer filed an official antitrust complaint against Google in the EU.  The blog post by Microsoft’s GC, Brad Smith, summarizing its complaint is here.

Most obviously, there is a tragic irony to the most antitrust-beleaguered company ever filing an antitrust complaint against its successful competitor.  Of course the specifics are not identical, but all of the atmospheric and general points that Microsoft itself made in response to the claims against it are applicable here.  It smacks of competitors competing not in the marketplace but in the regulators’ offices.  It promotes a kind of weird protectionism, directing the EU’s enforcement powers against a successful US company . . . at the behest of another US competitor.  Regulators will always be fighting last year’s battles to the great detriment of the industry.  Competition and potential competition abound, even where it may not be obvious (Linux for Microsoft; Facebook for Google, for example).  Etc.  Microsoft was once the world’s most powerful advocate for more sensible, restrained, error-cost-based competition policy.  That it now finds itself on the opposite side of this debate is unfortunate for all of us.

Brad’s blog post is eloquent (as he always is) and forceful.  And he acknowledges the irony.  And of course he may be right on the facts.  Unfortunately we’ll have to resort to a terribly-costly, irretrievably-flawed and error-prone process to find out–not that the process is likely to result in a very reliable answer anyway.  Where I think he is most off base is where he draws–and asks regulators to draw–conclusions about the competitive effects of the actions he describes.  It is certain that Google has another story and will dispute most or all of the facts.  But even without that information we can dispute the conclusions that Google’s actions, if true, are necessarily anticompetitive.  In fact, as Josh and I have detailed at length here and here, these sorts of actions–necessitated by the realities of complex, innovative and vulnerable markets and in many cases undertaken by the largest and the smallest competitors alike–are more likely pro-competitive.  More important, efforts to ferret out the anti-competitive among them will almost certainly harm welfare rather than help it–particularly when competitors are welcomed in to the regulators’ and politicians’ offices in the process.

As I said, disappointing.  It is not inherently inappropriate for Microsoft to resort to this simply because it has been the victim of such unfortunate “competition” in the past, nor is Microsoft obligated or expected to demonstrate intellectual or any other sort of consistency.  But knowing what it does about the irretrievable defects of the process and the inevitable costliness of its consequences, it is disingenuous or naive (the Nirvana fallacy) for it to claim that it is simply engaging in a reliable effort to smooth over a bumpy competitive landscape.  That may be the ideal of antitrust enforcement, but no one knows as well as Microsoft that the reality is far from that ideal.  To claim implicitly that, in this case, things will be different is, as I said, disingenuous.  And likely really costly in the end for all of us.


An update on the evolving e-book market: Kindle edition (pun intended)


[UPDATE:  Josh links to a WSJ article telling us that EU antitrust enforcers raided several (unnamed) e-book publishers as part of an apparent antitrust investigation into the agency model and whether it is “improperly restrictive.”  Whatever that means.  Key grafs:

At issue for antitrust regulators is whether agency models are improperly restrictive. Europe, in particular, has strong anticollusion laws that limit the extent to which companies can agree on the prices consumers will eventually be charged.

Amazon, in particular, has vociferously opposed the agency practice, saying it would like to set prices as it sees fit. Publishers, by contrast, resist the notion of online retailers’ deep discounting.

It is unclear whether the animating question is whether the publishers might have agreed to a particular pricing model, or to particular prices within that model.  As a legal matter that distinction probably doesn’t matter at all; as an economic matter it would seem to be more complicated–to be explored further another day . . . .]

A year ago I wrote about the economics of the e-book publishing market in the context of the dispute between Amazon and some publishers (notably Macmillan) over pricing.  At the time I suggested a few things about how the future might pan out (never a good idea . . . ):

And that’s really the twist.  Amazon is not ready to be a platform in this business.  The economic conditions are not yet right and it is clearly making a lot of money selling physical books directly to its users.  The Kindle is not ubiquitous and demand for electronic versions of books is not very significant–and thus Amazon does not want to take on the full platform development and distribution risk.  Where seller control over price usually entails a distribution of inventory risk away from suppliers and toward sellers, supplier control over price correspondingly distributes platform development risk toward sellers.  Under the old system Amazon was able to encourage the distribution of the platform (the Kindle) through loss-leader pricing on e-books, ensuring that publishers shared somewhat in the costs of platform distribution (from selling correspondingly fewer physical books) and allowing Amazon to subsidize Kindle sales in a way that helped to encourage consumer familiarity with e-books.  Under the new system it does not have that ability and can only subsidize Kindle use by reducing the price of Kindles–which impedes Amazon from engaging in effective price discrimination for the Kindle, does not tie the subsidy to increased use, and will make widespread distribution of the device more expensive and more risky for Amazon.

This “agency model,” if you recall, is one where, essentially, publishers, rather than Amazon, determine the price for electronic versions of their books sold via Amazon and pay Amazon a percentage.  The problem from Amazon’s point of view, as I mention in the quote above, is that without the ability to control the price of the books it sells, Amazon is limited essentially to fiddling with the price of the reader–the platform–itself in order to encourage more participation on the reader side of the market.  But I surmised (again in the quote above) that fiddling with the price of the platform would be far more blunt and potentially costly than controlling the price of the books themselves, mainly because book prices correlate almost perfectly with usage and platform prices do not.  In the end, Amazon may find itself subsidizing lots of Kindle purchases from which it can never recoup its losses, because many of those subsidized Kindles go to people who have no interest in actually using the devices very much (either because they’re sticking with paper or because Apple has leapfrogged the competition).

It appears, nevertheless, that Amazon has indeed been pursuing this pricing strategy.  According to this post from Kevin Kelly,

In October 2009 John Walkenbach noticed that the price of the Kindle was falling at a consistent rate, lowering almost on a schedule. By June 2010, the rate was so unwavering that he could easily forecast the date at which the Kindle would be free: November 2011.

There’s even a nice graph to go along with it:
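Walkenbach’s forecast is just a straight-line fit of price against time, extrapolated to the zero-crossing.  A minimal sketch of that calculation, using hypothetical price points rather than his actual data:

```python
# Linear extrapolation of a falling price to its zero-crossing date,
# in the spirit of Walkenbach's Kindle forecast. The price points
# below are invented placeholders, not his actual observations.
from datetime import date, timedelta

# (observation date, observed price in dollars) -- illustrative only
observations = [
    (date(2009, 10, 1), 259.0),
    (date(2010, 6, 1), 189.0),
    (date(2010, 8, 1), 139.0),
]

# Ordinary least-squares fit of price against days elapsed.
t0 = observations[0][0]
xs = [(d - t0).days for d, _ in observations]
ys = [p for _, p in observations]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
intercept = ybar - slope * xbar

# Solve intercept + slope * t = 0 for the "free Kindle" date.
zero_day = -intercept / slope
free_date = t0 + timedelta(days=round(zero_day))
print(free_date)
```

With these made-up prices the line crosses zero in late 2011, which is the flavor of the forecast; the actual November 2011 date comes from Walkenbach’s own data.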

So what about the recoupment risk?  Here’s my new theory:  Amazon, having already begun offering free streaming videos for Prime customers, will also begin offering heavily-discounted Kindles and even e-book subsidies–but will also begin rescinding its shipping subsidy and otherwise make the purchase of dead tree books relatively more costly (including by maintaining less inventory–another way to recoup).  It will still face a substantial threat from competing platforms like the iPad but Amazon is at least in a position to affect a good deal of consumer demand for Kindle’s dead tree competitors.

For a take on what’s at stake (here relating to newspapers rather than books, but I’m sure the dynamic is similar), this tidbit linked from one of the comments to Kevin Kelly’s post is eye-opening:

If newspapers switched over to being all online, the cost base would be instantly and permanently transformed. The OECD report puts the cost of printing a typical paper at 28 per cent and the cost of sales and distribution at 24 per cent: so the physical being of the paper absorbs 52 per cent of all costs. (Administration costs another 8 per cent and advertising another 16.) That figure may well be conservative. A persuasive looking analysis in the Business Insider put the cost of printing and distributing the New York Times at $644 million, and then added this: ‘a source with knowledge of the real numbers tells us we’re so low in our estimate of the Times’s printing costs that we’re not even in the ballpark.’ Taking the lower figure, that means that New York Times, if it stopped printing a physical edition of the paper, could afford to give every subscriber a free Kindle. Not the bog-standard Kindle, but the one with free global data access. And not just one Kindle, but four Kindles. And not just once, but every year. And that’s using the low estimate for the costs of printing.
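The quoted arithmetic is easy to sanity-check.  In this sketch the $644 million figure comes from the quoted passage, while the subscriber count and Kindle price are rough assumptions of mine, not figures from the report:

```python
# Back-of-the-envelope check of the "free Kindles" claim quoted above.
# The $644M printing/distribution cost is the quoted Business Insider
# estimate; the subscriber count and Kindle price are assumed values
# for illustration only.
printing_and_distribution = 644_000_000  # dollars per year (quoted estimate)
subscribers = 830_000                    # assumed print subscriber count
kindle_price = 189                       # assumed price of a 3G Kindle, dollars

savings_per_subscriber = printing_and_distribution / subscribers
kindles_per_subscriber = savings_per_subscriber / kindle_price
print(round(savings_per_subscriber), round(kindles_per_subscriber, 1))
```

Under these assumptions the savings come to roughly four Kindles per subscriber per year, consistent with the quoted claim.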


On the ethical dimension of l’affair hiybbprqag


Former TOTM blog symposium participant Joshua Gans (visiting Microsoft Research) has a post at TAP on l’affair hiybbprqag, about which I blogged previously here.

Gans notes, as I did, that Microsoft is not engaged in wholesale copying of Google’s search results, even though doing so would be technologically feasible.  But Gans goes on to draw a normative conclusion:

Let’s start with “imitation,” “copying” and its stronger variants of “plagiarism” and “cheating.” Had Bing wanted to do this and directly map Google’s search results onto its own, it could have done it. It could have set up programs to enter terms in Google and skimmed off the results and then used them directly. And I think we can all agree that that is wrong. Why? Two reasons. First, if Google has invested to produce those results, if others can just hang off them and copy it, Google’s may not earn the return on its efforts it should do. Second, if Bing were doing this and representing itself as a different kind of search, then that misrepresentation would be misleading. Thus, imitation reduces Google’s reward for innovation while adding no value in terms of diversity.

His first reason why this would be wrong is . . . silly.  I mean, I don’t want to get into a moral debate, but since when is it wrong to engage in activity that “may” hamper another firm’s ability to earn the return on its effort that it “should” (whatever “should” means here)?  I always thought that was called “competition” and we encouraged it.  As I noted the other day, competition via imitation is an important part of Schumpeterian capitalism.  To claim that reducing another company’s profits via imitation is wrong, but doing so via innovation is good and noble, is to hang one’s hat on a distinction that does not really exist.

The second argument, that doing so would amount to misrepresentation, is possible, but if Microsoft were actually just copying Google’s results, I’m sure its representations would look different than they do now and the problem would probably not exist; the claim is speculative, at best.

Now, regardless, I doubt it would be profitable for Microsoft to copy Google wholesale, and this is basically just a red herring (as Gans understands–he goes on to discuss the more “innocuous” imitation at issue).  While I think Gans’ claims that it would be “wrong” are just hand waving, I am confident it would be “wrong” from the point of view of Microsoft’s bottom line–or else they would already be doing it.  In this context, that would seem to be the only standard that matters, unless there were a legal basis for the claim.

On this score, Gans points us to Shane Greenstein (Kellogg).  Greenstein writes:

Let’s start with a weak standard, the law. Legally speaking, imitation is allowed so long as a firm does not violate laws governing patents, copyright, or trade secrets. Patents obviously do not apply to this situation, and neither does copyright  because Google does not get a copyright on a search result. It also does not appear as if Googles trade secrets were violated. So, generally speaking, it does not appear as if any law has been broken.

This is all well and good, but Greenstein goes on to engage in his own casual moralizing, and his comments are worth reproducing (imitating?) at some length:

The norms of rivalry

There is nothing wrong with one retailer walking through a rival’s shop and getting ideas for what to do. There is really nothing wrong with a designer of a piece of electronic equipment buying a rival’s product and studying it in order to get new ideas for a  better design. 

In the modern Internet, however, there is no longer any privacy for users. Providers want to know as much as they can, and generally the rich suppliers can learn quite a lot about user conduct and preferences.

That means that rivals can learn a great deal about how users conduct their business, even when they are at a rival’s site. It is as if one retailer had a camera in a rival’s store, or one designer could learn the names of the buyer’s of their rival’s products, and interview them right away.

In the offline world, such intimate familiarity with a rival’s users and their transactions would be uncomfortable. It would seem like an intrusion on the transaction between user and supplier. Why is it permissible in the online world? Why is there any confusion about this being an intrusion in the online world? Why isn’t Microsoft’s behavior seen — cut and dry — as an intrusion?

In other words, the transaction between supplier and user is between supplier and user, and nobody else should be able to observe it without permission of both supplier and user. The user alone does not have the right or ability to invite another party to observe all aspects of the transaction.

That is what bothers me about Bing’s behavior. There is nothing wrong with them observing users, but they are doing more than just that. They are observing their rival’s transaction with users. And learning from it. In other contexts that would not be allowed without explicit permission of both parties — both user and supplier.

Moreover, one party does not like it in this case, as they claim the transaction with users as something they have a right to govern and keep to themselves. There is some merit in that claim.

In most contexts it seems like the supplier’s wishes should be respected. Why not online? (emphasis mine)

Where on Earth do these moral standards come from?  In what way is it not “allowed” (whatever that means here) for a firm to observe and learn from a rival’s transactions with users?  I can see why the rival would prefer it to be otherwise, of course, but so what?  They would also prefer to eradicate their meddlesome rival entirely, if possible (hence Microsoft’s considerable engagement with antitrust authorities concerning Google’s business), but we hardly elevate such desires to the realm of the moral.

What I find most troublesome is the controlling, regulatory mindset implicit in these analyses.  Here’s Gans again:

Outright imitation of this type should be prohibited but what do we call some more innocuous types? Just look at how the look and feel of the iPhone has been adopted by some mobile software developers just as the consumer success of graphic based interfaces did in an earlier time. This certainly reduces Apple’s reward for its innovations but the hit on diversity is murkier because while some features are common, competitors have tried to differentiate themselves. So this is not imitation but it is something more common, leveraging without compensation and how you feel about it depends on just how much reward you think pioneers should receive.

It is usually politicians and not economists (other than politico-economists like Krugman) who think they have a handle on–and an obligation to do something about–things like “how much reward . . . pioneers should receive.”  I would have thought the obvious answer to the question would be either “the optimal amount, but good luck knowing what that is or expecting to find it in the real world,” or else, for the Second Best, “whatever the market gives them.”  The implication that there is some moral standard appreciable by human mortals, or even human economists, is a recipe for disaster.


Microsoft undermines its own case


One of my favorite stories in the ongoing saga over the regulation (and thus the future) of Internet search emerged earlier this week with claims by Google that Microsoft has been copying its answers–using Google search results to bolster the relevance of its own results for certain search terms.  The full story from Internet search journalist extraordinaire, Danny Sullivan, is here, with a follow up discussing Microsoft’s response here.  The New York Times is also on the case with some interesting comments from a former Googler that feed nicely into the Schumpeterian competition angle (discussed below).  And Microsoft consultant (“though on matters unrelated to issues discussed here”)  and Harvard Business prof Ben Edelman coincidentally echoes precisely Microsoft’s response in a blog post here.

What I find so great about this story is how it seems to resolve one of the most significant strands of the ongoing debate–although it does so, from Microsoft’s point of view, unintentionally, to be sure.

Here’s what I mean.  Back when Microsoft first started being publicly identified as a significant instigator of regulatory and antitrust attention paid to Google, the company, via its chief competition counsel, Dave Heiner, defended its stance in large part on the following ground:

All of this is quite important because search is so central to how people navigate the Internet, and because advertising is the main monetization mechanism for a wide range of Web sites and Web services. Both search and online advertising are increasingly controlled by a single firm, Google. That can be a problem because Google’s business is helped along by significant network effects (just like the PC operating system business). Search engine algorithms “learn” by observing how users interact with search results. Google’s algorithms learn less common search terms better than others because many more people are conducting searches on these terms on Google.

These and other network effects make it hard for competing search engines to catch up. Microsoft’s well-received Bing search engine is addressing this challenge by offering innovations in areas that are less dependent on volume. But Bing needs to gain volume too, in order to increase the relevance of search results for less common search terms. That is why Microsoft and Yahoo! are combining their search volumes. And that is why we are concerned about Google business practices that tend to lock in publishers and advertisers and make it harder for Microsoft to gain search volume. (emphasis added).

Claims of “network effects,” “increasing returns to scale,” and the absence of “minimum viable scale” for competitors run rampant (and unsupported) in the various cases against Google.  The TradeComet complaint, for example, claims that

[t]he primary barrier to entry facing vertical search websites is the inability to draw enough search traffic to reach the critical mass necessary to become independently sustainable.

But now we discover (what we should have known all along) that “learning by doing” is not the only way to obtain the data necessary to generate relevant search results: “Learning by copying” works, as well.  And there’s nothing wrong with it–in fact, the very process of Schumpeterian creative destruction assumes imitation.

As Armen Alchian notes in describing his evolutionary process of competition,

Neither perfect knowledge of the past nor complete awareness of the current state of the arts gives sufficient foresight to indicate profitable action . . . [and] the pervasive effects of uncertainty prevent the ascertainment of actions which are supposed to be optimal in achieving profits.  Now the consequence of this is that modes of behavior replace optimum equilibrium conditions as guiding rules of action. First, wherever successful enterprises are observed, the elements common to these observable successes will be associated with success and copied by others in their pursuit of profits or success. “Nothing succeeds like success.”

So on the one hand, I find the hand-wringing about Microsoft’s “copying” of Google’s results to be completely misplaced–just as the pejorative connotations of “embrace and extend,” deployed against Microsoft itself when it was the target of this sort of scrutiny, were bogus. But, at the same time, I see this dynamic essentially decimating Microsoft’s (and others’) claims that Google has an unassailable position because no competitor can ever hope to match its size, and thus its access to the information essential to the quality of search results, particularly when it comes to so-called “long-tail” search terms.

Long-tail search terms are queries so rare that there is little user history (information about which results searchers found relevant and clicked on) to guide future search results. As Ben Edelman writes in his blog post (linked above) on this issue (trotting out, even while implicitly undercutting, the “minimum viable scale” canard):

Of course the reality is that Google’s high market share means Google gets far more searches than any other search engine. And Google’s popularity gives it a real advantage: For an obscure search term that gets 100 searches per month at Google, Bing might get just five or 10. Also, for more popular terms, Google can slice its data into smaller groups — which results are most useful to people from Boston versus New York, which results are best during the day versus at night, and so forth. So Google is far better equipped to figure out what results users favor and to tailor its listings accordingly. Meanwhile, Microsoft needs additional data, such as Toolbar and Related Sites data, to attempt to improve its results in a similar way.

But of course the “additional data” that Microsoft has access to here is, to a large extent, the same data that Google has. Danny Sullivan’s follow-up story (also linked above) suggests that Bing doesn’t do all it could to make use of Google’s data: Bing does not, it seems, copy Google search results wholesale, nor does it exploit user behavior as extensively as it could (for example, by observing a search on Google and then logging the next page the user visits, which would give Bing a pretty good idea of which sites in Google’s results users found most relevant). But none of that changes the fundamental fact that Microsoft and other search engines can overcome a significant amount of the so-called barrier to entry afforded by Google’s impressive scale simply by imitating much of what Google does (and, one hopes, also innovating enough to offer something better).
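The inference Sullivan describes, pairing a Google query observed in a browsing log with the next page the same user visits, can be sketched in a few lines. This is a purely illustrative toy under stated assumptions (the log format, URL-matching heuristic, and function names below are hypothetical), not a description of any actual Bing or Toolbar pipeline:

```python
from urllib.parse import urlparse, parse_qs
from collections import Counter

def extract_query(url):
    """Return the search query if `url` looks like a Google results page, else None."""
    parts = urlparse(url)
    if "google." in parts.netloc and parts.path == "/search":
        q = parse_qs(parts.query).get("q")
        return q[0] if q else None
    return None

def infer_clicks(browsing_log):
    """Pair each observed search query with the next non-search page visited.

    `browsing_log` is a time-ordered list of URLs from one user's session
    (a simplified, hypothetical stand-in for toolbar data). Returns a
    Counter mapping (query, clicked_url) -> count.
    """
    clicks = Counter()
    pending_query = None
    for url in browsing_log:
        q = extract_query(url)
        if q is not None:
            pending_query = q  # the user just ran a search
        elif pending_query is not None:
            clicks[(pending_query, url)] += 1  # next page = inferred click
            pending_query = None
    return clicks

log = [
    "https://www.google.com/search?q=long+tail",
    "https://en.wikipedia.org/wiki/Long_tail",
    "https://www.google.com/search?q=network+effects",
    "https://example.com/network-effects",
]
print(infer_clicks(log))
```

Aggregated over many users, counts like these would approximate the click data Google gathers from its own results pages, which is precisely why scale in serving searches is not the only route to relevance data.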

Perhaps Google is “better equipped to figure out what users favor.” But it seems to me that only a trivial amount of this advantage is plausibly attributable to Google’s scale rather than to its engineering and innovation. The fact that Microsoft can (because of its own impressive scale in various markets) and does take advantage of accessible data to benefit indirectly from Google’s own prowess in search is a testament to the irrelevance of these unfortunately pervasive scale and network-effect arguments.
