Showing Latest Publications

The Industrial Organization of Food Carts

Popular Media As Harold Demsetz notes, “the problem of defining ownership is precisely that of creating properly scaled legal barriers to entry.”   Taxi medallions, meet food cart . . .

As Harold Demsetz notes, “the problem of defining ownership is precisely that of creating properly scaled legal barriers to entry.”   Taxi medallions, meet food cart permits.  From the WSJ:

The city’s competitive street food culture has created a thriving black market for mobile food vending permits issued by the Department of Health and Mental Hygiene. The city charges a mere $200 for most food-cart permits, which must be paid every two years when they are renewed. But it only issues 3,100 year-round permits plus an additional 1,000 seasonal permits—not enough to satisfy demand. Transferring or renting these permits to another vendor is illegal but everyone, including the city’s Health Department, acknowledges that it happens.

Meanwhile, demand for permits and their black-market prices continue to climb as street food’s popularity soars with blogs like Midtown Lunch chronicling vendors’ moves and some gourmet food trucks developing cult-like followings. Some permits fetch as much as $20,000 for two years, vendors say. In the case of Ms. Sultana, the Bronx food vendor, she says the permit holder told her someone else was willing to pay $15,000 for the permit she previously paid $7,000 for two years ago.

Mohammed Rahman, who has operated the popular Kwik Meal cart in midtown for 11 years, says he pays $15,000 every two years for his permit. “The city charges only $200, why should I have to pay $15,000? All the profits go to someone else.”

Obtaining a food cart or truck permit in one’s own name can take a decade or more, according to vendors. There are 2,080 people currently on the citywide waiting list for a two-year permit. The list is made up of license holders, and it’s not uncommon for families to get licenses for every member of their family—even if they don’t work at a cart—to increase their chances of obtaining a permit.
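The arithmetic implicit in the quoted figures is stark. A toy calculation, using only the numbers the WSJ reports (a $200 official fee, $15,000 black-market prices, 3,100 year-round permits), sketches the scarcity rent the cap creates; the totals are illustrative and assume every permit would resell at the quoted rate:

```python
# Toy calculation of the scarcity rent implied by the WSJ figures above.
OFFICIAL_FEE = 200            # city charge per two-year permit
BLACK_MARKET_PRICE = 15_000   # what vendors like Mr. Rahman report paying
YEAR_ROUND_PERMITS = 3_100

# The gap between the two prices is pure scarcity rent created by the cap,
# captured by permit holders rather than by the city or working vendors.
rent_per_permit = BLACK_MARKET_PRICE - OFFICIAL_FEE
print(f"Rent per permit (2 yrs): ${rent_per_permit:,}")   # $14,800

# If every year-round permit changed hands at the quoted rate:
total_rent = rent_per_permit * YEAR_ROUND_PERMITS
print(f"Implied total rent: ${total_rent:,}")             # $45,880,000
```

This is exactly the taxi-medallion logic Demsetz’s line points to: the legal barrier, not the cart, is the valuable asset.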

In a related story, the food carts in New York City now have a trade association (website here).

Even more closely related is the battle over green cart permits in NYC, and the competitive response from supermarkets:

The city started the green cart program almost three years ago to bring more fruits and vegetables to “underserved” neighborhoods with high rates of diet-related illnesses. Today, the city has issued about 450 permits to operate green carts in large swaths of the Bronx and upper Manhattan, as well as parts of Brooklyn, Queens and Staten Island. While most normal food carts can operate anywhere and tend to congregate in high-traffic neighborhoods like midtown Manhattan, green carts can sell only in designated zones.

Some lawmakers, like Peter Koo, a City Council member who represents Flushing, think green carts shouldn’t be allowed within a certain distance of supermarkets.

The green carts have their fans in the community. George Wright recently walked up to the two carts a block from Ms. Kim’s store. “It’s cheaper than other stores,” he said, “and the fruit is very good.”

Ms. Kim says the monthly revenue in her store has dropped to $5,000 a month from $10,000 a month because of the carts, the tough economy and nearby construction. That’s before she shells out $3,500 for rent, buys the produce and pays an employee. The result is that she’s losing money each month.

She figures it’s going to get worse. Most of her customers pay with electronic food stamps and recently some green carts got portable devices so they can accept them as well.

“I have to go bankrupt,” she says.

Here is Yglesias on DC food cart deregulation.

Filed under: business, economics, food, markets, regulation

Continue reading
Financial Regulation & Corporate Governance

Search Bias and Antitrust

Popular Media There is an antitrust debate brewing concerning Google and “search bias,” a term used to describe search engine results that preference the content of the . . .

There is an antitrust debate brewing concerning Google and “search bias,” a term used to describe search engine results that preference the content of the search provider.  For example, Google might list Google Maps prominently if one searches “maps” or Microsoft’s Bing might prominently place Microsoft affiliated content or products.

Apparently both antitrust investigations and Congressional hearings are in the works; regulators and commentators appear poised to attempt to impose “search neutrality” through antitrust or other regulatory means to limit or prohibit the ability of search engines (or perhaps just Google) to favor their own content.  At least one proposal goes so far as to advocate a new government agency to regulate search.  Of course, when I read proposals like this, I wonder where Google’s share of the “search market” will be by the time the new agency is built.

As with the net neutrality debate, I understand that much of the push for search neutrality involves an intense effort to discard the traditional, economically grounded antitrust framework.  The logic for this push is simple.  The economic literature on vertical restraints and vertical integration provides no support for ex ante regulation arising out of the concern that a vertically integrating firm will harm competition by favoring its own content and discriminating against rivals.  Economic theory suggests that such arrangements may be anticompetitive in some instances, but it also provides a plethora of pro-competitive explanations.  Lafontaine & Slade explain the state of the evidence in their recent survey paper in the Journal of Economic Literature:

We are therefore somewhat surprised at what the weight of the evidence is telling us. It says that, under most circumstances, profit-maximizing vertical-integration decisions are efficient, not just from the firms’ but also from the consumers’ points of view. Although there are isolated studies that contradict this claim, the vast majority support it. Moreover, even in industries that are highly concentrated so that horizontal considerations assume substantial importance, the net effect of vertical integration appears to be positive in many instances. We therefore conclude that, faced with a vertical arrangement, the burden of evidence should be placed on competition authorities to demonstrate that that arrangement is harmful before the practice is attacked. Furthermore, we have found clear evidence that restrictions on vertical integration that are imposed, often by local authorities, on owners of retail networks are usually detrimental to consumers. Given the weight of the evidence, it behooves government agencies to reconsider the validity of such restrictions.

Of course, this does not bless all instances of vertical contracts or integration as pro-competitive.  The antitrust approach appropriately eschews ex ante regulation in favor of a fact-specific rule of reason analysis that requires plaintiffs to demonstrate competitive harm in a particular instance. Again, given the strength of the empirical evidence, it is no surprise that advocates of search neutrality, like advocates of net neutrality before it, either do not rely on consumer welfare arguments or are willing to sacrifice consumer welfare for other objectives.

I wish to focus on the antitrust arguments for a moment.  In an interview with the San Francisco Gate, Harvard’s Ben Edelman sketches out an antitrust claim against Google based upon search bias; and to his credit, Edelman provides some evidence in support of his claim.

I’m not convinced.  Edelman’s interpretation of evidence of search bias is detached from antitrust economics.  The evidence is all about identifying whether or not there is bias.  That, however, is not the relevant antitrust inquiry; instead, the question is whether such vertical arrangements, including preferential treatment of one’s own downstream products, are generally procompetitive or anticompetitive.  Examples from other contexts illustrate this point.

Grocery product manufacturers contract for “bias” with supermarkets through slotting contracts and other shelf space payments.  The bulk of economic theory and evidence on these contracts suggests that they are generally efficient and a normal part of the competitive process.   Vertically integrated firms may “bias” their own content in ways that increase output.  Whether bias occurs within the firm (as is the case with Google favoring its own products) or by contract (the shelf space example) should be of no concern to Edelman and those making search bias antitrust arguments.  Economists have known since Coase — and have been reminded by Klein, Alchian, Williamson and others — that firms may achieve by contract anything they could do within the boundaries of the firm.  The point is that, in the economics literature, it is well known that content self-promoting incentives in a vertical relationship can be either efficient or anticompetitive depending on the circumstances of the situation.  The empirical literature suggests that such relationships are mostly pro-competitive and that restrictions upon the abilities of firms to enter them generally reduce consumer welfare.

Edelman is an economist, and so I find it a bit odd that he has framed the “bias” debate without reference to any of this literature.  Instead, his approach appears to be that bias generates harm to rivals and that this harm is a serious antitrust problem.  (Or in other places, that the problem is that Google exhibits bias but its employees may have claimed otherwise at various points; this is also antitrust-irrelevant.)  For example, Edelman writes:

Search bias is a mechanism whereby Google can leverage its dominance in search, in order to achieve dominance in other sectors.  So for example, if Google wants to be dominant in restaurant reviews, Google can adjust search results, so whenever you search for restaurants, you get a Google reviews page, instead of a Chowhound or Yelp page. That’s good for Google, but it might not be in users’ best interests, particularly if the other services have better information, since they’ve specialized in exactly this area and have been doing it for years.

“Leveraging” one’s dominance in search, of course, takes a bit more than bias.  But I was quite curious about Edelman’s evidence and so I went and looked at Edelman and Lockwood.  Here is how they characterize their research question: “Whether search engines’ algorithmic results favor their own services, and if so, which search engines do so most, to what extent, and in what substantive areas.”  Here is how the authors describe what they did to test the hypothesis that Google engages in more search bias than other search engines:

To formalize our analysis, we formed a list of 32 search terms for services commonly provided by search engines, such as “email”, “calendar”, and “maps”. We searched for each term using the top 5 search engines: Google, Yahoo, Bing, Ask, and AOL. We collected this data in August 2010.

We preserved and analyzed the first page of results from each search. Most results came from sources independent of search engines, such as blogs, private web sites, and Wikipedia. However, a significant fraction – 19% – came from pages that were obviously affiliated with one of the five search engines. (For example, we classified results from youtube.com and gmail.com as Google, while Microsoft results included msn.com, hotmail.com, live.com, and Bing.)

Here is the underlying data for all 32 terms; so far, so good.  A small pilot study examining whether and to what extent search engines favor their own content is an interesting project — though, again, I’m not sure it says anything about the antitrust issues.  No surprise: they find some evidence that search engines exhibit some bias in favor of affiliated sites.  You can see all of the evidence at Edelman’s site (again, to his credit).  Interpretations of these results vary dramatically.  Edelman sees a serious problem.  Danny Sullivan begs to differ (“Google only favors itself 19 percent of the time”), and also makes the important point that the study took place before Yahoo searches were powered by Bing.
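For readers curious about the mechanics, the affiliation tally Edelman and Lockwood describe can be sketched in a few lines. The domain lists below follow the examples they give (youtube.com and gmail.com as Google; msn.com, hotmail.com, live.com, and Bing as Microsoft) but are otherwise assumptions; a real replication would need their full mapping:

```python
from urllib.parse import urlparse

# Partial affiliation map, following the examples in the quoted passage.
# The entries beyond those examples are illustrative assumptions.
AFFILIATION = {
    "google": {"google.com", "youtube.com", "gmail.com"},
    "microsoft": {"bing.com", "msn.com", "hotmail.com", "live.com"},
    "yahoo": {"yahoo.com"},
    "aol": {"aol.com"},
    "ask": {"ask.com"},
}

def classify(result_url: str):
    """Return the search engine a result URL is affiliated with, if any."""
    host = urlparse(result_url).netloc.lower().removeprefix("www.")
    for engine, domains in AFFILIATION.items():
        if host in domains or any(host.endswith("." + d) for d in domains):
            return engine
    return None  # independent source (blog, private site, Wikipedia, ...)

# Tally the affiliated fraction over a toy first page of results.
results = ["http://www.youtube.com/watch",
           "http://en.wikipedia.org/wiki/Email",
           "http://mail.live.com/"]
affiliated = [r for r in results if classify(r) is not None]
print(len(affiliated) / len(results))  # fraction of affiliated results
```

In the real study that fraction came out to 19% across all first-page results for the 32 terms.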

In their study, Edelman and Lockwood appear at least somewhat aware that bias and vertical integration can be efficient although they do not frame it in those terms.  They concede, for example, that “in principle, a search engine might feature its own services because its users prefer these links.”  To distinguish between these two possibilities, they conceive of the following test:

To test the user preference and bias hypotheses, we use data from two different sources on click-through-rate (CTR) for searches at Google, Yahoo, and Bing. Using CTR data from comScore and another service that (with users’ permission) tracks users’ searches and clicks (a service which prefers not to be listed by name), we analyze the frequency with which users click on search results for selected terms. The data span a four-week period, centered around the time of our automated searches.  In click-through data, the most striking pattern is that the first few search results receive the vast majority of users’ clicks. Across all search engines and search terms, the first result received, on average, 72% of users’ clicks, while the second and third results received 13% and 8% of clicks, respectively.

So far, no surprises.  The first listing generates greater incremental click-through than the second or third listing.  Similarly, the eye-level shelf space generates more sales than less prominent shelf space.  The authors have a difficult time distinguishing user preference from bias:

This concentration of users’ clicks makes it difficult to disprove the user preference hypothesis. For example, as shown in Table 1, Google and Yahoo each list their own maps service as the first result for the query “maps”. Our CTR data indicates that Google Maps receives 86% of user clicks when the search is performed on Google, and Yahoo Maps receives 72% of clicks when the search is performed on Yahoo. One might think that this concentration is evidence of users’ preference for the service affiliated with their search engine. On the other hand, since clicks are usually highly concentrated on the first result, it is possible that users have no such preference, and that they are simply clicking on the first result because it appears first. Moreover, since the advantage conferred by a result’s rank likely differs across different search queries, we do not believe it is appropriate to try to control for ranking in a regression.

The interesting question from a consumer welfare perspective is not what happens to users without a strong preference for Google Maps or Yahoo Maps.  Users without a strong preference are likely to click through on whatever service is offered by their search engine of choice.  There is no significant welfare loss when a consumer who is indifferent between Google Maps and Yahoo Maps chooses one over the other.
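The rank-concentration pattern in the quoted passage (roughly 72%, 13%, and 8% of clicks going to the first three positions) is simple to compute from raw click logs. The record format below is hypothetical:

```python
from collections import Counter

# Hypothetical click-log records: (query, rank_clicked). The layout and the
# numbers are illustrative; the actual comScore feed is far richer.
clicks = [("maps", 1), ("maps", 1), ("maps", 2), ("email", 1),
          ("email", 1), ("email", 3), ("calendar", 1), ("calendar", 1)]

def click_share_by_rank(log):
    """Fraction of all clicks landing on each result position."""
    counts = Counter(rank for _query, rank in log)
    total = sum(counts.values())
    return {rank: n / total for rank, n in sorted(counts.items())}

print(click_share_by_rank(clicks))
# In this toy log the first position takes 6 of 8 clicks, mirroring the
# top-heavy 72/13/8 split Edelman and Lockwood report.
```

The concentration itself is unremarkable; as the shelf-space analogy suggests, prominence always draws incremental sales. The analytical difficulty is separating rank effects from preference, which is exactly what the authors concede they cannot do by regression.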

The more interesting question is whether users with a strong preference for a non-Google product are foreclosed from access to consumers by search bias.  When Google ranks its Maps above others, but a user with a strong preference for Yahoo Maps finds it listed second, is the user able to find his product of choice?  Probably, if it is listed second.  Probably not, if it is delisted or something more severe.  Edelman reports some data on this issue:

Nevertheless, there is one CTR pattern that would be highly suggestive of bias. Suppose we see a case in which a search engine ranks its affiliated result highly, yet that result receives fewer clicks than lower results. This would suggest that users strongly prefer the lower result — enough to overcome the effect of the affiliated result’s higher ranking.

Of course this is consistent with bias; however, to repeat the critical point, this bias does not inexorably lead to — or even suggest — an antitrust problem.  Let’s recall the shelf space analogy.  Consider a supermarket where Pepsi is able to gain access to the premium eye-level shelf space but consumers have a strong preference for Coke.  Whether or not the promotional efforts of Pepsi will have an impact on competition depends on whether Coke is able to get access to consumers.  In that case, it may involve reaching down to the second or third shelf.  There might be some incremental search costs involved.  And even if one could show that Coke sales declined dramatically in response to Pepsi’s successful execution of its contractual shelf-space bias strategy, that merely shows harm to rivals rather than harm to competition.  If Coke-loving consumers can access their desired product, Coke isn’t harmed, and there is certainly no competitive risk.
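Edelman’s proposed red flag, an affiliated result ranked first that is nonetheless out-clicked by a lower result, can be stated mechanically. A sketch, with a hypothetical data layout:

```python
# Sketch of Edelman's suggested test: flag queries where an engine's own
# result is ranked first yet receives fewer clicks than some lower-ranked
# result. The SERP data structure and example numbers are hypothetical.

def find_reversals(serps):
    """serps: {query: [(url, is_affiliated, click_share), ...]} in rank order.
    Returns queries where the affiliated #1 result is out-clicked below it."""
    flagged = []
    for query, results in serps.items():
        _top_url, top_affiliated, top_share = results[0]
        if top_affiliated and any(share > top_share
                                  for _url, _aff, share in results[1:]):
            flagged.append(query)
    return flagged

serps = {
    # Affiliated result first, but users click the second result more:
    "restaurants": [("places.google.example", True, 0.30),
                    ("yelp.example", False, 0.45)],
    # Affiliated result first and most clicked; consistent with preference:
    "maps": [("maps.google.example", True, 0.86),
             ("bing-maps.example", False, 0.05)],
}
print(find_reversals(serps))  # ['restaurants']
```

Note what the test can and cannot show: a reversal suggests users strongly prefer the lower result, but, as the shelf-space analogy above illustrates, that is evidence about rivals’ access to consumers, not by itself evidence of harm to competition.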

So what do we make of evidence that in the face of search engine bias, click-through data suggest consumers will still pick lower listings?  One inference is that consumers with strong preferences for content other than the biased result nonetheless access their preferred content.  It is difficult to see a competitive problem arising in such an environment.  Edelman anticipates this point somewhat when he observes during his interview:

The thing about the effect I’ve just described is you don’t see it very often. Usually the No. 1 link gets twice as many clicks as the second result. So the bias takes some of the clicks that should have gone to the right result. It seems most users are influenced by the positioning.

This fails to justify Edelman’s position.  First off, in a limited sample of terms, it’s unclear what it means for these reversals not to happen “very often.”  More importantly, so what if the top link gets twice as many clicks as the second link?  The cases where the second link gets the dominant share of click-throughs might well be those where users have a strong preference for the second-listed site.  Even if they are not, the antitrust question is whether search bias is efficient or poses a competitive threat.  Most users might be influenced by the positioning because they lack a strong preference, or even any preference at all.  That search engines compete for the attention of those consumers, including through search bias, should not be surprising.  But it does not make out a coherent claim of consumer harm.

The ‘compared to what’ question looms large here.  One cannot begin to answer the search bias problem — if it is a problem at all — from a consumer welfare perspective until one pins down the appropriate counterfactual.  Edelman appears to assume — when he observes that “bias takes some of the clicks that should have gone to the right result” — that the benchmark “right result” is that which would prevail if listings were correlated perfectly with aggregate consumer preference.   My point here is simple: that comparison is not the one that is relevant to antitrust.  An antitrust inquiry would distinguish harm to competitors from harm to competition; it would focus its inquiry on whether bias impaired the competitive process by foreclosing rivals from access to consumers, and not merely whether various listings would be improved but for Google’s bias.  The answer to that question is clearly yes.  The relevant question, however, is whether that bias is efficient.   Evidence that other search engines with much smaller market shares, and certainly without any market power, exhibit similar bias would suggest to most economists that the practice certainly has some efficiency justifications.  Edelman ignores that possibility and, by doing so, ignores decades of economic theory and empirical evidence.  This is a serious error, as the overwhelming lesson of that literature is that restrictions on vertical contracting and integration are a serious threat to consumer welfare.

I do not know what answer the appropriate empirical analysis would reveal.  As Geoff and I argue in this paper, however, I suspect a monopolization case against Google on these grounds would face substantial obstacles.  A deeper understanding of the competitive effects of search engine bias is a worthy project.  Edelman should also be applauded for providing some data that is interesting fodder for discussion.  But my sense is that the economic arguments and existing data do not provide support for an antitrust attack on search bias aimed at Google specifically, nor the basis for a consumer-welfare-grounded search neutrality regime.

Filed under: advertising, antitrust, armen alchian, business, economics, exclusionary conduct, google, monopolization, net neutrality, technology

Continue reading
Antitrust & Consumer Protection

Comment to the Federal Reserve Board on Regulation II: Where’s the Competitive Impact Analysis?

Popular Media I have submitted a comment to the Federal Reserve Board concerning Regulation II, along with the American Enterprise Institute’s Alex Brill, Christopher DeMuth, Alex J. . . .

I have submitted a comment to the Federal Reserve Board concerning Regulation II, along with the American Enterprise Institute’s Alex Brill, Christopher DeMuth, Alex J. Pollock, and Peter Wallison, as well as my George Mason colleague Todd Zywicki.  Regulation II implements the interchange fee provisions of the Dodd-Frank Act.

The comment makes a rather straightforward and simple point:

We write to express our concern that the Federal Reserve Board has not to date taken the prudent and, importantly, legally required step of conducting a competitive impact analysis of Regulation II, which implements the interchange fee provisions of section 1075 of the Dodd-Frank Act (Pub L. 111-203). We consider this to be one of the most significant legal changes to the payment system’s competitive landscape since the Electronic Funds Transfer Act in 1978. This dramatic statutory and subsequent regulatory change will undoubtedly trigger a complex set of consequences for all firms participating in the payment system as well as for consumers purchasing both retail goods and financial services. The Federal Reserve’s obligation to conduct a competitive impact analysis of Regulation II is an appropriate and prudent safeguard against legal change with potentially pernicious consequences for the economy and consumers. Given the Board’s own well-crafted standards, we do not believe it is appropriate for the Board to move forward in implementing Regulation II without the required competitive impact analysis.

The rest of the comment appears below the fold.

The Board’s bulletin setting forth its role in the payments system lays out the policy that the Fed is supposed to follow “when considering … a legal change … if that change would have a direct and material adverse effect on the ability of other service providers to compete effectively with the Federal Reserve in providing similar services due to differing legal powers or constraints or due to a dominant marketing position deriving from such legal differences.” The bulletin explicitly promises that “[a]ll operational or legal changes having a substantial effect on payments system participants will be subject to a competitive-impact analysis even if the competitive effects are not apparent on the face of the proposal.”

There is little doubt that Regulation II qualifies for the required competitive impact analysis by this standard, as it will likely have a “substantial effect on payment system participants.” Further, several aspects of the proposal impose “differing constraints” on different institutions. The proposal, for example, exempts Fed-sponsored payment systems such as the ACH system from the scope of the regulation while sweeping in alternate payment providers, even though such provider systems are functionally indistinguishable in relevant respects.

The bulletin goes on to provide details of the required competitive impact analysis. For example, the Board must “first determine whether the proposal has a direct and material adverse effect on the ability of other service providers to compete effectively with the Federal Reserve in providing similar services.” If so, the Board must then “ascertain whether the adverse effect is due to legal differences or due to a dominant market position deriving from such legal differences.” If legal differences or a dominant market position deriving from those legal differences are detected, the analysis must then turn to assessing the benefits of the proposed legal change and determining whether those benefits could be “reasonably achieved with a lesser or no adverse competitive impact.” Indeed, the bulletin indicates that “the Board would then either modify the proposal to lessen or eliminate the adverse impact on competitors’ ability to compete or determine that the payments system objectives may not be reasonably achieved if the proposal were modified.” As the bulletin anticipates, such a detailed and careful analysis is fully appropriate to better understand the competitive impact of a significant legal change in the payment system before it is implemented.

As Federal Reserve Board Governor Sarah Bloom Raskin observed in recent Congressional testimony, “Commenters also have differing perspectives on the potential effect of the statute and the proposed rule on consumers,” and “the magnitude of the ultimate effect is not clear and will depend on the behavior of various participants in the debit card networks.”

We agree with Governor Raskin’s observations and conclude that an economic impact analysis of the competitive effects of Regulation II, while a complex endeavor, is a critical one to protect competition in the payment system and consumers. We urge the Board to conduct an impact analysis of Regulation II and to make this analysis available for public comment before implementation of Regulation II.

Interested readers can search for other comments here.

 

Filed under: banking, consumer financial protection bureau, consumer protection, credit cards, economics, regulation

Continue reading
Antitrust & Consumer Protection

Small Business Financing Post-Crisis

Popular Media Tomorrow I will be attending a symposium on small business financing sponsored by the Entrepreneurial Business Law Journal‘s at the Moritz College of Law at . . .

Tomorrow I will be attending a symposium on small business financing sponsored by the Entrepreneurial Business Law Journal at the Moritz College of Law at the Ohio State University. I’m on a panel entitled “Recessionary Impacts on Equity Capital,” which is a bit misleading–or at least a bit different from the topic I offered to speak on, which is the effect of the recession and recent financial crisis on small business financing more generally. The rest of the day includes presentations on governmental and policy responses to the crisis and the practical implications of constricted capital. A copy of the schedule and list of speakers is available. I’m not very familiar with any of the other panelists, but the luncheon address will be given by Al Martinez-Fonts, Executive Vice President, U.S. Chamber of Commerce.

I’m going to focus on a few basic points and highlight some of the myths around small businesses and small business financing that drive poor policy. My first objective is to lay out a simple framework for thinking about financing deals, or any deal for that matter. Namely, the idea that every transaction involves allocations of value, uncertainty, and decision rights; the deal itself provides structure to those allocations by specifying the incentive systems, performance measures, and decision rights that address both parties’ interests. How those structures are designed determines the nature of the risk exposure and incentive conflicts that may affect the ex post value and performance of the deal.

In a sense, there is nothing new in small business financing post-crisis.  The fundamentals are the same. There is a multitude of contractual terms to address the various kinds of incentive issues and uncertainties that exist in the current market environment. To the extent there is anything truly unique about the current context, it is less about the financial market itself than about broader regulatory and economic issues. For example, much of the uncertainty affecting credit-worthiness has to do with economic and cash flow uncertainties stemming from upheavals in the regulatory landscape for small businesses, including health care. Uncertainty concerning implementation of the financial market reforms passed in July 2010 creates further uncertainty for lenders. These uncertainties exacerbate the usual economic uncertainties of new and small businesses during an economic recovery period.

During the recession itself, “stimulus” spending distorted the credit-worthiness of small businesses in industries that benefited more directly from government handouts and from the security provided to small businesses that supply large, publicly administered and guaranteed businesses (such as in the auto industry).  Thus, federal and state economic policy to “create jobs” in some sectors distorted the incentives to lend to different groups of small businesses, likely reducing employment in other sectors.

Finally, I’m going to suggest that talking about “small business” financing is a misnomer if we are truly motivated by a concern for job creation. A recent paper by John Haltiwanger, Ron Jarmin, and Javier Miranda shows that business size is not the key determinant of job creation in the US, as is often argued in the media and policy circles. (HT: Peter Klein at O&M) They find that it is young firms, which happen to be small, not small firms in general, that drive job creation. Ironically, these young firms are also the ones for whom financing is most difficult, owing to their nascent stage of development and the uncertainty surrounding them. Thus, policies directed to firms based on size alone further distort capital availability away from other (larger) companies that are equally likely to create jobs. Since this distortion is not costless, the policies are not welfare-neutral: they do not simply shift where jobs are created, but are likely to reduce welfare overall.

So now you don’t need to rush to Columbus, Ohio, to hear what I’ll have to say–unless you want to see the fireworks in person. But now you’ll know what’s going on in case there is news of more upset around the Horseshoe in Columbus.

Filed under: financial regulation, markets, regulation, Sykuta

Continue reading
Financial Regulation & Corporate Governance

An update on the evolving e-book market: Kindle edition (pun intended)

Popular Media [UPDATE:  Josh links to a WSJ article telling us that EU antitrust enforcers raided several (unnamed) e-book publishers as part of an apparent antitrust investigation . . .

[UPDATE:  Josh links to a WSJ article telling us that EU antitrust enforcers raided several (unnamed) e-book publishers as part of an apparent antitrust investigation into the agency model and whether it is “improperly restrictive.”  Whatever that means.  Key grafs:

At issue for antitrust regulators is whether agency models are improperly restrictive. Europe, in particular, has strong anticollusion laws that limit the extent to which companies can agree on the prices consumers will eventually be charged.

Amazon, in particular, has vociferously opposed the agency practice, saying it would like to set prices as it sees fit. Publishers, by contrast, resist the notion of online retailers’ deep discounting.

It is unclear whether the animating question is whether the publishers might have agreed to a particular pricing model, or to particular prices within that model.  As a legal matter that distinction probably doesn’t matter at all; as an economic matter it would seem to be more complicated–to be explored further another day . . . .]

A year ago I wrote about the economics of the e-book publishing market in the context of the dispute between Amazon and some publishers (notably Macmillan) over pricing.  At the time I suggested a few things about how the future might pan out (never a good idea . . . ):

And that’s really the twist.  Amazon is not ready to be a platform in this business.  The economic conditions are not yet right and it is clearly making a lot of money selling physical books directly to its users.  The Kindle is not ubiquitous and demand for electronic versions of books is not very significant–and thus Amazon does not want to take on the full platform development and distribution risk.  Where seller control over price usually entails a distribution of inventory risk away from suppliers and toward sellers, supplier control over price correspondingly distributes platform development risk toward sellers.  Under the old system Amazon was able to encourage the distribution of the platform (the Kindle) through loss-leader pricing on e-books, ensuring that publishers shared somewhat in the costs of platform distribution (from selling correspondingly fewer physical books) and allowing Amazon to subsidize Kindle sales in a way that helped to encourage consumer familiarity with e-books.  Under the new system it does not have that ability and can only subsidize Kindle use by reducing the price of Kindles–which impedes Amazon from engaging in effective price discrimination for the Kindle, does not tie the subsidy to increased use, and will make widespread distribution of the device more expensive and more risky for Amazon.

This “agency model,” if you recall, is one in which publishers, rather than Amazon, determine the price for electronic versions of their books sold via Amazon and pay Amazon a percentage.  The problem from Amazon’s point of view, as I mention in the quote above, is that without the ability to control the price of the books it sells, Amazon is limited essentially to fiddling with the price of the reader–the platform–itself in order to encourage more participation on the reader side of the market.  But I surmised (again in the quote above) that fiddling with the price of the platform would be far more blunt and potentially costly than controlling the price of the books themselves, mainly because the latter correlates almost perfectly with usage and the former does not.  In the end, Amazon may subsidize lots of Kindle purchases by people who have no interest in actually using the devices very much (either because they’re sticking with paper or because Apple has leapfrogged the competition), and never be able to recoup those losses.

It appears, nevertheless, that Amazon has indeed been pursuing this pricing strategy.  According to this post from Kevin Kelly,

In October 2009 John Walkenbach noticed that the price of the Kindle was falling at a consistent rate, lowering almost on a schedule. By June 2010, the rate was so unwavering that he could easily forecast the date at which the Kindle would be free: November 2011.

There’s even a nice graph to go along with it:
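The forecast Kelly describes is just a straight-line extrapolation of the price series. Here is a minimal sketch of that kind of fit; the month/price points below are illustrative stand-ins, not Walkenbach’s actual data, so the projected date will differ somewhat from his:

```python
# A sketch of the extrapolation behind Walkenbach's forecast, using
# stand-in data: month index since Oct 2009 and hypothetical list prices.
import numpy as np

months = np.array([0.0, 8.0, 13.0, 20.0])        # illustrative dates
prices = np.array([259.0, 189.0, 139.0, 114.0])  # illustrative prices, USD

# Ordinary least-squares line through the price series
slope, intercept = np.polyfit(months, prices, 1)

# Month at which the fitted price crosses $0
free_month = -intercept / slope
print(f"price hits $0 about {free_month:.0f} months after Oct 2009")
```

With these stand-in numbers the fitted line crosses zero roughly three years out; Walkenbach’s actual series, falling faster, pointed to November 2011.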

So what about the recoupment risk?  Here’s my new theory:  Amazon, having already begun offering free streaming videos for Prime customers, will also begin offering heavily discounted Kindles and even e-book subsidies–but will also begin rescinding its shipping subsidy and otherwise making the purchase of dead tree books relatively more costly (including by maintaining less inventory–another way to recoup).  It will still face a substantial threat from competing platforms like the iPad, but Amazon is at least in a position to affect a good deal of consumer demand for the Kindle’s dead tree competitors.

For a take on what’s at stake (here relating to newspapers rather than books, but I’m sure the dynamic is similar), this tidbit linked from one of the comments to Kevin Kelly’s post is eye-opening:

If newspapers switched over to being all online, the cost base would be instantly and permanently transformed. The OECD report puts the cost of printing a typical paper at 28 per cent and the cost of sales and distribution at 24 per cent: so the physical being of the paper absorbs 52 per cent of all costs. (Administration costs another 8 per cent and advertising another 16.) That figure may well be conservative. A persuasive looking analysis in the Business Insider put the cost of printing and distributing the New York Times at $644 million, and then added this: ‘a source with knowledge of the real numbers tells us we’re so low in our estimate of the Times’s printing costs that we’re not even in the ballpark.’ Taking the lower figure, that means that New York Times, if it stopped printing a physical edition of the paper, could afford to give every subscriber a free Kindle. Not the bog-standard Kindle, but the one with free global data access. And not just one Kindle, but four Kindles. And not just once, but every year. And that’s using the low estimate for the costs of printing.
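A quick back-of-the-envelope check of that claim is easy to run. The $644 million printing-and-distribution figure is from the excerpt; the subscriber count (roughly 800,000 print subscribers at the time) and the $189 price for the 3G Kindle are my assumptions, not the article’s:

```python
# Back-of-the-envelope check of the "four Kindles per subscriber" claim.
print_cost = 644_000_000   # annual printing + distribution, USD (quoted)
kindle_price = 189         # 3G Kindle list price, USD (assumed)
subscribers = 800_000      # approximate print circulation (assumed)

kindles_per_subscriber = print_cost / (subscribers * kindle_price)
print(f"{kindles_per_subscriber:.1f} Kindles per subscriber per year")
```

Under these assumptions the quoted arithmetic holds up: a bit over four Kindles per subscriber per year, even using the low cost estimate.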

Filed under: antitrust, business, cartels, contracts, doj, e-books, economics, error costs, law and economics, litigation, MFNs, monopolization, resale price maintenance, technology, vertical restraints Tagged: agency model, Amazon, Amazon Kindle, antitrust, Apple, doj, e-book, e-books, iBookstore, Kindle, major publishers, MFN, most favored nations clause, per se, price-fixing, publishing industry, Rule of reason, two-sided markets, vertical restraints

Continue reading
Antitrust & Consumer Protection

On the ethical dimension of l’affair hiybbprqag

Popular Media Former TOTM blog symposium participant Joshua Gans (visiting Microsoft Research) has a post at TAP on l’affair hiybbprqag, about which I blogged previously here. Gans . . .

Former TOTM blog symposium participant Joshua Gans (visiting Microsoft Research) has a post at TAP on l’affair hiybbprqag, about which I blogged previously here.

Gans notes, as I did, that Microsoft is not engaged in wholesale copying of Google’s search results, even though doing so would be technologically feasible.  But Gans goes on to draw a normative conclusion:

Let’s start with “imitation,” “copying” and its stronger variants of “plagiarism” and “cheating.” Had Bing wanted to do this and directly map Google’s search results onto its own, it could have done it. It could have set up programs to enter terms in Google and skimmed off the results and then used them directly. And I think we can all agree that that is wrong. Why? Two reasons. First, if Google has invested to produce those results, if others can just hang off them and copy it, Google’s may not earn the return on its efforts it should do. Second, if Bing were doing this and representing itself as a different kind of search, then that misrepresentation would be misleading. Thus, imitation reduces Google’s reward for innovation while adding no value in terms of diversity.

His first reason why this would be wrong is . . . silly.  I mean, I don’t want to get into a moral debate, but since when is it wrong to engage in activity that “may” hamper another firm’s ability to earn the return on its effort that it “should” (whatever “should” means here)?  I always thought that was called “competition” and we encouraged it.  As I noted the other day, competition via imitation is an important part of Schumpeterian capitalism.  To claim that reducing another company’s profits via imitation is wrong, but doing so via innovation is good and noble, is to hang one’s hat on a distinction that does not really exist.

The second argument, that doing so would amount to misrepresentation, is possible.  But I’m sure that if Microsoft were actually just copying Google’s results, its representations would look different than they do now, and the problem would probably not exist; the claim is speculative at best.

Now, regardless, I doubt it would be profitable for Microsoft to copy Google wholesale, and this is basically just a red herring (as Gans understands–he goes on to discuss the more “innocuous” imitation at issue).  While I think Gans’ claims that it would be “wrong” are just hand waving, I am confident it would be “wrong” from the point of view of Microsoft’s bottom line–or else they would already be doing it.  In this context, that would seem to be the only standard that matters, unless there were a legal basis for the claim.

On this score, Gans points us to Shane Greenstein (Kellogg).  Greenstein writes:

Let’s start with a weak standard, the law. Legally speaking, imitation is allowed so long as a firm does not violate laws governing patents, copyright, or trade secrets. Patents obviously do not apply to this situation, and neither does copyright  because Google does not get a copyright on a search result. It also does not appear as if Googles trade secrets were violated. So, generally speaking, it does not appear as if any law has been broken.

This is all well and good, but Greenstein goes on to engage in his own casual moralizing, and his comments are worth reproducing (imitating?) at some length:

The norms of rivalry

There is nothing wrong with one retailer walking through a rival’s shop and getting ideas for what to do. There is really nothing wrong with a designer of a piece of electronic equipment buying a rival’s product and studying it in order to get new ideas for a  better design. 

In the modern Internet, however, there is no longer any privacy for users. Providers want to know as much as they can, and generally the rich suppliers can learn quite a lot about user conduct and preferences.

That means that rivals can learn a great deal about how users conduct their business, even when they are at a rival’s site. It is as if one retailer had a camera in a rival’s store, or one designer could learn the names of the buyer’s of their rival’s products, and interview them right away.

In the offline world, such intimate familiarity with a rival’s users and their transactions would be uncomfortable. It would seem like an intrusion on the transaction between user and supplier. Why is it permissible in the online world? Why is there any confusion about this being an intrusion in the online world? Why isn’t Microsoft’s behavior seen — cut and dry — as an intrusion?

In other words, the transaction between supplier and user is between supplier and user, and nobody else should be able to observe it without permission of both supplier and user. The user alone does not have the right or ability to invite another party to observe all aspects of the transaction.

That is what bothers me about Bing’s behavior. There is nothing wrong with them observing users, but they are doing more than just that. They are observing their rival’s transaction with users. And learning from it. In other contexts that would not be allowed without explicit permission of both parties — both user and supplier.

Moreover, one party does not like it in this case, as they claim the transaction with users as something they have a right to govern and keep to themselves. There is some merit in that claim.

In most contexts it seems like the supplier’s wishes should be respected. Why not online? (emphasis mine)

Where on Earth do these moral standards come from?  In what way is it not “allowed” (whatever that means here) for a firm to observe and learn from a rival’s transactions with users?  I can see why the rival would prefer it to be otherwise, of course, but so what?  They would also prefer to eradicate their meddlesome rival entirely, if possible (hence Microsoft’s considerable engagement with antitrust authorities concerning Google’s business), but we hardly elevate such desires to the realm of the moral.

What I find most troublesome is the controlling, regulatory mindset implicit in these analyses.  Here’s Gans again:

Outright imitation of this type should be prohibited but what do we call some more innocuous types? Just look at how the look and feel of the iPhone has been adopted by some mobile software developers just as the consumer success of graphic based interfaces did in an earlier time. This certainly reduces Apple’s reward for its innovations but the hit on diversity is murkier because while some features are common, competitors have tried to differentiate themselves. So this is not imitation but it is something more common, leveraging without compensation and how you feel about it depends on just how much reward you think pioneers should receive.

It is usually politicians and not economists (other than politico-economists like Krugman) who think they have a handle on–and an obligation to do something about–things like “how much reward . . . pioneers should receive.”  I would have thought the obvious answer to the question would be either “the optimal amount, but good luck knowing what that is or expecting to find it in the real world,” or else, for the Second Best, “whatever the market gives them.”  The implication that there is some moral standard appreciable by human mortals, or even human economists, is a recipe for disaster.

Filed under: business, economics, google, intellectual property, markets, monopolization, politics, technology Tagged: Bing, business ethics, google, Internet search, Joshua Gans, microsoft, Shane Greenstein

Continue reading
Antitrust & Consumer Protection

The Behavioral Economics of Going to Bed Angry

Popular Media And other lessons in the (applied) economics of marriage (HT: Mankiw). Filed under: behavioral economics, economics, marriage

And other lessons in the (applied) economics of marriage (HT: Mankiw).

Filed under: behavioral economics, economics, marriage

Continue reading
Financial Regulation & Corporate Governance

Microsoft undermines its own case

Popular Media One of my favorite stories in the ongoing saga over the regulation (and thus the future) of Internet search emerged earlier this week with claims . . .

One of my favorite stories in the ongoing saga over the regulation (and thus the future) of Internet search emerged earlier this week with claims by Google that Microsoft has been copying its answers–using Google search results to bolster the relevance of its own results for certain search terms.  The full story from Internet search journalist extraordinaire, Danny Sullivan, is here, with a follow up discussing Microsoft’s response here.  The New York Times is also on the case with some interesting comments from a former Googler that feed nicely into the Schumpeterian competition angle (discussed below).  And Microsoft consultant (“though on matters unrelated to issues discussed here”)  and Harvard Business prof Ben Edelman coincidentally echoes precisely Microsoft’s response in a blog post here.

What I find so great about this story is how it seems to resolve one of the most significant strands of the ongoing debate–although it does so, from Microsoft’s point of view, unintentionally, to be sure.

Here’s what I mean.  Back when Microsoft first started being publicly identified as a significant instigator of regulatory and antitrust attention paid to Google, the company, via its chief competition counsel, Dave Heiner, defended its stance in large part on the following ground:

All of this is quite important because search is so central to how people navigate the Internet, and because advertising is the main monetization mechanism for a wide range of Web sites and Web services. Both search and online advertising are increasingly controlled by a single firm, Google. That can be a problem because Google’s business is helped along by significant network effects (just like the PC operating system business). Search engine algorithms “learn” by observing how users interact with search results. Google’s algorithms learn less common search terms better than others because many more people are conducting searches on these terms on Google.

These and other network effects make it hard for competing search engines to catch up. Microsoft’s well-received Bing search engine is addressing this challenge by offering innovations in areas that are less dependent on volume. But Bing needs to gain volume too, in order to increase the relevance of search results for less common search terms. That is why Microsoft and Yahoo! are combining their search volumes. And that is why we are concerned about Google business practices that tend to lock in publishers and advertisers and make it harder for Microsoft to gain search volume. (emphasis added).

Claims of “network effects,” “increasing returns to scale,” and the absence of “minimum viable scale” for competitors run rampant (and unsupported) in the various cases against Google.  The TradeComet complaint, for example, claims that

[t]he primary barrier to entry facing vertical search websites is the inability to draw enough search traffic to reach the critical mass necessary to become independently sustainable.

But now we discover (what we should have known all along) that “learning by doing” is not the only way to obtain the data necessary to generate relevant search results: “Learning by copying” works, as well.  And there’s nothing wrong with it–in fact, the very process of Schumpeterian creative destruction assumes imitation.

As Armen Alchian notes in describing his evolutionary process of competition,

Neither perfect knowledge of the past nor complete awareness of the current state of the arts gives sufficient foresight to indicate profitable action . . . [and] the pervasive effects of uncertainty prevent the ascertainment of actions which are supposed to be optimal in achieving profits.  Now the consequence of this is that modes of behavior replace optimum equilibrium conditions as guiding rules of action. First, wherever successful enterprises are observed, the elements common to these observable successes will be associated with success and copied by others in their pursuit of profits or success. “Nothing succeeds like success.”

So on the one hand, I find the hand wringing about Microsoft’s “copying” Google’s results to be completely misplaced–just as the pejorative connotations of “embrace and extend” deployed against Microsoft itself when it was the target of this sort of scrutiny were bogus.  But, at the same time, I see this dynamic essentially decimating Microsoft’s (and others’) claims that Google has an unassailable position because no competitor can ever hope to match its size, and thus its access to information essential to the quality of search results, particularly when it comes to so-called “long-tail” search terms.

Long-tail search terms are queries that are extremely rare and, thus, for which there is little user history (information about which results searchers found relevant and clicked on) to guide future search results.  As Ben Edelman writes in his blog post (linked above) on this issue (trotting out, even while implicitly undercutting, the “minimum viable scale” canard):

Of course the reality is that Google’s high market share means Google gets far more searches than any other search engine. And Google’s popularity gives it a real advantage: For an obscure search term that gets 100 searches per month at Google, Bing might get just five or 10. Also, for more popular terms, Google can slice its data into smaller groups — which results are most useful to people from Boston versus New York, which results are best during the day versus at night, and so forth. So Google is far better equipped to figure out what results users favor and to tailor its listings accordingly. Meanwhile, Microsoft needs additional data, such as Toolbar and Related Sites data, to attempt to improve its results in a similar way.

But of course the “additional data” that Microsoft has access to here is, to a large extent, the same data that Google has.  Danny Sullivan’s follow-up story (also linked above) suggests that Bing doesn’t do all it could to make use of Google’s data: it does not, it seems, copy Google search results wholesale, nor does it use user behavior as extensively as it could (by, for example, seeing searches in Google and then logging the next page visited, which would give Bing a pretty good idea which sites in Google’s results users found most relevant).  But none of that changes the fundamental fact that Microsoft and other search engines can overcome a significant amount of the so-called barrier to entry afforded by Google’s impressive scale simply by imitating much of what Google does (and, one hopes, also innovating enough to offer something better).

Perhaps Google is “better equipped to figure out what users favor.”  But it seems to me that only a trivial amount of this advantage is plausibly attributable to Google’s scale instead of its engineering and innovation.  The fact that Microsoft can (because of its own impressive scale in various markets) and does take advantage of accessible data to benefit indirectly from Google’s own prowess in search is a testament to the irrelevance of these unfortunately pervasive scale and network effect arguments.

Filed under: antitrust, armen alchian, business, google, markets, monopolization, technology Tagged: antitrust, Armen Alchian, Bing, Danny Sullivan, economies of scale, google, Google Search, Internet search, microsoft, minimum viable scale, network effects

Continue reading
Antitrust & Consumer Protection

Pro-Business v. Pro-Growth

Popular Media Don Boudreaux explains the distinction with reference to President Obama’s State of the Union address. Filed under: business, economics, politics

Don Boudreaux explains the distinction with reference to President Obama’s State of the Union address.

Filed under: business, economics, politics

Continue reading
Financial Regulation & Corporate Governance