Showing Latest Publications

Extending & Rebutting Edelman & Lockwood on Search Bias

Popular Media In my last post, I discussed Edelman & Lockwood’s (E&L’s) attempt to catch search engines in the act of biasing their results—as well as their . . .

In my last post, I discussed Edelman & Lockwood’s (E&L’s) attempt to catch search engines in the act of biasing their results—as well as their failure to actually do so.  In this post, I present my own results from replicating their study.  Unlike E&L, I find that Bing is consistently more biased than Google, for reasons discussed further below, although neither engine references its own content as frequently as E&L suggest.

I ran searches for E&L’s original 32 non-random queries using three different search engines—Google, Bing, and Blekko—between June 23 and July 5 of this year.  This replication is useful, as search technology has changed dramatically since E&L recorded their results in August 2010.  Bing now powers Yahoo, and Blekko has had more time to mature and enhance its results.  Blekko serves as a helpful “control” engine in my study, as it is totally independent of Google and Microsoft, and so has no incentive to refer to Google or Microsoft content unless it is actually relevant to users.  In addition, because Blekko’s model is significantly different from Google’s and Microsoft’s, if results on all three engines agree that specific content is highly relevant to the user query, it lends significant credibility to the notion that the content places well on the merits rather than being attributable to bias or other factors.

How Do Search Engines Rank Their Own Content?

Focusing solely upon the first position, Google refers to its own products or services when no other search engine does in 21.9% of queries; in another 21.9% of queries, both Google and at least one rival engine (i.e., Bing or Blekko) refer to the same Google content with their first links.

But restricting focus to the first position is too narrow.  It would be a mistake to treat every instance in which Google or Bing ranks its own content first, while rivals do not, as bias; such a restrictive definition would sweep in cases in which all three search engines rank the same content prominently—agreeing that it is highly relevant—just not all in the first position.

The entire first page of results provides a more informative comparison.  I find that Google and at least one other engine return Google content on the first page of results in 7% of the queries.  Google refers to its own content on the first page of results without agreement from either rival search engine in only 7.9% of the queries.  Meanwhile, Bing and at least one other engine refer to Microsoft content in 3.2% of the queries.  Bing references Microsoft content without agreement from either Google or Blekko in 13.2% of the queries:

This evidence indicates that Google’s ranking of its own content differs significantly from its rivals in only 7.9% of queries, and that when Google ranks its own content prominently it is generally perceived as relevant.  Further, these results suggest that Bing’s organic search results are significantly more biased in favor of Microsoft content than Google’s search results are in favor of Google’s content.
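The agreement measure underlying these percentages can be sketched in a few lines of code.  This is a hypothetical illustration only, not the study’s actual code: the query labels and True/False entries below are invented, and the function simply counts, for each query where Google’s first page shows Google content, whether at least one rival agrees.

```python
# Hypothetical sketch of the own-content agreement measure described above.
# For each query we record whether each engine's first page contains Google
# content; Google "differs" from rivals when it refers to its own content
# and neither rival does. All data here are invented for illustration.

def own_content_rates(results):
    """results: {query: {engine: bool}}, True if that engine's first page
    shows Google content. Returns (agreement %, Google-only %)."""
    agree = alone = 0
    for engines in results.values():
        if engines["google"]:
            if engines["bing"] or engines["blekko"]:
                agree += 1
            else:
                alone += 1
    n = len(results)
    return 100 * agree / n, 100 * alone / n

sample = {
    "maps":  {"google": True,  "bing": True,  "blekko": False},
    "email": {"google": True,  "bing": False, "blekko": False},
    "video": {"google": False, "bing": False, "blekko": False},
    "news":  {"google": True,  "bing": False, "blekko": True},
}
print(own_content_rates(sample))  # (50.0, 25.0)
```

Run over the real 32-query data, the two numbers would correspond to the “agreement” and “Google-only” percentages reported above.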

Examining Search Engine “Bias” on Google

The following table presents the percentages of queries for which Google’s ranking of its own content differs significantly from its rivals’ ranking of that same content.

Note that percentages below 50 in this table indicate that rival search engines generally see the referenced Google content as relevant and independently believe that it should be ranked similarly.

So when Google ranks its own content highly, at least one rival engine typically agrees with this ranking; for example, when Google places its own content in its Top 3 results, at least one rival agrees in over 70% of queries.  Bing in particular agrees with Google’s rankings of Google content within its Top 3 and Top 5 results, omitting Google content that Google ranks similarly in only a little more than a third of queries.

Examining Search Engine “Bias” on Bing

Bing refers to Microsoft content in its search results far more frequently than its rivals reference the same Microsoft content.  For example, Bing’s top result references Microsoft content for 5 queries, while neither Google nor Blekko ever ranks Microsoft content in the first position:

This table illustrates the significant discrepancies between Bing’s treatment of its own Microsoft content and Google’s and Blekko’s treatment of that same content.  Neither rival engine refers to Microsoft content that Bing ranks within its Top 3 results; in nearly 80% of queries, Google and Blekko do not include any Microsoft content that Bing refers to on the first page of results.

Moreover, Bing frequently ranks Microsoft content highly even when rival engines do not refer to the same content at all in the first page of results.  For example, of the 5 queries for which Bing ranks Microsoft content in its top result, Google refers to only one of these 5 within its first page of results, while Blekko refers to none.  Even when comparing results across each engine’s full page of results, Google and Blekko only agree with Bing’s referral of Microsoft content in 20.4% of queries.

Although there are not enough Bing data to test results in the first position in E&L’s sample, Microsoft content appears as results on the first page of a Bing search about 7 times more often than Microsoft content appears on the first page of rival engines.  Also, Google is much more likely to refer to Microsoft content than Blekko, though both refer to significantly less Microsoft content than Bing.

A Closer Look at Google v. Bing

On E&L’s own terms, Bing results are more biased than Google results; rivals are more likely to agree with Google’s algorithmic assessment (than with Bing’s) that its own content is relevant to user queries.  Bing refers to Microsoft content that other engines do not rank at all more often than Google refers to its own content without any agreement from rivals.  Figures 1 and 2 display the same data presented above in order to facilitate direct comparisons between Google and Bing.

As Figures 1 and 2 illustrate, Bing search results for these 32 queries are more frequently “biased” in favor of its own content than are Google’s.  The bias is greatest for the Top 1 and Top 3 search results.

My study finds that Bing exhibits far more “bias” than E&L identify in their earlier analysis.  For example, in E&L’s study, Bing does not refer to Microsoft content at all in its Top 1 or Top 3 results; moreover, Bing refers to Microsoft content within its entire first page 11 times, while Google and Yahoo refer to Microsoft content 8 and 9 times, respectively.  Most likely, the significant increase in Bing’s “bias” differential is largely a function of Bing’s introduction of localized and personalized search results, and represents serious competitive efforts on Bing’s part.

Again, it’s important to stress E&L’s limited and non-random sample, and to emphasize the danger of making strong inferences about the general nature or magnitude of search bias based upon these data alone.  However, the data indicate that Google’s own-content bias is relatively small even in a sample collected precisely to focus upon the queries most likely to generate it.  In fact—as I’ll discuss in my next post—own-content bias occurs even less often in a more representative sample of queries, strongly suggesting that such bias does not raise the competitive concerns attributed to it.

Filed under: antitrust, business, economics, google, Internet search, law and economics, monopolization, technology Tagged: antitrust, Bias, Bing, Blekko, google, microsoft, search, Web search engine, Yahoo

Continue reading
Antitrust & Consumer Protection

Investigating Search Bias: Measuring Edelman & Lockwood’s Failure to Measure Bias in Search

Popular Media Last week I linked to my new study on “search bias.”  At the time I noted I would have a few blog posts in the . . .

Last week I linked to my new study on “search bias.”  At the time I noted I would have a few blog posts in the coming days discussing the study.  This is the first of those posts.

A lot of the frenzy around Google turns on “search bias,” that is, instances when Google references its own links or its own content (such as Google Maps or YouTube) in its search results pages.  Some search engine critics condemn such references as inherently suspect and almost by their very nature harmful to consumers.  Yet these allegations suffer from several crucial shortcomings.  As I’ve noted (see, e.g., here and here), these naked assertions of discrimination are insufficient to state a cognizable antitrust claim, divorced as they are from consumer welfare analysis.  Indeed, such “discrimination” (some would call it “vertical integration”) has a well-recognized propensity to yield either pro-competitive or competitively neutral outcomes, rather than concrete consumer welfare losses.  Moreover, because search engines exist in an incredibly dynamic environment, marked by constant innovation and fierce competition, we would expect different engines, utilizing different algorithms and appealing to different consumer preferences, to emerge.  So when search engines engage in product differentiation of this sort, there is no reason to be immediately suspicious of these business decisions.

No reason to be immediately suspicious – but there could, conceivably, be a problem.  If there is, we would want to see empirical evidence of it—of both the existence of bias and the consumer harm emanating from it.  But one of the most notable features of this debate is the striking lack of empirical data.  Surprisingly little research has been done in this area, despite frequent assertions that own-content bias is commonly practiced and poses a significant threat to consumers (see, e.g., here).

My paper is an attempt to rectify this.  In the paper, I investigate the available data to determine whether and to what extent own-content bias actually occurs, by analyzing and replicating a study by Ben Edelman and Ben Lockwood (E&L) and conducting my own study of a larger, randomized set of search queries.

In this post I discuss my analysis and critique of E&L; in future posts I’ll present my own replication of their study, as well as the results of my larger study of 1,000 random search queries.  Finally, I’ll analyze whether any of these findings support anticompetitive foreclosure theories or are otherwise sufficient to warrant antitrust intervention.

E&L “investigate . . . [w]hether search engines’ algorithmic results favor their own services, and if so, which search engines do most, to what extent, and in what substantive areas.”  Their approach is to measure the difference in how frequently search engines refer to their own content relative to how often their rivals do so.

One note at the outset:  While this approach provides useful descriptive facts about the differences in how search engines link to their own content, it does little to inform antitrust analysis, because Edelman and Lockwood begin from the rather odd premise that variation among differentiated search engines is a puzzle, one that creates an air of suspicion around the practice.  In fact, they claim that “it is hard to see why results would vary . . . across search engines.”  This assertion, of course, is simply absurd.  Indeed, Danny Sullivan provides a nice critique of this claim:

It’s not hard to see why search engine results differ at all.  Search engines each use their own “algorithm” to cull through the pages they’ve collected from across the web, to decide which pages to rank first . . . . Google has a different algorithm than Bing.  In short, Google will have a different opinion than Bing.  Opinions in the search world, as with the real world, don’t always agree.

Moreover, this assertion completely discounts both the vigorous competitive product differentiation that occurs in nearly all modern product markets as well as the obvious selection effects at work in own-content bias (Google users likely prefer Google content).  This combination detaches E&L’s analysis from the consumer welfare perspective, and thus antitrust policy relevance, despite their claims to the contrary (and the fact that their results actually exhibit very little bias).

Several methodological issues undermine the policy relevance of E&L’s analysis.  First, they hand-select 32 search queries and execute searches on Google, Bing, Yahoo, AOL, and Ask.  This hand-selected, non-random sample of 32 search queries cannot generate reliable inferences regarding the frequency of bias—a critical ingredient to understanding its potential competitive effects.  Indeed, E&L acknowledge their queries are chosen precisely because they are likely to return results including Google content (e.g., email, images, maps, video, etc.).

E&L analyze the top three organic search results for each query on each engine.  They find that 19% of all results across all five search engines refer to content affiliated with one of them.  They focus upon the first three organic results and report that Google refers to its own content in the first (“top”) position about twice as often as Yahoo and Bing refer to Google content in this position.  Additionally, they note that Yahoo is more biased than Google when evaluating the first page rather than only the first organic search result.

E&L also offer a strained attempt to deal with the possibility of competitive product differentiation among search engines.  They examine differences among search engines’ references to their own content by “compar[ing] the frequency with which a search engine links to its own pages, relative to the frequency with which other search engines link to that search engine’s pages.”  However, their evidence undermines claims that Google’s own-content bias is significant and systematic relative to its rivals’.  In fact, almost zero evidence of statistically significant own-content bias by Google emerges.

E&L find, in general, Google is no more likely to refer to its own content than other search engines are to refer to that same content, and across the vast majority of their results, E&L find Google search results are not statistically more likely to refer to Google content than rivals’ search results.

The same data can be examined to test the likelihood that a search engine will refer to content affiliated with a rival search engine.  Rather than exhibiting bias in favor of an engine’s own content, a “biased” search engine might conceivably be less likely to refer to content affiliated with its rivals.  The table below reports the likelihood (in odds ratios) that a search engine’s content appears in a rival engine’s results.

The first two columns of the table demonstrate that both Google and Yahoo content are referred to in the first search result less frequently in rivals’ search results than in their own.  Although Bing does not have enough data for robust analysis of results in the first position in E&L’s original analysis, the next three columns in Table 1 illustrate that all three engines’ (Google, Yahoo, and Bing) content appears less often on the first page of rivals’ search results than on their own.  However, only Yahoo’s results differ significantly from 1.  As between Google and Bing, the results are notably similar.
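The odds-ratio comparison can be illustrated with a toy calculation.  The 32-query sample size comes from E&L’s study, but the hit counts below are invented purely for illustration; an odds ratio below 1 means rivals refer to the content less often than the engine itself does.

```python
# Toy illustration of the odds-ratio measure: the odds that an engine's
# affiliated content appears in a rival's results, relative to the odds
# it appears in the engine's own results. All hit counts are invented.

def odds(hits, n):
    """Odds of appearance, given `hits` appearances across `n` queries
    (requires hits < n)."""
    p = hits / n
    return p / (1 - p)

def odds_ratio(rival_hits, own_hits, n):
    """Ratio below 1: rivals show the content less often than the engine
    itself does; a ratio near 1 suggests agreement on relevance."""
    return odds(rival_hits, n) / odds(own_hits, n)

# Suppose, hypothetically, Yahoo content appears on 12 of 32 of Yahoo's
# own first pages but only 4 of 32 of a rival's first pages:
print(round(odds_ratio(4, 12, 32), 3))  # 0.238
```

Testing whether such a ratio differs significantly from 1 is what distinguishes systematic own-content bias from ordinary variation across algorithms.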

E&L also make a limited attempt to consider the possibility that favorable placement of a search engine’s own content is a response to user preferences rather than anticompetitive motives.  Using click-through data, they find, unsurprisingly, that the first search result tends to receive the most clicks (72%, on average).  They then identify one search term for which they believe bias plays an important role in driving user traffic.  For the search query “email,” Google ranks its own Gmail first and Yahoo Mail second; however, E&L also find that Gmail receives only 29% of clicks while Yahoo Mail receives 54%.  E&L claim that this finding strongly indicates that Google is engaging in conduct that harms users and undermines their search experience.

However, from a competition analysis perspective, that inference is not sound.  Indeed, the fact that the second-listed Yahoo Mail link received the majority of clicks demonstrates precisely that Yahoo was not competitively foreclosed from access to users.  Taken collectively, E&L are not able to muster evidence of potential competitive foreclosure.

While it’s important to have an evidence-based discussion surrounding search engine results and their competitive implications, it’s also critical to recognize that bias alone is not evidence of competitive harm.  Indeed, any identified bias must be evaluated in the appropriate antitrust economic context of competition and consumers, rather than individual competitors and websites.  E&L’s analysis provides a useful starting point for describing how search engines differ in their referrals to their own content.  But, taken at face value, their results actually demonstrate little or no evidence of bias—let alone that the little bias they do find is causing any consumer harm.

As I’ll discuss in coming posts, evidence gathered since E&L conducted their study further suggests their claims that bias is prevalent, inherently harmful, and sufficient to warrant antitrust intervention are overstated and misguided.

Filed under: antitrust, business, economics, google, Internet search, law and economics, monopolization, technology Tagged: antitrust, Bing, google, search, search bias, Search Engines, search neutrality, Web search engine, Yahoo

Continue reading
Antitrust & Consumer Protection

My New Empirical Study on Defining and Measuring Search Bias

Popular Media Tomorrow is the deadline for Eric Schmidt to send his replies to the Senate Judiciary Committee’s follow up questions from his appearance at a hearing . . .

Tomorrow is the deadline for Eric Schmidt to send his replies to the Senate Judiciary Committee’s follow-up questions from his appearance at a hearing on Google antitrust issues last month.  At the hearing, not surprisingly, search neutrality was a hot topic, with representatives from the likes of Yelp and Nextag, as well as Expedia’s lawyer, Tom Barnett (that’s Tom Barnett (2011), not Tom Barnett (2006-08)), weighing in on Google’s purported bias.  One serious problem with the search neutrality/search bias discussions to date has been the dearth of empirical evidence concerning so-called search bias and its likely impact upon consumers.  Hoping to remedy this, I posted a study this morning at the ICLE website both critiquing one of the few existing pieces of empirical work on the topic (by Ben Edelman, a Harvard economist) and offering up my own, more expansive empirical analysis.  Chris Sherman at Search Engine Land has a great post covering the study.  The title of his article pretty much says it all:  “Bing More Biased Than Google; Google Not Behaving Anti-competitively.”

Read the full piece here

Continue reading
Antitrust & Consumer Protection

The Bulldozer Solution to the Housing Crisis

TOTM My inaugural blog on two-sided markets did not elicit much reaction from TOTM readers. Perhaps it was too boring. In a desperate attempt to generate . . .

My inaugural blog on two-sided markets did not elicit much reaction from TOTM readers. Perhaps it was too boring. In a desperate attempt to generate a hostile comment from at least one housing advocate, I have decided to advocate bulldozing homes in foreclosure as one (of several) means to relieve the housing crisis. Not with families inside them, of course. In my mind, the central problem of U.S. housing markets is the misallocation of land: Thanks to the housing boom, there are too many houses and not enough greenery. And bulldozers are the fastest way to convert unwanted homes into parks.

Read the full piece here.

Continue reading
Financial Regulation & Corporate Governance

New York Taxis

Popular Media The New York Times reports that the most recent price for a taxi medallion in New York is $1,000,000.  Wikipedia reports that there are 13,237 . . .

The New York Times reports that the most recent price for a taxi medallion in New York is $1,000,000.  Wikipedia reports that there are 13,237 licensed cabs in New York.  (A “medallion” is the physical form of a taxicab license.)  This means that the present value of the rents created by limiting taxicabs is $13,237,000,000 — thirteen billion dollars.  And this is just the rents; the total lost consumer surplus is much greater, because the lack of taxicabs creates substantial deadweight losses.  For example, I am confident that many people keep cars in New York only because they cannot count on getting a cab.  Cabs change shifts during rush hour because drivers earn less at that time, so that is when cabs leave Manhattan to change drivers, just when demand is greatest.  (This is also caused by the relatively too-low price for waiting compared with the price for driving.)  There is a proposal that would make it easier for limousines to pick up passengers.  Of course, the taxi owners are opposed to this plan, but it would clearly be an efficient change.
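The back-of-the-envelope rent calculation above amounts to a single multiplication, shown here only to make the capitalization logic explicit (figures are the two reported in the post):

```python
# Capitalized value of taxicab licensing rents: medallion price times the
# number of licensed cabs. Both figures are as reported in the post.
medallion_price = 1_000_000   # most recent reported sale price (NYT)
num_medallions = 13_237       # licensed cabs in New York (Wikipedia)

total_rents = medallion_price * num_medallions
print(f"${total_rents:,}")  # $13,237,000,000
```

Because the medallion price capitalizes the expected stream of future rents, this figure understates the total welfare loss, which also includes the deadweight losses described above.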

Filed under: business, licensing, regulation Tagged: New York, Taxicabs

Continue reading
Financial Regulation & Corporate Governance

Law Review Publishing Norms and Inefficient Performance

Popular Media One of my colleagues recently accepted a publication offer on a law review article, only to receive a later publication offer from a much more prestigious journal.  This . . .

One of my colleagues recently accepted a publication offer on a law review article, only to receive a later publication offer from a much more prestigious journal.  This sort of occurrence is not uncommon in the legal academy, where scholars submitting articles for publication do not offer to publish their work in a journal but rather solicit publication offers from journals (and generally solicit multiple offers at the same time).  One may easily accept an inferior journal’s offer before receiving another from a preferred journal. 

I’ve been in my colleague’s unfortunate position three times: once when I was trying to become a professor, once during my first semester of teaching, and once in the semester before I went up for tenure.  Each time, breaching my initial publication contract and accepting the later-received offer from the more prestigious journal would have benefited me by an amount far greater than the harm caused to the jilted journal.  Accordingly, the welfare-maximizing outcome would have been for me to breach my initial publication agreement and to pay the put-upon journal an amount equal to the damage caused by my breach.  Such a move would have been Pareto-improving:  I would have been better off, and the original publisher, the breach “victim,” would have been fully compensated and thus no worse off than if I had performed.

As all first-year law students learn (or should learn!), the law of contracts is loaded with doctrines designed to encourage efficient breach and discourage inefficient performance.  Most notable among these is the rule precluding punitive damages for breach of contract:  If a breaching party were required to pay such damages, in addition to the so-called “expectancy” damages necessary to compensate the breach victim for her loss, then promisors contemplating breach might perform even though doing so would cost more than the value of the performance to the promisee.  Such performance would be wasteful.

So why didn’t I — a contracts professor who knows that a promisor’s contract duty is always disjunctive: “perform or pay” — breach my initial publication agreements and offer the jilted journal editors some amount of settlement (say, $1,000 for an epic staff party — an amount far less than the incremental value to me of going with the higher-ranked journal)?  Because of a silly social norm frowning upon such conduct as indicative of a flawed character.  When I was looking for a teaching job, I was informed that breaching a publication agreement is a definite no-no and might impair my job prospects.  After I became a professor, I learned that members of my faculty had threatened to vote against the tenure of professors who breached publication agreements.  To be fair, I’m not sure those faculty members would do so if the breaching professor compensated the jilted journal, effectively “buying himself out” of his contract.  But who would run that risk?

So I empathize with my colleague who now feels stuck publishing in the less prestigious journal.  And, while I recognize the difference between a legal and moral obligation, I would commend the following wise words to those law professors who would imbue law review publishing contracts with “mystic significance”:

Nowhere is the confusion between legal and moral ideas more manifest than in the law of contract.  Among other things, here again the so-called primary rights and duties are invested with a mystic significance beyond what can be assigned and explained.  The duty to keep a contract at common law means a prediction that you must pay damages if you do not keep it — and nothing else.  If you commit a tort, you are liable to pay a compensatory sum.  If you commit a contract, you are liable to pay a compensatory sum unless the promised event comes to pass, and that is all the difference.  But such a mode of looking at the matter stinks in the nostrils of those who think it advantageous to get as much ethics into the law as they can.

Oliver Wendell Holmes, Jr., The Path of the Law, 10 Harv. L. Rev. 457 (1897).  

Filed under: contracts, law school, musings

Continue reading
Financial Regulation & Corporate Governance

A Macro Conference

Popular Media I was invited to attend the Financial Times Global Conference “The View From the Top: The Future of America” and since I was in New . . .

I was invited to attend the Financial Times Global Conference “The View From the Top: The Future of America” and since I was in New York anyway I thought it would be fun.  I don’t hang around with macro types much, and even less with liberal macro types.  I will not summarize the entire conference, but a few observations:

  1. Reinhart-Rogoff was a hit, mentioned several times.  Aside from the merits of the book, I think people were trying to give Obama cover for no recovery.  R-R apparently says it takes an average of 7 years to get out of a financial crisis.
  2. The first speaker (Gene Sperling) was late, and Gillian Tett of the FT, the moderator, took some informal polls of the audience (mainly business journalists).  The results were pretty pessimistic: the audience thought there would be a double-dip, that the EU would lose at least one member, and that yields would not increase.
  3. Sperling (Director of the National Economic Council) spent a lot of time talking about how bad unemployment is and arguing for the President’s jobs plan (which the Senate has already rejected).  Not much new to propose.
  4. Peter Orszag (former OMB Director, now with Citi) made a few interesting points.  He said that the Administration got the original forecast wrong and did not realize that the recession was L-shaped rather than V-shaped.  He also predicted that middle-class incomes will not return to their original level and that policy should not fool people into thinking they will.
  5. Several speakers (Laura Tyson of Berkeley and former CEA Chair; Steve Case, AOL founder) argued for better immigration laws (no quarrel there: the Republicans have got themselves into a terrible position on immigration).  Tyson in particular argued for more STEM (science, technology, engineering, mathematics) education.  I asked her if she thought the increasing gender imbalance in colleges (now about 2 women per man) was responsible for the STEM problem, and she indicated that it might be part of the problem.  This is really something worth further examination and some policy analysis.  Of course, the immigration mess makes this problem worse, since it is harder to import engineers from abroad.
  6. Someone (I think Steve Rattner, former Auto Czar) made the point that while the American economy is doing badly and unemployment is a real problem, American companies are doing very well, in part because of foreign earnings.  There were also several inconclusive discussions of a tax holiday for repatriation of foreign earnings.  Some said that this would be “unfair,” but others understood that future effects, not past fairness, were what mattered.  It is not clear what the effects would be, however.
  7. A few mentions of Sarbanes-Oxley and Dodd-Frank, but mostly the role of regulation was ignored.  Health care was mentioned but not, I believe, Obamacare.  Everyone agreed that businesses were “afraid” to spend money but little discussion of the source of the fear.
  8. Most were not worried about conflict with China.  I asked about Chinese demographics (an aging population and a gender imbalance with too many males).  Whenever I hear discussions of China I raise this issue, since people seem to ignore it and it is a serious one.  Michael Spence (Nobel Laureate, now at NYU) said that China was in a position to establish a viable retirement program (no details) but that the gender issue was not being dealt with.  There seemed to be almost envy of the ability of the Chinese to do what they wanted independent of the desires of the people.
  9. Laurence Fink of BlackRock made the interesting point that the current situation seems a lot like the 1970s, including the widespread pessimism.  Martin Wolf, Chief Economics Commentator of the FT, agreed.  But the lesson he drew was that we need more and wiser regulation.  I spoke with him briefly and noted that I was in the Reagan Administration, and that the last time we were in a pessimistic mess like this, deregulation à la Reagan was the solution.  He rejected this approach.  But I am hopeful.

Filed under: business, economics, Education, financial regulation, markets, sarbanes-oxley Tagged: macro

Continue reading
Financial Regulation & Corporate Governance

Amazon and Internet Commerce

Popular Media Stewart Baker at the Volokh Conspiracy has a very interesting post on the new Amazon browser.  He thinks it might revolutionize doing business on the Web, with a tremendous increase in security. . . .

Stewart Baker at the Volokh Conspiracy has a very interesting post on the new Amazon browser.  He thinks it might revolutionize doing business on the Web, with a tremendous increase in security.  This increase in security will entail a loss in privacy, so let’s hope the privacy guys don’t stop it.

Filed under: business, Internet search, markets, privacy Tagged: Amazon’s new browser

Continue reading
Financial Regulation & Corporate Governance

Zywicki on the Unintended Consequences of the Durbin Bank Fees

Popular Media Here’s Professor Zywicki in the WSJ on the debit card interchange price controls going into effect, and their unintended but entirely predictable consequences: Faced with . . .

Here’s Professor Zywicki in the WSJ on the debit card interchange price controls going into effect, and their unintended but entirely predictable consequences:

Faced with a dramatic cut in revenues (estimated to be $6.6 billion by Javelin Strategy & Research, a global financial services consultancy), banks have already imposed new monthly maintenance fees—usually from $36 to $60 per year—on standard checking and debit-card accounts, as well as new or higher fees on particular bank services. While wealthier consumers have avoided many of these new fees—for example, by maintaining a sufficiently high minimum balance—a Bankrate survey released this week reported that only 45% of traditional checking accounts are free, down from 75% in two years.

Some consumers who previously banked for free will be unable or unwilling to pay these fees merely for the privilege of a bank account. As many as one million individuals will drop out of the mainstream banking system and turn to check cashers, pawn shops and high-fee prepaid cards, according to an estimate earlier this year by economists David Evans, Robert Litan and Richard Schmalensee. (Their study was supported by banks.)

Consumers will also be encouraged to shift from debit cards to more profitable alternatives such as credit cards, which remain outside the Durbin amendment’s price controls. According to news reports, Bank of America has made a concerted effort to shift customers from debit to credit cards, including plans to charge a $5 monthly fee for debit-card purchases. Citibank has increased its direct mail efforts to recruit new credit card customers frustrated by the increased cost and decreased benefits of debit cards.

This substitution will offset the hemorrhaging of debit-card revenues for banks. But it is also likely to eat into the financial windfall expected by big box retailers and their lobbyists. They likely will return to Washington seeking to extend price controls to credit cards. …

Todd closes with a nice point about where the impact of these regulations will be felt most:

Conceived of as a narrow special-interest giveaway to large retailers, the Durbin amendment will have long-term consequences for the consumer banking system. Wealthier consumers will be able to avoid the pinch of higher banking fees by increasing their use of credit cards. Many low-income consumers will not.

Read the whole thing.

 

Filed under: banking, business, consumer protection, credit cards, economics

Continue reading
Antitrust & Consumer Protection