Commissioner Wright’s Speech at the ABA Antitrust Section’s Spring Meeting

Popular Media

Friday I discussed FTC Commissioner (and TOTM alumnus) Josh Wright’s speech at the Spring Meeting of the ABA’s Antitrust Section.  Wright’s speech, What’s Your Agenda?, is now available online.

As I mentioned, Commissioner Wright emphasized two matters on which he’d like to see FTC action.  First, he hopes the Commission will help fulfill the promise of Section 5 of the FTC Act by articulating an “Unfair Methods Policy Statement” that includes both “guiding principles for Section 5 theories of liability outside the scope of the Sherman and Clayton Acts” and “limiting principles confining the scope of unfair methods claims.”  Articulation of such principles would reduce the incidence of market power-enhancing conduct that could be difficult to pursue under the Sherman and Clayton Acts (the “guiding principles” would put firms on notice that such conduct is to be avoided), but would also avoid chilling procompetitive conduct (the “limiting principles” would create zones of safety).  Giving guidance to business planners on what the FTC is likely to pursue — and what it’s not — would thereby enhance the effectiveness of the antitrust enterprise.

Commissioner Wright also stated his intention to utilize the FTC’s powers to pursue public restraints — i.e., output-limiting conduct authorized or required by governmental entities.  Wright explained:

An agency sensitive to efficiently executing its competition mission will look for low hanging fruit—in other words, it will identify and bring enforcement actions to prevent conduct that is clearly anticompetitive and thus bring immediate and certain benefits for consumers.

Public restraints upon trade represent precisely this type of increasingly rare low hanging fruit and, thus, should be a more central concern of U.S. competition policy. The legal hurdles facing enforcement against public restraints often render policy advocacy the primary weapon for the FTC in this area; and it is a weapon the FTC has wielded effectively and consistently over time. The FTC also has brought enforcement actions to challenge public restraints in recent years in appropriate cases. I support vigorous use of both tools….

I’m heartened by Commissioner Wright’s leadership on these matters and look forward to seeing how things develop at the Commission.

Filed under: antitrust, error costs, federal trade commission

Commissioner Wright lays down the gauntlet on Section 5

TOTM

As Thom noted (here and here), Josh’s speech at the ABA Spring Meeting was fantastic.  In laying out his agenda at the FTC, Josh highlighted two areas on which he intends to focus: Section 5 and public restraints on trade.  These are important, even essential, areas, and Josh’s leadership here will be most welcome.

Read the full piece here.

Some Thoughts on the Spring Meeting: Bummed About RPM, Happy About the FTC’s Future

Popular Media

I’ve spent the last few days in DC at the ABA Antitrust Section’s Spring Meeting. The Spring Meeting is the extravaganza of the year for antitrust lawyers, bringing together leading antitrust practitioners, enforcers, and academics for in-depth discussions about developments in the law. It’s really a terrific event. I was honored this year to have been invited (by my old law school classmate, Adam Biegel) to present the “antitrust economics” and “monopolization” sections of the Antitrust Fundamentals session. Former TOTM blogger (now FTC Commissioner) Josh Wright has taught those sections in the past, so I had some pretty big shoes to fill. It was great fun.

Two sessions yesterday really got my blood pumping, albeit for different reasons. The first was a session on counseling clients on RPM after Leegin. Leegin, of course, was the 2007 Supreme Court decision overruling the 1911 Dr. Miles precedent that declared minimum resale price maintenance (RPM) to be per se illegal. Post-Leegin, a manufacturer’s setting of the resale price its downstream dealers may charge is evaluated under the Rule of Reason, at least for purposes of federal antitrust law.

While it was a 5-4 decision, the holding of Leegin is hardly controversial among antitrust scholars. Chicago School and neo-Chicago scholars like myself, Harvard School scholars like Herb Hovenkamp, and even post-Chicago scholars like Einer Elhauge are in agreement that RPM is not always or almost always anticompetitive and thus ought to be analyzed under the Rule of Reason. (Indeed, Elhauge queried: “The puzzle is what provoked a vigorous dissent from Justice Breyer, one of the world’s most sophisticated antitrust justices…”). There’s simply no doubt about Leegin among those who have studied RPM most closely: it was correctly decided.

It was most disheartening, then, to hear a group of esteemed panelists opine that Leegin hasn’t really changed the advice one should give clients considering RPM policies.  It’s still wise, the panelists stated, to advise manufacturing clients to avoid RPM and instead to implement either (1) so-called Colgate policies where the manufacturer simply announces and follows a unilateral policy of not selling to dealers who discount, or (2) consignment arrangements where the manufacturer doesn’t sell its product to dealers but instead enlists them as its sales agents and retains title to its product until the product is sold to the end-user consumer.  The former approach avoids RPM liability because there is no “agreement” concerning resale prices; the latter, because there is technically no “resale.”  Both approaches, though, involve costly and cumbersome methods by which manufacturers may exert control over the resale prices of their products.  (See, e.g., golf club manufacturer Ping’s now-classic discussion of the difficulties involved in implementing a Colgate policy.)  So why counsel clients to adopt Colgate policies and consignment/agency arrangements when RPM is now adjudged under the Rule of Reason?

Because of the states — a number of them, at least. Maryland has adopted an explicit Leegin-repealer; California’s Cartwright Act uses language that appears to declare RPM to be per se illegal; and the Supreme Court of Kansas recently held that RPM is per se illegal under that state’s predictably unenlightened antitrust laws.  (Sorry Kansas folk. Proud Mizzou Tiger here.) In addition, a number of states lack statutes or court decisions harmonizing state antitrust law with federal precedents, and at least six have rejected certain federal precedents — chiefly, Illinois Brick — even without statutory repealers. How those states will treat RPM post-Leegin is anybody’s guess. (For an exhaustive and regularly updated list of state law treatment of RPM, see this helpful article and chart by Michael Lindsay.)

So what’s behind states’ hostility toward RPM?  At yesterday’s RPM session, California Senior Assistant Attorney General Kathleen Foote suggested that state attorneys general tend to oppose RPM because they are particularly concerned about consumer protection and because states have had actual experience with RPM under the so-called “Fair Trade” laws that for several decades allowed states to create antitrust immunity for RPM arrangements.  The empirical evidence of conditions under Fair Trade, Ms. Foote said, establishes that RPM leads to higher consumer prices and therefore tends to be anticompetitive.

But these arguments, each of which was considered and rejected in Leegin, have been soundly refuted.  A heightened concern for consumer protection in no way supports adherence to Dr. Miles, for manufacturers generally have an incentive to impose RPM only when doing so benefits consumers.  The retail mark-up — the difference between the price the retailer pays and that which it charges to consumers — is the “price” manufacturers effectively pay for product distribution.  Like consumers, they have no incentive to raise that price (i.e., to increase the mark-up through imposition of RPM) unless doing so generates retailer services that are worth more to consumers than the incremental retail mark-up.  Only then would RPM enhance a manufacturer’s profits, but in that case, it also enhances overall consumer surplus.  In short, manufacturer and consumer interests are generally aligned when it comes to RPM.
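
To make the alignment concrete, here is a minimal sketch of the argument in stylized notation (the symbols are mine, purely for illustration; they track the verbal argument above):

% w = wholesale price, c = manufacturer's unit cost, p = retail price,
% m = p - w = retail mark-up, s(m) = dealer services funded by the mark-up,
% Q(p, s) = quantity demanded.
\[ \pi_{\text{mfr}} = (w - c)\, Q\big(p,\, s(m)\big) \]
% With w held fixed, RPM raises p and hence m, so the manufacturer gains only
% if the service effect on demand outweighs the price effect:
\[ \frac{d\pi_{\text{mfr}}}{dp} > 0 \quad \Longleftrightarrow \quad \frac{\partial Q}{\partial s}\,\frac{ds}{dm} \;>\; -\,\frac{\partial Q}{\partial p} \]

In words: raising the mark-up pays off for the manufacturer only when the dealer services it buys expand demand by more than the higher price contracts it, which is precisely the case in which consumers value the services more than the mark-up they fund.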

With respect to Fair Trade, Ms. Foote was playing a little fast and loose.  The Fair Trade laws did not, like Leegin, simply declare RPM arrangements not to be per se illegal; rather, they said that such arrangements were per se legal.  Hardly anyone doubts that RPM arrangements may sometimes be harmful and should be scrutinized.  But under Leegin — unlike under Fair Trade — anticompetitive instances of RPM (those that facilitate manufacturer or retailer collusion or serve as exclusionary devices for dominant manufacturers or retailers) may be condemned.  Thus, the fact that states witnessed consumer harm under Fair Trade’s regime of per se legality says nothing about how consumers will fare under Leegin’s Rule of Reason.

Finally, Ms. Foote’s reasoning that RPM is anticompetitive because the evidence shows it tends to raise prices is fallacious.  Of course RPM raises prices.  It is, after all, the imposition of a price floor.  But that price effect is beside the point.  Each one of the procompetitive, output-enhancing justifications for RPM assumes an increase in consumer prices.  The key is that the increase in retail mark-up will induce dealer services that consumers value more than the amount of the mark-up and will thereby enhance overall sales.  The fact that RPM raises prices, then, is a red herring.

If legislators, courts, and enforcement officials in states like California, Maryland, and Kansas can’t understand these fairly simple points (yes, I realize I’m asking a lot of the Kansans), then the promise of Leegin may go unfulfilled.  It was pretty clear from yesterday’s session that legal advice — and, accordingly, manufacturer practice — will look much as it did pre-Leegin unless the states get their act together.  That’s pretty depressing.

Fortunately, the session following the RPM session was a good bit more promising.  The highlight was a speech by FTC Commissioner Wright, in which he laid out his intentions to promote a more principled understanding of Section 5 of the FTC Act and to pursue the “low-hanging fruit” (his words) of public restraints.  Both developments would be warmly welcomed.

Commissioner Wright maintains that the promise of Section 5 (which enables the FTC, but not private parties, to enjoin unfair methods of competition that do not necessarily constitute antitrust violations) will remain unfulfilled until the FTC lays out the guiding and limiting principles that will govern its use of the provision.  He’s right.  Absent such articulated principles, use of Section 5 could well end up the way Robert Bork once described mid-20th Century antitrust, which he likened to a frontier sheriff who “did not sift the evidence, distinguish between suspects, and solve crimes, but merely walked the main street and every so often pistol-whipped a few people.” The evidence-based principles Commissioner Wright proposes to develop would avoid the frontier sheriff problem by bringing predictability and fairness to the Commission’s implementation of its Section 5 authority.

Even more exciting were Commissioner Wright’s remarks on public restraints.  Without doubt, competition-reducing laws and regulations are responsible for the destruction of vast amounts of consumer welfare.  State action immunity and other legal hurdles, though, make it difficult to police welfare-reducing public restraints.

But litigation isn’t the only weapon in the FTC’s arsenal.  As Commissioner Wright observed, the FTC is uniquely positioned to advocate for the removal of competition-destructive public restraints.  I was heartened to learn that the Commission recently helped persuade Colorado officials not to impose regulations that would have squelched Uber, a smart phone application that is creating much-needed competition in the taxi and private car service market.  It also took the side of the angels in the St. Joseph Abbey case, helping to persuade the Fifth Circuit to strike protectionist regulations that reduced competition among casket sellers in Louisiana.  Commissioner Wright also noted that the FTC’s recent victory in the Phoebe Putney case, which narrowed somewhat the scope of state action immunity, will allow it to pursue more public restraints by state and sub-state governmental entities.  This all bodes well for consumers.

So here’s an idea for the FTC: How about using some of that advocacy prowess to convince the anti-Leegin states to bring their RPM doctrine into conformity with federal law?  It might be tough — and Kansas may be beyond help — but I’m confident that Commissioner Wright and his colleagues could help the anti-Leegin states see that they’re not helping consumers by clinging to moth-eaten Dr. Miles.  Instead, they’re just guaranteeing more jobs for lawyers charged with crafting and implementing Colgate policies, consignment relationships, etc.

Filed under: antitrust, consumer protection, markets, regulation, resale price maintenance

How Copyright Drives Innovation in Scholarly Publishing

Popular Media

[Cross posted at the Center for the Protection of Intellectual Property blog.]

Today’s public policy debates frame copyright policy solely in terms of a “trade off” between the benefits of incentivizing new works and the social deadweight losses arising from the access restrictions imposed by these (temporary) “monopolies.” I recently posted to SSRN a new research paper, called How Copyright Drives Innovation in Scholarly Publishing, explaining that this is a fundamental mistake that has distorted the policy debates about scholarly publishing.

This policy mistake is important because it has led commentators and decision-makers to dismiss as irrelevant to copyright policy the investments by scholarly publishers of $100s of millions in creating innovative distribution mechanisms in our new digital world. These substantial sunk costs are in addition to the $100s of millions expended annually by publishers in creating, publishing and maintaining reliable, high-quality, standardized articles distributed each year in a wide-ranging variety of academic disciplines and fields of research. The articles now number in the millions themselves; in 2009, for instance, over 2,000 publishers issued almost 1.5 million articles just in the scientific, technical and medical fields, exclusive of the humanities and social sciences.

The mistaken incentive-to-invent conventional wisdom in copyright policy is further compounded by widespread misinformation today about the allegedly “zero cost” of digital publication. As a result, many people are simply unaware of the substantial investments in infrastructure, skilled labor and other resources required to create, publish and maintain scholarly articles on the Internet and in other digital platforms.

This is not merely a so-called “academic debate” about copyright policy and publishing.

The policy distortion caused by the narrow, reductionist incentive-to-create conventional wisdom, when combined with the misinformation about the economics of digital business models, has been spurring calls for “open access” mandates for scholarly research, such as at the National Institutes of Health and in recently proposed legislation (FASTR Act) and in other proposed regulations. This policy distortion even influenced Justice Breyer’s opinion in the recent decision in Kirtsaeng v. John Wiley & Sons (U.S. Supreme Court, March 19, 2013), as he blithely dismissed commercial incentives as being irrelevant to fundamental copyright policy. These legal initiatives and the Kirtsaeng decision are motivated in various ways by the incentive-to-create conventional wisdom, by the misunderstanding of the economics of scholarly publishing, and by anti-copyright rhetoric on both the left and right, all of which has become more pervasive in recent years.

But, as I explain in my paper, courts and commentators have long recognized that incentivizing authors to produce new works is not the sole justification for copyright—copyright also incentivizes intermediaries like scholarly publishers to invest in and create innovative legal and market mechanisms for publishing and distributing articles that report on scholarly research. These two policies—the incentive to create and the incentive to commercialize—are interrelated, as both are necessary in justifying how copyright law secures the dynamic innovation that makes possible the “progress of science.” In short, if the law does not secure the fruits of labors of publishers who create legal and market mechanisms for disseminating works, then authors’ labors will go unrewarded as well.

As Justice Sandra Day O’Connor famously observed in the 1984 decision in Harper & Row v. Nation Enterprises: “In our haste to disseminate news, it should not be forgotten the Framers intended copyright itself to be the engine of free expression. By establishing a marketable right to the use of one’s expression, copyright supplies the economic incentive to create and disseminate ideas.” Thus, in Harper & Row, the Supreme Court reached the uncontroversial conclusion that copyright secures the fruits of productive labors “where an author and publisher have invested extensive resources in creating an original work.” (emphases added)

This concern with commercial incentives in copyright law is not just theory; in fact, it is most salient in scholarly publishing because researchers are not motivated by the pecuniary benefits offered to authors in conventional publishing contexts. As a result of the policy distortion caused by the incentive-to-create conventional wisdom, some academics and scholars now view scholarly publishing by commercial firms who own the copyrights in the articles as “a form of censorship.” Yet, as courts have observed: “It is not surprising that [scholarly] authors favor liberal photocopying . . . . But the authors have not risked their capital to achieve dissemination. The publishers have.” As economics professor Mark McCabe observed (somewhat sardonically) in a research paper released last year for the National Academy of Sciences: he and his fellow academic “economists knew the value of their journals, but not their prices.”

The widespread ignorance among the public, academics and commentators about the economics of scholarly publishing in the Internet age is quite profound relative to the actual numbers.  Based on interviews with six different scholarly publishers—Reed Elsevier, Wiley, SAGE, the New England Journal of Medicine, the American Chemical Society, and the American Institute of Physics—my research paper details for the first time ever in a publication and at great length the necessary transaction costs incurred by any successful publishing enterprise in the Internet age.  To take but one small example from my research paper: Reed Elsevier began developing its online publishing platform in 1995, a scant two years after the advent of the World Wide Web, and its sunk costs in creating this first publishing platform and then digitally archiving its previously published content was over $75 million. Other scholarly publishers report similarly high costs in both absolute and relative terms.

Given the widespread misunderstandings of the economics of Internet-based business models, it bears noting that such high costs are not unique to scholarly publishers.  Microsoft reportedly spent $10 billion developing Windows Vista before it sold a single copy, of which it ultimately did not sell many at all. Google regularly invests $100s of millions, such as $890 million in the first quarter of 2011, in upgrading its data centers.  It is somewhat surprising that such things still have to be pointed out a scant decade after the bursting of the dot.com bubble, a bubble precipitated by exactly the same mistaken view that businesses have somehow been “liberated” from the economic realities of cost by the Internet.

Just as with the extensive infrastructure and staffing costs, the actual costs incurred by publishers in operating the peer review system for their scholarly journals are also widely misunderstood.  Individual publishers now receive hundreds of thousands—the large scholarly publisher, Reed Elsevier, receives more than one million—manuscripts per year. Reed Elsevier’s annual budget for operating its peer review system is over $100 million, which reflects the full scope of staffing, infrastructure, and other transaction costs inherent in operating a quality-control system that rejects 65% of the submitted manuscripts. Reed Elsevier’s budget for its peer review system is consistent with industry-wide studies that have reported that the peer review system costs approximately $2.9 billion annually in operation costs (converting into dollars the £1.9 billion reported in the study). For those articles accepted for publication, there are additional, extensive production costs, and then there are extensive post-publication costs in updating hypertext links of citations, cyber security of the websites, and related digital issues.
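
For scale, those reported figures imply the following unit costs (this is my back-of-the-envelope arithmetic on the numbers above, not a calculation from the study):

\[ \frac{\$100\text{M annual peer-review budget}}{\sim 1\text{M submissions per year}} \;\approx\; \$100 \text{ per submitted manuscript} \]
\[ \frac{\$100}{1 - 0.65} \;\approx\; \$285 \text{ of review cost per accepted article} \]

Whatever one concludes about optimal copyright policy, these are not zero-cost operations.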

In sum, many people mistakenly believe that scholarly publishers are no longer necessary because the Internet has made moot all such intermediaries of traditional brick-and-mortar economies—a viewpoint reinforced by the equally mistaken incentive-to-create conventional wisdom in the copyright policy debates today. But intermediaries like scholarly publishers face the exact same incentive problem that the incentive-to-create conventional wisdom universally recognizes for authors: no one will make the necessary investments to create or to distribute a work if the fruits of their labors are not secured to them. This basic economic fact—the dynamic development of innovative distribution mechanisms requires substantial investment in both people and resources—is what makes commercialization an essential feature of both copyright policy and law (and of all intellectual property doctrines).

It is for this reason that copyright law has long promoted and secured the value that academics and scholars have come to depend on in their journal articles—reliable, high-quality, standardized, networked, and accessible research that meets the differing expectations of readers in a variety of fields of scholarly research. This is the value created by the scholarly publishers. Scholarly publishers thus serve an essential function in copyright law by making the investments in and creating the innovative distribution mechanisms that fulfill the constitutional goal of copyright to advance the “progress of science.”

DISCLOSURE: The paper summarized in this blog posting was supported separately by a Leonardo Da Vinci Fellowship and by the Association of American Publishers (AAP). The author thanks Mark Schultz for very helpful comments on earlier drafts, and the AAP for providing invaluable introductions to the five scholarly publishers who shared their publishing data with him.


Filed under: copyright, economics, intellectual property, legal scholarship, markets, scholarship, SSRN, technology Tagged: American Chemical Society, American Institute of Physics, commercialization, copyright policy, Kirtsaeng, New England Journal of Medicine, open access, Reed Elsevier, SAGE, scholarly publishers, Wiley

Hey Hey! Ho Ho! Partial De Facto Exclusive Dealing Claims Have Got to Go!

Popular Media

Today, a group of eighteen scholars, of which I am one, filed an amicus brief encouraging the Supreme Court to review a Court of Appeals decision involving loyalty rebates.  The U.S. Court of Appeals for the Third Circuit recently upheld an antitrust judgment based on a defendant’s loyalty rebates even though the rebates resulted in above-cost prices for the defendant’s products and could have been matched by an equally efficient rival.  The court did so because it decided that the defendant’s overall selling practices, which involved no exclusivity commitments by buyers, had resulted in “partial de facto exclusive dealing” and thus were not subject to the price-cost test set forth in Brooke Group.  (For the uninitiated, Brooke Group immunizes price cuts that result in above-cost prices for the discounter’s goods.)  We amici, who were assembled by Michigan Law’s Dan Crane, believe the Third Circuit’s decision threatens to chill proconsumer discounting practices and should be overruled.

The defendant in the case, Eaton, manufactures transmissions for big trucks (semis, cement trucks, etc.).  So did plaintiff Meritor.  Eaton and Meritor sold their products to the four manufacturers of big trucks.  Those “OEMs” installed the transmissions into the trucks they sold to end-user buyers, who typically customized their trucks and thus could select whatever transmissions they wanted.  Meritor claimed that Eaton drove it from the market by entering into purportedly exclusionary “long-term agreements” (LTAs) with the four OEMs.  The agreements did not require the OEMs to purchase any particular amount of Eaton’s products, but they did provide the OEMs with rebates (resulting in above-cost prices) if they bought high percentages of their requirements from Eaton.  The agreements also provided that Eaton could terminate the agreements if the market share targets were not met. Each LTA contained a “competitiveness clause” that allowed the OEM to purchase transmissions from another supplier without counting the purchases against the share target, or to terminate the LTA altogether, if another supplier offered a lower price or better product and Eaton could not match that offering.  Following adoption of the LTAs, Eaton’s market share grew, and Meritor’s shrank.  Before withdrawing from the U.S. market altogether, Meritor filed an antitrust action against Eaton.

Eaton insisted, not surprisingly, that it had simply engaged in hard competition.  It grew its market share by offering a lower price that an equally efficient rival could have matched.  Meritor’s failure, then, resulted from either its relative inefficiency or its unwillingness to lower its price to the level of its cost.  By immunizing above-cost discounted prices from liability, the Brooke Group rule permits and encourages the sort of competition in which Eaton engaged, and it should, the company argued, control here.

The Third Circuit disagreed.  This was not, the court said, a simple case of price discounting.  Instead, Eaton had engaged in what the court called “partial de facto exclusive dealing.”  The exclusive dealing was “partial” because OEMs could purchase some transmissions from other suppliers and still obtain Eaton’s loyalty rebates (i.e., complete exclusivity was not required).  It was “de facto” because purchasing exclusively (or nearly exclusively) from Eaton was not contractually required but was instead simply the precondition for earning a rebate.  Nonetheless, reasoned the court, the gravamen of Meritor’s complaint was some sort of exclusive dealing, which is evaluated not under Brooke Group but instead under a rule of reason that focuses on the degree to which the seller’s practices foreclose its rivals from available sales opportunities.  Under that test, the court concluded, the judgment against Eaton could be upheld.  After all, Eaton’s sales practices won lots of business from Meritor, whose sales eventually shrank so much that the company exited the market.

As we amici point out in our brief to the Supreme Court, the Third Circuit ignored the fact that it was Eaton’s discounts that led OEMs to buy so much from the company (and forego its rival’s offerings).  Absent an actual promise to buy a high level of one’s requirements from a seller, any “exclusive dealing” resulting from a loyalty rebate scheme results from the fact that buyers voluntarily choose to patronize the seller over its competitors because the discounter’s products are cheaper.  In other words, low pricing is the very means by which any “exclusivity” — and, hence, any market foreclosure — is achieved.  Any claim alleging that an agreement not mandating a certain level of purchases but instead providing for loyalty rebates results in “partial de facto exclusive dealing” is therefore, at its heart, a complaint about price competition.  Accordingly, it should be subject to the Brooke Group screening test for discounts resulting in above-cost pricing.

The Third Circuit wrongly insisted that Eaton had done something more sinister than win business by offering above-cost loyalty rebates.  It concluded that Eaton “essentially forced” the four OEMs (who likely had a good bit of buyer market power themselves) to accept its terms by threatening “financial penalties or supply shortages.”  But these purported “penalties” and threats of “supply shortages” appear nowhere in the record.

The only “penalty” an OEM would have incurred by failing to meet a purchase target is the denial of a rebate from Eaton.  If that’s enough to make Brooke Group inapplicable, then any conditional price cut resulting in an above-cost price falls outside the decision’s safe harbor, for failure to meet the discount condition would subject buyers to a “penalty.”  Proconsumer price competition would surely be chilled by such an evisceration of Brooke Group.  As for threats of supply shortages, the only thing Meritor and the Third Circuit could point to was Eaton’s contractual right to cancel its LTAs if OEMs failed to meet purchase targets.  But if that were enough to make Brooke Group inapplicable, then the decision’s price-cost test could never apply when a dominant seller offers a conditional rebate or discount.  Because the seller could refuse in the future to supply buyers who fail to qualify for the discount, there would be, under the Third Circuit’s reasoning, not just a loyalty rebate but also an implicit threat of “supply shortages” for buyers that fail to meet the seller’s purchase targets.

This is not the first case in which a plaintiff has sought to evade a price-cost test, and thereby impose liability on a discounting scheme that would otherwise pass muster, by seeking to recharacterize the defendant’s conduct.  A few years back, a plaintiff (Masimo) sought to evade the Ninth Circuit’s PeaceHealth decision, which creates a Brooke Group-like safe harbor for certain bundled discounts that could not exclude equally efficient rivals, by construing the defendant’s conduct as “de facto exclusive dealing.”  Dan Crane and I participated as amici in that case as well.

I won’t speak for Dan, but I for one am getting tired of working on these briefs!  It’s time for the Supreme Court to clarify that prevailing price-cost safe harbors cannot be evaded simply through the use of creative labels like “partial de facto exclusive dealing.”  Hopefully, the Court will heed our recommendation that it review — and overrule — the Third Circuit’s Meritor decision.

[In case you’re interested, the other scholars signing the brief urging cert in Meritor are Ken Elzinga (Virginia Econ), Richard Epstein (NYU and Chicago Law), Jerry Hausman (MIT Econ), Rebecca Haw (Vanderbilt Law), Herb Hovenkamp (Iowa Law), Glenn Hubbard (Columbia Business), Keith Hylton (Boston U Law), Bill Kovacic (GWU Law), Alan Meese (Wm & Mary Law), Tom Morgan (GWU Law), Barak Orbach (Arizona Law), Bill Page (Florida Law), Robert Pindyck (MIT Econ), Edward Snyder (Yale Mgt), Danny Sokol (Florida Law), and Robert Topel (Chicago Business).]

Filed under: antitrust, economics, exclusionary conduct, exclusive dealing, law and economics, monopolization, regulation, Supreme Court

The SHIELD Act: When Bad Economic Studies Make Bad Laws

Popular Media

Earlier this month, Representatives Peter DeFazio and Jason Chaffetz picked up the gauntlet from President Obama’s comments on February 14 at a Google-sponsored Internet Q&A on Google+ that “our efforts at patent reform only went about halfway to where we need to go” and that he would like “to see if we can build some additional consensus on smarter patent laws.” So, Reps. DeFazio and Chaffetz introduced on March 1 the Saving High-tech Innovators from Egregious Legal Disputes (SHIELD) Act, which creates a “losing plaintiff patent-owner pays” litigation system for a single type of patent owner—patent licensing companies that purchase and license patents in the marketplace (and who sue infringers when infringers refuse their requests to license). To Google, to Representative DeFazio, and to others, these patent licensing companies are “patent trolls” who are destroyers of all things good—and the SHIELD Act will save us all from these dastardly “trolls” (is a troll anything but dastardly?).

As I and other scholars have pointed out, the “patent troll” moniker is really just a rhetorical epithet that lacks even an agreed-upon definition.  The term is used loosely enough that it sometimes covers and sometimes excludes universities, Thomas Edison, Elias Howe (the inventor of the lockstitch in 1843), Charles Goodyear (the inventor of vulcanized rubber in 1839), and even companies like IBM.  How can we be expected to have a reasonable discussion about patent policy when our basic terms of public discourse shift in meaning from blog to blog, article to article, speaker to speaker?  The same is true of the new term, “Patent Assertion Entities,” which sounds more neutral, but has the same problem in that it also lacks any objective definition or usage.

Setting aside this basic problem of terminology for the moment, the SHIELD Act is anything but a “smarter patent law” (to quote President Obama). Some patent scholars, like Michael Risch, have begun to point out some of the serious problems with the SHIELD Act, such as its selectively discriminatory treatment of certain types of patent-owners.  Moreover, as Professor Risch ably identifies, this legislation was so cleverly drafted to cover only a limited set of a specific type of patent-owner that it ended up being too clever. Unlike the previous version introduced last year, the 2013 SHIELD Act does not even apply to the flavor-of-the-day outrage over patent licensing companies—the owner of the podcast patent. (Although you wouldn’t know this if you read the supporters of the SHIELD Act like the EFF who falsely claim that this law will stop patent-owners like the podcast patent-owning company.)

There are many things wrong with the SHIELD Act, but one thing that I want to highlight here is that it is based on a falsehood: the oft-repeated claim that two Boston University researchers have proven in a study that “patent troll suits cost American technology companies over $29 billion in 2011 alone.”  This is what Rep. DeFazio said when he introduced the SHIELD Act on March 1. This claim was repeated yesterday by House Members during a hearing on “Abusive Patent Litigation.” The claim that patent licensing companies cost American tech companies $29 billion in a single year (2011) has become gospel since this study, The Direct Costs from NPE Disputes, was released last summer on the Internet. (Another name for patent licensing companies is “Non Practicing Entity” or “NPE.”)  A Google search of “patent troll 29 billion” produces 191,000 hits. A Google search of “NPE 29 billion” produces 605,000 hits. Such is the making of conventional wisdom.

The problem with conventional wisdom is that it is usually incorrect, and the study that produced the claim of “$29 billion imposed by patent trolls” is no different. The $29 billion cost study is deeply and fundamentally flawed, as explained by two noted professors, David Schwartz and Jay Kesan, who are also highly regarded for their empirical and economic work in patent law.  In their essay, Analyzing the Role of Non-Practicing Entities in the Patent System, also released late last summer, they detailed at great length serious methodological and substantive flaws in The Direct Costs from NPE Disputes. Unfortunately, the Schwartz and Kesan essay has gone virtually unnoticed in the patent policy debates, while the $29 billion cost claim has through repetition become truth.

In the hope that at least a few more people might discover the Schwartz and Kesan essay, I will briefly summarize some of their concerns about the study that produced the $29 billion cost figure.  This is not merely an academic exercise.  Since Rep. DeFazio explicitly relied on the $29 billion cost claim to justify the SHIELD Act, and he and others keep repeating it, it’s important to know if it is true, because it’s being used to drive proposed legislation in the real world.  If patent legislation is supposed to secure innovation, then it behooves us to know if this legislation is based on actual facts. Yet, as Schwartz and Kesan explain in their essay, the $29 billion cost claim is based on a study that is fundamentally flawed in both substance and methodology.

In terms of its methodological flaws, the study supporting the $29 billion cost claim employs an incredibly broad definition of “patent troll” that covers almost every person, corporation or university that sues someone for infringing a patent that it is not currently using to manufacture a product.  While the meaning of the “patent troll” epithet shifts depending on the commentator, reporter, blogger, or scholar who is using it, one would be extremely hard pressed to find anyone embracing this expansive usage in patent scholarship or similar commentary today.

There are several reasons why the extremely broad definition of “NPE” or “patent troll” in the study is unusual even compared to uses of this term in other commentary or studies. First, and most absurdly, this definition, by necessity, includes every university in the world that sues someone for infringing one of its patents, as universities don’t manufacture goods.  Second, it includes every individual and start-up company who plans to manufacture a patented invention, but is forced to sue an infringer-competitor who thwarted these business plans by its infringing sales in the marketplace.  Third, it includes commercial firms throughout the wide-ranging innovation industries—from high tech to biotech to traditional manufacturing—that have at least one patent among a portfolio of thousands that is not being used at the moment to manufacture a product because it may be “well outside the area in which they make products” and yet they sue infringers of this patent (the quoted language is from the study). So, according to this study, every manufacturer becomes an “NPE” or “patent troll” if it strays too far from what somebody subjectively defines as its rightful “area” of manufacturing. What company is not branded an “NPE” or “patent troll” under this definition, or will necessarily become one in the future given inevitable changes in one’s business plans or commercial activities? This is particularly true for every person or company whose only current opportunity to reap the benefit of their patented invention is to license the technology or to litigate against the infringers who refuse license offers.

So, when almost every possible patent-owning person, university, or corporation is defined as a “NPE” or “patent troll,” why are we surprised that a study that employs this virtually boundless definition concludes that they create $29 billion in litigation costs per year?  The only thing surprising is that the number isn’t even higher!

There are many other methodological flaws in the $29 billion cost study, such as its explicit assumption that patent litigation costs are “too high” without providing any comparative baseline for this conclusion.  What are the costs in other areas of litigation, such as standard commercial litigation, tort claims, or disputes over complex regulations?  We are not told.  What are the historical costs of patent litigation?  We are not told.  On what basis then can we conclude that $29 billion is “too high” or even “too low”?  We’re supposed to be impressed by a number that exists in a vacuum and that lacks any empirical context by which to evaluate it.

The $29 billion cost study also assumes that all litigation transaction costs are deadweight losses, which would mean that the entire U.S. court system is a deadweight loss according to the terms of this study.  Every lawsuit, whether a contract, tort, property, regulatory or constitutional dispute is, according to the assumption of the $29 billion cost study, a deadweight loss.  The entire U.S. court system is an inefficient cost imposed on everyone who uses it.  Really?  That’s an assumption that reduces itself to absurdity—it’s a self-imposed reductio ad absurdum!

In addition to the methodological problems, there are also serious concerns about the trustworthiness and quality of the actual data used to reach the $29 billion claim in the study.  All studies rely on data, and in this case, the $29 billion study used data from a secret survey done by RPX of its customers.  For those who don’t know, RPX’s business model is to defend companies against these so-called “patent trolls.”  So, a company whose business model is predicated on hyping the threat of “patent trolls” does a secret survey of its paying customers, and it is now known that RPX informed its customers in the survey that their answers would be used to lobby for changes in the patent laws.

As every reputable economist or statistician will tell you, such conditions encourage exaggeration and bias in a data sample by motivating participation among those who support changes to the patent law.  Such a problem even has a formal name in economic studies: self-selection bias.  But one doesn’t need to be an economist or statistician to be able to see the problems in relying on the RPX data to conclude that NPEs cost $29 billion per year. As the classic adage goes, “Something is rotten in the state of Denmark.”
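
A toy simulation illustrates how strong this effect can be. Every number below is hypothetical (the actual RPX data are secret, so nothing here reflects them); the point is only the mechanism: when the respondents who favor the surveyed conclusion are likelier to answer, the survey average overstates the population average.

import random

random.seed(0)

N = 10_000
# Hypothetical population of firms with true annual litigation costs (in $M).
population = [random.lognormvariate(0, 1.0) for _ in range(N)]

# Firms facing higher costs (and thus favoring a change in the law) are
# likelier to answer a survey they know will be used to lobby for that change.
def responds(cost):
    return random.random() < min(1.0, 0.1 + 0.2 * cost)

sample = [cost for cost in population if responds(cost)]

true_mean = sum(population) / len(population)
survey_mean = sum(sample) / len(sample)
print(f"true mean cost:   {true_mean:.2f}")   # about 1.65 for this lognormal
print(f"survey mean cost: {survey_mean:.2f}")  # noticeably higher

The survey mean comes out well above the true mean purely because of who chose to respond, before anyone exaggerates a single answer.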

Even worse, as I noted above, the RPX survey was confidential.  RPX has continued to invoke “client confidences” in refusing to disclose its actual customer survey or the resulting data, which means that the data underlying the $29 billion claim is completely unknown and unverifiable for anyone who reads the study.  Don’t worry, the researchers have told us in a footnote in the study, they looked at the data and confirmed it is good.  Again, it doesn’t take economic or statistical training to know that something is not right here. Another classic cliché comes to mind at this point: “it’s not the crime, it’s the cover-up.”

In fact, keeping data secret in a published study violates well-established and longstanding norms in all scientific research that data should always be made available for testing and verification by third parties.  No peer-reviewed medical or scientific journal would publish a study based on a secret data set in which the researchers have told us that we should simply trust them that the data is accurate.  Its use of secret data probably explains why the $29 billion study has not yet appeared in a peer-reviewed journal, and, if economics has any claim to being an actual science, this study never will.  If a study does not meet basic scientific standards for verifying data, then why are Reps. DeFazio and Chaffetz relying on it to propose national legislation that directly impacts the patent system and future innovation?  If heads-in-the-clouds academics would know to reject such a study as based on unverifiable, likely biased claptrap, then why are our elected officials embracing it to create real-world legal rules?

And, to continue our running theme of classic clichés, there’s the rub. The more one looks at the actual legal requirements of the SHIELD Act, the more, in the words of Professor Risch, one is left “scratching one’s head” in bewilderment.  The more one looks at the supporting studies and arguments in favor of the SHIELD Act, the more one is left, in the words of Professor Risch, “scratching one’s head.”  The more and more one thinks about the SHIELD Act, the more one realizes what it is—legislation that has been crafted at the behest of the politically powerful (such as an Internet company who can get the President to do a special appearance on its own social media website) to have the government eliminate a smaller, publicly reviled, and less politically-connected group.

In short, people may have legitimate complaints about the ways in which the court system in the U.S. generally has problems.  Commentators and Congresspersons could even consider revising the general legal rules governing patent litigation for all plaintiffs and defendants to make the litigation system work better or more efficiently (by some established metric).  Professor Risch has done exactly this in a recent Wired op-ed.  But it’s time to call a spade a spade: the SHIELD Act is a classic example of rent-seeking, discriminatory legislation.

Filed under: cost-benefit analysis, economics, intellectual property, law and economics, licensing, litigation, patent, politics Tagged: Criticism of patents, economic studies, Law and economics, legislation, litigation, non practicing entity, NPE, PAE, patent assertion entity, Patent infringement, patent licensing, patent trolls, Patents, rent seeking, SHIELD Act

Microsoft, the EU fine, and a browser ballot no one missed

Popular Media

If a tree fell in the forest, and no one noticed… the European Commission would impose a staggering fine — and then congratulate itself for protecting consumers from falling trees. That’s essentially what just happened: the Commission fined Microsoft $732 million for failing to show its “browser ballot” when users installed one of its Windows 7 updates.

Read the full piece here.

Forbes commentary on Susan Crawford’s “broadband monopoly” thesis

Popular Media

Over at Forbes Berin Szoka and I have a lengthy piece discussing “10 Reasons To Be More Optimistic About Broadband Than Susan Crawford Is.” Crawford has become the unofficial spokesman for a budding campaign to reshape broadband. She sees cable companies monopolizing broadband, charging too much, withholding content and keeping speeds low, all in order to suppress disruptive innovation — and argues for imposing 19th century common carriage regulation on the Internet. Berin and I begin (we expect to contribute much more to this discussion in the future) to explain both why her premises are erroneous and why her prescription is faulty. Here’s a taste:

Things in the US today are better than Crawford claims. While Crawford claims that broadband is faster and cheaper in other developed countries, her statistics are convincingly disputed. She neglects to mention the significant subsidies used to build out those networks. Crawford’s model is Europe, but as Europeans acknowledge, “beyond 100 Mbps supply will be very difficult and expensive. Western Europe may be forced into a second fibre build out earlier than expected, or will find themselves within the slow lane in 3-5 years time.” And while “blazing fast” broadband might be important for some users, broadband speeds in the US are plenty fast enough to satisfy most users. Consumers are willing to pay for speed, but, apparently, have little interest in paying for the sort of speed Crawford deems essential. This isn’t surprising. As the LSE study cited above notes, “most new activities made possible by broadband are already possible with basic or fast broadband: higher speeds mainly allow the same things to happen faster or with higher quality, while the extra costs of providing higher speeds to everyone are very significant.”

Even if she’s right, she wildly exaggerates the costs. Using a back-of-the-envelope calculation, Crawford claims that slow downloads (compared to other countries) could cost the U.S. $3 trillion/year in lost productivity from wasted time spent “waiting for a link to load or an app to function on your wireless device.” This intentionally sensationalist claim, however, rests on a purely hypothetical average wait time in the U.S. of 30 seconds (vs. 2 seconds in Japan). Whatever the actual numbers might be, her methodology would still be shaky, not least because time spent waiting for laggy content isn’t necessarily simply wasted. And for most of us, the opportunity cost of waiting for Angry Birds to load on our phones isn’t counted in wages — it’s counted in beers or time on the golf course or other leisure activities. These are important, to be sure, but does anyone seriously believe our GDP would grow 20% if only apps were snappier? Meanwhile, actual econometric studies looking at the productivity effects of faster broadband on businesses have found that higher broadband speeds are not associated with higher productivity.

* * *

So how do we guard against the possibility of consumer harm without making things worse? For us, it’s a mix of promoting both competition and a smarter, subtler role for government.

Despite Crawford’s assertion that the DOJ should have blocked the Comcast-NBCU merger, antitrust and consumer protection laws do operate to constrain corporate conduct, not only through government enforcement but also private rights of action. Antitrust works best in the background, discouraging harmful conduct without anyone ever suing. The same is true for using consumer protection law to punish deception and truly harmful practices (e.g., misleading billing or overstating speeds).

A range of regulatory reforms would also go a long way toward promoting competition. Most importantly, reform local franchising so competitors like Google Fiber can build their own networks. That means giving them “open access” not to existing networks but to the public rights of way under streets. Instead of requiring that franchisees build out to an entire franchise area—which often makes both new entry and service upgrades unprofitable—remove build-out requirements and craft smart subsidies to encourage competition to deliver high-quality universal service, and to deliver superfast broadband to the customers who want it. Rather than controlling prices, offer broadband vouchers to those that can’t afford it. Encourage telcos to build wireline competitors to cable by transitioning their existing telephone networks to all-IP networks, as we’ve urged the FCC to do (here and here). Let wireless reach its potential by opening up spectrum and discouraging municipalities from blocking tower construction. Clear the deadwood of rules that protect incumbents in the video marketplace—a reform with broad bipartisan appeal.

In short, there’s a lot of ground between “do nothing” and “regulate broadband like electricity—or railroads.” Crawford’s arguments simply don’t justify imposing 19th century common carriage regulation on the Internet. But that doesn’t leave us powerless to correct practices that truly harm consumers, should they actually arise.
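
Since the $3 trillion figure in the excerpt above turns entirely on its inputs, a rough reconstruction shows why. Every number below is an illustrative assumption of mine (Crawford's actual worksheet is not public):

% Assume: N = 250 million U.S. Internet users,
%         k = 50 page or app loads per user per day,
%         \Delta t = 28 seconds of "excess" wait per load (30 s in the U.S. vs. 2 s in Japan),
%         v = \$25/hour as the imputed value of time.
\[ \text{Cost} \;\approx\; N \times k \times \Delta t \times 365 \times \frac{v}{3600} \;\approx\; \$0.9 \text{ trillion per year} \]

Roughly tripling any single input (loads per day, users, or the imputed hourly value) pushes the total past $3 trillion. That is the methodological point: the estimate is driven almost entirely by assumed inputs, above all by valuing idle wait time, much of it leisure, at a wage rate.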

Read the whole thing here.

Filed under: antitrust, regulation, technology, telecommunications Tagged: at&t, Broadband, Comcast, Crawford, FCC, Google Fiber, Susan Crawford, Time Warner Cable, Verizon Wireless

10 Reasons To Be More Optimistic About Broadband Than Susan Crawford Is

Popular Media

Susan Crawford thinks she sees the future of the Internet—and it isn’t pretty: Cable companies monopolizing broadband, charging too much, withholding content and keeping speeds low, all in order to suppress disruptive innovation.

Wireless can’t compete because of sheer physics—and an AT&T/Verizon duopoly will mirror the cable monopoly, anyway. The Internet will increasingly resemble cable itself: a limited bundle of “channels” chosen by cable companies.  Facebook and Google might be strong enough to cut deals to stay in the “basic tier,” but new entrants will be shut out. So will cable TV alternatives like Netflix.  Only regulating broadband like a public utility can avoid this dire future.  Or better yet, just have government deliver the service.

Crawford’s become the unofficial spokesman for a budding campaign to reshape broadband. Her speech Thursday night at the New America Foundation provides a good introduction to her new book, Captive Audience: The Telecom Industry and Monopoly Power in the New Gilded Age.

But there are many reasons to think the Internet’s future will be brighter than she fears, and that her preferred, government-run future wouldn’t turn out so well. Here are just ten:

  1. Broadband isn’t like electricity. Crawford frequently invokes FDR’s rural electrification efforts. Like electricity, she considers high-speed Internet a fundamental need—specifically, symmetrical 100+ mbps cable or fiber. But unlike electricity, consumer preferences for broadband vary remarkably, and consumers want much more from their networks than the equivalent of simply turning on a light. Like so many of her generation, Crawford struggles to understand computers that don’t look like desktops—and assumes that wireless devices necessarily fit in your hand. Economist Ev Ehrlich calls her attitude “broadband effetery”: the presumption that “this is what ‘the right people’ want, and you should want it, too.” It’s one thing to say we should all want electricity, but quite another to insist we must use the Internet the way Crawford herself does.
  2. Providing broadband isn’t like delivering electricity. Crawford calls broadband a “dumb” pipe for “moving bits from Point A to Point B”—just like electricity. She worries about vertically integrated network providers intervening in the flow of bits. But the Internet isn’t so simple. There’s a heated debate ongoing at the FCC over regulating the “IP Transition” — replacement of outdated telephone networks with all-IP infrastructure. Smartly managing such networks facilitates services and quality improvements unavailable today, but it means broadband networks are becoming considerably more than just “dumb pipes.” Even Crawford’s “dumb” electrical networks are slowly getting smarter—and less “neutral.”
  3. The public utility model doesn’t work well for dynamic services. Electric utilities deliver plain vanilla service. They’ve struggled to innovate, failing to deliver broadband over power lines and taking years to roll out smart grid technologies. The same is true for railroads. The nationwide fiber network Crawford calls for sounds like long-standing dreams of high-speed rail. But the reality would probably look a lot more like Amtrak: slow, grossly over-budget, heavily subsidized, and unable to compete with more nimble competitors. And let’s not forget that Crawford herself notes the perils of regulatory capture at the FCC and local regulators. Her idealized network may not be what consumers want anyway, and many may choose options that elites like Crawford look down their noses at: Bolt Bus is to Amtrak as 4G iPads are to Crawford’s high-speed fiber.
  4. Crawford paints too rosy a picture of government-run broadband. Some municipalities have built wi-fi and fiber networks, but none has been the success Crawford claims. The wi-fi schemes mostly fizzled. Google originally planned to wire San Francisco and Mountain View with public wi-fi, but ultimately rolled out service only in the area around its own offices in Mountain View (population 75,000). Even that network is plagued with reliability issues. (Google plans a second network in New York, but again, only in the Chelsea neighborhood around its office.) A few muni-fiber schemes claim success, but even with heavy capital subsidies, service is expensive and the systems will break even only if take-up rates increase dramatically or if taxpayers foot the bill. As a London School of Economics report notes, “subsidising the deployment of superfast broadband to reach 92% of households would generate only £0.72 of consumer surplus for every £1 of subsidy”—not obviously a good investment. Or as Charles Kenny, a Fellow at the New America Foundation itself, writes in his study, Superfast: Is It Really Worth a Subsidy?, “[t]his paper suggests fiber to the home may be no more worthy of subsidy than Concorde.” And how much easier would government surveillance or censorship be if government actually ran the networks?
  5. Competition isn’t over. Crawford sees broadband as a natural monopoly, with economies of scale making competition impossible. Well, Google doesn’t seem to agree. Its Kansas City fiber network isn’t just a stunt; Google expects it to make money. AT&T is eager to replace its outdated switched networks with all-IP ones. This will bring U-Verse (45-75 Mbps) to nearly a third of the country. Most importantly, wireless services can check the power of wireline. One study predicts that, “By 2015, more U.S. Internet users will access the Internet through mobile devices than through PCs or other wireline devices.” As Crawford herself acknowledged at Thursday’s NAF event, explaining why she would probably block a cable/wireless merger, each currently has an incentive to build new infrastructure to draw customers from the other. That’s called competition. Even full-length video streaming, supposedly the unassailable linchpin of the “cable monopoly,” is well within the technical capacity of wireless: consumers increasingly prefer to watch such videos on phones and tablets, and mobile video now comprises the majority of all mobile traffic. While doubtless some of this traffic flows over wi-fi, some of it doesn’t, and 4G download speeds and advanced devices clearly facilitate increasing wireless/wireline competition.
  6. Things in the US today are better than Crawford claims. While Crawford claims that broadband is faster and cheaper in other developed countries, her statistics are convincingly disputed. She neglects to mention the significant subsidies used to build out those networks. Crawford’s model is Europe, but as Europeans acknowledge, “beyond 100 Mbps supply will be very difficult and expensive. Western Europe may be forced into a second fibre build out earlier than expected, or will find themselves within the slow lane in 3-5 years time.” And while “blazing fast” broadband might be important for some users, broadband speeds in the US are plenty fast enough to satisfy most users. Consumers are willing to pay for speed, but, apparently, have little interest in paying for the sort of speed Crawford deems essential. This isn’t surprising. As the LSE study cited above notes, “most new activities made possible by broadband are already possible with basic or fast broadband: higher speeds mainly allow the same things to happen faster or with higher quality, while the extra costs of providing higher speeds to everyone are very significant.”
  7. Even if she’s right, she wildly exaggerates the costs. Using a back-of-the-envelope calculation, Crawford claims that slow downloads (compared to other countries) could cost the U.S. $3 trillion/year in lost productivity from wasted time spent “waiting for a link to load or an app to function on your wireless device.” This intentionally sensationalist claim, however, rests on a purely hypothetical average wait time in the U.S. of 30 seconds (vs. 2 seconds in Japan); a sketch after this list shows just how heroic the assumptions behind such a figure would have to be. Whatever the actual numbers might be, her methodology would still be shaky, not least because time spent waiting for laggy content isn’t necessarily simply wasted. And for most of us, the opportunity cost of waiting for Angry Birds to load on our phones isn’t counted in wages — it’s counted in beers or time on the golf course or other leisure activities. These are important, to be sure, but does anyone seriously believe our GDP would grow 20% if only apps were snappier? Meanwhile, actual econometric studies looking at the productivity effects of faster broadband on businesses have found that higher broadband speeds are not associated with higher productivity.
  8. She wildly exaggerates cable’s profitability. Cable companies simply don’t, as Crawford claims, “reap[] profit margins of 95 percent.” In fact, their returns are modest once their massive capital costs are taken into account (a toy calculation after this list makes the distinction concrete): As our colleagues Matt Starr and Will Rinehart note, “Comcast… has averaged just a 4.5% ROIC [return on invested capital] over the last five years. Time Warner Cable’s 5-year average is -1.3%. Compare those with Apple (32%) or Google (16.1%).”
  9. Vertically integrated content distribution isn’t the menace she claims. Most of all, Crawford fears vertically integrated “gatekeepers” using their power over content to artificially constrain the range of content available and competitors’ ability to deliver it (think Netflix). But if content is so valuable and powerful, why don’t content companies engage in such abuse on their own? In her telling, a distributor’s ownership of content somehow transforms “content is king” into “distribution is king.” Crawford spends 200 pages of her book on this, specifically the NBCU/Comcast merger. But she never really explains the complex dynamics of this market and ultimately fails to demonstrate that integrated content distribution is a singular brand of evil. The harms she suggests are merely speculative, and she offers no evidence that Comcast has actually acted to foreclose competition.
  10. “Independent” content isn’t dead. Specifically, Crawford fears that Comcast’s control over content will kill “independent” programming—diversity of content. But there’s a trade-off between quantity and quality, and there’s no easy way to prioritize one over the other. Moreover, by “independent” she means “not affiliated with a distribution network,” which amounts to a preference for ABC’s “The Bachelor” (owned by Disney) over NBC’s “The Biggest Loser” (owned by Comcast). Both “The Voice” on NBC and “Survivor” on CBS were developed by the same independent producer. But how likely is it really that Comcast would refuse to distribute “Survivor,” or forgo the licensing fees and withhold “The Voice” from competing distributors? The incentives are complex, but that is exactly the point: the market’s complexity makes it impossible to draw the simplistic conclusions that Crawford does.
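Here is the sketch promised in point 7: a minimal back-of-the-envelope reconstruction of what it would take for Crawford’s hypothetical 28-second wait gap to add up to $3 trillion a year. The user count and the dollar value of a user’s time are our illustrative assumptions, not figures from her book:

```python
# A rough reconstruction of the kind of arithmetic behind the $3 trillion
# claim. All inputs are illustrative assumptions, not figures from
# Crawford's book.

US_INTERNET_USERS = 250e6      # assumed number of U.S. internet users
VALUE_OF_TIME = 25.0           # assumed value of a user's time, $/hour
EXTRA_WAIT_SECONDS = 30 - 2    # Crawford's hypothetical U.S.-vs.-Japan gap
DAYS_PER_YEAR = 365
CLAIMED_LOSS = 3e12            # the $3 trillion/year figure

# Dollar cost of a single laggy page or app load, per user:
cost_per_load = EXTRA_WAIT_SECONDS / 3600 * VALUE_OF_TIME

# Annual nationwide cost if every user hits one laggy load per day:
annual_cost_per_daily_load = cost_per_load * US_INTERNET_USERS * DAYS_PER_YEAR

# How many laggy loads per user per day are needed to reach $3 trillion?
loads_needed = CLAIMED_LOSS / annual_cost_per_daily_load

print(f"Cost per laggy load: ${cost_per_load:.2f}")
print(f"Daily laggy loads per user needed to hit $3T: {loads_needed:.0f}")
```

On those assumptions, every American internet user would need to sit through roughly 170 laggy loads a day, every day, each one valued at full working wages, before the losses reached $3 trillion.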
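And here is the toy calculation promised in point 8, showing how a fat per-service “margin” and a thin return on invested capital can coexist. Every number below is made up for illustration, chosen only so the outputs echo the figures quoted in the text; none is any real company’s financials:

```python
# Made-up financials for a hypothetical cable operator. Purely
# illustrative; not Comcast's (or anyone's) actual books.

revenue = 16e9               # annual service revenue (assumed)
direct_costs = 0.8e9         # bandwidth, billing, support (assumed)
invested_capital = 40e9      # cumulative network build-out (assumed)
other_costs = 13.4e9         # depreciation, maintenance, SG&A, taxes (assumed)

# The "95 percent margin" nets out only direct costs...
gross_margin = (revenue - direct_costs) / revenue

# ...while return on invested capital charges the business for the
# capital sunk into the network:
nopat = revenue - direct_costs - other_costs
roic = nopat / invested_capital

print(f"'Margin' on service:        {gross_margin:.0%}")  # -> 95%
print(f"Return on invested capital: {roic:.1%}")          # -> 4.5%
```

The 95 percent figure nets out only direct costs like bandwidth; it ignores the billions sunk into building the network, which is precisely what ROIC captures.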

There’s nothing wrong with Crawford’s impulse — her sense that something’s in need of fixing. But intuition is a poor guide for policy-making. If government does have a role, it should intervene only when rigorous analysis suggests more good than bad will come of it, weighing the realities of both market failure and government failure.

So how do we guard against the possibility of consumer harm without making things worse? For us, it’s a mix of promoting both competition and a smarter, subtler role for government.

Despite Crawford’s assertion that the DOJ should have blocked the Comcast-NBCU merger, antitrust and consumer protection laws do operate to constrain corporate conduct, not only through government enforcement but also private rights of action. Antitrust works best in the background, discouraging harmful conduct without anyone ever suing. The same is true for using consumer protection law to punish deception and truly harmful practices (e.g., misleading billing or overstating speeds).

A range of regulatory reforms would also go a long way toward promoting competition. Most importantly, reform local franchising so competitors like Google Fiber can build their own networks. That means giving them “open access” not to existing networks but to the public rights of way under streets. Instead of requiring that franchisees build out to an entire franchise area—which often makes both new entry and service upgrades unprofitable—remove build-out requirements and craft smart subsidies to encourage competition to deliver high-quality universal service, and to deliver superfast broadband to the customers who want it. Rather than controlling prices, offer broadband vouchers to those who can’t afford it. Encourage telcos to build wireline competitors to cable by transitioning their existing telephone networks to all-IP networks, as we’ve urged the FCC to do (here and here). Let wireless reach its potential by opening up spectrum and discouraging municipalities from blocking tower construction. Clear the deadwood of rules that protect incumbents in the video marketplace—a reform with broad bipartisan appeal.

In short, there’s a lot of ground between “do nothing” and “regulate broadband like electricity—or railroads.” Crawford’s arguments simply don’t justify imposing 19th-century common carriage regulation on the Internet. But that doesn’t leave us powerless to correct practices that truly harm consumers, should they actually arise.
Cross-posted from Forbes

Continue reading
Telecommunications & Regulated Utilities