
Mandated “fair use” language has no place in trade promotion authority

Popular Media

Earlier this week, Senators Orrin Hatch and Ron Wyden and Representative Paul Ryan introduced bipartisan, bicameral legislation, the Bipartisan Congressional Trade Priorities and Accountability Act of 2015 (otherwise known as Trade Promotion Authority, or “fast track” negotiating authority). The bill would enable the Administration to negotiate free trade agreements subject to appropriate congressional review.

Nothing bridges partisan divides like free trade.

Top presidential economic advisors from both parties support TPA. And the legislation was greeted with enthusiastic support from the business community. Indeed, a letter supporting the bill was signed by 269 of the country’s largest and most significant companies, including Apple, General Electric, Intel, and Microsoft.

Among other things, the legislation includes language calling on trading partners to respect and protect intellectual property. That language in particular was (not surprisingly) widely cheered in a letter to Congress signed by a coalition of sixteen technology, content, manufacturing and pharmaceutical trade associations, representing industries accounting for (according to the letter) “approximately 35 percent of U.S. GDP, more than one quarter of U.S. jobs, and 60 percent of U.S. exports.”

Strong IP protections also enjoy bipartisan support in much of the broader policy community. Indeed, ICLE recently joined sixty-seven think tanks, scholars, advocacy groups and stakeholders on a letter to Congress expressing support for strong IP protections, including in free trade agreements.

Despite this overwhelming support for the bill, the Internet Association (a trade association representing 34 Internet companies, including giants like Google and Amazon but mostly smaller companies like Coinbase and OkCupid) expressed concern with the intellectual property language in the TPA legislation, asserting that “[i]t fails to adopt a balanced approach, including the recognition that limitations and exceptions in copyright law are necessary to promote the success of Internet platforms both at home and abroad.”

But the proposed TPA bill does recognize “limitations and exceptions in copyright law,” as the Internet Association is presumably well aware. Among other things, the bill supports “ensuring accelerated and full implementation of the Agreement on Trade-Related Aspects of Intellectual Property Rights,” which specifically mentions exceptions and limitations on copyright, and it advocates “ensuring that the provisions of any trade agreement governing intellectual property rights that is entered into by the United States reflect a standard of protection similar to that found in United States law,” which also recognizes copyright exceptions and limitations.

What the bill doesn’t do — and wisely so — is advocate for the inclusion of mandatory fair use language in U.S. free trade agreements.

Fair use is an exception under U.S. copyright law to the normal rule that one must obtain permission from the copyright owner before exercising any of the exclusive rights in Section 106 of the Copyright Act.

Including such language in TPA would require U.S. negotiators to demand that trading partners enact U.S.-style fair use language. But as ICLE discussed in a recent White Paper, if broad, U.S.-style fair use exceptions are infused into trade agreements they could actually increase piracy and discourage artistic creation and innovation — particularly in nations without a strong legal tradition implementing such provisions.

All trade agreements entered into by the U.S. since 1994 include a mechanism for trading partners to enact copyright exceptions and limitations, including fair use, should they so choose. These copyright exceptions and limitations must conform to a global standard — the so-called “three-step test” — established under the auspices of the 1994 Trade-Related Aspects of Intellectual Property Rights (TRIPS) Agreement, with roots going back to the 1967 amendments to the 1886 Berne Convention.

According to that standard,

Members shall confine limitations or exceptions to exclusive rights to

  1. certain special cases, which
  2. do not conflict with a normal exploitation of the work and
  3. do not unreasonably prejudice the legitimate interests of the right holder.

This three-step test provides a workable standard for balancing copyright protections with other public interests. Most important, it sets flexible (but by no means unlimited) boundaries: rather than squeezing every jurisdiction into the same box, it accommodates a wide range of exceptions and limitations to copyright protection, from the U.S. fair use approach, to the fair dealing exception in other common law countries, to the various statutory exceptions adopted in civil law jurisdictions.

Fair use is an inherently common law concept, developed through case-by-case analysis within a system of binding precedent. In the U.S. it has been codified by statute, but only after two centuries of common law development. Even as codified, fair use takes the form of guidance to judicial decision-makers assessing whether any particular use of a copyrighted work merits the exception; it is not a prescriptive statement, and judicial interpretation continues to define and evolve the doctrine.

Most countries in the world, on the other hand, have civil law systems that spell out specific exceptions to copyright protection, that don’t rely on judicial precedent, and that are thus incompatible with the common law fair use approach. The importance of this legal flexibility can’t be overstated: only four countries out of the 166 signatories to the Berne Convention have adopted fair use since 1967.

Additionally, from an economic perspective the rationale for fair use would seem to be receding, not expanding, further eroding the justification for its mandatory adoption via free trade agreements.

As digital distribution, the Internet and a host of other technological advances have reduced transaction costs, it has become easier and cheaper for users to license copyrighted content. As a result, the need to rely on fair use to facilitate socially valuable uses that prohibitive contracting costs would otherwise preclude is diminished. Indeed, it’s even possible that the existence of fair use exceptions inhibits the development of these sorts of mechanisms for simple, low-cost agreements between owners and users of content – with consequences beyond the material that is subject to the exceptions. Some socially valuable uses, like parody, may still merit exceptions because of rights holders’ unwillingness, rather than inability, to license; but U.S.-style fair use is in no way necessary to facilitate such exceptions. In short, the boundaries of copyright exceptions should be contracting, not expanding.
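To put the point in stylized terms (the formalization here is our own gloss on the market-failure theory of fair use, not anything drawn from the White Paper or the bill): a use that generates value v for the user will be licensed whenever

\[ v > f + t \]

where f is the license fee and t is the transaction cost of locating the owner and contracting. Fair use does its real economic work in the region where v is positive but smaller than f + t because t is large. As digital distribution and automated licensing push t toward zero, that region shrinks, and with it the market-failure rationale for a broad exception.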

It’s also worth noting that simple marketplace observations seem to undermine assertions by Internet companies that they can’t thrive without fair use. Google Search, for example, has grown big enough to attract the (misguided) attention of EU antitrust regulators, despite no European country having enacted a U.S.-style fair use law. Indeed, European regulators claim that the company has a 90% share of the market — without fair use.

Meanwhile, companies like Netflix contend that their ability to cache temporary copies of video content in order to improve streaming quality would be imperiled without fair use. But it’s hard to see how Netflix is able to negotiate extensive, complex contracts with copyright holders to actually show their content, yet is somehow unable to negotiate an additional clause or two in those contracts to ensure the quality of those performances without fair use.

Properly bounded exceptions and limitations are an important aspect of any copyright regime. But given the mix of legal regimes among current prospective trading partners, as well as other countries with whom the U.S. might at some stage develop new FTAs, it’s highly likely that the introduction of U.S.-style fair use rules would be misinterpreted and misapplied in certain jurisdictions, resulting in excessively lax copyright protection that undermines incentives to create and innovate. Of course, for the self-described consumer advocates pushing for fair use, this is surely the goal. Further, mandating the inclusion of fair use in trade agreements through TPA legislation would, in essence, force the U.S. to ignore the legal regimes of its trading partners and weaken the protection of copyright in trade agreements, again undermining the incentive to create and innovate.

There is no principled reason, in short, for TPA to mandate adoption of U.S.-style fair use in free trade agreements. Congress should pass TPA legislation as introduced, and resist any rent-seeking attempts to include fair use language.

Filed under: contracts, copyright, intellectual property, international center for law & economics, international politics, international trade, technology Tagged: copyright, copyright law, fair use, fast track, free trade agreements, Intellectual property, Intellectual Property Rights, TPA, trade agreement, trade agreements

Continue reading
Financial Regulation & Corporate Governance

Amicus Brief, Howard Stirk Holdings, LLC et al. v. FCC, D.C. Circuit

Amicus Brief

Summary

“‘Capricious’ is defined as ‘given to sudden and unaccountable changes of mood or behavior.’ That is just the word to describe the FCC’s decision in its 2014 Order to reverse a quarter century of agency practice by a vote of 3-to-2 and suddenly declare unlawful scores of JSAs between local television broadcast stations, many of which were originally approved by the FCC and have been in place for a decade or longer. The FCC’s action was not only capricious, but also contrary to law for two fundamental reasons.

First, the 2014 Order extends the FCC’s outdated ‘duopoly’ rule to JSAs that have never before been subject to it, many of which were blessed by the agency, without first determining whether that rule is still in the public interest. The ‘duopoly’ rule — first adopted in 1964 during the age of black-and-white TV — prohibits one entity from owning FCC licenses to two or more TV stations in the same local market unless there are at least eight independently owned stations in that market…The FCC’s 2014 Order makes a mockery of this congressional directive. In it, the Commission announced that, instead of completing its statutorily-mandated 2010 Quadrennial Review of its local ownership rules, it would roll that review into a new 2014 Quadrennial Review, while retaining its duopoly rule pending completion of that review because it had ‘tentatively’ concluded that it was still necessary. This Court should not accept this regulatory legerdemain. The 1996 Act does not allow the FCC to retain its duopoly rule in its current form without making the statutorily-required determination that it is still necessary. A ‘tentative’ conclusion that does not take into account the significant changes both in competition policy and in the market for video programming that have occurred since the current rule was first adopted in 1999 is not an acceptable substitute.

Second, having illegally retained the outdated duopoly rule, the 2014 Order then dramatically expands its scope by amending the FCC’s local ownership attribution rules to make the rule applicable to JSAs, which had never before been subject to it. The Commission thereby suddenly declares unlawful JSAs in scores of local markets, many of which have been operating for a decade or longer without any harm to competition. Even more remarkably, it does so despite the fact that both the DOJ and the FCC itself had previously reviewed many of these JSAs and concluded that they were not likely to lessen competition. In doing so, the FCC also fails to examine the empirical evidence accumulated over the nearly two decades some of these JSAs have been operating. That evidence shows that many of these JSAs have substantially reduced the costs of operating TV stations and improved the quality of their programming without causing any harm to competition, thereby serving the public interest…”

Continue reading
Telecommunications & Regulated Utilities

Don’t tread on my Internet

Popular Media

Ben Sperry and I have a long piece on net neutrality in the latest issue of Reason Magazine entitled, “How to Break the Internet.” It’s part of a special collection of articles and videos dedicated to the proposition “Don’t Tread on My Internet!”

Reason has put together a great bunch of material, and packaged it in a special retro-designed page that will make you think it’s the 1990s all over again (complete with flaming graphics and dancing Internet babies).

Here’s a taste of our article:

“Net neutrality” sounds like a good idea. It isn’t.

As political slogans go, the phrase net neutrality has been enormously effective, riling up the chattering classes and forcing a sea change in the government’s decades-old hands-off approach to regulating the Internet. But as an organizing principle for the Internet, the concept is dangerously misguided. That is especially true of the particular form of net neutrality regulation proposed in February by Federal Communications Commission (FCC) Chairman Tom Wheeler.

Net neutrality backers traffic in fear. Pushing a suite of suggested interventions, they warn of rapacious cable operators who seek to control online media and other content by “picking winners and losers” on the Internet. They proclaim that regulation is the only way to stave off “fast lanes” that would render your favorite website “invisible” unless it’s one of the corporate-favored. They declare that it will shelter startups, guarantee free expression, and preserve the great, egalitarian “openness” of the Internet.

No decent person, in other words, could be against net neutrality.

In truth, this latest campaign to regulate the Internet is an apt illustration of F.A. Hayek’s famous observation that “the curious task of economics is to demonstrate to men how little they really know about what they imagine they can design.” Egged on by a bootleggers-and-Baptists coalition of rent-seeking industry groups and corporation-hating progressives (and bolstered by a highly unusual proclamation from the White House), Chairman Wheeler and his staff are attempting to design something they know very little about — not just the sprawling Internet of today, but also the unknowable Internet of tomorrow.

And the rest of the contents of the site are great, as well. Among other things, there’s:

  • “Why are Edward Snowden’s supporters so eager to give the government more control over the Internet?” Matt Welch’s take on the contradictions in the thinking of net neutrality’s biggest advocates.
  • “The Feds want a back door into your computer. Again.” Declan McCullagh on the eternal return of government attempts to pre-hack your technology.
  • “Uncle Sam wants your Fitbit.” Adam Thierer on the coming clampdown on data coursing through the Internet of Things.
  • Mike Godwin on how net neutrality can hurt developing countries most of all.
  • “How states are planning to grab tax dollars for online sales,” by Veronique de Rugy
  • FCC Commissioner Ajit Pai on why net neutrality is “a solution that won’t work to a problem that simply doesn’t exist.”
  • “8 great libertarian apps that make your world a little freer and a whole lot easier to navigate.”

There’s all that, plus enough flaming images and dancing babies to make your eyes bleed. Highly recommended!

Filed under: net neutrality, regulation, technology, telecommunications, television Tagged: ajit pai, internet, net neutrality, reason magazine, regulating the Internet

Continue reading
Telecommunications & Regulated Utilities

The Ninth Circuit botched its efficiencies analysis in the FTC v. St. Luke’s antitrust case

Popular Media

Earlier this week the International Center for Law & Economics, along with a group of prominent professors and scholars of law and economics, filed an amicus brief with the Ninth Circuit seeking rehearing en banc of the court’s FTC, et al. v. St. Luke’s case.

ICLE, joined by the Medicaid Defense Fund, also filed an amicus brief with the Ninth Circuit panel that originally heard the case.

The case involves the purchase by St. Luke’s Hospital of the Saltzer Medical Group, a multi-specialty physician group in Nampa, Idaho. The FTC and the State of Idaho sought to permanently enjoin the transaction under the Clayton Act, arguing that

[T]he combination of St. Luke’s and Saltzer would give it the market power to demand higher rates for health care services provided by primary care physicians (PCPs) in Nampa, Idaho and surrounding areas, ultimately leading to higher costs for health care consumers.

The district court agreed and its decision was affirmed by the Ninth Circuit panel.

Unfortunately, in affirming the district court’s decision, the Ninth Circuit made several errors in its treatment of the efficiencies offered by St. Luke’s in defense of the merger. Most importantly:

  • The court refused to recognize St. Luke’s proffered quality efficiencies, stating that “[i]t is not enough to show that the merger would allow St. Luke’s to better serve patients.”
  • The panel also applied the “less restrictive alternative” analysis in such a way that any theoretically possible alternative to a merger would discount those claimed efficiencies.
  • Finally, the Ninth Circuit panel imposed a much higher burden of proof for St. Luke’s to prove efficiencies than it did for the FTC to make out its prima facie case.

As we note in our brief:

If permitted to stand, the Panel’s decision will signal to market participants that the efficiencies defense is essentially unavailable in the Ninth Circuit, especially if those efficiencies go towards improving quality. Companies contemplating a merger designed to make each party more efficient will be unable to rely on an efficiencies defense and will therefore abandon transactions that promote consumer welfare lest they fall victim to the sort of reasoning employed by the panel in this case.

The following excerpts from the brief elaborate on the errors committed by the court and highlight their significance, particularly in the health care context:

The Panel implied that only price effects can be cognizable efficiencies, noting that the District Court “did not find that the merger would increase competition or decrease prices.” But price divorced from product characteristics is an irrelevant concept. The relevant concept is quality-adjusted price, and a showing that a merger would result in higher product quality at the same price would certainly establish cognizable efficiencies.

* * *

By placing the ultimate burden of proving efficiencies on the defendants and by applying a narrow, impractical view of merger specificity, the Panel has wrongfully denied application of known procompetitive efficiencies. In fact, under the Panel’s ruling, it will be nearly impossible for merging parties to disprove all alternatives when the burden is on the merging party to address any and every untested, theoretical less-restrictive structural alternative.

* * *

Significantly, the Panel failed to consider the proffered significant advantages that health care acquisitions may have over contractual alternatives or how these advantages impact the feasibility of contracting as a less restrictive alternative. In a complex integration of assets, “the costs of contracting will generally increase more than the costs of vertical integration.” (Benjamin Klein, Robert G. Crawford, and Armen A. Alchian, Vertical Integration, Appropriable Rents, and the Competitive Contracting Process, 21 J. L. & ECON. 297, 298 (1978)). In health care in particular, complexity is a given. Health care is characterized by dramatically imperfect information, and myriad specialized and differentiated products whose attributes are often difficult to measure. Realigning incentives through contract is imperfect and often unsuccessful. Moreover, the health care market is one of the most fickle, plagued by constantly changing market conditions arising from technological evolution, ever-changing regulations, and heterogeneous (and shifting) consumer demand. Such uncertainty frequently creates too many contingencies for parties to address in either writing or enforcing contracts, making acquisition a more appropriate substitute.

* * *

Sound antitrust policy and law do not permit the theoretical to triumph over the practical. One can always envision ways that firms could function to achieve potential efficiencies…. But this approach would harm consumers and fail to further the aims of the antitrust laws.

* * *

The Panel’s approach to efficiencies in this case demonstrates a problematic asymmetry in merger analysis. As FTC Commissioner Wright has cautioned:

Merger analysis is by its nature a predictive enterprise. Thinking rigorously about probabilistic assessment of competitive harms is an appropriate approach from an economic perspective. However, there is some reason for concern that the approach applied to efficiencies is deterministic in practice. In other words, there is a potentially dangerous asymmetry from a consumer welfare perspective of an approach that embraces probabilistic prediction, estimation, presumption, and simulation of anticompetitive effects on the one hand but requires efficiencies to be proven on the other. (Dissenting Statement of Commissioner Joshua D. Wright at 5, In the Matter of Ardagh Group S.A., and Saint-Gobain Containers, Inc., and Compagnie de Saint-Gobain)

* * *

In this case, the Panel effectively presumed competitive harm and then imposed unduly high evidentiary burdens on the merging parties to demonstrate actual procompetitive effects. The differential treatment and evidentiary burdens placed on St. Luke’s to prove competitive benefits is “unjustified and counterproductive.” (Daniel A. Crane, Rethinking Merger Efficiencies, 110 MICH. L. REV. 347, 390 (2011)). Such asymmetry between the government’s and St. Luke’s burdens is “inconsistent with a merger policy designed to promote consumer welfare.” (Dissenting Statement of Commissioner Joshua D. Wright at 7, In the Matter of Ardagh Group S.A., and Saint-Gobain Containers, Inc., and Compagnie de Saint-Gobain).

* * *

In reaching its decision, the Panel dismissed these very sorts of procompetitive and quality-enhancing efficiencies associated with the merger that were recognized by the district court. Instead, the Panel simply decided that it would not consider the “laudable goal” of improving health care as a procompetitive efficiency in the St. Luke’s case – or in any other health care provider merger moving forward. The Panel stated that “[i]t is not enough to show that the merger would allow St. Luke’s to better serve patients.” Such a broad, blanket conclusion can serve only to harm consumers.

* * *

By creating a barrier to considering quality-enhancing efficiencies associated with better care, the approach taken by the Panel will deter future provider realignment and create a “chilling” effect on vital provider integration and collaboration. If the Panel’s decision is upheld, providers will be considerably less likely to engage in realignment aimed at improving care and lowering long-term costs. As a result, both patients and payors will suffer in the form of higher costs and lower quality of care. This can’t be – and isn’t – the outcome to which appropriate antitrust law and policy aspires.

The scholars joining ICLE on the brief are:

  • George Bittlingmayer, Wagnon Distinguished Professor of Finance and Otto Distinguished Professor of Austrian Economics, University of Kansas
  • Henry Butler, George Mason University Foundation Professor of Law and Executive Director of the Law & Economics Center, George Mason University
  • Daniel A. Crane, Associate Dean for Faculty and Research and Professor of Law, University of Michigan
  • Harold Demsetz, UCLA Emeritus Chair Professor of Business Economics, University of California, Los Angeles
  • Bernard Ganglmair, Assistant Professor, University of Texas at Dallas
  • Gus Hurwitz, Assistant Professor of Law, University of Nebraska-Lincoln
  • Keith Hylton, William Fairfield Warren Distinguished Professor of Law, Boston University
  • Thom Lambert, Wall Chair in Corporate Law and Governance, University of Missouri
  • John Lopatka, A. Robert Noll Distinguished Professor of Law, Pennsylvania State University
  • Geoffrey Manne, Founder and Executive Director of the International Center for Law and Economics and Senior Fellow at TechFreedom
  • Stephen Margolis, Alumni Distinguished Undergraduate Professor, North Carolina State University
  • Fred McChesney, de la Cruz-Mentschikoff Endowed Chair in Law and Economics, University of Miami
  • Tom Morgan, Oppenheim Professor Emeritus of Antitrust and Trade Regulation Law, George Washington University
  • David Olson, Associate Professor of Law, Boston College
  • Paul H. Rubin, Samuel Candler Dobbs Professor of Economics, Emory University
  • D. Daniel Sokol, Professor of Law, University of Florida
  • Mike Sykuta, Associate Professor and Director of the Contracting and Organizations Research Institute, University of Missouri

The amicus brief is available here.

Filed under: Affordable Care Act, antitrust, federal trade commission, health care, international center for law & economics, law and economics, merger guidelines, mergers & acquisitions Tagged: Amicus Brief, Daniel Crane, Efficiencies, Federal Trade Commission, ftc, health care, Hospital Mergers, icle, international center for law and economics, Josh Wright, joshua wright, merger efficiencies, Ninth Circuit, primary care physicians, St. Luke’s

Continue reading
Antitrust & Consumer Protection

FTC Staff Report on Google: Much Ado About Nothing

Popular Media

The Wall Street Journal reported yesterday that the FTC Bureau of Competition staff report to the commissioners in the Google antitrust investigation recommended that the Commission approve an antitrust suit against the company.

While this is excellent fodder for a few hours of Twitter hysteria, it takes more than 140 characters to delve into the nuances of a 20-month federal investigation. And the bottom line is, frankly, pretty ho-hum.

As I said recently,

One of life’s unfortunate certainties, as predictable as death and taxes, is this: regulators regulate.

The Bureau of Competition staff is made up of professional lawyers — many of them litigators, whose existence is predicated on there being actual, you know, litigation. If you believe in human fallibility at all, you have to expect that, when they err, FTC staff errs on the side of too much, rather than too little, enforcement.

So is it shocking that the FTC staff might recommend that the Commission undertake what would undoubtedly have been one of the agency’s most significant antitrust cases? Hardly.

Nor is it surprising that the commissioners might not always agree with staff. In fact, staff recommendations are ignored all the time, for better or worse. Here are just a few examples: the R.J. Reynolds/Brown & Williamson merger, POM Wonderful, the Home Shopping Network/QVC merger, cigarette advertising. No doubt there are many, many more.

Regardless, it also bears pointing out that the staff did not recommend the FTC bring suit on the central issue of search bias “because of the strong procompetitive justifications Google has set forth”:

Complainants allege that Google’s conduct is anticompetitive because it forecloses alternative search platforms that might operate to constrain Google’s dominance in search and search advertising. Although it is a close call, we do not recommend that the Commission issue a complaint against Google for this conduct.

But this caveat is enormous. To report this as the FTC staff recommending a case is seriously misleading. Here they are forbearing from bringing 99% of the case against Google, and recommending suit on the marginal 1% issues. It would be more accurate to say, “FTC staff recommends no case against Google, except on a couple of minor issues which will be immediately settled.”

And in fact it was on just these minor issues that Google agreed to voluntary commitments to curtail some conduct when the FTC announced it was not bringing suit against the company.

The Wall Street Journal quotes some other language from the staff report bolstering the conclusion that this is a complex market, the conduct at issue was ambiguous (at worst), and supporting the central recommendation not to sue:

We are faced with a set of facts that can most plausibly be accounted for by a narrative of mixed motives: one in which Google’s course of conduct was premised on its desire to innovate and to produce a high quality search product in the face of competition, blended with the desire to direct users to its own vertical offerings (instead of those of rivals) so as to increase its own revenues. Indeed, the evidence paints a complex portrait of a company working toward an overall goal of maintaining its market share by providing the best user experience, while simultaneously engaging in tactics that resulted in harm to many vertical competitors, and likely helped to entrench Google’s monopoly power over search and search advertising.

On a global level, the record will permit Google to show substantial innovation, intense competition from Microsoft and others, and speculative long-run harm.

This is exactly when you want antitrust enforcers to forbear. Predicting anticompetitive effects is difficult, and conduct that looks potentially problematic may simultaneously constitute vigorous competition.

That the staff concluded that some of what Google was doing “harmed competitors” isn’t surprising — there were lots of competitors parading through the FTC on a daily basis claiming Google harmed them. But antitrust is about protecting consumers, not competitors. Far more important is the staff finding of “substantial innovation, intense competition from Microsoft and others, and speculative long-run harm.”

Indeed, the combination of “substantial innovation,” “intense competition from Microsoft and others,” and “Google’s strong procompetitive justifications” suggests a well-functioning market. It similarly suggests an antitrust case that the FTC would likely have lost. The FTC’s litigators should probably be grateful that the commissioners had the good sense to vote to close the investigation.

Meanwhile, the Wall Street Journal also reports that the FTC’s Bureau of Economics simultaneously recommended that the Commission not bring suit at all against Google. It is not uncommon for the lawyers and the economists at the Commission to disagree. And as a general (though not inviolable) rule, we should be happy when the Commissioners side with the economists.

While the press, professional Google critics, and the company’s competitors may want to make this sound like a big deal, the actual facts of the case and a pretty simple error-cost analysis suggest that not bringing a case was the correct course.
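For readers unfamiliar with the framework: error-cost analysis, in the spirit of Frank Easterbrook’s “The Limits of Antitrust,” weighs the expected costs of the two ways an enforcer can err. A stylized statement (our gloss, not anything in the staff report) is

\[ E[C] = p_{FP} C_{FP} + p_{FN} C_{FN} + C_{admin} \]

where \(p_{FP}\) and \(C_{FP}\) are the probability and social cost of a false positive (condemning procompetitive conduct), \(p_{FN}\) and \(C_{FN}\) the probability and cost of a false negative (permitting anticompetitive conduct), and \(C_{admin}\) the direct costs of enforcement and litigation. A record showing substantial innovation and intense competition implies a low \(p_{FN}\) and a high \(C_{FP}\): precisely the conditions under which closing the investigation minimizes expected error costs.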

Filed under: antitrust, error costs, exclusionary conduct, exclusive dealing, federal trade commission, google, Internet search, law and economics, monopolization, settlements, technology Tagged: error costs, Federal Trade Commission, ftc, google

Continue reading
Antitrust & Consumer Protection

A Star Chamber for Internet Regulation

Popular Media

Excerpt

California’s leaders have worked long and hard to dispel the state’s image as an over-regulated maze for business. In 2012, those efforts culminated in a landmark law, signed by Democratic Gov. Jerry Brown, declaring that the state would not regulate the Internet, preserving the freewheeling innovation and vitality of one of the state’s most critical economic drivers.

But now an unelected state official reviewing the Comcast-Time Warner Cable deal has called for sweeping new Internet regulations — under the guise of its review of the transaction — that would render that bipartisan law a dead letter.

If Internet access, speeds and prices can suddenly be mandated by the California Public Utilities Commission (CPUC) under this kind of flimsy pretense, the legislative policy of Internet freedom and deregulation online would no longer exist. Hopefully cooler heads will prevail as the full commission moves forward.

Overall, the CPUC’s proposed decision correctly recognized that the transaction should be approved. That was no great surprise: because Comcast and Time Warner Cable operate in separate markets, it was always clear that the competitive issues raised by this deal were limited and could be readily addressed.

But most of the conditions suggested in the proposed decision have nothing to do with addressing the sort of competitive concerns that the deal might raise. Instead they address broader policy issues that are the province of the Legislature, not an unelected state official supposedly reviewing a single business transaction.

Continue reading at Los Angeles Daily News

 

Continue reading
Telecommunications & Regulated Utilities

Netflix’s Predictable Net Neutrality Conversion

Popular Media

Excerpt

The line between a “principled change of views” and a “craven flip-flop” is a fine one. But if there were a land-speed record for changing one’s mind, Netflix would have just won it.

A mere four days into the new net neutrality world order it helped to usher in, Netflix was already expressing doubts about the regnant Title II canon. Twitter was agog much of Wednesday with reporters politely noting the absurdity. “Can’t believe that @netflix, the poster child for #Title II, is now saying it really didn’t mean it. WTF,” remarked one journalist.

It’s not an exaggeration to say that Netflix was one of the chief evangelists for an exhaustive net neutrality decree. One of the watershed moments in the debate leading up to the FCC’s vote last Thursday was Netflix’s fervent claim that the strongest possible rules were needed to prevent Internet providers from imposing “online tolls” or discriminating against the video giant’s traffic. With Google sitting out this round, net neutrality activists needed a corporate crusader. They found one in Netflix.

Of course, that inflammatory charge was never actually borne out by the facts. It turns out that Netflix (and its partner, Cogent) was slowing down its own traffic with spurious routing decisions, seemingly tailor-made to create the spectacle it needed to galvanize the faithful. The gambit worked. Disingenuous or not, there’s no disputing that it greatly influenced the outcome, ultimately leading to the passage of the most onerous form of net neutrality rules.

And this isn’t the only case where Netflix is sitting comfortably on both sides of the issue. The company recently announced a new initiative in Australia where one ISP will exempt Netflix traffic from its data limits. That practice — called “zero rating” — isn’t unusual, especially in the developing world, but it raises plenty of hackles among net neutrality disciples.

Continue reading on TheHill.com

Continue reading
Telecommunications & Regulated Utilities

ICLE Comments, Promoting Innovation and Competition in the Provision of MVPD Services

Regulatory Comments

Summary

In this proceeding, the Commission proposes to expand the definition of a multichannel video programming distributor (MVPD) to encompass “subscription linear” online video distributors (OVDs), defined as services that make “multiple streams of prescheduled video programming available for purchase” over the Internet. We believe this proposal is unwise as a policy matter and incorrect as a matter of statutory interpretation. Instead, we urge the Commission to affirm the Media Bureau’s Transmission Path Interpretation, which holds that an MVPD must “own or operate the facilities for delivering content to consumers.” We contend that this is the only permissible construction of the term MVPD as used in the Communications Act.

Continue reading
Telecommunications & Regulated Utilities

The FCC’s Net Neutrality Victory Is Anything But

Popular Media

Excerpt

The day after the FCC’s net neutrality vote, Washington was downright frigid. I’d spoken at three events about the ruling, mentioning at each that the order could be overturned in court. I was tired and ready to go home.

I could see my Uber at the corner when I felt a hand on my arm. The woman’s face was anxious. “I heard your talk,” she said. “If net neutrality is overturned, will I still be able to Skype with my son in Turkey?”

The question reveals the problem with the supposed four million comments submitted in support of net neutrality. Almost no one really gets it. Fewer still understand Title II, the regulatory tool the FCC just invoked to impose its conception of net neutrality on the Internet.

Some internet engineers and innovators do get it. Mark Cuban rightly calls the uncertainty created by Title II a “Whac-a-Mole environment,” driven by political whims. And telecom lawyers? They love it: whatever happens, the inevitable litigation will mean a decade’s worth of job security.

As I’ve said in technically detailed comments, academic coalition letters, papers, and even here at Wired, while “net neutrality” sounds like a good idea, it isn’t. And reclassifying the internet under Title II, an antiquated set of laws repurposed in the 1930s for Ma Bell, is the worst way to regulate dynamic digital services.

Continue reading on WIRED.com

 

Continue reading
Telecommunications & Regulated Utilities