Showing Latest Publications

Google: Great Deal or Greatest Deal?

TOTM Critics of Google have argued that users overvalue Google’s services in relation to the data they give away.  One breathtaking headline asked Who Would Pay $5,000 . . .

Critics of Google have argued that users overvalue Google’s services in relation to the data they give away.  One breathtaking headline asked Who Would Pay $5,000 to Use Google?, suggesting that Google and its advertisers can make as much as $5,000 off of individuals whose data they track. Scholars, such as Nathan Newman, have used this to argue that Google exploits its users through data extraction. But the question remains: how good a deal is Google? My contention is that Google’s value to most consumers far surpasses the value supposedly extracted from them in data.

Read the full piece here

Continue reading
Antitrust & Consumer Protection

Manufacturing (Broadband) Dissent

Popular Media I have a new post up at TechPolicyDaily.com, excerpted below, in which I discuss the growing body of (surprisingly uncontroversial) work showing that broadband in . . .

I have a new post up at TechPolicyDaily.com, excerpted below, in which I discuss the growing body of (surprisingly uncontroversial) work showing that broadband in the US compares favorably to that in the rest of the world. My conclusion, which is frankly more cynical than I like, is that concern about the US “falling behind” is a manufactured debate. It’s a compelling story that the media likes and that plays well for (some) academics.

Before the excerpt, I’d also like to quote one of today’s headlines from Slashdot:

“Google launched the citywide Wi-Fi network with much fanfare in 2006 as a way for Mountain View residents and businesses to connect to the Internet at no cost. It covers most of the Silicon Valley city and worked well until last year, as Slashdot readers may recall, when connectivity got rapidly worse. As a result, Mountain View is installing new Wi-Fi hotspots in parts of the city to supplement the poorly performing network operated by Google. Both the city and Google have blamed the problems on the design of the network. Google, which is involved in several projects to provide Internet access in various parts of the world, said in a statement that it is ‘actively in discussions with the Mountain View city staff to review several options for the future of the network.’”

The added emphasis is mine. It is added to draw attention to the simple point that designing and building networks is hard. Like, really, really hard. Folks think that it’s easy, because they have small networks in their homes or offices — so surely they can scale to a nationwide network without much trouble. But all sorts of crazy stuff starts to happen when we substantially increase the scale of IP networks. This is just one of the very many things that should give us pause about calls for the buildout of a government-run or government-sponsored Internet infrastructure.

Another of those things is whether there’s any need for that. Which brings us to my TechPolicyDaily.com post:

In the week or so since TPRC, I’ve found myself dwelling on an observation I made during the conference: how much agreement there was, especially on issues usually thought of as controversial. I want to take a few paragraphs to consider what was probably the most surprisingly non-controversial panel of the conference, the final Internet Policy panel, in which two papers – one by ITIF’s Rob Atkinson and the other by James McConnaughey from NTIA – were presented that showed that broadband Internet service in the US (and Canada, though I will focus on the US) compares quite well to that offered in the rest of the world. […]

But the real question that this panel raised for me was: given how well the US actually compares to other countries, why does concern about the US falling behind dominate so much discourse in this area? When you get technical, economic, legal, and policy experts together in a room – which is what TPRC does – the near consensus seems to be that the “kids are all right”; but when you read the press, or much of the high-profile academic literature, “the sky is falling.”

The gap between these assessments could not be larger. I think that we need to think about why this is. I hate to be cynical or disparaging – especially since I know strong advocates on both sides and believe that their concerns are sincere and efforts earnest. But after this year’s conference, I’m having trouble shaking the feeling that ongoing concern about how US broadband stacks up to the rest of the world is a manufactured debate. It’s a compelling, media- and public-friendly, narrative that supports a powerful political agenda. And the clear incentives, for academics and media alike, are to find problems and raise concerns. […]

Compare this to the Chicken Little narrative. As I was writing this, I received a message from a friend asking my views on an Economist blog post that shares data from the ITU’s just-released Measuring the Information Society 2013 report. This data shows that the US has some of the highest prices for pre-paid handset-based mobile data around the world. That is, it reports the standard narrative – and it does so without looking at the report’s methodology. […]

Even more problematic than what the Economist blog reports, however, is what it doesn’t report. [The report contains data showing the US has some of the lowest cost fixed broadband and mobile broadband prices in the world. See the full post at TechPolicyDaily.com for the numbers.]

Now, there are possible methodological problems with these rankings, too. My point here isn’t to debate over the relative position of the United States. It’s to ask why the “story” about this report cherry-picks the alarming data, doesn’t consider its methodology, and ignores the data that contradicts its story.

Of course, I answered that question above: It’s a compelling, media- and public-friendly, narrative that supports a powerful political agenda. And the clear incentives, for academics and media alike, are to find problems and raise concerns. Manufacturing debate sells copy and ads, and advances careers.

Filed under: federal communications commission, net neutrality, regulation, technology, telecommunications, truth on the market, wireless Tagged: Broadband, FCC, Internet Access, Network neutrality, rankings, TPRC

Continue reading
Telecommunications & Regulated Utilities

My New Paper on Defining Exclusionary Conduct

Popular Media In our recent blog symposium on Section 5 of the FTC Act, Latham & Watkins partner Tad Lipsky exposed one of antitrust’s dark little secrets: . . .

In our recent blog symposium on Section 5 of the FTC Act, Latham & Watkins partner Tad Lipsky exposed one of antitrust’s dark little secrets: Nobody really knows what Sherman Act Section 2 forbids.  The provision bans monopolization, attempted monopolization, and conspiracies to monopolize, and courts have articulated formal elements for each claim.  But the element common to the two unilateral offenses—“exclusionary conduct”—remains essentially undefined.  Lipsky writes:

123 years of Section 2 enforcement and the best our Supreme Court can do is the Grinnell standard, defining [exclusionary conduct] as the “willful acquisition or maintenance of [monopoly] power as distinguished from growth or development as a consequence of a superior product, business acumen, or historic accident.”  Is this Grinnell definition that much better than [Section 5’s reference to] “unfair methods of competition”?

No, it’s not.  Nor are any of the other commonly cited judicial definitions of exclusionary conduct, such as “competition not on the merits.”  As Einer Elhauge has observed, such judicial definitions are not just vague but vacuous.

This is problematic because business planners need clarity.  On some specific unilateral practices—straightforward price cuts and aggressive input-bidding, for example—courts have provided clear liability rules and safe harbors.  But in a dynamic economy, business people are constantly coming up with new ideas for sales-enhancing practices that might have the effect of disadvantaging rivals, of “excluding” them from the market.  Absent some general understanding of what constitutes an “unreasonably exclusionary” act, business people are likely to forego novel but efficient sales-enhancing practices, to the detriment of consumers.

In the last decade or so, commentators have proposed four generally applicable definitions of unreasonably exclusionary conduct.  Judge Posner suggested that such conduct be defined as acts that could exclude an “equally efficient rival” from the perpetrator’s market (the “EER” approach).  Post-Chicago theorists would equate unreasonably exclusionary conduct with unjustifiably “raising rivals’ costs” (the “RRC” approach).  The Areeda-Hovenkamp treatise prescribes a balancing of the “consumer welfare effects” resulting from the practice at issue (“CWE-balancing”).  And the U.S. Department of Justice has called for defining unreasonably exclusionary conduct as that which would make “no economic sense” apart from its tendency to enhance market power (the “NES” test, or “NEST”).

Each of these approaches, it turns out, is troubling.  The EER approach is underdeterrent in that it fails to condemn practices that cause rivals to be less efficient than the perpetrator.  The RRC, CWE-balancing, and NEST approaches turn out to be difficult to apply—and largely indeterminate—for any exclusion-causing conduct involving “degrees.” For example, a 15% loyalty rebate conditioned upon purchasing 70% of one’s requirements from the defendant requires a certain “degree” of loyalty and provides a certain “degree” of price reduction.  It might well turn out that some degree of required loyalty (e.g., the increment from 60% to 70%) or some degree of discount (e.g., the increment from 10% to 15%) either (1) raised rivals’ costs unjustifiably (RRC) or (2) created greater consumer harm than benefit (CWE-balancing) or (3) made no economic sense but for its ability to enhance market power (NEST).  Because the RRC, CWE-balancing, and NEST approaches appear to require marginal analysis of exclusion-causing conduct, they become fairly inadministrable and indeterminate when applied to conduct involving degrees, a category that includes most of the novel conduct for which a generally applicable exclusionary conduct definition would be useful.  Because they provide little guidance and no reliable safe harbors, the RRC, CWE-balancing, and NEST approaches are likely to overdeter efficient, but novel, business practices.

In light of these and other difficulties with the proposed exclusionary conduct definitions, a number of scholars now advocate abandoning the search for a generally applicable definition and applying different liability standards to different types of behavior.  Eschewal of universal standards, though, is also troubling.  To the extent non-universalists are saying that there is no single definition of unreasonably exclusionary conduct—no common thread that runs through all instances of unreasonable exclusion—their position seems to violate rule of law norms.  After all, the Court has told us that unreasonably exclusionary conduct is an element of monopolization and attempted monopolization.  That means that the exclusionary conduct component of all Section 2 offenses must share something in common; otherwise, the “element” would consist of a non-exhaustive menu of unrelated features and would cease to be an element.

A less extreme “non-universalist” approach would concede that there is a single definition of unreasonably exclusionary conduct—that which reduces overall consumer welfare—but hold that there should be no universal test for identifying when a particular practice runs afoul of the definition.  This more defensible position resembles “rule utilitarianism” in ethical theory.  Rule utilitarians concede that morality is ultimately concerned with utility-maximization, but they would judge the morality of any particular act not on the basis of its actual consequences but instead according to whether it complies with a rule selected to maximize utility.  Similarly, “soft” non-universalists would select liability tests for particular business practices on the basis of whether those tests maximize overall consumer welfare, but they would evaluate particular instances of exclusion-causing behavior on the basis of whether they comply with applicable liability tests, not whether they actually enhance consumer welfare.

Because it reduces to a version of CWE-balancing (though at the rule level rather than the act level), “soft” non-universalism is subject to the same criticisms as CWE-balancing in general: it is difficult to apply and indeterminate.  Indeed, under a soft non-universal approach, a business planner considering a novel but efficient exclusion-causing practice would first have to predict the liability rule a reviewing court would adopt for the practice under consideration and then apply that rule.  Talk about a lack of clarity and reliable safe harbors!

I have recently authored a paper that critiques the proposed definitions of unreasonably exclusionary conduct as well as the non-universalist approaches discussed above and, finding each position deficient, proposes an alternative approach.  My approach would deem conduct to be unreasonably exclusionary if it would likely exclude from the perpetrator’s market a “competitive rival,” defined as a rival that is both as determined as the perpetrator and capable, at minimum efficient scale, of matching the perpetrator’s efficiency.  This “exclusion of a competitive rival” approach, the paper demonstrates, identifies a common thread running through instances of unreasonable exclusion, comports with prevailing intuitions about what constitutes appropriate competition, generates clear guidance and reliable safe harbors, and would minimize the sum of decision and error costs resulting from monopolization doctrine.

A draft of the paper, which is slated to appear as an article in the North Carolina Law Review, is available on SSRN.  Please download, and let me know if you have any comments.

Filed under: antitrust, exclusionary conduct, law and economics, monopolization, regulation

Continue reading
Antitrust & Consumer Protection

Credit Where It’s Due: How Payment Cards Benefit Consumers and Merchants and How Regulation Can Harm Them

Popular Media In recent years, some Canadian politicians and powerful interest groups have issued increasingly vocal calls for dramatic regulatory interventions into the country’s payment cards system. In particular, they have called for "hard cap" price controls on interchange fees, a ban on contractual terms that prohibit card-accepting merchants from imposing surcharges on consumers who use payment cards, and a ban on so-called "honour all cards" rules that require a merchant to accept all payment cards issued under any payment network’s logo.

Summary

In recent years, some Canadian politicians and powerful interest groups have issued increasingly vocal calls for dramatic regulatory interventions into the country’s payment cards system. In particular, they have called for “hard cap” price controls on interchange fees, a ban on contractual terms that prohibit card-accepting merchants from imposing surcharges on consumers who use payment cards, and a ban on so-called “honour all cards” rules that require a merchant to accept all payment cards issued under any payment network’s logo.

Advocates claim that these interventions will benefit businesses (especially small and medium-sized merchants) and consumers. An examination of economic theory and available empirical evidence, however, demonstrates that these claims of the benefits of intervention are unsupported. In particular, review of the effects of payment card regulation in the U.S., Australia, and elsewhere suggests that price controls and other interventions result in higher banking and credit card fees for consumers, while retailers are unlikely to pass on much of the savings to consumers. There is every reason to believe the same outcome will occur in Canada if the current proposals to regulate are enacted, and unless existing regulations are relaxed.

Instead of imposing regulations on the operators of payment card networks, which would undermine competition and harm consumers, Canada should seek to promote increased competition. The most effective way it can do that is by removing currently existing legal barriers to competition that support a monopolistic structure in the debit card market and prevent Interac and other card networks from competing fairly with each other. Equally important is avoiding the imposition of costly new restrictions that would interfere with freely bargained contractual rules between card networks and merchants that benefit consumers, such as no-surcharge rules (which protect consumers from surprise price increases at the register) or honour-all-cards rules (which guarantee ubiquitous acceptance of consumers’ cards).

Download: “Credit Where It’s Due: How Payment Cards Benefit Consumers and Merchants and How Regulation Can Harm Them”

Continue reading
Financial Regulation & Corporate Governance

Comment, Connect America Fund Universal Service Gigabit Communities

Regulatory Comments "It’s been said, of the newest technology, that speed could change everything. If only we could cross a certain speed threshold, our basic infrastructure would catalyze new opportunities we can scarcely even conceive of..."

Summary

“It’s been said, of the newest technology, that speed could change everything. If only we could cross a certain speed threshold, our basic infrastructure would catalyze new opportunities we can scarcely even conceive of. All government needs to do is prime the pump: fund a demonstration project to prove that we can do it, and markets will follow. The demand may not be there yet, but “if you build it, they will come…”

“There are some impediments to the sort of broadband connectivity people actually do want — most importantly local and state regulations that reduce competition and increase the cost of new facilities. The FCC should consider ways to encourage state and local governments to reduce these regulatory barriers rather than create an expensive new program to subsidize a particular technology (fiber) picked because of an arbitrary, top-down decision that people should have a certain speed – even if they don’t yet want it. The FCC should heed the wisdom of Australia’s new Communications Minister who explained his government’s decision to abandon plans for a national fiber-to-the-home network in favor of subsidizing far less expensive, but slightly slower, fiber-to-the-node connectivity…”

Continue reading
Telecommunications & Regulated Utilities

Reply Comments, In the Matter of Connect America Fund Universal Service Gigabit Communities Race-to-the-Top Program Petition

Regulatory Comments "It’s been said, of the newest technology, that speed could change everything. If only we could cross a certain speed threshold, our basic infrastructure would catalyze new opportunities we can scarcely even conceive of..."

Summary

“It’s been said, of the newest technology, that speed could change everything. If only we could cross a certain speed threshold, our basic infrastructure would catalyze new opportunities we can scarcely even conceive of. All government needs to do is prime the pump: fund a demonstration project to prove that we can do it, and markets will follow. The demand may not be there yet, but “if you build it, they will come.”

Continue reading
Telecommunications & Regulated Utilities

The Second Century of the Federal Trade Commission

Popular Media You may not know much about the most important agency in Washington when it comes to regulating new technologies. Founded 99 years ago today, the Federal Trade Commission has become, for better or worse, the Federal Technology Commission.

Excerpt

You may not know much about the most important agency in Washington when it comes to regulating new technologies. Founded 99 years ago today, the Federal Trade Commission has become, for better or worse, the Federal Technology Commission.

The FTC oversees nearly every company in America. It polices competition by enforcing the antitrust laws. It tries to protect consumers by punishing deception and practices it deems “unfair.” It’s the general enforcer of corporate promises. It’s the de facto regulator of the media, from traditional advertising to Internet search and social networks. It handles novel problems of privacy, data security, online child protection, and patent claims, among others. Even Net neutrality may soon wind up in the FTC’s jurisdiction if the Federal Communications Commission’s rules are struck down in court.

But how should the FTC regulate technology? What’s the right mix of the certainty businesses need and the flexibility technological progress demands?

There are essentially three models: regulatory, discretionary and evolutionary.

The epitome of the traditional regulatory model is the FTC’s chief rival: the FCC. The 1996 Telecom Act runs nearly 47,000 words — 65 times longer than the Sherman Act of 1890, the primary antitrust law enforced by the FTC. The FCC writes tech-specific rules before the technology has even developed. Virginia Postrel described the mentality best in The Future and Its Enemies:

Technocrats are “for the future,” but only if someone is in charge of making it turn out according to plan. They greet every new idea with a “yes, but,” followed by legislation, regulation, and litigation…. By design, technocrats pick winners, establish standards, and impose a single set of values on the future.

The less technocratic alternative is the evolutionary model: build flexible law that evolves alongside technology. Learn from, and adapt to, the ever-changing technological and business environments.

On antitrust, that’s essentially what the FTC (along with the Department of Justice) does today. Judicial decisions are firmly grounded in economics, and this feeds back into the agencies’ enforcement actions. Antitrust law has become nearly synonymous with antitrust economics: both courts and agencies weigh the perils of both under- and over-enforcement in the face of unavoidable uncertainty about the future.

But much of what the FTC does falls into the discretionary model, unmoored from both sound economics and judicial oversight. The discretionary and evolutionary models share a similar legal basis and so are often confused, but they’re profoundly different: The discretionary model harms technological progress and undermines the rule of law, while the evolutionary model promotes both.

Continue reading on Tech Dirt

Continue reading
Antitrust & Consumer Protection

Appropriate humility from Verizon over corporations’ role in stopping NSA surveillance

Popular Media Like most libertarians I’m concerned about government abuse of power. Certainly the secrecy and seeming reach of the NSA’s information gathering programs is worrying. But . . .

Like most libertarians I’m concerned about government abuse of power. Certainly the secrecy and seeming reach of the NSA’s information gathering programs is worrying. But we can’t and shouldn’t pretend like there are no countervailing concerns (as Gordon Crovitz points out). And we certainly shouldn’t allow the fervent ire of the most radical voices — those who view the issue solely from one side — to impel technology companies to take matters into their own hands. At least not yet.

Rather, the issue is inherently political. And while the political process is far from perfect, I’m almost as uncomfortable with the radical voices calling for corporations to “do something,” without evincing any nuanced understanding of the issues involved.

Frankly, I see this as of a piece with much of the privacy debate that points the finger at corporations for collecting data (and ignores the value of their collection of data) while identifying government use of the data they collect as the actual problem. Typically most of my cyber-libertarian friends are with me on this: If the problem is the government’s use of data, then attack that problem; don’t hamstring corporations and the benefits they confer on consumers for the sake of a problem that is not of their making and without regard to the enormous costs such a solution imposes.

Verizon, unlike just about every other technology company, seems to get this. In a recent speech, John Stratton, head of Verizon’s Enterprise Solutions unit, had this to say:

“This is not a question that will be answered by a telecom executive, this is not a question that will be answered by an IT executive. This is a question that must be answered by societies themselves.”

“I believe this is a bigger issue, and press releases and fizzy statements don’t get at the issue; it needs to be solved by society.”

Stratton said that as a company, Verizon follows the law, and those laws are set by governments.

“The laws are not set by Verizon, they are set by the governments in which we operate. I think it’s important for us to recognise that we participate in debate, as citizens, but as a company I have obligations that I am going to follow.”

I completely agree. There may be a problem, but before we deputize corporations in the service of even well-meaning activism, shouldn’t we address this as the political issue it is first?

I’ve been making a version of this point for a long time. As I said back in 2006:

I find it interesting that the “blame” for privacy incursions by the government is being laid at Google’s feet. Google isn’t doing the . . . incursioning, and we wouldn’t have to saddle Google with any costs of protection (perhaps even lessening functionality) if we just nipped the problem in the bud. Importantly, the implication here is that government should not have access to the information in question–a decision that sounds inherently political to me. I’m just a little surprised to hear anyone (other than me) saying that corporations should take it upon themselves to “fix” government policy by, in effect, destroying records.

But at the same time, it makes some sense to look to Google to ameliorate these costs. Google is, after all, responsive to market forces, and (once in a while) I’m sure markets respond to consumer preferences more quickly and effectively than politicians do. And if Google perceives that offering more protection for its customers can be more cheaply done by restraining the government than by curtailing its own practices, then Dan [Solove]’s suggestion that Google take the lead in lobbying for greater legislative protections of personal information may come to pass. Of course we’re still left with the problem of Google and not the politicians bearing the cost of their folly (if it is folly).

As I said then, there may be a role for tech companies to take the lead in lobbying for changes. And perhaps that’s what’s happening. But the impetus behind it — the implicit threats from civil liberties groups, the position that there can be no countervailing benefits from the government’s use of this data, the consistent view that corporations should be forced to deal with these political problems, and the predictable capitulation (and subsequent grandstanding, as Stratton calls it) by these companies is not the right way to go.

I applaud Verizon’s stance here. Perhaps as a society we should come out against some or all of the NSA’s programs. But ideological moralizing and corporate bludgeoning aren’t the way to get there.

Filed under: business, corporate social responsibility, cost-benefit analysis, national security, politics, privacy, social responsibility, technology Tagged: John Stratton, National Security Agency, NSA, politics, Surveillance, Verizon, Verizon Communications

Continue reading
Financial Regulation & Corporate Governance

Appropriate humility from Verizon over corporations’ role in stopping NSA surveillance

TOTM Like most libertarians I’m concerned about government abuse of power. Certainly the secrecy and seeming reach of the NSA’s information gathering programs is worrying. But . . .

Like most libertarians I’m concerned about government abuse of power. Certainly the secrecy and seeming reach of the NSA’s information gathering programs is worrying. But we can’t and shouldn’t pretend like there are no countervailing concerns (as Gordon Crovitz points out). And we certainly shouldn’t allow the fervent ire of the most radical voices — those who view the issue solely from one side — to impel technology companies to take matters into their own hands. At least not yet.

Read the full piece here.

Continue reading
Data Security & Privacy