Credible Commitments, Adaptability, & Conservation Easements

Abstract

Conservation easements, a widely used tool to preserve land for conservation purposes, suffer from a fundamental flaw: they lack a means of adapting the permanent interests they create to changed conditions. This flaw is becoming more apparent as the early generation of these interests ages and climate change threatens to bring more rapid demands for adaptation of existing conservation goals. Drawing on lessons from international financial centers and from U.S. states that have succeeded in jurisdictional competition, this article argues that the law should embrace measures that enable such competition in providing for shared governance of land between conservation interests and private landowners.

Pro-Social Change for the Most Challenging: Marketing and Testing Harm Reduction for Conservation

Abstract

This paper investigates the effectiveness of pursuing conservation goals by promoting harm reduction, a once-controversial approach to health care that aims to reduce the harmful impacts of unhealthy behaviors without demanding full abstinence or stigmatizing those behaviors. Conservation proponents often heavily promote solutions more akin to full abstinence, which do not recognize the inherent preference trade-offs the heaviest users face when giving up a behavior that may be harmful to the environment, such as driving a car, eating meat and dairy, or watering a lawn. We employ two sequential field experiments to market and test the effectiveness of a smart irrigation controller, a lawn-watering efficiency device. This solution has a lower ex-ante expected impact on conservation than turf removal, the highest-impact solution in this context, but is nevertheless more aligned with the preferences of the heaviest users. We show that marketing this preference-aligned solution induces the highest adoption among the heaviest irrigators and those previously disinclined to conserve. Given these compliance patterns, our interventions lead to large and long-lasting individual and social benefits: water savings from the device recover its cost in half a year and are comparable in magnitude to one household’s basic (indoor) water needs. We find no meaningful increase in water usage among those irrigating less and no evidence of reduced turf removal, suggesting that the harm-reduction intervention grows, rather than cannibalizes, the adoption of water-conservation alternatives. Our results underscore the importance of considering heterogeneous preferences when designing interventions aimed at fostering pro-social behaviors such as conservation, and shed light on how to use marketing to engage the least pro-socially inclined.
Effects of Risk Attitudes and Information Friction on Willingness to Pay for Precautionary Building Standards

Abstract

Take-up rates for windstorm-resistant buildings remain relatively low even in areas with high exposure to hurricanes and tropical storms. To further investigate this issue, we extend the theory on willingness to pay (WTP) for prevention to include risk attitudes up to the fourth order, deriving total effects and testable predictions for mixed risk averters’ and mixed risk lovers’ WTP relative to higher-order risk-neutral benchmarks. We then employ field experiments to elicit and test the effects of higher-order risk attitudes (HORAs) and information friction on coastal homeowners’ WTP for precautionary building standards with insurance discounts. To elicit risk attitudes and WTP, we employ 50-50 model-free risk apportionment lotteries and payment-card WTP experiments. Empirical analyses reveal meaningful heterogeneity in the correlation between homeowners’ HORA subgroups and WTP for precautions that is partially consistent with theory. We demonstrate strong causal effects of information friction on WTP for precaution in the absence of a truncated WTP range; however, the effects appear to be positive among risk lovers.

Is the U.S. Insurance Industry Resilient to Climate Change? Insurer Capitalization and the Performance of State Guaranty Associations

Abstract

We assess the capacity of the U.S. property-liability insurance industry and the efficiency of the state guaranty fund system in response to large-scale loss events, in order to gauge the resilience of the current system to the growing challenges of climate change. We identify characteristics of the industry’s capital structure and the guaranty fund system that limit their ability to indemnify policyholders following extreme catastrophic losses. We also consider the sustainability of the system over time under assumptions of increasing loss frequency and severity. We find that some attributes of insurance guarantees present short-term problems for policyholders and create long-term challenges for competitive private insurance markets, particularly when a subset of insurers shoulders the burden for past losses.

Marketing & Experimentation for Social Change: Adapting to Drought in California

Abstract

In social change contexts such as conservation or public health, marketing can communicate information, nudge people toward more socially aligned behavior, or encourage adoption of long-run solutions that permanently shift personal outcomes and/or social spillovers. These marketing options, if effective, can substitute for regulatory change to address the social issue in question. In this paper, we focus on California’s drought response, where cease-and-desist orders and community-level fines are contingent on the effectiveness of local-level voluntary change. We illustrate that a marketing challenge for the favored voluntary conservation approach of turf removal is that it ignores the preference trade-offs of those who consume the most and/or are least motivated by the social objective of conservation. We conduct sequential randomized controlled trials to evaluate the marketing and effectiveness of an Internet of Things (IoT) irrigation controller that helps consumers irrigate and grow their lawns more efficiently. We find that our marketing interventions for this “preference-aligned” solution have higher response rates among heavy irrigators who would not otherwise conserve. Rather than cannibalizing other solutions with greater potential water savings, as some conservationists worry, our interventions lead to large, persistent reductions in water usage.

Addressing Green Energy’s ‘Resource Curse’

Abstract

Policy changes that encourage non-fossil-fuel energy mean increased reliance on batteries and other technologies that must develop rapidly. This article focuses on batteries, noting that key inputs come from corrupt countries, so little of the benefit of exports flows to citizens, and that many key finished mineral products come from China. The United States thereby becomes more reliant on autocratic regimes. Using cobalt as an example, this article examines the nature of its production and the inability of the United States to shoulder its share of the environmental burden of mineral extraction and refining, and looks to previous examples of countries “cursed” with valuable resources desired by wealthy countries. Hints as to how the “resource curse” problem may be addressed arise from the United States’ own mineral-extraction history of many decades past.

Antitrust Dystopia and Antitrust Nostalgia: Alarmist Theories of Harm in Digital Markets and Their Origins

Introduction

The dystopian novel is a powerful literary genre. It has given us such masterpieces as Nineteen Eighty-Four, Brave New World, Fahrenheit 451, and Animal Farm. Though these novels often shed light on some of the risks that contemporary society faces and the zeitgeist of the time when they were written, they almost always systematically overshoot the mark (whether intentionally or not) and severely underestimate the radical improvements commensurate with the technology (or other causes) that they fear. Nineteen Eighty-Four, for example, presciently saw in 1949 the coming ravages of communism, but it did not guess that markets would prevail, allowing us all to live freer and more comfortable lives than any preceding generation. Fahrenheit 451 accurately feared that books would lose their monopoly as the foremost medium of communication, but it completely missed the unparalleled access to knowledge that today’s generations enjoy. And while Animal Farm portrayed a metaphorical world where increasing inequality is inexorably linked to totalitarianism and immiseration, global poverty has reached historic lows in the twenty-first century, and this is likely also true of global inequality. In short, for all their literary merit, dystopian novels appear to be terrible predictors of the quality of future human existence. The fact that popular depictions of the future often take the shape of dystopias is more likely reflective of the genre’s entertainment value than of society’s impending demise.

But dystopias are not just a literary phenomenon; they are also a powerful force in policy circles. For example, in the early 1970s, the so-called Club of Rome published an influential report titled The Limits to Growth. The report argued that absent rapid and far-reaching policy shifts, the planet was on a clear path to self-destruction:

If the present growth trends in world population, industrialization, pollution, food production, and resource depletion continue unchanged, the limits to growth on this planet will be reached sometime within the next one hundred years. The most probable result will be a rather sudden and uncontrollable decline in both population and industrial capacity.

Halfway through the authors’ 100-year timeline, however, available data suggests that their predictions were way off the mark. While the world’s economic growth has continued at a breakneck pace, extreme poverty, famine, and the depletion of natural resources have all decreased tremendously.

For all its inaccurate and misguided predictions, dire tracts such as The Limits to Growth perhaps deserve some of the credit for the environmental movements that followed. But taken at face value, the dystopian future, along with the attendant policy demands, put forward by works like The Limits to Growth would have had cataclysmic consequences for, apparently, extremely limited gain. The policy incentive is to claim impending doom as loudly as possible. There is no incentive to suggest that “all is well,” and little incentive even to offer realistic, caveated predictions.

As we argue in this Article, antitrust scholarship and commentary are also afflicted by dystopian thinking. Today, antitrust pessimists have set their sights predominantly on the digital economy—“big tech” and “big data”—alleging a vast array of potential harms. Scholars have argued that the data created and employed by the digital economy produces network effects that inevitably lead to tipping and more concentrated markets. In other words, firms will allegedly accumulate insurmountable data advantages and thus thwart competitors for extended periods of time. Some have gone so far as to argue that this threatens the very fabric of western democracy. Other commentators have voiced fears that companies may implement abusive privacy policies to shortchange consumers. It has also been said that the widespread adoption of pricing algorithms will almost inevitably lead to rampant price discrimination and algorithmic collusion. Indeed, “pollution” from data has even been likened to the environmental pollution that spawned The Limits to Growth: “If indeed ‘data are to this century what oil was to the last one,’ then—[it’s] argue[d]—data pollution is to our century what industrial pollution was to the last one.”

Some scholars have drawn explicit parallels between the emergence of the tech industry and famous dystopian novels. Professor Shoshana Zuboff, for instance, refers to today’s tech giants as “Big Other.” In an article called “Only You Can Prevent Dystopia,” one New York Times columnist surmised:

The new year is here, and online, the forecast calls for several seasons of hell. Tech giants and the media have scarcely figured out all that went wrong during the last presidential election—viral misinformation, state-sponsored propaganda, bots aplenty, all of us cleaved into our own tribal reality bubbles—yet here we go again, headlong into another experiment in digitally mediated democracy.

I’ll be honest with you: I’m terrified . . . There’s a good chance the internet will help break the world this year, and I’m not confident we have the tools to stop it.

Parallels between the novel Nineteen Eighty-Four and the power of large digital platforms were also plain to see when Epic Games launched an antitrust suit against Apple and its App Store in August 2020. Indeed, Epic Games released a short video clip parodying Apple’s famous “1984” ad (which upon its release was itself widely seen as a critique of the tech incumbents of the time).

Similarly, a piece in the New Statesman, titled “Slouching Towards Dystopia: The Rise of Surveillance Capitalism and the Death of Privacy,” concluded: “Our lives and behaviour have been turned into profit for the Big Tech giants—and we meekly click ‘Accept.’ How did we sleepwalk into a world without privacy?”

Finally, a piece published in the online magazine Gizmodo asked a number of experts whether we are “already living in a tech dystopia.” Some of the responses were alarming, to say the least:

I’ve started thinking of some of our most promising tech, including machine learning, as like asbestos: … it’s really hard to account for, much less remove, once it’s in place; and it carries with it the possibility of deep injury both now and down the line.

. . . .

We live in a world saturated with technological surveillance, democracy-negating media, and technology companies that put themselves above the law while helping to spread hate and abuse all over the world.

Yet the most dystopian aspect of the current technology world may be that so many people actively promote these technologies as utopian.

Antitrust pessimism is not a new phenomenon, and antitrust enforcers and scholars have long been fascinated with—and skeptical of—high-tech markets. From early interventions against the champions of the Second Industrial Revolution (oil, railways, steel, etc.), through mid-twentieth-century innovations such as telecommunications and early computing (most notably the RCA, IBM, and Bell Labs consent decrees in the US), to today’s technology giants, each wave of innovation has been met with a rapid response from antitrust authorities, copious intervention-minded scholarship, and waves of pessimistic press coverage. This is hardly surprising, given that the adoption of antitrust statutes was in part a response to the emergence of the large corporations that came to dominate the Second Industrial Revolution (despite the numerous radical innovations those firms introduced in the process). Especially for unilateral conduct issues, it has long been innovative firms that have drawn the lion’s share of cases, scholarly writings, and press coverage.

Underlying this pessimism is a pervasive assumption that new technologies will somehow undermine the competitiveness of markets, imperil innovation, and entrench dominant technology firms for decades to come. This is a form of antitrust dystopia. For its proponents, the future ushered in by digital platforms will be a bleak one—despite abundant evidence that information technology and competition in technology markets have played significant roles in the positive transformation of society. This tendency was highlighted by economist Ronald Coase:

[I]f an economist finds something—a business practice of one sort or another—that he does not understand, he looks for a monopoly explanation. And as in this field we are very ignorant, the number of ununderstandable practices tends to be rather large, and the reliance on a monopoly explanation, frequent.

“The fear of the new—and the assumption that ‘ununderstandable practices’ emerge from anticompetitive impulses and generate anticompetitive effects—permeates not only much antitrust scholarship, but antitrust doctrine as well.” While much antitrust doctrine is capable of accommodating novel conduct and innovative business practices, antitrust law—like all common law-based legal regimes—is inherently backward looking: it primarily evaluates novel arrangements with reference to existing or prior structures, contracts, and practices, often responding to any deviations with “inhospitality.” As a result, there is a built-in “nostalgia bias” throughout much of antitrust that casts a deeply skeptical eye upon novel conduct.

“The upshot is that antitrust scholarship often emphasizes the risks that new market realities create for competition, while idealizing the extent to which previous market realities led to procompetitive outcomes.” Against this backdrop, our Article argues that the current wave of antitrust pessimism is premised on particularly questionable assumptions about competition in data-intensive markets.

Part I lays out the theory and identifies the sources and likely magnitude of both the dystopia and nostalgia biases. Having examined various expressions of these two biases, the Article argues that their exponents ultimately seek to apply a precautionary principle within the field of antitrust enforcement, made most evident in critics’ calls for authorities to shift the burden of proof in a subset of proceedings.

Part II discusses how these arguments play out in the context of digital markets. It argues that economic forces may undermine many of the ills that allegedly plague these markets—and thus the case for implementing a form of precautionary antitrust enforcement. For instance, because data is ultimately just information, it will prove exceedingly difficult for firms to hoard data for extended periods of time. Instead, a more plausible risk is that firms will underinvest in the generation of data. Likewise, the main challenge for digital economy firms is not so much to obtain data, but to create valuable goods and hire talented engineers to draw insights from the data these goods generate. Recent empirical findings suggest, for example, that data economy firms don’t benefit as much as often claimed from data network effects or increasing returns to scale.

Part III reconsiders the United States v. Microsoft Corp. antitrust litigation—the most important precursor to today’s “big tech” antitrust enforcement efforts—and shows how it undermines, rather than supports, pessimistic antitrust thinking. Many of the fears raised at the time never materialized, for reasons unrelated to antitrust intervention. Rather, pessimists missed the emergence of key developments that greatly undermined Microsoft’s market position and greatly overestimated Microsoft’s ability to thwart its competitors. Those circumstances—particularly those revolving around the alleged “applications barrier to entry”—have uncanny analogues in the data markets of today. We thus explain how and why the Microsoft case should serve as a cautionary tale for current enforcers confronted with dystopian antitrust theories.

In short, the Article exposes a form of bias within the antitrust community. Unlike entrepreneurs, antitrust scholars and policy makers often lack the imagination to see how competition will emerge and enable entrants to overthrow seemingly untouchable incumbents. New technologies are particularly prone to this bias because there is a shorter history of competition to go on and thus less tangible evidence of attrition in these specific markets. The digital future is almost certainly far less bleak than many antitrust critics have suggested, and than the current swath of interventions aimed at reining in “big tech” presumes. This does not mean that antitrust authorities should throw caution to the wind. Instead, policy makers should strive to maintain existing enforcement thresholds, which exclude interventions based solely on highly speculative theories of harm.

Written Comments for July 1, 2021 FTC Open Meeting

The following comments were submitted to the Federal Trade Commission (FTC) for consideration in the commission’s scheduled July 1 vote on whether to rescind the 2015 Statement of Enforcement Principles Regarding “Unfair Methods of Competition” Under Section 5 of the FTC Act.


In antitrust law, the Consumer Welfare Standard (CWS) directs courts to focus on the effects that challenged business practices have on consumers, rather than on alleged harms to specific competitors. 

Critics of the standard claim this focus on consumer welfare fails to capture a wide variety of harmful conduct. In addition to believing that harm to competitors is itself a valid concern, critics of the CWS believe it leads to harmful concentrations of political and economic power by biasing antitrust enforcement against intervention. Under this view, the CWS contributes to such harms as environmental degradation, income inequality, and bargaining disparities for labor.

But returning to a pre-CWS state of the law would lead antitrust enforcement to become confused, contradictory, and ineffective at promoting competition. The CWS makes antitrust economically coherent and democratically accountable.

The CWS is agnostic about how much antitrust enforcement is necessary. Indeed, many advocates of more vigorous antitrust enforcement are also defenders of the CWS. The standard uses objective economic analysis to identify actual harms and to recommend remedies when those harms are not outweighed by countervailing benefits to consumers. While the issues the CWS critics care about may be important, antitrust law is a bad way to address them.

Competition usually has to hurt competitors. Prioritizing competitor welfare over consumer welfare, as some proposals would, means abandoning competition as the goal of antitrust. Businesses want a quiet life and large profits. If one firm outcompetes another with a better product or a lower price, it disadvantages that competitor by lowering its profits or forcing it to work harder to maintain them. The consumer ultimately wins in this struggle. Basing antitrust liability on conduct that “materially disadvantages” competitors would impose liability for the act of competing itself. 

The old model of antitrust was incoherent and unaccountable. Before the rise of the CWS, antitrust enforcement was incoherent and lacked underlying neutral principles. In the words of Justice Potter Stewart, the only consistency was that “the government always wins.” Competitive practices could be condemned because they hurt the profitability of some businesses. Sometimes, courts would worry that prices were too low and would therefore permit “price floors” to protect small business. This lack of consistency led to a body of law that was contradictory and unpredictable, and that regularly undermined competition. By entrusting enforcement and antitrust policy to the discretion of unelected enforcement officials, competition policy was effectively removed from democratic oversight.

The CWS grounds antitrust in objective economics and tractable evidence. Adherence to the CWS renders antitrust judgments transparent and quantifiable by giving a clear benchmark for economic analysis. Without the CWS, courts might trade reduced competition and consumer welfare for a reduction in, for example, a business’s political influence. While achieving the latter may (or may not) be a worthy goal, there is no objective way to assess trade-offs between the two priorities. The CWS requires testable claims and counterclaims as part of a competition case. It allows antitrust cases to focus on a question that can be answered objectively: “Is the challenged conduct likely to make consumers better or worse off?”

The CWS considers innovation and quality, as well as price. The CWS has always encompassed aspects of competition beyond price, including innovation, quality, and product variety. The CWS is thus fully compatible with markets where products are offered at a zero price to consumers, or where the alleged source of harm is the loss of innovation. United States v. Microsoft, for example, hinged on an innovation theory of harm, as did the U.S. Justice Department’s lawsuit against the Visa/Plaid merger, which led to the merger being abandoned. As in other supply markets, anticompetitive conduct by businesses in the labor market has been ruled illegal under the CWS, and both federal antitrust agencies have brought cases against this kind of conduct.

Antitrust is not a public policy Swiss Army knife. Antitrust is a bad tool to achieve goals other than increased competition, because it is often impossible to objectively compare the value of different competing ends. Where difficult trade-offs must be made between competing social goals, such as balancing economic growth with the environment or workers’ welfare, the legislative process is a better mechanism to weigh society’s preferences than the judgment of a court. Trying to use antitrust to achieve these ends is often an attempt to bypass the democratic process when that process does not deliver the outcomes that advocates want.

For further information, see ICLE’s submission to the FTC’s Hearings on Competition & Consumer Protection in the 21st Century: https://laweconcenter.org/wp-content/uploads/2019/07/Antitrust-Principles-and-Evidence-Based-Antitrust-Under-the-Consumer-Welfare-Standard-FTC-Hearings-ICLE-Comment-5.pdf 

Testimony of R.J. Lehmann to the Senate Banking Committee: The Case to Reform the NFIP

Introduction

Chairman Brown, Ranking Member Toomey, and Members of the Committee,

My name is R.J. Lehmann, and I am a Senior Fellow with and Editor-in-Chief of the International Center for Law & Economics. ICLE is a nonprofit, nonpartisan research center that promotes the use of law & economics methodologies to inform public policy debates. Working with a roster of more than 50 academic affiliates and research centers from around the globe, we develop and disseminate academic output to build the intellectual foundation for rigorous, economically grounded policy.

I have been an active observer of the challenges facing the National Flood Insurance Program (NFIP) for 18 years. In my former life as a business journalist, I was from 2003 to 2011 the Washington bureau chief of two major trade publications covering the business of insurance. That stint included covering the 2004 and 2008 NFIP reauthorization debates and, of course, the devastating impact of Hurricane Katrina in 2005. In 2012, I co-founded the R Street Institute, where I remained until joining ICLE six months ago. Among my duties at R Street was running the Institute’s insurance policy research program, and I made NFIP reform a major part of my research focus.

I thank the committee for conducting this hearing, its first in four years on the topic of NFIP reauthorization. It has been nearly a decade since Congress last approved a long-term reauthorization of the NFIP. The program remains in need of structural reform, and the series of short-term extensions on which it has relied for the past four years leaves many parties—homeowners, business owners, builders, lenders, realtors, insurers, and insurance agents—without the clarity they need to make forward-looking decisions. One hopes Congress will be able to reach agreement this session on legislation that provides that clarity, while also making the adjustments needed to address the long-term challenges this program faces.

The past year has given us all an up-close view of how Americans respond when faced with the realization of remote and nominally “unforeseen” catastrophic risks. But, of course, the COVID-19 pandemic was not unforeseen at all. Public health officials and catastrophe modelers have long known that a viral contagion of this sort was not only possible, but inevitable. The 1918 influenza pandemic was within the lifetime of some Americans. Much more recently, another H1N1 influenza spread as a pandemic in 2009; we were merely fortunate that it did not prove to be terribly deadly. And we have in recent years seen other coronaviruses like SARS and MERS reach epidemic levels abroad.

Similarly, there is nothing unforeseen about the challenges facing the National Flood Insurance Program. It was established more than 50 years ago to provide coverage that private insurers would not; to reduce the nation’s reliance on post-hoc disaster assistance; to provide incentives for communities to invest in mitigation; and to be self-sustaining.

What is now clear is that the NFIP has not been—and, as currently structured, cannot be—self-sustaining. Since Hurricane Katrina, the program has been forced to borrow nearly $40 billion from the U.S. Treasury. Reforms passed as part of the Biggert-Waters Flood Insurance Reform Act of 2012 were intended to place the program on a path toward long-term fiscal sustainability by phasing out explicit premium subsidies and shifting more risk to the private insurance, reinsurance, and capital markets. But some of those reforms were scaled back or repealed almost immediately in the Grimm-Waters Act, passed in 2014. And even with $16 billion of the program’s debt erased by Congress in 2017, the NFIP remains $20.5 billion in debt to U.S. taxpayers as of the first quarter of Fiscal Year 2021,[1] with no feasible plan ever to repay that debt in full.

Post-hoc disaster spending also continues to grow, with more than 90 percent of all federally declared disasters involving floods. And while the program has provided incentives for mitigation, it also, by making cheap flood insurance available, has played a role in encouraging development in flood-prone and environmentally sensitive regions. Moreover, given the looming threat of climate change—which we know will drive both rising sea levels and more frequent and more severe precipitation events—it is more crucial than ever that Congress address the NFIP’s structural issues and correct its perverse incentives for where and how Americans live.

Structural Problems of the NFIP

At its inception in 1968, the NFIP was not designed as a risk-based program. Property owners in participating communities were charged flat rates, irrespective of the level of flood risk their properties faced. That design was intentional, as the overarching goal was to encourage take-up, particularly by residents of the riskiest communities.

Shortly thereafter, in 1973, the program was redesigned to account for risk with the introduction of Flood Insurance Rate Maps (FIRMs), which assign properties to various risk-rated zones and assess premium rates commensurate with the flood risk each zone faces. Holders of federally related mortgages on homes in high-risk zones that face a greater than 1 percent annual chance of flooding—also known as Special Flood Hazard Areas or 100-year flood zones—are required to purchase flood insurance.
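To make the “1 percent annual chance” threshold concrete, here is a minimal sketch of the standard return-period arithmetic; the 30-year horizon is an illustrative mortgage term, and treating flood years as independent is a simplifying assumption:

```python
# Translating a "greater than 1 percent annual chance of flooding" into
# a horizon-level risk, assuming each year's flood risk is independent.
def prob_at_least_one_flood(annual_prob: float, years: int) -> float:
    """Probability of at least one flood over `years` years."""
    return 1 - (1 - annual_prob) ** years

# Over a standard 30-year mortgage, a "100-year flood zone" property
# faces roughly a 1-in-4 chance of flooding at least once.
print(f"{prob_at_least_one_flood(0.01, 30):.0%}")  # ~26%
```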

But exceptions to risk-based ratings were built into the FIRM process from early on. Properties that joined the program prior to the introduction of rate maps could continue to pay non-risk-based rates through what are known as “subsidized” policies, which historically were assessed rates that were only 45 percent of their true actuarial liability.[2]

Moreover, while the Federal Emergency Management Agency (FEMA) is required by statute to revise and update all its maps at least once every five years, mapping changes do not force rate changes for existing structures. These are treated as “grandfathered” properties, which continue to pay the rates assigned when their community joined the program or when its prior rate designation was finalized.

As a result, the NFIP has always taken in less in policyholder premiums than actuarial assessments would recommend. In the years just prior to Biggert-Waters, from 2002 to 2013, the Government Accountability Office (GAO) estimates the program collected $11 billion to $17 billion less in premiums than was actuarially prudent.[3]

Biggert-Waters and the subsequent Homeowner Flood Insurance Affordability Act of 2014 (HFIAA) placed business properties and second homes on a glidepath toward actuarial rates, with annual premium increases capped at 25 percent, while rate increases on subsidized primary homes are capped at 15 percent. Biggert-Waters would have placed grandfathered properties on a glidepath to actuarial soundness with a rate cap of 20 percent, but HFIAA amended that provision so that it only takes effect if a grandfathered property’s map changes again in the future. If it does, the grandfathered property would see rate increases likewise capped at 15 percent.
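To illustrate how quickly these capped glidepaths compound, a hedged sketch: the 45 percent starting point is the historical subsidized-rate figure cited above, and the sketch assumes the actuarial target itself holds still, which in practice it will not:

```python
# Years for a premium growing at a capped annual rate to reach the full
# actuarial rate (the actuarial rate is held fixed here for simplicity).
def years_to_actuarial(rate_ratio: float, annual_cap: float) -> int:
    """rate_ratio: current premium as a fraction of the actuarial rate."""
    years = 0
    while rate_ratio < 1.0:
        rate_ratio *= 1 + annual_cap
        years += 1
    return years

# A subsidized policy paying 45% of its actuarial rate:
print(years_to_actuarial(0.45, 0.15))  # 6 years under the 15% cap (primary homes)
print(years_to_actuarial(0.45, 0.25))  # 4 years under the 25% cap (businesses, second homes)
```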

Were the NFIP actuarially sound, it would have the resources to sustainably administer the program—including marketing and claims-adjustment expenses—and pay all expected claims that fall within the ordinary distribution of potential outcomes. But the NFIP would still experience—and in recent years, has experienced—outsized claims events like Hurricane Katrina, Superstorm Sandy, and Hurricane Harvey that fall in the tail end of the probability distribution. Indeed, those three events alone account for the overwhelming majority of the $40 billion the program has had to borrow since 2005.

Since Hurricane Katrina, the NFIP has made $5.06 billion in interest payments to service its debt to the Treasury but has managed to repay just $2.8 billion of principal. It is, by all accounts, completely infeasible that it will ever repay its debt in full. Indeed, in 2017, the Congressional Budget Office (CBO) estimated that, under its existing structure, the NFIP is expected to post an average annual loss of $1.4 billion.[4]
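A back-of-the-envelope projection shows why full repayment is infeasible; this sketch uses only the CBO expected-loss figure and the current debt level cited above, and it ignores interest, which would only deepen the hole:

```python
# If the program loses $1.4 billion in an average year (CBO, 2017),
# the debt to the Treasury can only grow under the current structure.
debt = 20.5e9                 # outstanding debt to the U.S. Treasury
expected_annual_loss = 1.4e9  # CBO estimate of the average annual loss
for _ in range(10):
    debt += expected_annual_loss
print(f"${debt / 1e9:.1f} billion after a decade")  # $34.5 billion
```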

That the program has proven structurally unsustainable was foreseen by its creators. In 1966, Lyndon Johnson’s Presidential Task Force on Federal Flood Control Policy warned Congress that creating a federal flood insurance program “in which premiums are not proportionate to risk would be to invite economic waste of great magnitude.”[5]

Congress could assess that the benefits to homeowners and business owners in flood-prone regions merit the cost of subsidies. Even if that were the determination, however, the program’s existing structure largely functions not through express taxpayer subsidies, but by enabling cross-subsidies from inland NFIP policyholders to those in coastal regions. The CBO has found that 85 percent of NFIP properties exposed to coastal storm surge (properties classified as “Zone V”) pay below full risk-based rates. Altogether, 69 percent of Zone V properties are grandfathered, 29 percent are subsidized, and 13 percent are both grandfathered and subsidized.[6]
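The Zone V percentages are consistent once the overlap between the two categories is accounted for; a quick inclusion-exclusion check of the CBO figures:

```python
# Share of Zone V properties that are grandfathered OR subsidized,
# by inclusion-exclusion over the CBO figures cited above.
grandfathered, subsidized, both = 0.69, 0.29, 0.13
below_full_rate = grandfathered + subsidized - both
print(f"{below_full_rate:.0%}")  # 85%, matching the CBO's headline figure
```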

In addition to subsidies flowing from inland policyholders to coastal policyholders, NFIP subsidies broadly flow from higher-income areas to lower-income areas. In 2013, the GAO reported that 29 percent of subsidized policies were in counties in the top decile of median household income and 65 percent were in counties among the top three deciles, while just 4 percent were in the bottom decile and 10 percent in the bottom three deciles.[7]

Should Congress determine that it does expressly want to subsidize property owners in flood-prone regions, those subsidies should properly flow directly from the taxpayers. Laying the burden on inland and lower-risk NFIP policyholders discourages take-up of flood insurance, at the margin, when the goal should be much broader take-up to close the protection gap. Moreover, to the extent that subsidies are necessary to protect at-risk populations from displacement, it would be proper to transition to an income-based voucher system, rather than the existing system of subsidies and grandfathering tied to when the property joined the NFIP.

Rising Waters, Rising Risks

Over the past 50 years, the NFIP has helped to shape the landscape, with significant impact on the country’s built environment. In the first four decades after its passage, from 1970 through 2010, the number of Americans living in coastal counties grew by 45 percent; those counties are now home to more than half the U.S. population.[8] Nowhere is this shift more apparent than in my state of Florida. At the beginning of World War II, Florida was the least-populated state in the Southeast. As the recent 2020 U.S. Census figures confirmed, it is now the third most populous state in the nation. The NFIP was not the sole driver of that growth, of course. Air conditioning also played a part. But by providing guaranteed and affordable coverage for the most common catastrophe risk facing property owners in a place like Florida, the NFIP has been a key enabler of the mass conversion of wetlands and barrier islands—nature’s built-in defenses against tropical storms and flooding—into acres and acres of manicured lawns and suburban tract housing.

But even as Americans spent much of the past half-century moving to areas at greater risk of catastrophic flooding, we have begun to see how anthropogenic climate change will make that problem much worse. Global sea levels rose by 2.6 inches from 1993 through 2014 and are projected to continue to rise by an average of one-eighth of an inch per year for the foreseeable future.[9] Projections for the 21st century anticipate sea level rise of between 20 inches, should we manage to make sharp and immediate cuts to global carbon emissions,[10] and six and a half feet, should the Antarctic ice sheet break up.[11]

And yet, even in the face of these challenges, there has been no slowdown in Americans’ preference for building and living in flood-prone areas. A 2018 analysis of FEMA records conducted by Governing magazine found that 15 million Americans lived in 100-year floodplains, where current rules for federally related mortgages require the purchase of flood insurance. That was a 14 percent increase from the turn of the 21st century, compared with 13 percent population growth in all other zones.[12] Even more strikingly, a 2019 report by ClimateCentral looked at areas projected to have a 10 percent annual risk of coastal flooding by 2050. It found that, in eight coastal states, more homes had been built within this projected 10-year flood zone than in all other areas.[13] Development was twice as fast inside the 10-year flood zone as outside of it in Delaware, Mississippi, New Jersey, and Rhode Island, while in Connecticut, it was three times as fast.

Losing land to the sea is not an entirely new phenomenon, of course. Sen. Kennedy’s home state of Louisiana has lost more than 2,000 square miles of land since the 1930s. But the scale of land loss we may now face, combined with the surge in development in flood-prone areas, is new. In 2016, a piece in the journal Nature Climate Change overlaid anticipated population growth with projected sea-level rise of roughly three to six feet, finding that between 4.2 million and 13.1 million Americans would be displaced by inundation.

The changing nature of flood risk makes it even more crucial that FEMA, as the NFIP’s administrator, regularly update its Flood Insurance Rate Maps to keep up with changes on the ground. The evidence, however, is that the agency is failing to do that. A 2017 Department of Homeland Security Inspector-General’s report found that FEMA was up to date on just 42 percent of the NFIP’s flood hazard miles, far short of a goal of 80 percent set in 2009, and that the agency had not properly ensured that “mapping partner quality reviews are completed in accordance with applicable guidance.”[14] In 2017, the CBO found that, of the 166 counties that produce more than $2 million in average annual flood claims, half were using maps that were more than five years old.[15]

Sea-level rise and other impacts from climate change threaten to radically transform how we must deal with the risk of flooding. A 2019 study in Nature Communications had a grim projection about the frequency and severity of flooding and coastal storm surge: By the end of this century, today’s 100-year flood events in the Southeast and Gulf Coast will be expected every 1 to 30 years. Today’s 100-year events in New England and the mid-Atlantic can be expected every single year.[16]

It is inevitable that, in at least some locations, we will have to consider pulling back from the coasts and moving Americans to higher ground. But an important first step is to stop making the problem worse.

Baby Steps Toward Managed Retreat

“Managed retreat” is a controversial phrase, both because it connotes surrender in the battle against climate change and because it is taken to mean a radical realignment in the way we live. And yet, in some respects, managed retreat is longstanding policy. It is seen most clearly in FEMA’s Hazard Mitigation Grant Program (HMGP), the Flood Mitigation Assistance (FMA) Grant Program, and the Pre-Disaster Mitigation (PDM) Program, all of which execute buyouts of flood-prone properties, which are then demolished, with the vacated land dedicated in perpetuity to open space.

While buyouts are likely to continue to be part of the solution to rising flood risk, there are serious questions about whether they could ever scale anywhere close to meet the scope of the problem. A 2019 report from the Natural Resources Defense Council found that, at the pace FEMA executed buyouts in the 30 years between 1989 and 2019, it would only be able to buy out another 115,000 properties by the end of the 21st century.[17] For comparison, current projections are that as many as 13 million properties will be completely inundated by the year 2200.

Rather than tearing down the flood-prone properties that already exist, a more immediate approach would be to remove the incentives to build new ones. I authored a report last year that proposed doing so directly—by barring any new construction in 100-year floodplains from NFIP eligibility. Based on my review of NFIP claims data, had this policy been in place starting in 1980, the program’s payouts between 1990 and 2019 would have been roughly 13 percent smaller, representing $16.5 billion in savings.[18]

What I could not quantify in my research, but which is likely even more crucial to the pressing challenge of climate adaptation, is how many of those severely flood-prone properties would not have been built in the first place, but for guaranteed NFIP coverage. In some cases, property owners in such areas might turn to the emerging private market for flood insurance, but they would be assessed risk-based premiums that, in many cases, likely would be prohibitive.

Of course, an additional benefit of this approach is that, unlike phasing out subsidies, it does not lay any new burden on existing policyholders. A similar approach can be found in the Coastal Barrier Resources System (CBRS), a 37-year-old program that bars federal subsidies for development across a 3.5-million-acre zone of beaches, wetlands, barrier islands, and estuaries along the Atlantic Ocean, Gulf of Mexico, and Great Lakes. Likewise, Florida’s Legislature in 2015 adopted a rule that bars the state-sponsored Citizens Property Insurance Corp. from writing coverage for new construction located seaward of the state’s Coastal Construction Control Line.[19]

But it is not enough simply to tell people where they cannot build; we must also tell people where they can. Many areas remain gripped by a serious housing affordability crisis caused by stringent land-use controls that make it excessively difficult to build new housing in places where people wish to live.

Therefore, in addition to the “stick” of removing the NFIP’s incentives to build in flood-prone areas, I propose an additional “carrot” of federal incentives for states that liberalize their land-use policies in areas of lowest flood risk.

Specifically, the Stafford Act currently requires that, when the federal government provides post-disaster relief to repair, restore, or replace damaged facilities, state and local governments are responsible for 25 percent of the cost. I propose that providing for dense housing in the lowest-risk flood zones—those classified as 1-in-500-year zones, or Zone X and Zone C in the current mapping system—would enable states to “buy down” the local cost share. For example, if a state were to abolish single-family zoning in the lowest-risk flood zones—e.g., allowing construction of accessory dwelling units and up to four-family homes, by right—the federal government’s cost share for post-disaster recovery would rise to 80 percent or even 85 percent. This would begin the process of ensuring that, as rising seas force more Americans to move to higher ground, there will be sufficient housing stock to absorb them.
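A minimal sketch of the proposed buy-down mechanics follows; it is purely illustrative, since the proposal specifies only that upzoning the lowest-risk flood zones should raise the federal share from the 75 percent baseline toward 80 or 85 percent, so the two tiers below are hypothetical:

```python
# Hypothetical Stafford Act cost-share buy-down: states that liberalize
# land use in the lowest-risk flood zones earn a larger federal share.
def federal_cost_share(ends_single_family_zoning: bool,
                       allows_fourplexes_by_right: bool) -> float:
    share = 0.75  # baseline: state and local governments pick up 25%
    if ends_single_family_zoning:
        share = 0.80                  # first (hypothetical) tier
        if allows_fourplexes_by_right:
            share = 0.85              # full buy-down
    return share

# A state that abolishes single-family zoning in Zone X/C and allows
# up to four-family homes by right cuts its local share from 25% to 15%.
print(federal_cost_share(True, True))  # 0.85
```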

Other Considerations

  • I mentioned earlier that it would be proper to transition the current subsidies to an income-based voucher system. An important element of such a system is that policyholders should be made aware of the full risk-based cost of flood insurance. The same applies to current subsidized and grandfathered policyholders. Any NFIP policy with rates that are less than the full risk-based cost should disclose to policyholders what that cost would be. Similarly, a complete flood history should be made available to buyers as part of all real estate closings.
  • Many of the implementation questions that surrounded Biggert-Waters’ provisions permitting private flood insurance to satisfy mandatory purchase requirements have, in recent years, been resolved by state authorities and the federal lending regulators. One additional point of clarification this committee could provide is to determine whether consumers who move from the NFIP to private flood insurance and then maintain continuous coverage can later return to the NFIP at the same rate as if they had remained with the program all along. This would protect consumers if, for example, after switching to a private policy, the private insurer raised rates or exited the market.
  • Some have proposed forgiving the NFIP’s current $20.5 billion debt to the Treasury. Should Congress opt to do so, the program’s existing $30.425 billion borrowing authority would be far too large to offer any meaningful check. Congress should instead simply set the borrowing-authority cap at 1 percent of the NFIP’s total insurance in-force, as the sketch following this list illustrates. Based on its current total of $1.3 trillion, this would mean the NFIP could borrow up to $13 billion without needing further authorization.
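A quick calculation of the proposed cap, using the in-force total cited in the last item above:

```python
# Proposed borrowing-authority cap: 1 percent of total insurance in-force.
def borrowing_cap(insurance_in_force: float, ratio: float = 0.01) -> float:
    return insurance_in_force * ratio

print(f"${borrowing_cap(1.3e12) / 1e9:.0f} billion")  # $13 billion on $1.3 trillion in-force
```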

With that, I would be glad to answer any of the Committee’s questions.


[1] “The Watermark Fiscal Year 2021, First Quarter, Volume 13,” Federal Emergency Management Agency, p. 2. https://www.fema.gov/sites/default/files/documents/fema_watermark-report_12-2020.pdf.

[2] Bill Jones, “Biggert-Waters Flood Insurance Reform Act of 2012 Summary,” Nebraska Dept. of Natural Resources, February 2013. https://agriculture.ks.gov/docs/default-source/dwr-floodplains/summary-of-the-biggert-waters-act.pdf?sfvrsn=ce3ffec1_0.

[3] “Forgone Premiums Cannot Be Measured and FEMA Should Validate and Monitor Data System Changes,” Government Accountability Office, GAO-15-111, Dec. 11, 2014. https://www.gao.gov/products/GAO-15-111.

[4] “The National Flood Insurance Program: Financial Soundness and Affordability,” Congressional Budget Office, Sept. 1, 2017, p. 1. https://www.cbo.gov/publication/53028.

[5] Gary William Boulware, “Public Policy Evaluation of the National Flood Insurance Program (NFIP),” doctoral dissertation, University of Florida, December 2009, p. 14. https://ufdc.ufl.edu/UFE0041081/00001.

[6] “The National Flood Insurance Program: Financial Soundness and Affordability,” Congressional Budget Office, Sept. 1, 2017, p. 1. https://www.cbo.gov/publication/53028.

[7] “Flood Insurance: More Information Needed on Subsidized Policies,” Government Accountability Office, GAO-13-607, July 2013. https://www.gao.gov/assets/gao-13-607.pdf.

[8] Ross Toro, “Half of US Population Lives in Coastal Areas (Infographic),” LiveScience, March 12, 2012. https://www.livescience.com/18997-population-coastal-areas-infographic.html.

[9] “Is sea level rising?”, U.S. National Oceanic and Atmospheric Administration, Feb. 26, 2021. https://oceanservice.noaa.gov/facts/sealevel.html.

[10] Carling C. Hay et al., “Probabilistic reanalysis of twentieth-century sea-level rise,” Nature 517 (Jan. 14, 2015), pp. 481–84. https://www.nature.com/articles/nature14093.

[11] Robert E. Kopp et al., “Evolving Understanding of Antarctic Ice-Sheet Physics and Ambiguity in Probabilistic Sea-Level Projections,” Earth’s Future 5 (Dec. 13, 2017), pp. 1217–33. https://agupubs.onlinelibrary.wiley.com/doi/full/10.1002/2017EF000663.

[12] Mike Maciag, “Analysis: Areas of the U.S. With Most Floodplain Population Growth,” Governing, August 2018. https://www.governing.com/gov-data/census/flood-plains-zone-local-population-growth-data.html.

[13] “Ocean at the Door: New Homes and the Rising Sea,” ClimateCentral, July 31, 2019. https://ccentralassets.s3.amazonaws.com/pdfs/2019Zillow_report.pdf.

[14] “FEMA Needs to Improve Management of Its Flood Mapping Programs,” U.S. Department of Homeland Security Office of the Inspector-General, Sept. 27, 2017, p. 3. https://www.oig.dhs.gov/sites/default/files/assets/2017/OIG-17-110-Sep17.pdf.

[15] “Age of Flood Maps in Selected Counties That Account for Most of the Expected Claims in the National Flood Insurance Program: Supplemental Material for The National Flood Insurance Program: Financial Soundness and Affordability,” Congressional Budget Office, November 2017, p. 3. https://www.cbo.gov/system/files?file=115th-congress-2017-2018/reports/53028-supplementalmaterial.pdf.

[16] Reza Marsooli et al., “Climate change exacerbates hurricane flood hazards along US Atlantic and Gulf Coasts in spatially varying patterns,” Nature Communications 10:3785 (Aug. 22, 2019). https://www.nature.com/articles/s41467-019-11755-z.

[17] Anna Weber and Rob Moore, “Going Under: Long Wait Times for Post-Flood Buyouts Leave Homeowners Underwater,” Natural Resources Defense Council, Sept. 12, 2019. https://www.nrdc.org/resources/going-under-long-wait-times-post-floodbuyouts-leave-homeowners-underwater.

[18] R.J. Lehmann, “Do No Harm: Managing Retreat By Ending New Subsidies,” R Street Institute, February 2020. https://www.rstreet.org/wp-content/uploads/2020/02/195.pdf.

[19] Jane Smith and Michelle Quigley, “Along the Coast…A line in the sand,” The Coastal Star, Aug. 30, 2017. https://thecoastalstar.com/profiles/blogs/along-the-coast-a-linein-the-sand.
