Introduction
On 11 October 2022, João Maia (Federal Deputy, Partido Liberal) proposed Bill 2768/22 (“Bill 2768” or “Bill”) on digital market regulation.[1] Bill 2768 is Brazil’s response to global trends toward the ex-ante regulation of digital platforms, and was at least partially inspired by the EU’s Digital Markets Act (“DMA”).[2] In our contribution to the public consultation on Bill 2768 (“Consultation”),[3] however, we argue that Brazil should be wary of importing untested regulation into its own, unique context. Rather than impulsively replicating the EU’s latest regulatory whim, Brazil should adopt a more methodical, evidence-based approach.

Sound regulation requires that new rules be underpinned by a clear vision of the specific market failures they aim to address, as well as an understanding of their costs and potential unintended consequences. Unfortunately, Bill 2768 fails to meet these prerequisites. As we show in our response to the Consultation, it is far from clear that competition law in Brazil has failed to address issues in digital markets to an extent that would make sui generis digital regulation necessary. Indeed, it is unlikely that there are any truly “essential facilities” in Brazilian digital markets that would make access regulation necessary, or that data represents an insurmountable barrier to entry.

Other aspects of the Bill—such as the designation of Anatel as the relevant enforcer, the extremely low turnover thresholds used to ascertain gatekeeper status, and the lack of consideration given to consumer welfare as a relevant parameter in establishing harm or claiming an exemption—are also misguided. As it stands, therefore, Bill 2768 risks not only straining Brazil’s limited public resources, but also harming innovation, consumer prices, and the country’s thriving startup ecosystem.
Identification of “essential facilities” in the universe of digital markets. Give examples of platform assets in digital markets operating in Brazil where, at the same time:

a) there are no digital platforms with substitute assets close to these assets;

b) these assets are difficult to duplicate with an efficiency at least close to that of the owning company; and

c) without access to this asset, it would not be possible to operate in one or more markets, as it constitutes a fundamental input.

Justify each of the examples given.
For the reasons we discuss below, it is unlikely that there are any examples of true “essential facilities” in digital markets in Brazil.
It is important to define the meaning of “essential facility” precisely. “Essential facility” is a term of art in competition law that has been defined differently across jurisdictions. Still, the overarching idea of the essential facilities doctrine is that there are instances in which an incumbent’s denial of access to a facility can distort competition. To distinguish cases where denial of access constitutes a legitimate expression of competition on the merits from instances in which it indicates anticompetitive conduct, however, courts and competition authorities have devised a series of tests.
Thus, in the EU, the seminal Bronner case established that the essential facilities doctrine applies in Art. 102 TFEU cases when: (1) the refusal of access is likely to eliminate all competition from the company requesting access in the downstream market; (2) the refusal is incapable of being objectively justified; and (3) access to the facility is indispensable to the requesting company’s business, there being no actual or potential substitute for it.[4]

In addition, the facility must be genuinely “essential” to compete, not merely convenient.
Similarly, CADE has incorporated the essential facilities doctrine into Brazilian competition policy by imposing a duty to deal with competitors.[5]
The definition of “essential facilities” and, consequently, the breadth and limits of the essential facilities doctrine under Bill 2768/2022 (“Bill 2768”) should reflect tried and tested principles from competition law. There is no reason why essential facilities should be treated differently in “digital” markets, i.e., markets involving digital platforms, than in other markets. In this sense, we are concerned that the framing of Question 1 reveals an inconsistency that should be addressed before moving forward; namely, the assumption that when a company’s assets are merely “difficult” to replicate efficiently, it is justified to force that company to grant competitors access to those assets. This is misguided and could even produce the opposite of what Bill 2768 presumably aims to achieve.
As indicated above, the fundamental concept underpinning the essential facilities doctrine is that it applies to a product or service that is uneconomic or impossible to duplicate. Typically, this has applied to infrastructure, such as telecommunications or railways. For instance, expecting competitors to duplicate transport routes, such as railways, would be unrealistic — and economically wasteful. Instead, governments have often chosen to regulate these sectors as natural-monopoly public utilities. In practice, this means mandating access for all comers to such essential facilities at regulated prices and under non-discriminatory conditions that make the activity of other companies viable and competitive—thus facilitating competition in a secondary market in situations where competition might otherwise be impossible.
The government should ask itself to what extent this logic applies to so-called digital platforms, however.
Online search engines, for example, are not impossible or excessively difficult to replicate—nor is access to any one of them indispensable. Today, many search engines are on the market: Bing, Yandex, Ecosia, DuckDuckGo, Yahoo!, Google, Baidu, Ask.com, and Swisscows, among others.
More to the point, mere access to search engines isn’t really the problem. Rather, in most cases, those complaining about a search engine’s conduct object to their placement in results: they want access to the very first results, or they object to the search engine prioritizing its own secondary-market services over those of competitors. But this space is vanishingly scarce; there is no way for it to be allocated to all comers. Nor can it be allocated on neutral terms; by definition, a search engine must prioritize results.
Treating a search engine as an essential facility would generate problematic outcomes. For example, mandating non-discriminatory access to a search engine’s top results would be like requiring that a railroad offer service to all shippers at whatever time the shipper liked, regardless of railroad congestion, other shippers’ timetables, and the railroad’s optimization of its schedule. Not only would this be impossible, but it isn’t even required of traditional essential facilities.
Notably, while ranking high on a search engine results page is undoubtedly a boon for business, there are other ways of reaching customers. Indeed, as CADE ruled in a case concerning Google Shopping, even if the first page of Google’s results is relevant and important to ranked websites, it is not irreplaceable to the extent that there are other ways for consumers to find websites online. Google is not a mandatory intermediary for website access.[6] Moreover, as noted, search results pages must, by definition, discriminate in order to function correctly. Deeming them essential facilities would entail endless wrangling (and technically complicated determinations) to decide if the search engine’s prioritization decisions were “proper” or not.
Similarly, online retail platforms like Amazon and Mercado Livre are very successful and convenient, but sellers can use other methods to reach customers. For example, they can sell from brick-and-mortar stores or easily set up their own retail websites using myriad software-as-a-service (“SaaS”) providers to facilitate processing and fulfilling orders. Furthermore, the concurrent presence and success of Mercado Livre, B2W (Submarino.com, Americanas.com, Shoptime, Soubarato), Cnova (Extra.com.br, Casasbahia.com.br, Pontofrio.com), Magazine Luiza, and Amazon on the Brazilian market belies the claim that any one of these platforms is indispensable or irreplicable.[7]
Similar arguments can be made about the other digital platforms covered by Art. 6, paragraph II of Bill 2768. For example, WhatsApp may be by far the most popular interpersonal communication service in the country. Still, there are plenty of alternatives within easy (and mostly free) reach for Brazilian consumers, such as Messenger (62 million users), Telegram (30 million), Instagram (64 million), Viber (3 million), Hangouts (2 million), WeChat (1 million), Kik (500,000), and Line (1 million). The sheer number of users of each of these apps suggests that multi-homing is widespread.
In sum, while access to a particular digital platform may be convenient, especially if it is currently the most popular among users, it is highly questionable whether such access is essential. And, as Advocate General Jacobs noted in his opinion in Bronner, mere convenience does not create a right of access under the essential facilities doctrine.[8]
Recommendation: Bill 2768 should make it clear that the principles and requirements of “essential facilities” within the meaning of competition law apply in full to the duties and obligations contemplated in Art. 10 — and that the finding of an “essential facility” is a prerequisite to the imposition of any such duties or obligations.
Is regulation necessary to guarantee access to the asset(s) of the example(s) from Question 1? What should such regulation guarantee so that access to the asset enables third parties to enter those digital markets?
Before considering whether regulation is necessary to guarantee access to certain companies’ assets, the government should first consider whether guaranteeing any such access is necessary and legitimate. In our response to Question 1, we have argued that it is unlikely to be. If the government nevertheless decides to the contrary, the next logical question should be whether competition law, including the essential facilities doctrine itself, is sufficient to address any alleged problems identified in Question 2.
Arguably, the best way to answer this question would be through the natural experiment of letting CADE bring cases against digital platforms — assuming it can construct a prima facie case in each instance — and seeing whether or not traditional competition law tools provide a viable solution and, if not, whether these tools can be sharpened by reforming Brazil’s competition law or whether new, comprehensive ex-ante regulation is needed.
By comparison, the EU experimented with EU competition law before passing the DMA. In fact, most if not all the prohibitions and obligations of the DMA stem from competition law cases.[9] The EU eventually decided that it preferred to pass blanket ex-ante rules against certain practices rather than having to litigate through competition law. Whether or not this was the right decision is up for debate, but one thing is certain: The EU tried its competition toolkit extensively against digital platforms before learning from the outcomes and deciding it needed to be complemented with a new set of broader, enforcer-friendly, bright-line rules.
By contrast, Brazil has initiated only a handful of antitrust cases against digital platforms. According to numbers published by CADE,[10] the authority reviewed 233 merger cases related to digital-platform markets between 1995 and 2023 and, regarding unilateral conduct (monopolization cases)—those most relevant to the discussion of Bill 2768—opened 23 conduct cases. Of those 23 cases, 9 are still under investigation, 11 were dismissed, and only 3 were settled through the signing of a Cease-and-Desist Agreement (TCC). In this sense, only 3 cases out of 23 could be said to have resulted, to some extent, in a “condemnation.” It is questionable whether these cases provide the sort of evidence of intrinsic competition problems in the eight service markets identified in Art. 6, paragraph II of Bill 2768 that would justify new, “sector-specific” access rules.[11]
In fact, the recent entry of companies into many of those markets suggests that the opposite is closer to the truth. There are numerous examples of entry in a variety of digital services, including the likes of TikTok, Shein, Shopee, and Daki, to name just a few.
Serious problems can arise when products that are not essential facilities are treated as such. We highlight two.
First, over-extending the essential facilities doctrine can encourage free riding.[12] This is not what the essential facilities doctrine, properly understood, aims to achieve, nor what it should be used for:
Consequently, the [European Court of Justice] implies that the [essential facilities doctrine] is not designed for the convenience of undertakings to free ride dominant undertakings, but only for the necessity of survival on the secondary market in situations where there are no effective substitutes.[13]
Why develop a competing online retail platform when access to Mercado Livre or Amazon is guaranteed by law? Free riding can discourage investments from third companies and targeted “gatekeepers,” especially in the development and improvement of competing business platforms (or alternative business models that are not exact replicas of existing platforms). Contrary to the stated goals of Bill 2768, this could further entrench incumbents, as the ability to free ride on others’ investments incentivizes companies to pivot away from contesting incumbents’ core markets to acting as complementors in those markets.
Indeed, a serious—and underappreciated—concern is the cost of excessive risk-taking by companies that can rely on regulatory protections to ensure continued viability even when it is not warranted.
Businesses must develop their business models and operate their businesses in recognition of the risk involved. A complementor that makes itself dependent upon a platform for distribution of its content does take a risk. Although it may benefit from greater access to users, it places itself at the mercy of the other — or at least faces great difficulty (and great cost) adapting to unanticipated platform changes over which it has no control. This is a species of the “asset specificity” problem that animates much of the Transaction Cost Economics literature.[14]
But the risk may be a calculated one. Firms occupy specialized positions in supply chains throughout the economy, and they make risky, asset-specific investments all the time. In most circumstances, firms use contracts to allocate both risk and responsibility in a way that makes the relationship viable. When it is too difficult to manage risk by contract, firms may vertically integrate (thus aligning their incentives) or simply go their separate ways.
The fact that a platform creates an opportunity for complementors to rely upon it does not mean that a firm’s decision to do so — and to do so without a viable contingency plan — makes good business sense. In the case of the comparison-shopping sites at issue in the EU’s Google Shopping decision,[15] for example, it was entirely predictable that Google’s algorithm would evolve. It was also entirely predictable that it would evolve in ways that could diminish or even eviscerate their traffic. As one online marketing expert put it, “counting on search engine traffic as your primary traffic source is a bit foolish, to say the least.”[16]
Providing guarantees (which is what a “gatekeeper” access rule accomplishes) in this situation creates a significant problem: Protecting complementors from the inherent risk in a business model in which they are entirely dependent upon another company with which they have no contractual relationship is at least as likely to encourage excessive risk taking and inefficient over-investment as it is to ensure that investment and innovation are not too low.[17]
Second, mandating access to companies’ goods or services outside the very few and narrow cases[18] in which such access is truly essential to sustain competition on the market sends platforms the wrong message. The message is that, after being encouraged to compete, successful companies will be punished for thriving. This is contrary to the spirit of competition law and the principle of free competition, which Bill 2768 should be careful not to eviscerate. As the great U.S. jurist Learned Hand observed in U.S. v. Aluminum Co. of America: “The successful competitor, having been urged to compete, must not be turned upon when he wins.”[19]
Furthermore, forcing companies to do business with third parties is at odds with the principle that, unless a violation of antitrust law can be ascertained, companies should be free to do business with whomever they choose.[20] Indeed, it is a cornerstone of the free market economy that “the antitrust laws [do] not impose a duty on [firms] . . . to assist [competitors] . . . to ‘survive or expand.’”[21]
Describe cases in digital markets where at least one other company holds assets that are close substitutes for the main company’s assets, but none of the digital platforms holding such assets provides access to them. In other words, even if there is more than one such asset on the market, there is still a problem of accessing the asset. How could Bill 2768/2022, especially its Article 10, be improved to facilitate access to essential inputs?
We are aware of no such cases.
Describe cases in which the ownership of data in digital markets creates a barrier to entry that makes it very difficult or even impossible to enter markets occupied by incumbent digital platforms. How could Bill 2768/2022 mitigate this problem, reducing the barrier to entry represented by access to data?
The extent to which data represents a barrier to entry is, in our opinion, vastly overstated. Bill 2768 should not assume that data is a barrier to entry and should assess claims to the contrary critically — especially if it intends to build a new, comprehensive regulatory regime on that assumption.[22]
In a nutshell, theories of “data as a barrier to entry” assert that online data can amount to a barrier to entry, insulating incumbent services from competition and ensuring that only the largest providers thrive. This data barrier to entry, it is alleged, can then allow firms with monopoly power to harm consumers, either directly through “bad acts” like price discrimination, or indirectly by raising the costs of advertising, which then get passed on to consumers.[23]
However, the notion of data as an antitrust-relevant barrier to entry is more supposition than reality.
First, despite the rush to embrace “digital platform exceptionalism,” data is useful to all industries. “Data” is not some new phenomenon particular to online companies. It bears repeating that offline retailers also receive substantial benefit from, and greatly benefit consumers by, knowing more about what consumers want and when they want it. Through devices like coupons, membership discounts and loyalty cards (to say nothing of targeted mailing lists and the age-old practice of data mining check-out receipts), brick-and-mortar retailers can track purchase data and better serve consumers. Not only do consumers receive better deals for using them, but retailers know what products to stock and advertise and when and on what products to run sales.[24]
Of course, there are a host of other uses for data, as well, including security, fraud prevention, product optimization, risk reduction to the insured, knowing what content is most interesting to readers, etc. The importance of data stretches far beyond the online world, and far beyond mere retail uses more generally. To describe any one company as having a monopoly on data is therefore mistaken.
Second, it is not the amount of data that leads to success, but how that data is used to craft attractive products or services for users. In other words: information is important to companies because of the value that can be drawn from it, not for the inherent value of the data itself. Thus, many companies that accumulated vast amounts of data were subsequently unable to turn that data into a competitive advantage to succeed on the market. For instance, Orkut, AOL, Friendster, Myspace, Yahoo! and Flickr — to name a few — all gained immense popularity and access to significant amounts of data, but failed to retain their users because their products were ultimately lackluster.
Data is not only less important than what can be drawn from it, but data is also less important than the underlying product it informs. For instance, Snapchat created a challenger to Facebook so successfully (and in such a short time) that Facebook attempted to buy it for $3 billion (Google offered $4 billion). But Facebook’s interest in Snapchat was not about its data. Instead, Snapchat was valuable — and a competitive challenge to Facebook — because it cleverly incorporated the (apparently novel) insight that many people wanted to share information in a more private way.
Relatedly, Twitter, Instagram, LinkedIn, Yelp, TikTok (and Facebook itself) all started with little (or no) data but nevertheless found success. Meanwhile, despite its supposed data advantages, Google’s attempt at social networking, Google+, never caught up to Facebook in popularity with users (and thus with advertisers) and shut down in 2019.
At the same time, it is not the case that the alleged data giants — the ones supposedly insulating themselves behind data barriers to entry — actually have the type of data most relevant to startups anyway. As Andres Lerner has argued, if you wanted to start a travel business, the data from Kayak or Priceline (or the local Decolar.com) would be far more relevant.[25] Or if you wanted to start a ride-sharing business, data from cab companies would be more useful than the broad, cross-market profiles Google and Facebook have. Consider companies like Uber and 99 that had no customer data when they began to challenge established cab companies that did possess such data. If data were really so significant, they could never have competed successfully. But Uber and 99 have been able to compete effectively because they built products that users wanted to use — they came up with an idea for a better mousetrap. The data they have accrued came after they innovated, entered the market, and mounted their successful challenges — not before.
Complaints about data facilitating unassailable competitive advantages thus have it exactly backwards. Companies need to innovate to attract consumer data, otherwise consumers will switch to competitors (including both new entrants and established incumbents). As a result, the desire to make use of more and better data drives competitive innovation, with manifestly impressive results: The continued explosion of new products, services and other apps is evidence that data is not a bottleneck to competition but a spur to drive it.
Third, competition online is (metaphorically—but not by much) one click or thumb swipe away. That is, barriers to entry and switching costs are low. Indeed, despite the alleged prevalence of data barriers to entry, competition online continues to soar, with newcomers constantly emerging and triumphing. The entry of online retailers and other digital platforms in Brazil is a case in point (See Questions 1 and 2). This suggests that the barriers to entry are not so high as to prevent robust competition.
Again, despite the supposed data-based monopolies of Facebook, Google, Amazon, Apple, and others, powerful competitors exist in the markets in which they compete.
Even assuming for the sake of argument that data creates a barrier to entry, there is little evidence that consumers cannot easily switch to a competitor. While there are sometimes network effects online, like with social networking, history still shows that people will switch. Myspace was considered a dominant network until it made a series of bad business decisions, and users ended up on Facebook instead; Orkut had a similar fate. Similarly, Internet users can and do use Bing, DuckDuckGo, Yahoo!, and a plethora of more specialized search engines on top of and instead of Google, and increasingly also turn to other ways to find information online (such as searching for a brand or restaurant directly on Instagram or TikTok, or asking ChatGPT a question). In fact, Google itself was once an upstart new entrant that replaced once-household names like Yahoo! and AltaVista.
Fourth, access to data is not exclusive. Data is not like oil. If, for example, Petrobras drills and extracts oil from the ground, that oil is no longer available to other companies. Data is not finite in the same way. Google knowing someone’s birthday doesn’t limit the ability of Facebook to know the same person’s birthday, as well. While databases may be proprietary, the underlying data is not. And what matters more than the data itself is how well it is analyzed (see first point). Because data is not exclusive like oil, any attempt to force the sharing of data in an attempt to help competitors creates a free-riding problem. Why go through the work of collecting valuable data on customers to learn what they want so you can better serve them when regulation mandates that Apple effectively give you the data?
In conclusion, the problem with granting competitors access to data is that data is a consequence of competition, not a prerequisite for it. Thus, rather than enhancing their ability to compete, “gifting” competitors the fruits of others’ successful attempts at competition risks destroying both groups’ incentives to design attractive products to accrue such data in the first place. By reversing the competition-data causality, Bill 2768 ultimately risks inadvertently stifling the same competition that it purportedly seeks to bolster.
Cite cases in which a company in Brazil’s digital market, by virtue of its position as an essential input provider, used third-party data in a way that harmed the third party competitively.
We are not aware of any such cases.
However, the framing of this question should be clear about what is meant by “harming a third party competitively.” The use of third-party data is a key driver of competition. Even if competitors are “harmed” as a result, they are harmed only insofar as they do not match the price or quality offered by the platform.
Competition is, to a large extent, driven by the use of knowledge of rivals’ products — including their price, quality, quantity, and how they are sold and presented to consumers. In fact, the model of perfect competition largely assumes that all the products on the market are homogeneous (even if this is rarely borne out in practice). The use of third-party data to match and beat competitor’s offerings can be seen as a modern expression of this dynamic. Indeed, as we have written before:
We cannot assume that something is bad for competition just because it is bad for certain competitors. A lot of unambiguously procompetitive behavior, like cutting prices, also tends to make life difficult for competitors. The same is true when a digital platform provides a service that is better than alternatives provided by the site’s third-party sellers. […]. There’s no doubt this is unpleasant for merchants that have to compete with these offerings. But it is also no different from having to compete with more efficient rivals who have lower costs or better insight into consumer demand. Copying products and seeking ways to offer them with better features or at a lower price, which critics of self-preferencing highlight as a particular concern, has always been a fundamental part of market competition—indeed, it is the primary way competition occurs in most markets.[26]
Any per se prohibition of the use of third-party data would preclude digital platforms from using data to improve their product offering in ways that could benefit consumers.
Recommendation: Assuming that competition law and IP law are not up to the task of curbing abuses of third-party data, Bill 2768 should ensure that any such prohibitions are tailored to cover only conduct that has no rational explanation other than seeking to exclude a competitor. They should not capture uses of third-party data that drive competition and benefit consumers, even if this results in the exit of a competitor from the market.
Describe cases in which a difficulty in interoperability with a company’s systems makes it very difficult or impossible to enter one or more digital markets. How could Bill 2768/2022 mitigate this problem, reducing the barrier to entry represented by lack of interoperability?
When considering potential interoperability mandates, the government should be aware of the risks and trade-offs that come with such measures, especially in terms of safety, security, and privacy (see Question 8 for a more detailed discussion).
The European Digital Markets Act (DMA) implements absolute (per se) prohibitions on certain conduct in digital markets, such as self-preferencing, among others. Bill 2768/2022, on the other hand, chose not to prohibit any conduct ex ante. Should one or more types of conduct be subject to absolute (per se) prohibitions in Bill 2768/2022? Why? Please propose wording, explaining where in the bill it would be located.
No, there should not be absolute prohibitions on these sorts of conduct, especially without substantive experience suggesting that such conduct is always or almost always harmful and largely irredeemable. (In this item, we answer the question in general terms; please see Question 8 for a discussion of why particular conduct, such as self-preferencing, should not be prohibited.)
Regardless of the harm to the business of the targeted companies, overly broad prohibitions (or mandates) can harm consumers by chilling procompetitive conduct and discouraging innovation and investment, especially when no showing of harm is required and the law is not amenable to efficiencies arguments (like in the case of the DMA). The fact that such prohibitions apply to vastly different markets (for example, cloud services have little to do with search engines) regardless of context is also a sure sign that they are overly broad and poorly designed.
In fact, there are indications that where the DMA has been introduced, it has delayed the advance of technology. For example, Google’s “Bard” AI was rolled out later in Europe due to the EU’s uncertain and strict AI and privacy regulations.[27] Similarly, Meta’s “Threads” is not available in the EU precisely due to the constraints imposed by the DMA and the EU’s data-privacy regulation (GDPR).[28] Elon Musk, the owner of X (formerly Twitter), has indicated that the cost of complying with EU digital regulations, such as the DSA, could prompt the company to exit the European market.[29] Recently, Microsoft delayed the European rollout of its new AI assistant, “Copilot,” because of the DMA.[30]
Apart from capturing pro-competitive conduct that benefits consumers and freezing technology in time (which would ultimately exacerbate the technological chasm between more and less advanced countries), rigid per se rules could also capture many budding companies that cannot be considered “gatekeepers” by any stretch of the imagination. This risk is especially real in the case of Brazil given the extremely low threshold for what constitutes a “gatekeeper” enshrined in Article 9 (R$70 million, or approximately US$14 million). Thus, many Brazilian unicorns could, either immediately or in the near future, be captured by the new, restrictive rules, which could stunt their growth and chill innovative products. Ultimately, this could imperil Brazil’s current status as “[Latin America’s] most established startup hub” and cast a shadow on what The Economist has referred to as the bright future of Latin American startups.[31]
The list of harmed companies could include some of Brazil’s most promising unicorns, such as:
Are there behaviors in digital markets that have a high potential to cause competitive problems, but which can be justified as generating greater efficiency for companies, transactions, and markets? Can you give examples of such behaviors? How should these behaviors be treated in Bill 2768/2022? In particular, would a “reversal of the burden of proof” be appropriate, under which such conduct would be presumed anticompetitive, but digital platforms would be authorized to mount a defense based on these efficiencies? Should these behaviors be treated not as prohibited per se, but under a “reversal of the burden of proof” in Bill 2768/2022?
There are certain types of behavior in digital markets that have been targeted by ex-ante regulations but which are nevertheless capable of, or even central to, delivering significant procompetitive benefits. It would be unjustified and harmful to subject such conduct to per se prohibitions or to reverse the burden of proof. Instead, this type of conduct should be approached neutrally, and examined on a case-by-case basis.[33]
Self-preferencing occurs when a company gives preferential treatment to one of its own products (presumably, this type of behavior could be caught by Art. 10, paragraph II of Bill 2768). An example would be Google displaying its shopping service at the top of search results ahead of alternative shopping services. Critics of this practice argue that it puts dominant firms in competition with other firms that depend on their services, and this allows companies to leverage their power in one market to gain a foothold in an adjacent market, thus expanding and consolidating their dominance. However, this behavior can also be procompetitive and beneficial to users.
Over the past several years, a growing number of critics have argued that big tech platforms harm competition by favoring their own content over that of their complementors. Over time, this argument against self-preferencing has become one of the most prominent among those seeking to impose novel regulatory restrictions on these platforms.
According to this line of argument, complementors would be “at the mercy” of tech platforms. By discriminating in favor of their own content and against independent “edge providers,” tech platforms cause “the rewards for edge innovation [to be] dampened by runaway appropriation,” leading to “dismal” prospects “for independents in the internet economy—and edge innovation generally.”[34]
The problem, however, is that the claims of presumptive harm from self-preferencing (also known as “vertical discrimination”) are based neither on sound economics nor evidence.
The notion that platform entry into competition with edge providers is harmful to innovation is entirely speculative. Moreover, it is flatly contrary to a range of studies showing that the opposite is likely true. In reality, platform competition is more complicated than simple theories of vertical discrimination would have it,[35] and the literature establishes that there is certainly no basis for a presumption of harm.[36]
The notion that platforms should be forced to allow complementors to compete on their own terms, free of constraints or competition from platforms, is a species of the idea that platforms are most socially valuable when they are most “open.” But mandating openness is not without costs, most importantly in terms of the effective operation of the platform and its own incentives for innovation.
“Open” and “closed” platforms are different ways of supplying similar services, and there is scope for competition between these alternative approaches. By prohibiting self-preferencing, a regulator might therefore close down competition to the detriment of consumers. As we have noted elsewhere:
For Apple (and its users), the touchstone of a good platform is not ‘openness,’ but carefully curated selection and security, understood broadly as encompassing the removal of objectionable content, protection of privacy, and protection from ‘social engineering’ and the like. By contrast, Android’s bet is on the open platform model, which sacrifices some degree of security for the greater variety and customization associated with more open distribution. These are legitimate differences in product design and business philosophy.[37]
Moreover, it is important to note that the appropriation of edge innovation and its incorporation into the platform (a commonly decried form of platform self-preferencing) greatly enhances the innovation’s value by sharing it more broadly, ensuring its coherence with the platform, incentivizing optimal marketing and promotion, and the like. Smartphones are now a collection of many features that used to be offered separately, such as phones, calculators, cameras and gaming consoles, and it is clear that the incorporation of these features in a single device has brought immense benefits to consumers and society as a whole. In other words, even if there is a cost in terms of reduced edge innovation, the immediate consumer welfare gains from platform appropriation may well outweigh those (speculative) losses.
Crucially, platforms have an incentive to optimize openness (and to assure complementors of sufficient returns on their platform-specific investments). This does not mean that maximum openness is optimal, however; in fact, typically a well-managed platform will exert top-down control where doing so is most important, and openness where control is least meaningful.[38]
But this means that it is impossible to know whether any particular platform constraint (including self-prioritization) on edge provider conduct is deleterious, and similarly whether any move from more to less openness (or the reverse) is harmful.
This is the situation that leads to the indeterminate and complex structure of platform enterprises. Consider the big online platforms like Google and Facebook, for example. These entities elicit participation from users and complementors by making access to their platforms freely available for a wide range of uses, exerting control over access only in limited ways to ensure high quality and performance. At the same time, however, these platform operators also offer proprietary services in competition with complementors or offer portions of the platform for sale or use only under more restrictive terms that facilitate a financial return to the platform.
The key is understanding that, while constraints on complementors’ access and use may look restrictive compared to an imaginary world without any restrictions, in such a world the platform would not be built in the first place. Moreover, compared to the other extreme — full appropriation (under which circumstances the platform also would not be built…) — such constraints are relatively minor and represent far less than full appropriation of value or restriction on access. As Jonathan Barnett aptly sums it up:
The [platform] therefore faces a basic trade-off. On the one hand, it must forfeit control over a portion of the platform in order to elicit user adoption. On the other hand, it must exert control over some other portion of the platform, or some set of complementary goods or services, in order to accrue revenues to cover development and maintenance costs (and, in the case of a for-profit entity, in order to capture any remaining profits).[39]
For instance, companies may choose to favor their own products or services because they are better able to guarantee their quality or quick delivery.[40] Mercado Livre, for instance, may be better placed to ensure that products fulfilled by its ‘Mercado Envios’ logistics service are delivered in a timely manner compared to other services. Consumers may benefit from self-preferencing in other ways, too. If, for instance, Google were prevented from prioritizing Google Maps or YouTube videos in its search queries, it could be harder for users to find optimal and relevant results. If Amazon is prohibited from preferencing its own line of products on the marketplace, it may instead opt not to sell competitors’ products at all.
The power to prohibit requiring or incentivizing customers of one product to use another would enable regulators to limit or prevent self-preferencing and similar behavior. Granted, traditional competition law has sought to restrict the ‘bundling’ of products (i.e., requiring them to be purchased together), but prohibiting mere incentivization goes much further.
Another mot du jour is interoperability, which might fall under Art. 10, paragraph IV of Bill 2768. In the context of digital ex ante regulation, ‘interoperability’ means that covered companies could be forced to ensure that their products integrate with those of other firms. For example, a social network could be required to be open to integration with other services and apps, a mobile operating system to be open to third-party app stores, or a messaging service to be compatible with other messaging services. Without regulation, firms may or may not choose to make their software interoperable. However, Europe’s DMA and the UK’s prospective Digital Markets, Competition and Consumers Bill (“DMCC”)[41] will allow authorities to require it. Another example is data ‘portability,’ which allows customers to move their data from one supplier to another, in the same way that a telephone number can be kept when one changes network.
The usual argument is that the power to require interoperability might be necessary to ‘overcome network effects and barriers to entry/expansion.’ However, the Brazilian government should not overlook that this solution comes with costs to consumer choice, in particular by raising difficulties with security and privacy, as well as having questionable benefits for competition. In fact, it is not as though competition disappears when customers cannot switch as easily as they turn on a light. Companies compete upfront to attract such consumers through tactics like penetration pricing, introductory offers, and price wars.[42]
A closed system, that is, one with comparatively limited interoperability, can help limit security and privacy risks. This can encourage use of the platform and enhance the user experience. For example, by remaining relatively closed and curated, Apple’s App Store gives users the assurance that apps will meet a certain standard of security and trustworthiness. Thus, ‘open’ and ‘closed’ ecosystems are not synonymous with ‘good’ and ‘bad,’ and instead represent two different product design philosophies, either of which might be preferred by consumers. By forcing companies to operate ‘open’ platforms, interoperability obligations could thus undermine this kind of inter-brand competition and override consumer choices.
Apart from potentially damaging user experience, it is also doubtful whether some of the interoperability mandates, such as those between social media or messaging services, can achieve their stated objective of lowering barriers to entry and promoting greater competition. Consumers are not necessarily more likely to switch platforms simply because they are interoperable. In fact, there is an argument to be made that making messaging apps interoperable actually reduces the incentive to download competing apps, as users can already interact with competitors’ apps from the incumbent messaging app.
Some ex-ante rules seek to address firms’ ability to influence user choice of apps through pre-installation, defaults, and the design of app stores (this could fall under Art. 10, paragraph II of Bill 2768). This has sometimes resulted in the imposition of requirements to provide users with ‘choice screens,’ for instance requiring users to choose which search engine or mapping service is installed on their phone. In this sense, it is important to understand the trade-offs at play here: choice screens may facilitate competition, but they may do so at the expense of the user experience, in terms of the time taken to make such choices. There is a risk, without evidence of consumer demand for ‘choice screens,’ that such rules impose the legislator’s preference for greater optionality over what is most convenient for users. Unless there is explicit public demand in Brazil for such measures, it would be ill-advised to implement a choice screen obligation.
In general, many of the prohibitions and obligations contemplated in ex-ante rules target incumbents’ size, scalability, and “strategic significance.”
It is widely claimed that, because of network effects, digital markets are prone to ‘tipping,’ whereby, once one producer gains a sufficient share of the market, it quickly becomes a complete or near-complete monopolist. Although they may begin as very competitive, these markets therefore exhibit a marked ‘winner takes all’ characteristic. Ex ante rules often try to avert or revert this outcome by targeting a company’s size, or by targeting companies with market power.
However, there are many investments and innovations that will – if permitted – benefit consumers, either immediately or in the longer term, but which may have some effect on enhancing market power, a company’s size, or its strategic significance. Indeed, improving a firm’s products and thereby increasing its sales will often lead to increased market power.
Accordingly, targeting “size” or conduct which bolsters market power, without any accompanying evidence of harm, creates a serious danger of a very broad inhibition of research, innovation, and investment – all to the detriment of consumers. Insofar as such rules prevent the growth and development of incumbent firms, they may also harm competition, since it may well be these firms that – if permitted – are most likely to challenge the market power of other firms in other, adjacent markets. The launch of video-on-demand services by Disney, Apple, Amazon, and Globo to compete with Netflix, and Meta’s introduction of ‘Threads’ as a challenge to Twitter (or ‘X’), appear to be examples. Here, per se rules that have the aim of prohibiting the bolstering of size or market power in one area may in fact prevent entry by one firm into a market dominated by another. In that case, policymaker action protects monopoly power. Therefore, a much subtler approach to regulation is required.
Bill 2768’s reference to Tim Wu’s The Curse of Bigness, which notoriously adopts a reductive “big is bad” ethos, suggests that it could be making a similarly flawed assumption.[43]
We do not think it is appropriate to reverse the burden of proof in any instances in the context of digital platforms. Without substantive evidence that such conduct causes widespread harm to a well-defined public interest (e.g., similar to cartels in the context of antitrust law), there is no justification for a reversal of the burden of proof; any such reversal risks undermining consumer benefits and innovation, and discouraging investment in the Brazilian economy out of a justified fear that procompetitive conduct will result in fines and remedies. By the same token, we do think that where the appointed enforcer makes a prima facie case of harm, whether in the context of antitrust law or ex-ante digital regulation, it should also be prepared to address arguments related to efficiencies.
Is there a need for a regulator? If so, which regulator would be better able to implement the regulation provided for in Bill 2768/2022? Anatel, CADE, ANPD, another existing or new regulator? Justify.
Despite the lack of clarity concerning the law’s goals and objectives, the rules proposed by Bill 2768 appear to be competition based, at least insofar as they seek to bolster free competition, consumer protection, and tackle “abuse of economic power” (Art. 4). Therefore, the agency best positioned to enforce it would, in principle, be CADE (the goals of Act 12.529/11, the Brazilian Competition Law, overlap significantly with those under Bill 2768). Conversely, there is a palpable risk that, in discharging its duties under Bill 2768, Anatel would transpose the logic and principles of telecommunications regulation to “digital” markets, which is misguided as these are two very different things.
Not only are “digital” markets substantively different from telecommunications markets, but there is really no such thing as a clearly demarcated concept of “digital market.” For example, the digital platforms described in Art. 6, paragraph II of Bill 2768 are not homogenous, and cover a range of different business models. In addition, virtually every market today incorporates “digital” elements, such as data. Indeed, companies operating in sectors as divergent as retail, insurance, healthcare, pharma, production, and distribution have all been “digitalized.” Thus, an enforcer with a nuanced understanding of the dynamics of digitalization and, especially, the idiosyncrasies of digital platforms as two-sided markets, appears necessary. While CADE arguably lacks substantive experience with digital platforms, it is better placed to enforce Bill 2768 than Anatel because of its deep experience with the enforcement of competition policy.
Do you think that there could be any risk of bis in idem between the regulator and the competition authority with the same conduct being analyzed by both?
Based on the EU experience, there is a risk of double jeopardy at the intersection of traditional competition law and ex-ante digital regulation.
By way of comparison, and as Giuseppe Colangelo has written, the DMA is grounded explicitly on the notion that competition law alone is insufficient to effectively address the challenges and systemic problems posed by the digital platform economy.[44] Indeed, the scope of antitrust is limited to certain instances of market power (e.g., dominance on specific markets) and of anti-competitive behavior. Further, its enforcement occurs ex post and requires extensive investigation on a case-by-case basis of what are often very complex sets of facts and may not effectively address the challenges to well-functioning markets posed by the conduct of gatekeepers, who are not necessarily dominant in competition-law terms — or so its proponents argue. As a result, regimes like the DMA invoke regulatory intervention to complement traditional antitrust rules by introducing a set of ex ante obligations for online platforms designated as gatekeepers. This also allows enforcers to dispense with the laborious process of defining relevant markets, proving dominance, and measuring market effects.
However, despite claims that the DMA is not an instrument of competition law, and thus would not affect how antitrust rules apply in digital markets, the regime does appear to blur the line between regulation and antitrust by mixing their respective features and goals. Indeed, the DMA shares the same aims and protects the same legal interests as competition law.
Further, its list of prohibitions is effectively a synopsis of past and ongoing antitrust cases, such as Google Shopping (Case T-612/17), Apple (AT.40437) and Amazon (Cases AT.40462 and AT.40703).[45] Acknowledging the continuum between competition law and the DMA, the European Competition Network (ECN) and some EU member states (self-anointed “friends of an effective DMA”) initially proposed empowering national competition authorities (NCAs) to enforce DMA obligations.[46]
Similarly, the prohibitions and obligations contemplated in Art. 10 of Bill 2768 could, in theory, all be imposed by CADE. In fact, CADE has investigated, and is still investigating, several large companies which would (likely) fall within the purview of Bill 2768, such as Google, Apple, and Meta (still under investigation); Booking.com, Decolar.com, Expedia, and iFood (settled through cease-and-desist agreements); and Uber (all investigations closed without penalties; following an economic study, CADE found that Uber’s entry benefitted consumers[47]). CADE’s past and current investigations against these companies already covered conduct that is targeted by the DMA and Bill 2768, such as refusal to deal, self-preferencing, and discrimination.[48] Existing competition law under Act 12.529/11, the Brazilian Competition Law, thus clearly already captures the sort of conduct which is included under Bill 2768. In addition, the requirement to use data “adequately” is likely covered by data protection regulation in Brazil (Lei Geral de Proteção de Dados, LGPD, Lei Federal Nº 13.709/2018).
The difference between the two regimes is that, while general antitrust law requires a showing of harm (even if potential) and exempts conduct with net benefits to consumers, Bill 2768 in principle does not. The only limiting principle to the prohibitions and obligations contained in Art. 10 is, under Art. 11 (III), the principle of proportionality — which is a general principle of constitutional law and should, in any case, apply regardless of Bill 2768. Thus, the only limiting principle of Art. 10, framed broadly, is redundant.
There is one additional complication. Bill 2768 pursues many (though not all) of the same objectives as Act 12.529/11. Insofar as these objectives are shared, it could lead to double jeopardy, i.e., the same conduct being punished twice under slightly different regimes. But it could also produce contradictory results because, as pointed out above, the objectives pursued by the two bills are not identical. Act 12.529/11 is guided by the goals of “free competition, freedom of initiative, social role of property, consumer protection and prevention of the abuse of economic power” (Art. 1). To these objectives, Bill 2768 adds “reduction of regional and social inequalities,” and “increase of social participation in matters of public interest.” While it is true that these principles derive from Art. 170 of the Brazilian Constitution (“economic order”), the mismatch between the goals of Act 12.529/11 and Bill 2768 and their enforcing authorities is sufficient to lead to situations in which conduct that is allowed or even encouraged under Act 12.529/11 is prohibited under Bill 2768. For instance, procompetitive conduct by a covered platform could nevertheless exacerbate “regional or social inequalities” because it invests heavily in one region, but not others. In a similar vein, safety, privacy, and security measures implemented by, say, an operator of an App Store, which would typically be considered beneficial for consumers under antitrust law,[49] could feasibly lead to less participation in discussions of public interest (assuming one could easily define the meaning of such a term).
Accordingly, Bill 2768 could fragment Brazil’s legal framework due to overlaps with competition law, stifle procompetitive conduct, and lead to contradictory results. This, in turn, is likely to impact legal certainty and the rule of law in Brazil, which could adversely affect Foreign Direct Investment.[50] Furthermore, coordination between CADE and Anatel is likely to be costly, if the latter ends up being the designated enforcer of Bill 2768. Brazil would essentially have two Acts pursuing the same or similar goals being implemented by two different agencies, with all the extra compliance and coordination costs that come with such duplication.
What is your assessment of the criteria of Art. 9 of Bill 2768/2022? Should they be changed? By what criteria? Is it necessary to designate the holder of the power to control essential access on a service-by-service basis?
The turnover criterion of Art. 9 seems arbitrary and, in any case, extremely low. There is no objective reason that would link “power to control access” with turnover. Furthermore, even if one admits, for the sake of argument, that turnover is a relevant indication of gatekeeper power, a R$70 million threshold would capture dozens, if not hundreds, of companies active in a range of industries. This can lead to a situation in which a law that was initially — and purportedly — aimed at very specific “digital” firms, like Google, Amazon, Apple, Microsoft, etc., ends up, by and large, covering a host of other, comparatively small firms, including some of Brazil’s most valuable unicorns (see Question 7). On the other hand, it is also questionable from a rule of law perspective whether a law should seek to identify the specific companies it will apply to in advance.
Lessons can be drawn from the UK’s DMCC, which has made a similar mistake. Pursuant to the current proposal for the DMCC, the UK’s CMA will be able to designate a company as having “strategic market status” (“SMS”) where it takes part in a ‘digital activity linked to the United Kingdom’, and, in relation to this digital activity, has ‘substantial and entrenched market power’ and is in ‘a position of strategic significance’ (s. 2), and has a turnover of at least £1 billion in the UK or £25 billion globally (s. 7).[51] The British government has previously stated that the ‘regime will be targeted at a small number of firms’.
However, except for the monetary threshold, the SMS criteria are all broadly defined, and could in theory capture as many as 530 companies (as of March 2022, there were 530 companies with more than £1 billion in revenue in the United Kingdom, according to the Office for National Statistics).[52] Thus, although the government claims that the new regime is aimed at a handful of companies, in practice the CMA will have the power to interfere in a variety of new ways across wide swaths of the economy.
Article 9 of Bill 2768 runs into a similar problem. Granted, it identifies the types of services to which the Bill would apply in a way that the DMCC does not. However, some of the categories envisaged are still very broad: for example, online intermediation services could cover any website that connects buyers and sellers or facilitates transactions between two parties. “Operating systems” are prevalent in electronic devices well beyond Apple’s iOS and Google’s Android. Indeed, an operating system is simply a program or set of programs that manages a device’s physical resources (hardware), the execution of other software, and the user interface. Operating systems can be found in many everyday devices, whether exposed through graphical user interfaces, desktop environments, window managers, or command lines, depending on the nature of the device.
Companies delivering these services, no matter their competitive position, market share, the industry they are a part of, or any other economic or factual considerations, would all be caught by Bill 2768, as long as they fulfilled the (low) R$70 million threshold. The upshot is that the enforcer will be able to apply Bill 2768 against a host of wildly different companies, some of which might not really be in a position to harm competition or misuse their market power. As a consequence, the Bill risks discouraging growth, innovation and, indeed, success, as companies become wary of growing past a certain threshold for fear of being caught in the regulator’s crosshairs. Coupled with a reversal of the burden of proof and the possibility of ignoring efficiencies arguments, the Bill would give the enforcer massive, unchecked powers, which could raise rule of law issues.
This problem can be remedied, at least to some extent, by adding a series of qualitative criteria that may or may not work cumulatively with the quantitative thresholds laid down in the Bill. These criteria should require a showing that the companies in question control access to essential facilities, that such facilities cannot be reasonably replicated, and that access is being denied with the threat that competition on the market may be eliminated (refer to Question 1 for discussion on integrating the essential facilities doctrine into Bill 2768). In addition, Bill 2768 should leverage existing measurements of market power from competition law, such as the ability to control output and increase prices. Quantitative criteria, if used, should be significantly higher and also refer to the number of active users on each platform service covered. “Active user” should in this sense be defined as a user who uses a specific service at least once daily and, at a minimum, once weekly.
What did you think of the rules on the Digital Platforms Supervisory Fund in art. 15 of Bill 2768/2022? Is there another way to finance this type of government regulatory activity?
There are many ways of financing governmental regulatory activity that do not require the targeted companies to pay an annual tax. Government agencies are typically financed from the general government budget — and it should be the same for the agency enforcing Bill 2768.
There are at least two issues with the current approach under Art. 15. The first is capture. If an agency’s activity is funded by the regulated companies, this can lead to the capture of the agency by the regulated company and facilitate rent-seeking — i.e., the situation in which a company uses the regulator to gain an unfair advantage over rivals. Second, it also creates an incentive on the part of the agency, and the government, to widen the scope of the targeted companies, as a way to secure more funding and resources. This creates a perverse incentive that does not align with the public interest. It also discourages investment and, in a sense, is tantamount to a racket by the government.
Moreover, to the extent that the Bill operates as a direct and targeted constraint on certain companies’ exercise of their economic liberty and private property rights for the presumed benefit of the public welfare, it seems appropriate that it should be funded by general-revenue funds, apportioned according to current tax policy over the entire tax-paying population.
To what extent do you believe that all the problems addressed in Bill 2768/2022 are already adequately addressed by competition law, more specifically by CADE, with the instruments of Law No. 12,529 of 2011?
Please see the response to Question 10.
The fact that the government is asking this question at this stage in the process suggests that perhaps the scope and the particulars of Bill 2768 have not been thoroughly thought out. Bill 2768 should be passed only if it is clear that Brazilian competition law is not up to the task. By comparison, and as indicated in the answer to Question 10 above, virtually all of the conduct in the EU’s DMA has also been addressed through EU competition law — often in the Commission’s favor. However, the EU wanted to codify a set of rules that would ensure that the Commission did not have to litigate cases before the courts and would win every case — or at least the vast majority of cases — against digital platforms. But this decision, which one may or may not agree with, came after at least some experience applying competition law to digital platforms and a determination that the gains of such an approach would outweigh the manifest costs.
Conversely, Brazil’s CADE enjoys much more limited experience in this sense, and Brazil itself presents very different economic realities and consumer interests that may not yield the same cost/benefit analysis. As mentioned above, the only “penalties” CADE has imposed against “digital platforms” resulted from voluntary settlements, meaning there has been limited need to litigate “digital” cases in Brazil. There is a lingering sense that Bill 2768 has been proposed not in response to deficiencies in the existing competition law framework, or in response to identified needs particular to Brazil, but as a response to “global trends” initiated by the EU.
Art. 13 of Bill 2768, for example, provides that mergers by covered companies will be scrutinized pursuant to the general competition-law rules applicable to other companies and other sectors. It is unclear why the same logic could not apply across the board — i.e., to all potentially anticompetitive conduct by targeted companies. Why should some conduct that can be addressed through antitrust law necessitate special regulation, while other conduct does not?
What problems could be generated for the innovation activity of digital platforms if there is the regulation of digital platforms proposed by Bill 2768/2022? Could this be dealt with in any way within Bill 2768/2022?
Indeed, it is by no means clear that Brazil’s particular circumstances are amenable to an “ex ante” approach similar to that of the EU.
Broad prohibitions and obligations such as those imposed by Art. 10 of Bill 2768 risk chilling innovative conduct and freezing technology in place. As the tenth-ranked country in the global information-technology market, with hundreds of startups in the AI sector, Brazil is a burgeoning market with tremendous potential.[53] Its population of 214 million means that growth trends are poised to continue — and, sure enough, the number of app jobs grew by 54% in 2023 compared to 2019.[54]
However, static, strict rules such as those envisioned by Bill 2768 can nip the growth of Brazilian startups in the bud by imposing insurmountable regulatory costs (which would, in any case, benefit incumbents over smaller competitors) and by banning conduct capable of fostering growth, benefiting consumers, and igniting competition, such as self-preferencing and refusal to deal.
Indeed, both practices can be — and often are — socially beneficial. As discussed in Question 8, despite being maligned by some policymakers of late, “self-preferencing” is normal business conduct and a key feature of efficient vertical integration, which avoids double marginalization and allows companies to coordinate production, distribution, and sale more efficiently — all to the ultimate benefit of consumers. For example, when a retailer such as Amazon self-preferences its own delivery services, as with “Fulfilled by Amazon,” it gives consumers something they value tremendously: a guarantee of quick delivery. As we have written elsewhere:
Amazon’s granting marketplace privileges to [Fulfilled by Amazon] products may help users to select the products that Amazon can guarantee will best satisfy their needs. This is perfectly plausible, as customers have repeatedly shown that they often prefer less open, less neutral options.[55]
In a recent report, the Australian Competition and Consumer Commission recognized as much, stating that self-preferencing is often benign and can lead to procompetitive benefits.[56] Indeed, there are many legitimate reasons why companies may choose to self-preference, including a better customer experience, better customer service, more relevant choice (curation), and lower prices.[57] Thus, banning self-preferencing, or otherwise significantly discouraging companies from engaging in it, could hamstring company growth — including by Brazilian companies currently in an early stage of development — and impede market entry by companies that could have been innovators.
Similarly, forcing companies to deal with third parties could stifle innovation by incentivizing free riding and discouraging investment. Why would a company innovate or invest if it knows it will then have to share those investments and innovations with passive rivals who have undertaken none of the risks? The consequence is a stalemate in which, rather than fighting to be the first to innovate and to enjoy the resulting rewards, companies are instead encouraged to game the system by waiting for others to take the first step and then free riding on their achievements. This upends the process of dynamic competition by artificially rearranging the incentive to innovate and invest versus the incentive to free ride, reducing the benefits of the former and increasing the benefits of the latter.
It would be catastrophic to hobble Brazil’s ability to grow its technology sector and innovate — especially considering the country’s vast potential. Rather than presiding over a triumph of regulation over innovation, Brazil should strive for precisely the opposite.[58]
What would be the practical difficulties of applying this type of legislation contemplated by Bill 2768/2022?
Funds to finance what could be a considerable amount of enforcement are necessary, but not sufficient, to ensure effectiveness. In the EU, the Commission’s DG Competition, one of the world’s foremost and best-endowed competition authorities, has famously struggled to hire the staff necessary to implement the Digital Markets Act. In short, “DMA experts” currently do not exist — and the Commission will either have to train such experts itself or hire them when expertise develops through enforcement. But this creates a chicken-and-egg scenario, where enforcement — or at least good enforcement — cannot happen without good experts, and good experts cannot materialize without enforcement. There is no reason to believe that these considerations do not map onto the Brazilian context.
Brazil faces an additional challenge, however: attracting talent. Unlike in the EU, where posts at the Commission are highly coveted due to the high salaries, perks, and job security they confer, CADE’s resources are more modest and likely cannot compete fully with the private sector. Thus, before passing Bill 2768, the government should be clear on how the law would be enforced, and by whom.
Other issues include the Bill’s heavy compliance burden, which will affect not only the so-called “tech giants” but any company above the modest R$70 million turnover threshold; the difficulty of interpreting the ambiguous prohibitions and obligations contemplated in Art. 10 (and the litigation that may ensue, on which see Question 16); the cost of crafting adequate remedies within the meaning of Art. 10; and the looming possibility that the Bill will capture procompetitive conduct and stifle innovation. As we have written with respect to ASEAN countries and the possibility of implementing EU-style competition regulation there:
The ASEAN nations exhibit extremely diverse policies regarding the role of government in the economy. Put simply, some of the ASEAN nations seem ill-suited to the far-reaching technocracy that almost inevitably flows from adopting the European model of competition enforcement. Others might simply not have sufficient resources to staff agencies that could, satisfactorily, undertake the type of far-reaching investigations that the European Commission is famous for.[59]
Do you see a lot of room for the judicialization of this type of regulation provided for in Bill 2768/2022? Under which provisions?
The enforcement of Bill 2768 is likely to lead to substantial litigation, not least because many of the core concepts of the Bill are ambiguous and open to interpretation.
For instance, what does “discriminatory” conduct within the meaning of Art. 10, para. II entail? Can a covered platform treat business users differently based on objective criteria, such as quality, history, and trustworthiness, or must all business users be treated equally? In this sense, it is uncertain whether the specific meaning ascribed to “discriminatory conduct” under competition law applies in this context. Similarly, what does “adequate” use of data collected in the exercise of a firm’s activities mean (paragraph III)? Does paragraph IV of Art. 10 imply that a covered platform can never deny access to business users? Presumably, covered platforms will want to know how and why this general obligation deviates from the narrower essential facilities doctrine under Brazilian competition law.
Art. 11 adds certain caveats, such as that intervention should be tailored and proportionate, and should consider the relevant impact, costs, and benefits. Again, what sort of impact, costs, and benefits are relevant — those on consumers, business users, the covered platform, or society as a whole?
If these ambiguities are anything to go by, Bill 2768 is likely to prove legally contentious.
Are the definitions in article 6 of Bill 2768/2022 adequate for the purpose of this proposal?
Art. 6 — and, indeed, the entire impetus behind Bill 2768 — rests on two questionable assumptions: first, that digital products and services are so unique that they require sui generis rules and remedies; and second, that the covered products, services, and companies are sufficiently homogeneous to be regulated under a single umbrella.
The former assumption would be more convincing if remedies like those contemplated by the Bill — non-discrimination, adequate use of data, and access — had not already been used in other markets and for other products. Obligations to grant access on “fair, reasonable, and non-discriminatory” (“FRAND”) terms are common in competition law and IP law, both of which apply across industries. The duty to use data “adequately” is generally contemplated by data-protection laws, which also apply broadly. The same can be said for access obligations, which are frequent under competition law and in regulated industries (such as telecommunications or railways).
In addition, neither the products and services listed in Art. 6 of the Bill, nor the companies that operate them, nor the business models they employ are monolithic. Voice assistants and social media, for instance, are vastly different products. The same can be said of cloud computing, which is not really a “platform” in the sense that, say, online intermediation is. The categories in Art. 6 are themselves highly heterogeneous, with a single category encompassing a motley list of products, from e-commerce to online maps and app stores.
The same argument applies to the companies that sell these products and services, which — despite the ubiquitous “Big Tech” moniker — are ultimately very different firms.[60] As Apple CEO Tim Cook has said: “Tech is not monolithic. That would be like saying ‘All restaurants are the same’ or ‘All TV networks are the same.’”[61]
For instance, while Google (Alphabet) and Facebook (Meta) are information-technology firms that specialize in online advertising, Apple remains primarily an electronics company, with around 75% of its revenue coming from the sale of iMacs, iPhones, iPads, and accessories. As Amanda Lotz of the University of Michigan has observed:
The profits on those [hardware] sales let Apple use very different strategies than the non-hardware [“Big Tech”] companies with which it is often compared.[62]
This also means that most of Apple’s other businesses — such as iMessage, iTunes, and Apple Pay — are complements that “Apple uses strategically to support its primary focus as a hardware company.” Amazon, on the other hand, is primarily a retailer, with its Amazon Web Services and advertising divisions accounting for just 15% and 7% of the company’s revenue, respectively.[63]
Even when two “gatekeepers” are active in the same product or service market, they often have markedly different business models and practices. Thus, despite both offering mobile operating systems, Google (with Android) and Apple employ very different product-design philosophies. As we argued in an amicus curiae brief submitted last month to the U.S. Supreme Court in Apple v. Epic Games:
For Apple and its users, the touchstone of a good platform is not “openness,” but carefully curated selection and security, understood broadly as encompassing the removal of objectionable content, protection of privacy, and protection from “social engineering,” and the like.… By contrast, Android’s bet is on the open platform model, which sacrifices some degree of security for the greater variety and customization associated with more open distribution. These are legitimate differences in product design and business philosophy.[64]
These various companies and markets have diverse incentives, strategies, and product designs, belying the idea that there is any economically or technically coherent notion of what constitutes “gatekeeping.” In other words, both the products and services that would be subject to Art. 6 of Bill 2768 and the companies that provide them are highly heterogeneous, and it is unclear why they are placed under the same umbrella.
Instead of pure ex-ante regulation, would any other type of monitoring and/or regulation of digital markets make sense?
A special unit within CADE, operating within the limits of current antitrust law, should be seriously considered before Brazil rushes to adopt far-reaching ex-ante regulation of digital markets. Most of the conduct covered by ex-ante regulation in the EU, for example, was spun off from competition-law cases. This suggests that such conduct falls within the limits of traditional competition law and can be properly addressed through it.
Accordingly, a digital unit within CADE would leverage the expertise of staff with a background in applying antitrust law to “digital markets.” If such a unit cannot be formed within CADE — which boasts the staff whose expertise most closely resembles what would be required to enforce Bill 2768 — it likely cannot be formed anywhere else, at least not without siphoning talent away from CADE. That would be a mistake, as CADE has a critical role in suppressing behavior that unambiguously harms the public interest, such as cartels (arguably, this is where Brazil should be focusing its resources).[65] Creating a new unit to prosecute novel conduct with uncertain effects on social welfare, at the expense of suppressing conduct that is manifestly harmful, does not pass a cost-benefit analysis and would ultimately damage Brazil’s economy.
Do you think that the set of solutions described in art. 10 of Bill 2768/2022 are adequate?
It is difficult to answer this question without a clear notion of what Bill 2768 aims to achieve. Adequate for what?
Are the set of sanctions provided for in art. 16 of Bill 2768/2022 adequate?
This is also difficult to answer. If the objective is to thwart all proscribed conduct, no matter the consequences for innovation, investment, and consumer satisfaction, then a high fine is called for — and many companies will stop doing business as a result (which will very effectively stop all undesirable behavior, but also all desirable behavior). If raising revenue is the objective, then the expected sanction — the level of the fine multiplied by the likelihood of enforcement — needs to be low enough to operate not as a bar to behavior but as a fee for doing business. We do not know whether the level of sanctions in Art. 16 is appropriate for this — nor, we hasten to add, should this ever be the intention of such a law!
On the other hand, if optimal deterrence is the objective, imposing sanctions considerably lower than those in the EU (as a sanction of 2% of the infringing companies’ Brazilian turnover would be) appears reasonable. Fines for antitrust infringements in the EU can be up to 10% of the company’s worldwide turnover; and fines for violations of the DMA can even reach 20%.[66] But Brazil should not seek to deter investment and innovation to the extent the EU has.
It is, of course, difficult to identify a causal link between competition fines and investment/innovation. But what we do know is this: The pace of economic growth in Europe has lagged that of the U.S. by a significant margin:
Fifteen years ago, the size of the European economy was 10% larger than that of the U.S., however, by 2022 it was 23% smaller. The GDP of the European Union (including UK before Brexit) has grown in this period by 21% (measured in dollars), compared to 72% for the US and 290% for China.[67]
Meanwhile, none of the world’s 10 largest technology companies, and only two of the 25 largest, are based in Europe.[68] And the large U.S. and Asian multinationals are spread across the entire technology industry, from electronic components (chips, mobile phones and computers) to app development companies, websites, and e-commerce. There may be many reasons for these discrepancies, but one of them is almost certainly the differences in the economic regulatory environments, including the extent of competition-law overdeterrence.[69]
Article 10 provides for several obligations in a non-exhaustive list on which the regulator could impose other measures. Should an exhaustive list of measures be envisaged?
Exhaustive lists have the advantage of fostering predictability and cabining the enforcer’s discretion, thus limiting rent-seeking and ensuring that enforcement stays tethered to the public interest. This assumes, of course, that the measures envisaged serve the public interest in the first place.
The problem with Bill 2768 as currently framed is that it is too open-ended. It is understandable that the Bill’s drafters did not want to tie the enforcer’s hands and so opted for bespoke interventions rather than blanket prohibitions and obligations. This is to be welcomed. However, it should not come at the expense of legal certainty, and it must not fail to impose limits on the enforcer’s discretion. As drafted, Bill 2768 falls short on both counts.
Article 10 thus provides that platform operators will be subject to, “amongst others, the following obligations…” From this numerus apertus list, it is not clear what the enforcer can and cannot do. But the problem runs deeper than Article 10: nowhere does the Bill explain what the goals of the new rules are. The proposed redrafting of Article 19-A of Law 9.472 of 16 July 1997, in paragraphs III, IV, and V, is vague: it does not impose sufficiently clear limiting principles on the Bill’s reach. It suggests that the goals of Bill 2768 are to prevent conflicts of interest, to prevent infringements of users’ rights, and to prevent economic infringements by digital platforms in areas within CADE’s competence. Article 4 of Bill 2768 adds other goals: freedom of initiative, free competition, consumer protection, the reduction of regional and social inequality, the repression of abuses of economic power, and bolstering social participation. Elsewhere (under “Justifications”), it is implied that the goal is to diminish “gatekeeper power.”
In other words, it is not clear what Bill 2768 doesn’t empower the enforcer to do.
Furthermore, the prohibitions and obligations in paragraphs I-IV of Art. 10 are similarly opaque. For instance, what is “adequate” use of collected data (paragraph III)? Does paragraph IV imply that a targeted platform may never refuse access to its service? Indeed, one thing missing from Bill 2768 is the ability to escape a prohibition or obligation by demonstrating efficiencies or an objective justification (such as safety, security, or privacy).
Clearly, Bill 2768 cannot predict all of the instances in which Art. 10 will be applied. But, in order to strike a balance between the enforcer’s nimbleness and the law’s administrability and predictability, the Bill needs to give a more focused account of its goals and of how the provisions in Art. 10 help to achieve them. In other words: Articles 3, 4, and 10 need to be much clearer. Otherwise, the Bill risks doing more harm than good to targeted companies, business users, competitors, and, ultimately, consumers.

The “Justifications” section of the Bill states that it does not wish to impose a “straitjacket” on targeted companies through strict ex-ante rules. This is reasonable, especially considering the lack of evidence of unambiguous harm. But granting an enforcer like Anatel, which lacks experience in “digital markets,” broadly defined powers to intervene on the basis of equally broad goals amounts to imposing a straitjacket by another name. In a regulatory “panopticon” in which companies are never sure of what is and is not allowed, some might reasonably choose not to take risks, innovate, and bring new products to market, because they do not wish to risk fines (Art. 16) and potential structural remedies, such as breakups (Art. 10, parágrafo único). In other words, they might assume that much more is prohibited than actually is.
[1] PL 2768/2022, Dispõe sobre a organização, o funcionamento e a operação das plataformas digitais que oferecem serviços ao público brasileiro e dá outras providências, available at https://www.camara.leg.br/proposicoesWeb/fichadetramitacao?idProposicao=2337417.
[2] REGULATION (EU) 2022/1925 OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL of 14 September 2022, on contestable and fair markets in the digital sector and amending Directives (EU) 2019/1937 and (EU) 2020/1828 (Digital Markets Act).
[3] https://www.mercadosdigitais.org/.
[4] Case C-7/97 Bronner, EU:C:1998:569.
[5] See, e.g., Commissioner Ana Frazão’s majority decision in Procedure No. 08012.003918/2005-14 (Defendant: Telemar Norte Leste S.A.), paras. 60-62, https://tinyurl.com/4dc38vvk.
[6] See Commissioner Mauricio Maia’s reporting majority decision in Administrative Procedure No. 08012.010483/2011-94 (Defendants: Google Inc. and Google Brasil Internet Ltda.), paras. 180-94; 224-42, https://tinyurl.com/3c9emytw.
[7] A 2021 report by IBRAC identified the high entry rate into the market of online sales platforms. See IBRAC, Revista do IBRAC No. 2-2021, available at https://ibrac.org.br/UPLOADS/PDF/RevistadoIBRAC/Revista_do_IBRAC_2_2021.pdf.
[8] Bronner, Para. 67.
[9] See Colangelo, G., The Digital Markets Act and EU Antitrust Enforcement: Double & Triple Jeopardy, ICLE White Paper (2022), available at https://laweconcenter.org/resources/the-digital-markets-act-and-eu-antitrust-enforcement-double-triple-jeopardy.
[10] CADE, Mercados de Plataformas Digitais, SEPN 515 Conjunto D, Lote 4, Ed. Carlos Taurisano CEP: 70.770-504 – Brasília/DF, available at https://cdn.cade.gov.br/Portal/centrais-de-conteudo/publicacoes/estudos-economicos/cadernos-do-cade/Caderno_Plataformas-Digitais_Atualizado_29.08.pdf.
[11] On the notion that DMA-style rules are “sector-specific competition law,” see Nicolas Petit, The Proposed Digital Markets Act (DMA): A Legal and Policy Review, 12 J. Eur. Compet. Law & Pract. 529 (May 11, 2021).
[12] See Verizon Communications, Inc. v. Law Offices of Curtis V. Trinko, LLP, 540 U.S. 398 (2003). “Compelling such firms to share the source of their advantage is in some tension with the underlying purpose of antitrust law, since it may lessen the incentive for the monopolist, the rival, or both to invest in those economically beneficial facilities.”
[13] Hou, L., The Essential Facilities Doctrine – What Was Wrong in Microsoft?, 43(4) International Review of Intellectual Property and Competition Law 251-71, 260 (2012).
[14] See Williamson, O.E., The Vertical Integration of Production: Market Failure Considerations, 61 Am. Econ. Rev. 112 (1971); Klein, B., Asset Specificity and Holdups, in The Elgar Companion to Transaction Cost Economics, P. G. Klein & M. Sykuta, eds. (Edward Elgar Publishing, 2010), 120–126.
[15] Commission Decision No. AT.39740 — Google Search (Shopping).
[16] A. Hoffman, Where Does Website Traffic Come From: Search Engine and Referral Traffic, Traffic Generation Café (Dec. 25, 2018), https://trafficgenerationcafe.com/website-traffic-source-search-engine-referral.
[17] See Manne, G., Against the Vertical Discrimination Presumption, Concurrences N° 2-2020, Art. N° 94267 (May 2020), https://www.concurrences.com/en/review/numeros/no-2-2020/editorial/foreword.
[18] On the need for caution when granting a right to access see, for example, Trinko: “We have been very cautious in recognizing such exceptions [to the right of [a] trader or manufacturer engaged in an entirely private business, freely to exercise his own independent discretion as to parties with whom he will deal], because of the uncertain virtue of forced sharing and the difficulty of identifying and remedying anticompetitive conduct by a single firm.”
[19] United States v. Aluminum Co. of America, 148 F.2d 416, 430 (2d Cir. 1945).
[20] “Thus, as a general matter, the Sherman Act ‘does not restrict the long recognized right of [a] trader or manufacturer engaged in an entirely private business, freely to exercise his own independent discretion as to parties with whom he will deal.’” United States v. Colgate & Co., 250 U. S. 300, 307 (1919).
[21] Foremost Pro Color, Inc. v. Eastman Kodak Co., 703 F.2d 534, 545 (9th Cir. 1983) (citations omitted).
[22] See Manne, G. & B. Sperry, Debunking the Myth of a Data Barrier to Entry for Online Services, Truth on the Market (Mar. 26, 2015), https://truthonthemarket.com/2015/03/26/debunking-the-myth-of-a-data-barrier-to-entry-for-online-services; Manne, G. & B. Sperry (2014). The Law and Economics of Data and Privacy in Antitrust Analysis, 2014 TPRC Conference Paper, available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2418779.
[23] See generally, Grunes, A. & M. Stucke, Big Data and Competition Policy (Oxford University Press, Oxford, 2016); Newman, N, Antitrust and the Economics of the Control of User Data, 30 Yale Journal on Regulation 3 (2014).
[24] See the examples discussed in Manne, G. & B. Sperry, Debunking the Myth of a Data Barrier to Entry for Online Services, Truth on the Market (Mar. 26, 2015), https://truthonthemarket.com/2015/03/26/debunking-the-myth-of-a-data-barrier-to-entry-for-online-services.
[25] Lerner, A., The Role of ‘Big Data’ in Online Platform Competition (2014), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2482780.
[26] Bowman, S. & G. Manne, Platform Self-Preferencing Can Be Good for Consumers and Even Competitors, Truth on the Market (Mar. 4, 2021), https://truthonthemarket.com/2021/03/04/platform-self-preferencing-can-be-good-for-consumers-and-even-competitors.
[27] C. Goujard, Google Forced to Postpone Bard Chatbot’s EU Launch Over Privacy Concerns, Politico (Jun. 13, 2023), https://www.politico.eu/article/google-postpone-bard-chatbot-eu-launch-privacy-concern.
[28] M. Kelly, Here’s Why Threads Is Delayed in Europe, The Verge (Jul. 10, 2023), https://www.theverge.com/23789754/threads-meta-twitter-eu-dma-digital-markets.
[29] Musk Considers Removing X Platform From Europe Over EU Law, Euractiv (Oct. 19, 2023), https://www.euractiv.com/section/platforms/news/musk-considers-removing-x-platform-from-europe-over-eu-law.
[30] Jud, M., Still No Copilot in Europe: Microsoft Rolls Out 23H2 Update, Digitec.ch (Nov. 1, 2023), https://www.digitec.ch/en/page/still-no-windows-copilot-in-europe-microsoft-rolls-out-23h2-update-30279.
[31] The Future is Bright for Latin American Startups, The Economist (Nov.13, 2023), available at https://www.economist.com/the-world-ahead/2023/11/13/the-future-is-bright-for-latin-american-startups.
[32] See Distrito, Panorama Tech América Latina (2023), available at https://static.poder360.com.br/2023/09/latam-report-1.pdf.
[33] The following is adapted from Manne, G., Against the Vertical Discrimination Presumption, Concurrences N° 2-2020, Art. N° 94267 (May 2020) https://www.concurrences.com/en/review/numeros/no-2-2020/editorial/foreword and our comments on the UK’s proposed Digital Markets, Competition and Consumers (“DMCC”) Bill: Auer, D., M. Lesh & L. Radic (2023). Digital Overload: How the Digital Markets, Competition and Consumers Bill’s Sweeping New Powers Threaten Britain’s Economy, 4 IEA Perspectives 16-21 (2023), available at https://iea.org.uk/wp-content/uploads/2023/09/Perspectives_4_Digital-overload_web.pdf.
[34] H. Singer, How Big Tech Threatens Economic Liberty, The Am. Conserv. (May 7, 2019), https://www.theamericanconservative.com/articles/how-big-tech-threatens-economic-liberty.
[35] Most of these theories, it must be noted, ignore the relevant and copious strategy literature on the complexity of platform dynamics. See, e.g., J. M. Barnett, The Host’s Dilemma: Strategic Forfeiture in Platform Markets for Informational Goods, 124 Harv. L. Rev. 1861 (2011); D. J. Teece, Profiting from Technological Innovation: Implications for Integration, Collaboration, Licensing and Public Policy, 15 Res. Pol’y 285 (1986); A. Hagiu & K. Boudreau, Platform Rules: Multi-Sided Platforms as Regulators, in Platforms, Markets and Innovation, A. Gawer, ed. (Edward Elgar Publishing, 2009); K. Boudreau, Open Platform Strategies and Innovation: Granting Access vs. Devolving Control, 56 Mgmt. Sci. 1849 (2010).
[36] For examples of this literature and a brief discussion of its findings, see Manne, G., Against the Vertical Discrimination Presumption, Concurrences N° 2-2020, Art. N° 94267 (May 2020), https://www.concurrences.com/en/review/numeros/no-2-2020/editorial/foreword.
[37] International Center for Law & Economics, International Center for Law & Economics Amicus Curiae Brief Submitted to the U.S. Court of Appeals for the Ninth Circuit 20-21 (2022), https://tinyurl.com/ywu553vb.
[38] See generally, Hagiu & Boudreau, Platform Rules: Multi-Sided Platforms as Regulators, supra note 35; Barnett, The Host’s Dilemma, supra note 35.
[39] Barnett, J., id.
[40] See Radic, L. and G. Manne, Amazon Italy’s Efficiency Offense, Truth on the Market (Jan. 11, 2022), https://tinyurl.com/2uht4fvw.
[41] Introduced as Bill 294 (2022-23), currently HL Bill 12 (2023-24), Digital Markets, Competition and Consumers Bill, available at https://bills.parliament.uk/bills/3453.
[42] Farrell, J., & P. Klemperer, Coordination and Lock-In: Competition with Switching Costs and Network Effects, 3 Handbook of Industrial Organization 1967–2072 (2007), available at https://www.sciencedirect.com/science/article/abs/pii/S1573448X06030317.
[43] Bill 2768, “Justifications.” See also Wu, T, The Curse of Bigness: Antitrust in the New Gilded Age, Columbia Global Reports (2018).
[44] Colangelo, G., The Digital Markets Act and EU Antitrust Enforcement: Double & Triple Jeopardy, ICLE White Paper 2022-03-23 (2022), available at https://laweconcenter.org/wp-content/uploads/2022/03/Giuseppe-Double-triple-jeopardy-final-draft-20220225.pdf.
[45] See also Caffarra, C. and F. Scott Morton, The European Commission Digital Markets Act: A Translation, Vox EU (Jan. 5, 2021), https://voxeu.org/article/european-commission-digital-markets-act-translation.
[46] How National Competition Agencies Can Strengthen the DMA, European Competition Network (Jun. 22, 2021), available at https://ec.europa.eu/competition/ecn/DMA_joint_EU_NCAs_paper_21.06.2021.pdf.
[47] For the full study, see https://cdn.cade.gov.br/Portal/centrais-de-conteudo/publicacoes/estudos-economicos/documentos-de-trabalho/2018/documento-de-trabalho-n01-2018-efeitos-concorrenciais-da-economia-do-compartilhamento-no-brasil-a-entrada-da-uber-afetou-o-mercado-de-aplicativos-de-taxi-entre-2014-e-2016.pdf.
[48] For a detailed overview of CADE’s decisions in digital platforms and payments services, see https://cdn.cade.gov.br/Portal/centrais-de-conteudo/publicacoes/estudos-economicos/cadernos-do-cade/mercado-de-instrumentos-de-pagamento-2019.pdf; https://cdn.cade.gov.br/Portal/centrais-de-conteudo/publicacoes/estudos-economicos/cadernos-do-cade/Caderno_Plataformas-Digitais_Atualizado_29.08.pdf.
[49] See, e.g., Epic Games, Inc. v. Apple Inc. 20-cv-05640-YGR.
[50] Staats, J. L., & G. Biglaiser, Foreign Direct Investment in Latin America: The Importance of Judicial Strength and Rule of Law, 56(1) International Studies Quarterly 193–202 (2012), https://doi.org/10.1111/j.1468-2478.2011.00690.x.
[51] HL Bill 12 (2023-24), Digital Markets, Competition and Consumers Bill, https://bills.parliament.uk/bills/3453.
[52] Auer, D., M. Lesh, & L. Radic (2023). Digital Overload: How the Digital Markets, Competition and Consumers Bill’s Sweeping New Powers Threaten Britain’s Economy, 4 IEA Perspectives 16-21, available at https://iea.org.uk/wp-content/uploads/2023/09/Perspectives_4_Digital-overload_web.pdf.
[53] See Dailey, M. Why the US Rejected European Style Digital Markets Regulation: Considerations for Brazil’s Tech Landscape, Progressive Policy Institute (Oct. 2, 2023), pp 5-6, available at https://www.progressivepolicy.org/wp-content/uploads/2023/10/PPI-Brazil-EU-Tech.pdf.
[54] Id.
[55] See Radic, L. and G. Manne, Amazon Italy’s Efficiency Offense. Truth on the Market (Jan. 11, 2022), available at https://tinyurl.com/2uht4fvw.
[56] ACCC, Digital Platform Services Inquiry, Discussion Paper for Interim Report No. 5: Updating Competition and Consumer Law for Digital Platform Services (Feb. 2022), available at https://www.accc.gov.au/system/files/Digital%20platform%20services%20inquiry.pdf.
[57] Bowman, S. & G. Manne, Platform Self-Preferencing Can Be Good for Consumers and Even Competitors, Truth on the Market (Mar. 4, 2021), https://laweconcenter.wpengine.com/2021/03/04/platform-self-preferencing-can-be-good-for-consumers-and-even-competitors.
[58] See Portuese, A. The Digital Markets Act: A Triumph of Regulation Over Innovation, ITIF Schumpeter Project (Aug. 24, 2022), available at https://itif.org/publications/2022/08/24/digital-markets-act-a-triumph-of-regulation-over-innovation.
[59] Auer, D., G. Manne & S. Bowman, Should ASEAN Antitrust Laws Emulate European Competition Policy?, 67(5) Singapore Economic Review 1637–1697, 1687 (2022).
[60] See Lotz, A. ‘Big Tech’ Isn’t a Monolith. It’s 5 Companies, All in Different Businesses, Houston Chronicle (Mar. 26, 2018), https://www.houstonchronicle.com/techburger/article/Big-Tech-isn-t-a-monolith-It-s-5-companies-12781761.php; see also Chaiehloudj, W. & Petit, N. On Big Tech and The Digital Economy, Competition Forum (Jan. 11, 2021), https://competition-forum.com/on-big-tech-and-the-digital-economy-interview-with-professor-nicolas-petit.
[61] Asher Hamilton, I. Tim Cook Says He’s Tired of Big Tech Being Painted as a ‘Monolithic’ Force That Needs Tearing Apart, Business Insider (May 7, 2019), https://www.businessinsider.com/apple-ceo-tim-cook-tired-of-big-tech-being-viewed-as-monolithic-2019-5.
[62] Lotz, 2018.
[63] G. Cuofano, Amazon Revenue Breakdown, Four Week MBA (Aug. 10, 2023), https://fourweekmba.com/amazon-revenue-breakdown.
[64] International Center for Law & Economics, International Center for Law & Economics Amicus Curiae Brief Submitted to the U.S. Supreme Court (2022), available at https://laweconcenter.org/wp-content/uploads/2023/11/ICLE-Amicus-Apple-v-Epic-SCt-10.27.23-FINAL.pdf.
[65] See Zúñiga, M. Latin America Should Follow Its Own Path on Digital-Markets Competition, Truth on the Market (Nov. 7, 2023), https://truthonthemarket.com/2023/11/07/latin-america-should-follow-its-own-path-on-digital-markets-competition.
[66] As pointed out in Question 10, however, there is a risk of double jeopardy, considering that some of the conduct caught by Bill 2768 might also be covered by Brazilian competition law. In such cases, the 2% penalty would be compounded by the penalties contemplated under Act 12.529/11 (the Brazilian competition law), and the combined level could easily become excessive.
[67] Weekly Foreign Policy Report No. 1329: A Europe Vassal to the US?, Política Exterior (Jun. 26, 2023) https://www.politicaexterior.com/articulo/una-europa-vasalla-de-eeuu.
[68] See, e.g., 100 Biggest Technology Companies in the World, Yahoo Finance (Aug. 23, 2023), available at https://finance.yahoo.com/news/100-biggest-technology-companies-world-175211230.html.
[69] See, e.g., Weekly Foreign Policy Report No. 1329: A Europe Vassal to the US?, Política Exterior (Jun. 26, 2023) https://www.politicaexterior.com/articulo/una-europa-vasalla-de-eeuu.
TOTM
Way back in May, I cracked wise about the Federal Trade Commission’s (FTC) fictional “Bureau of Let’s Sue Meta,” noting that the commission’s proposal (really, an “order to show cause”) to modify its 2020 settlement of a consumer-protection matter with what had then been Facebook—in other words, a settlement modifying a 2012 settlement—was the FTC’s third enforcement action with Meta in the first half of 2023. That seemed like a lot, even if we ignored, say, Meta’s European and UK matters (see, e.g., here on the EU Digital Markets Act’s “gatekeeper” designations; here on the Norwegian data-protection authority; here and here on the Court of Justice of the European Union, and here on the UK Competition Appeal Tribunal).
Read the full piece here.
Popular Media
Meta gave European users of Facebook and Instagram a choice between paying for a no-ads experience or keeping the services free of charge and with ads. As I discussed previously (Facebook, Instagram, “pay or consent” and necessity to fund a service and EDPB: Meta violates GDPR by personalised advertising. A “ban” or not a “ban”?), the legal reality behind that choice is more complex. Users who continue without paying are asked to consent for their data to be processed for personalized advertising. In other words, this is a “pay or consent” framework for processing first-party data.
I was asked by IAPP, “the largest privacy association in the world and a leader in the privacy industry,” to discuss this. I also thought that the text I wrote for them could use some additional explanations for this substack’s audience. What follows is an expanded version of the text published by IAPP. (If this text is too long, I suggest reading just the next section).
Presentations & Interviews
ICLE Director of Competition Policy Dirk Auer joined as a panelist in a webinar organized by ECIPE on platform regulation and merger policy in the EU, and the implications for member states’ attractiveness for digital investment. Video of the full panel is embedded below.
Popular Media
Reading comments (Ben Thompson, Eric Seufert) on the Meta-Amazon deal to let “shoppers buy Amazon products directly from ads on Instagram and Facebook” (Bloomberg) made me think: could it happen here (in the EU)? Would EU law block it? I don’t think so. Still, given that the deal means “more data for Meta” (and Amazon), we’ll likely see some knee-jerk critical reactions. So, I thought it would be interesting to think through this question. (To be clear: this is not a full legal analysis, just my quick thoughts).
ICLE Issue Brief

I. Introduction
Proposals to protect children and teens online are among the few issues in recent years to receive at least rhetorical bipartisan support at both the national and state level. Citing findings of alleged psychological harm to teen users,[1] legislators from around the country have moved to pass bills that would require age verification and verifiable parental consent for teens to use social-media platforms.[2] But the primary question these proposals raise is whether such laws will lead to greater parental supervision and protection for teen users, or whether they will backfire and lead teens to become less likely to use the covered platforms altogether.
The answer, this issue brief proposes, is to focus on transaction costs.[3] Or more precisely, the answer can be found by examining how transaction costs operate under the Coase theorem.
The major U.S. Supreme Court cases that have considered laws to protect children by way of parental consent and age verification all cast significant doubt on the constitutionality of such regimes under the First Amendment. The reasoning such cases have employed appears to apply a Coasean transaction-cost/least-cost-avoider analysis, especially with respect to strict scrutiny’s least-restrictive-means test.
This has important implications for recent attempts to protect teens online by way of an imposed duty of care, mandatory age verification, and/or verifiable parental consent. First, because it means these solutions are likely unconstitutional. Second, because a least-cost-avoider analysis suggests that parents are best positioned to help teens assess the marginal costs and benefits of social media, by way of the power of the purse and through available technological means. Placing the full burden of externalities on social-media companies would reduce the options available to parents and teens, who could be excluded altogether if transaction costs are sufficiently large as to foreclose negotiation among the parties. This would mean denying teens the overwhelming benefits of social-media usage.
Part II of this brief will define transaction costs and summarize the Coase theorem, with an eye toward how these concepts can help to clarify potential spillover harms and benefits arising from teens’ social-media usage. Part III will examine three major Supreme Court cases that considered earlier parental-consent and age-verification regimes enacted to restrict minors’ access to allegedly harmful content, while arguing that one throughline in the jurisprudence has been the implicit application of least-cost-avoider analysis. Part IV will argue that, even in light of how the internet ecosystem has developed, the Coase theorem’s underlying logic continues to suggest that parents and teens working together are the least-cost avoiders of harmful internet content.
Part V will analyze proposed legislation and recently enacted bills, some of which already face challenges in the federal courts, and argue that the least-cost-avoider analysis embedded in Supreme Court precedent should continue to foreclose age-verification and parental-consent laws. Part VI concludes.
The Coase theorem has been described as “the bedrock principle of modern law and economics,”[4] and the essay that initially proposed it may be the most-cited law-review article ever published.[5] Drawn from Ronald Coase’s seminal work “The Problem of Social Cost”[6] and subsequent elaborations in the literature,[7] the theorem suggests that, in a world of zero transaction costs, parties will bargain their way to an efficient allocation of resources regardless of the initial assignment of legal rights; where transaction costs are positive, the initial assignment of rights matters for efficiency.
A few definitions are in order. An externality is a side effect of an activity that is not reflected in the cost of that activity—basically, what occurs when we do something whose consequences affect other people. A negative externality occurs when a third party does not like the effects of an action. When we say that such an externality is bilateral, it is to say that it takes two to tango: only when there is a conflict in the use or enjoyment of property is there an externality problem.
Transaction costs are the additional costs borne in the process of buying or selling, separate and apart from the price of the good or service itself—i.e., the costs of all actions involved in an economic transaction. Where transaction costs are present and sufficiently large, they may prevent otherwise beneficial agreements from being concluded. Institutional frameworks determine the rules of the game, including who should bear transaction costs. In order to maximize efficiency, the Coase theorem holds that the burden of avoiding negative externalities should be placed on the party or parties that can avoid them at the lowest cost.
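The arithmetic behind these definitions can be made concrete with a stylized numeric sketch. All of the figures below are hypothetical illustrations (a factory/neighbor example in the spirit of Coase's cattle-and-crops discussion), not numbers drawn from the brief:

```python
# Stylized illustration of the Coase theorem and the least-cost-avoider
# principle. All numbers are hypothetical.

# A factory's activity is worth 100 to it but imposes a harm of 60 on a
# neighbor. The factory can abate the harm at a cost of 20; the neighbor
# can avoid it (e.g., by installing filters) at a cost of 35.
FACTORY_VALUE = 100
ABATE_COST_FACTORY = 20   # the factory is the least-cost avoider
AVOID_COST_NEIGHBOR = 35

def joint_surplus(avoider: str) -> int:
    """Total surplus when the named party bears the avoidance burden."""
    cost = ABATE_COST_FACTORY if avoider == "factory" else AVOID_COST_NEIGHBOR
    return FACTORY_VALUE - cost

def outcome(rights_holder: str, transaction_costs: int) -> int:
    """Surplus actually realized given the initial assignment of rights.

    With zero transaction costs, the parties bargain to the efficient
    result (the factory abates, since 20 < 35) no matter who holds the
    right. With prohibitive transaction costs, bargaining is blocked and
    whoever lacks the right must bear the avoidance burden themselves.
    """
    if transaction_costs == 0:
        return joint_surplus("factory")  # efficient outcome via bargaining
    # No bargaining possible: the non-rights-holder must avoid the harm.
    burdened = "neighbor" if rights_holder == "factory" else "factory"
    return joint_surplus(burdened)

# Zero transaction costs: the assignment of rights is irrelevant to efficiency.
assert outcome("factory", 0) == outcome("neighbor", 0) == 80

# Positive transaction costs: efficiency now depends on placing the burden
# on the least-cost avoider (here, the factory).
assert outcome("neighbor", transaction_costs=10) == 80  # factory must abate
assert outcome("factory", transaction_costs=10) == 65   # neighbor must avoid
```

The sketch captures the brief's core point: once transaction costs block bargaining, the institutional choice of who bears the burden determines whether the efficient outcome is reached.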
A related and interesting literature focuses on whether the common law is efficient, and the mechanisms by which that may come to be the case.[8] Todd J. Zywicki and Edward P. Stringham argue—contra the arguments of Judge Richard Posner—that the common law’s relative efficiency is a function of the legal process itself, rather than whether judges implicitly or explicitly adopt efficiency or wealth maximization as goals.[9] Zywicki & Stringham find both demand-side and supply-side factors that tend to promote efficiency in the common law, but note that the supply-side factors (e.g., competitive courts for litigants) have changed over time in ways that may result in diminished incentives for efficiency.[10] Their central argument is that the re-litigation of inefficient rules eventually leads to the adoption of more efficient ones.[11] Efficiency itself, they argue, is also best understood as the ability to coordinate plans, rather than as wealth maximization.[12]
In contrast to common law, there is a relative paucity of literature on whether constitutional law follows a pattern of efficiency. For example, one scholar notes that citations to Coase’s work in the corpus of constitutional-law scholarship are actually exceedingly rare.[13] This brief seeks to contribute to the law & economics literature by examining how the Supreme Court appears implicitly to have adopted one version of efficiency—the least-cost-avoider principle—in its First Amendment reviews of parental-consent and age-verification laws under the compelling-government-interest and least-restrictive-means tests.
The Coase theorem’s basic insights are useful in evaluating not only legal decisions, but also legislation. Here, this means considering issues related to children and teenagers’ online social-media usage. Social-media platforms, teenage users, and their parents are the parties at issue in this example. While social-media platforms create incredible value for their users,[14] they also arguably impose negative externalities on both teens and their parents.[15] The question here, as it was for Coase, is how to deal with those externalities.
The common-law framework of rights in this scenario is to allow minors to enter into enforceable agreements, except where they are void for public-policy reasons. As Adam Candeub points out:
Contract law is a creature of state law, and states require parental consent for minors entering all sorts of contracts for services or receiving privileges, including getting a tattoo, obtaining a driver’s license, using a tanning facility, purchasing insurance, and signing liability waivers. As a general rule, all contracts with minors are valid, but with certain exceptions they are voidable. And even though a minor can void most contracts he enters into, most jurisdictions have laws that hold a minor accountable for the benefits he received under the contract. Because children can make enforceable contracts for which parents could end up bearing responsibility, it is a reasonable regulation to require parental consent for such contracts. The few courts that have addressed the question of the enforceability of online contracts with minors have held the contracts enforceable on the receipt of the mildest benefit.[16]
Of course, many jurisdictions have passed laws requiring age-verification for various transactions prohibited to minors, such as laws for buying alcohol or tobacco,[17] obtaining driver’s licenses,[18] and buying lottery tickets or pornography.[19] Through the Children’s Online Privacy Protection Act and its regulations, the federal government also requires that online platforms obtain verifiable parental consent before they are permitted to collect certain personal information regarding children under age 13.[20]
The First Amendment, however, has been found to protect minors’ ability to receive speech, including through commercial transactions.[21] The question therefore arises: how should the law regard minors’ ability to access information on social-media platforms? In recent years, multiple jurisdictions have responded to this question by proposing or passing age-verification and parental-consent laws for teens’ social-media usage.[22]
As will be detailed below,[23] while the internet has contributed to significant reductions in transaction costs, they are still present. Thus, in order to maximize social-media platforms’ benefits while minimizing the negative externalities they impose, policymakers should endeavor to place the burden of avoiding the harms associated with teen use on the least-cost avoider. I argue that the least-cost avoider is parents and teens working together to make marginal decisions about social-media use, including by exploiting relatively low-cost practical and technological tools to avoid harmful content. The thesis of this issue brief is that this finding is consistent with the implicit Coasean reasoning in the Supreme Court’s major First Amendment cases on parental consent and age verification.
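The least-cost-avoider comparison at the heart of this argument can be sketched in a few lines. The cost figures and mechanism names below are purely illustrative assumptions, not empirical estimates from the brief:

```python
# Hypothetical comparison of who can avoid harmful-content exposure at
# the lowest cost. All cost figures are illustrative assumptions.

# Per-household cost (arbitrary units) of each avoidance mechanism.
AVOIDANCE_COSTS = {
    "platform_age_verification": 50,  # compliance burden plus excluded users
    "parental_filtering_tools": 10,   # device- and app-level filters
    "parent_teen_supervision": 5,     # purse-string and house-rule controls
}

def least_cost_avoider(costs: dict[str, int]) -> str:
    """Return the mechanism that avoids the externality most cheaply."""
    return min(costs, key=costs.get)

# Under these assumed figures, parents and teens -- not platforms --
# are the least-cost avoiders, which is the brief's central claim.
assert least_cost_avoider(AVOIDANCE_COSTS) == "parent_teen_supervision"
```

The interesting empirical question, of course, is whether the real-world cost ordering matches these assumptions; the brief's argument is that the Supreme Court's least-restrictive-means cases implicitly answer yes.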
Parental-consent and age-verification laws that seek to protect minors from harmful content are not new. The Supreme Court has had occasion to review several of them, while applying First Amendment scrutiny. An interesting aspect of this line of cases is that the Court appears implicitly to have used Coasean analysis in understanding who should bear the burden of avoiding harms associated with speech platforms.
Specifically, in each case, after an initial finding that the restrictions were content-based, the Court applied strict scrutiny. Thus, the burden was placed on the government to prove the relevant laws were narrowly tailored to a compelling government interest using the least-restrictive means. The Court’s transaction-cost analysis is implicit throughout the descriptions of the problem in each case. But the main area of analysis below will be from each case’s least-restrictive-means test section, with a focus on the compelling-state-interest test in Part III.C. Parts III.A, III.B, and III.C will deal with each of these cases in turn.
In United States v. Playboy Entertainment Group,[24] the Supreme Court reviewed § 505 of the Telecommunications Act of 1996, which required “cable television operators who provide channels ‘primarily dedicated to sexually-oriented programming’ either to ‘fully scramble or otherwise fully block’ those channels or to limit their transmission to hours when children are unlikely to be viewing, set by administrative regulation as the time between 10 p.m. and 6 a.m.”[25] Even prior to the regulations promulgated pursuant to the law, cable operators used technological means called “scrambling” to blur sexually explicit content for those viewers who didn’t explicitly subscribe to such content, but there were reported problems with “signal bleed” that allowed some audio and visual content to be obtained by nonsubscribers.[26] Following the regulation, cable operators responded by shifting the hours when such content would be aired—i.e., by making it unavailable for 16 hours a day. This prevented cable subscribers from viewing purchased content of their choosing at times they would prefer.[27]
The basic Coasean framework is present right from the description of the problems that the statute and regulations were trying to solve. As the Court put it:
Two essential points should be understood concerning the speech at issue here. First, we shall assume that many adults themselves would find the material highly offensive; and when we consider the further circumstance that the material comes unwanted into homes where children might see or hear it against parental wishes or consent, there are legitimate reasons for regulating it. Second, all parties bring the case to us on the premise that Playboy’s programming has First Amendment protection. As this case has been litigated, it is not alleged to be obscene; adults have a constitutional right to view it; the Government disclaims any interest in preventing children from seeing or hearing it with the consent of their parents; and Playboy has concomitant rights under the First Amendment to transmit it. These points are undisputed.[28]
In Coasean language, the parties at issue were the cable operators, content providers of sexually explicit programming, adult cable subscribers, and their children. Cable television provides tremendous value to its customers, including sexually explicit subscription content that is valued by those subscribers. There is, however, a negative externality to the extent that such programming may become available to children whose parents find it inappropriate. The Court noted that some parents may allow their children to receive such content, and the government disclaimed an interest in preventing such reception with parental consent. Given imperfect scrambling technology, this possible negative externality was clearly present. The question that arose was whether the transaction costs imposed by Section 505’s time-shifting requirements had the effect of restricting adults’ ability to make such viewing decisions for themselves and on behalf of their children.
After concluding that Section 505 was a content-based restriction, due to the targeting of specific adult content and specific programmers, the Court stated that when a content-based restriction is designed “to shield the sensibilities of listeners, the general rule is that the right of expression prevails, even where no less restrictive alternative exists. We are expected to protect our own sensibilities ‘simply by averting [our] eyes.’”[29]
This application of strict scrutiny does not change, the Court noted, because we are dealing in this instance with children or the issue of parental consent:
No one suggests the Government must be indifferent to unwanted, indecent speech that comes into the home without parental consent. The speech here, all agree, is protected speech; and the question is what standard the Government must meet in order to restrict it. As we consider a content-based regulation, the answer should be clear: The standard is strict scrutiny. This case involves speech alone; and even where speech is indecent and enters the home, the objective of shielding children does not suffice to support a blanket ban if the protection can be accomplished by a less restrictive alternative.[30]
Again, using our Coasean translator, we can read the opinion as saying the least-cost way to avoid the negative externality of unwanted adult content is by just not looking at it, or for parents to use the means available to them to prevent their children from viewing it.
In fact, that is exactly where the Court goes, by comparing, under the least-restrictive-means test, the targeted blocking mechanism made available in Section 504 of the statute to the requirements imposed by Section 505:
[T]argeted blocking enables the Government to support parental authority without affecting the First Amendment interests of speakers and willing listeners—listeners for whom, if the speech is unpopular or indecent, the privacy of their own homes may be the optimal place of receipt. Simply put, targeted blocking is less restrictive than banning, and the Government cannot ban speech if targeted blocking is a feasible and effective means of furthering its compelling interests. This is not to say that the absence of an effective blocking mechanism will in all cases suffice to support a law restricting the speech in question; but if a less restrictive means is available for the Government to achieve its goals, the Government must use it.[31]
Moreover, the Court found that the fact that parents largely eschewed the available low-cost means to avoid the harm was not necessarily sufficient for the government to prove that it is the least-restrictive alternative:
When a plausible, less restrictive alternative is offered to a content-based speech restriction, it is the Government’s obligation to prove that the alternative will be ineffective to achieve its goals. The Government has not met that burden here. In support of its position, the Government cites empirical evidence showing that § 504, as promulgated and implemented before trial, generated few requests for household-by-household blocking. Between March 1996 and May 1997, while the Government was enjoined from enforcing § 505, § 504 remained in operation. A survey of cable operators determined that fewer than 0.5% of cable subscribers requested full blocking during that time. Id., at 712. The uncomfortable fact is that § 504 was the sole blocking regulation in effect for over a year; and the public greeted it with a collective yawn.[32]
This is because there were, in fact, other market-based means available for parents to use to avoid the harm of unwanted adult programming,[33] and the government had not proved that Section 504 could be effective with more adequate notice.[34] The Court concluded its least-restrictive means analysis by saying:
Even upon the assumption that the Government has an interest in substituting itself for informed and empowered parents, its interest is not sufficiently compelling to justify this widespread restriction on speech. The Government’s argument stems from the idea that parents do not know their children are viewing the material on a scale or frequency to cause concern, or if so, that parents do not want to take affirmative steps to block it and their decisions are to be superseded. The assumptions have not been established; and in any event the assumptions apply only in a regime where the option of blocking has not been explained. The whole point of a publicized § 504 would be to advise parents that indecent material may be shown and to afford them an opportunity to block it at all times, even when they are not at home and even after 10 p.m. Time channeling does not offer this assistance. The regulatory alternative of a publicized § 504, which has the real possibility of promoting more open disclosure and the choice of an effective blocking system, would provide parents the information needed to engage in active supervision. The Government has not shown that this alternative, a regime of added communication and support, would be insufficient to secure its objective, or that any overriding harm justifies its intervention.[35]
In Coasean language, the government’s imposition of transaction costs through time-shifting channels is not the least-cost way to avoid the harm. By publicizing the blocking mechanism of Section 504, as well as promoting market-based alternatives like VCRs to record programming for playback later or blue-screen technology that blocks scrambled video, adults would be able to effectively act as least-cost avoiders of harmful content, including on behalf of their children.
In Ashcroft v. ACLU,[36] the Supreme Court reviewed a U.S. district court’s preliminary injunction of the age-verification requirements imposed by the Child Online Protection Act (COPA), which was designed to “protect minors from exposure to sexually explicit materials on the Internet.”[37] The law created criminal penalties “of a $50,000 fine and six months in prison for the knowing posting” for “commercial purposes” of World Wide Web content that is “harmful to minors.”[38] The law did, however, provide an escape hatch, through:
…an affirmative defense to those who employ specified means to prevent minors from gaining access to the prohibited materials on their Web site. A person may escape conviction under the statute by demonstrating that he “has restricted access by minors to material that is harmful to minors— “(A) by requiring use of a credit card, debit account, adult access code, or adult personal identification number; “(B) by accepting a digital certificate that verifies age; or “(C) by any other reasonable measures that are feasible under available technology.” § 231(c)(1).[39]
Here, the Coasean analysis of the problem is not stated as explicitly as in Playboy, but it is still apparent. The internet clearly provides substantial value to users, including those who want to view pornography. But there is a negative externality in internet pornography’s broad availability to minors for whom it would be inappropriate. Thus, to prevent these harms, COPA established a criminal regulatory scheme with an age-verification defense. The threat of criminal penalties, combined with the age-verification regime, imposed high transaction costs on online publishers who post content defined as harmful to minors. This leaves adults (including parents of children) and children themselves as the other relevant parties. Again, the question is: who is the least-cost avoider of the possible negative externality of minor access to pornography? The adult-content publisher or the parents, using technological and practical means?
The Court immediately went to an analysis of the least-restrictive-means test, defining the inquiry as follows:
In considering this question, a court assumes that certain protected speech may be regulated, and then asks what is the least restrictive alternative that can be used to achieve that goal. The purpose of the test is not to consider whether the challenged restriction has some effect in achieving Congress’ goal, regardless of the restriction it imposes. The purpose of the test is to ensure that speech is restricted no further than necessary to achieve the goal, for it is important to ensure that legitimate speech is not chilled or punished. For that reason, the test does not begin with the status quo of existing regulations, then ask whether the challenged restriction has some additional ability to achieve Congress’ legitimate interest. Any restriction on speech could be justified under that analysis. Instead, the court should ask whether the challenged regulation is the least restrictive means among available, effective alternatives.[40]
The Court then considered the available alternative to COPA’s age-verification regime: blocking and filtering software. It found that such tools are clearly less-restrictive means, focusing not only on the fact that the software grants parents the ability to prevent their children from accessing inappropriate material, but also on the fact that adults would retain access to any content blocked by the filter by simply turning it off.[41] In fact, the Court noted that the evidence presented to the district court suggested that filters, while imperfect, were probably even more effective than the age-verification regime.[42] Finally, the Court noted that, even if Congress couldn’t require filtering software, it could encourage it through parental education, by providing incentives to libraries and schools to use it, and by subsidizing development of the industry itself. Each of these, the Court argued, would be a clearly less-restrictive means of promoting COPA’s goals.[43]
In Coasean language, the Court found that parents using technological and practical means are the least-cost avoider of the harm of exposing children to unwanted adult content. Government promotion and support of those means were held up as clearly less-restrictive alternatives than imposing transaction costs on publishers of adult content.
In Brown v. Entertainment Merchants Association,[44] the Court considered California Assembly Bill 1179, which prohibited the sale or rental of “violent video games” to minors.[45] The Court first disposed of the argument that the government could create a new category of speech that it considered unprotected, just because it is directed at children, stating:
The California Act is something else entirely. It does not adjust the boundaries of an existing category of unprotected speech to ensure that a definition designed for adults is not uncritically applied to children. California does not argue that it is empowered to prohibit selling offensively violent works to adults—and it is wise not to, since that is but a hair’s breadth from the argument rejected in Stevens. Instead, it wishes to create a wholly new category of content-based regulation that is permissible only for speech directed at children. That is unprecedented and mistaken. “[M]inors are entitled to a significant measure of First Amendment protection, and only in relatively narrow and well-defined circumstances may government bar public dissemination of protected materials to them.” Erznoznik v. Jacksonville, 422 U.S. 205, 212-213, 95 S.Ct. 2268, 45 L.Ed.2d 125 (1975) (citation omitted). No doubt a State possesses legitimate power to protect children from harm, Ginsberg, supra, at 640-641, 88 S.Ct. 1274; Prince v. Massachusetts, 321 U.S. 158, 165, 64 S.Ct. 438, 88 L.Ed. 645 (1944), but that does not include a free-floating power to restrict the ideas to which children may be exposed. “Speech that is neither obscene as to youths nor subject to some other legitimate proscription cannot be suppressed solely to protect the young from ideas or images that a legislative body thinks unsuitable for them.” Erznoznik, supra, at 213-214, 95 S.Ct. 2268.[46]
The California Act is something else entirely. It does not adjust the boundaries of an existing category of unprotected speech to ensure that a definition designed for adults is not uncritically applied to children. California does not argue that it is empowered to prohibit selling offensively violent works to adults—and it is wise not to, since that is but a hair’s breadth from the argument rejected in Stevens. Instead, it wishes to create a wholly new category of content-based regulation that is permissible only for speech directed at children.
That is unprecedented and mistaken. “[M]inors are entitled to a significant measure of First Amendment protection, and only in relatively narrow and well-defined circumstances may government bar public dissemination of protected materials to them.” Erznoznik v. Jacksonville, 422 U.S. 205, 212-213, 95 S.Ct. 2268, 45 L.Ed.2d 125 (1975) (citation omitted). No doubt a State possesses legitimate power to protect children from harm, Ginsberg, supra, at 640-641, 88 S.Ct. 1274; Prince v. Massachusetts, 321 U.S. 158, 165, 64 S.Ct. 438, 88 L.Ed. 645 (1944), but that does not include a free-floating power to restrict the ideas to which children may be exposed. “Speech that is neither obscene as to youths nor subject to some other legitimate proscription cannot be suppressed solely to protect the young from ideas or images that a legislative body thinks unsuitable for them.” Erznoznik, supra, at 213-214, 95 S.Ct. 2268.[46]
The Court rejected the argument that there was any “longstanding tradition” of restricting children’s access to depictions of violence, as demonstrated by copious examples of violent content in children’s books, high-school reading lists, motion pictures, radio dramas, comic books, television, music lyrics, etc. Moreover, to the extent there was a time when government enforced such regulations, courts eventually overturned them.[47] Nor did it matter that video games are interactive, the Court found, as all literature is potentially interactive, especially genres like choose-your-own-adventure stories.[48]
Thus, because the law was clearly content-based, the Court applied strict scrutiny. The Court was skeptical even of whether the government had a compelling state interest, finding the law to be both seriously over- and under-inclusive. The same effects attributed to exposure to covered video games, the Court noted, could equally arise from violent cartoons not subject to the law’s provisions. Moreover, the law allowed a parent or guardian (or any adult) to buy violent video games for their children.[49]
The Court then turned to the law’s real justification, which it summarily rejected as inconsistent with the First Amendment:
California claims that the Act is justified in aid of parental authority: By requiring that the purchase of violent video games can be made only by adults, the Act ensures that parents can decide what games are appropriate. At the outset, we note our doubts that punishing third parties for conveying protected speech to children just in case their parents disapprove of that speech is a proper governmental means of aiding parental authority.[50]
In Coasean language, the Court is saying that video games—even violent ones—are subjectively valued by those who play them, including minors. There may be negative externalities from playing such games, in that exposure to violence could be linked to psychological harm and the games are interactive, but these content and design features are still protected speech. Placing the transaction costs on parents and other adults to buy such games on behalf of minors, just in case some parents disapprove of their children playing them, does not serve a compelling state interest.
While the Court focused primarily on whether California had a compelling state interest in its statutory scheme regulating violent video games, some of its language would apply equally to a least-restrictive-means analysis:
But leaving that aside, California cannot show that the Act’s restrictions meet a substantial need of parents who wish to restrict their children’s access to violent video games but cannot do so. The video-game industry has in place a voluntary rating system designed to inform consumers about the content of games. The system, implemented by the Entertainment Software Rating Board (ESRB), assigns age-specific ratings to each video game submitted: EC (Early Childhood); E (Everyone); E10 + (Everyone 10 and older); T (Teens); M (17 and older); and AO (Adults Only—18 and older). App. 86. The Video Software Dealers Association encourages retailers to prominently display information about the ESRB system in their stores; to refrain from renting or selling adults-only games to minors; and to rent or sell “M” rated games to minors only with parental consent. Id., at 47. In 2009, the Federal Trade Commission (FTC) found that, as a result of this system, “the video game industry outpaces the movie and music industries” in “(1) restricting target-marketing of mature-rated products to children; (2) clearly and prominently disclosing rating information; and (3) restricting children’s access to mature-rated products at retail.” FTC, Report to Congress, Marketing Violent Entertainment to Children 30 (Dec.2009), online at http://www. ftc.gov/os/2009/12/P994511violent entertainment.pdf (as visited June 24, 2011, and available in Clerk of Court’s case file) (FTC Report). This system does much to ensure that minors cannot purchase seriously violent games on their own, and that parents who care about the matter can readily evaluate the games their children bring home. Filling the remaining modest gap in concerned parents’ control can hardly be a compelling state interest.
And finally, the Act’s purported aid to parental authority is vastly overinclusive. Not all of the children who are forbidden to purchase violent video games on their own have parents who care whether they purchase violent video games. While some of the legislation’s effect may indeed be in support of what some parents of the restricted children actually want, its entire effect is only in support of what the State thinks parents ought to want. This is not the narrow tailoring to “assisting parents” that restriction of First Amendment rights requires.[51]
In sum, the Court suggests that the law would not be narrowly tailored, because there are already market-based systems in place to help parents and minors make informed decisions about which video games to buy—most importantly the rating system, which judges appropriateness by age and offers warnings about violence. Government paternalism is simply insufficient to justify imposing new transaction costs on parents and minors who wish to buy even violent video games.
Interestingly, the concurrence of Justice Samuel Alito, joined by Chief Justice John Roberts, also contains some language that could be interpreted through a Coasean lens. The concurrence allows for the possibility that the harms of interactive violent video games may differ from those of other depictions of violence that society has allowed children to view, although it concludes that reasonable minds may differ.[52] In other words, the concurrence essentially suggests that the negative externalities may be greater than the majority opinion would allow, but Justices Alito and Roberts nonetheless agreed that the law was not drafted in a constitutional manner that comports with the obscenity exception to the First Amendment.
Nonetheless, it appears the Court applies an implicit Coasean framework when it rejects the imposition of transaction costs on parents and minors to gain access to protected speech—in this case, violent video games. Parents and minors remain the least-cost avoiders of the potential harms of violent video games.
As outlined above, the issue is whether social media needs age-verification and parental-consent laws in order to address negative externalities to minor users. This section will analyze this question under the Coasean framework introduced in Part II.
The basic argument proceeds as follows:
Part IV.A will detail the substantial transaction costs associated with obtaining age verification and verifiable parental consent. Part IV.B argues that parents and teens working together using practical and technological means are the lowest-cost avoiders of the harms of social-media use. Part IV.C will consider the counterfactual scenario of placing the transaction costs on social-media companies and argue that the result would be teens’ exclusion from social media, to their detriment, as well as the detriment of parents who would have made different choices.
As Coase taught, in a world without transaction costs (or where such costs are sufficiently low), age-verification laws or mandates to obtain verifiable parental consent would not matter, because the parties would bargain to arrive at an efficient solution. Because there are high transaction costs that prevent such bargains from being easily struck, making the default that teens cannot join social media without verifiable parental consent could have the effect of excluding them from the great benefits of social media usage altogether.[54]
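The logic of this paragraph can be made concrete with a stylized sketch. The numbers below are purely hypothetical; the point is only that when the friction of a verification or consent process exceeds the net surplus from use, an otherwise mutually beneficial “bargain” never happens:

```python
# Stylized Coasean sketch (all numbers hypothetical): access to social media
# occurs only when the benefit of use covers both the expected harm and the
# transaction cost of completing any required consent process.
def access_granted(teen_benefit: float, expected_harm: float,
                   transaction_cost: float) -> bool:
    """True if the net surplus from use remains positive after frictions."""
    return teen_benefit - expected_harm - transaction_cost > 0

# Frictionless baseline: any use whose benefit exceeds expected harm proceeds.
assert access_granted(teen_benefit=10.0, expected_harm=4.0, transaction_cost=0.0)

# The same use is blocked once verification/consent friction exceeds the net
# surplus, even though the benefit still exceeds the expected harm.
assert not access_granted(teen_benefit=10.0, expected_harm=4.0, transaction_cost=7.0)
```

The second case is the scenario the text describes: the underlying use is welfare-enhancing, but the mandated friction forecloses it.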
There is considerable evidence that, even as the internet and digital technology have reduced transaction costs considerably across a wide range of fronts,[55] those costs remain high when it comes to age verification and verifiable parental consent. One data point that supports this conclusion is the experience of social-media platforms under the Children’s Online Privacy Protection Act (COPPA).[56] In their working paper “COPPAcalypse? The YouTube Settlement’s Impact on Kids Content,”[57] Garrett Johnson, Tesary Lin, James C. Cooper, & Liang Zhong summarized the issue as follows:
The Children’s Online Privacy Protection Act (COPPA), and its implementing regulations, broadly prohibit operators of online services directed at children under 13 from collecting personal information without providing notice of its data collection and use practices and obtaining verifiable parental consent. Because obtaining verifiable parental consent for free online services is difficult and rarely cost justified, COPPA essentially acts as a de facto ban on the collection of personal information by providers of free child-directed content. In 2013, the FTC amended the COPPA rules to include in the definition of personal information “persistent identifier that can be used to recognize a user over time and across different Web sites or online services,” such as a “customer number held in a cookie . . . or unique device identifier.” This regulatory change meant that, as a practical matter, online operators who provide child-directed content could no longer engage in personalized advertising.
On September 4, 2019, the FTC entered into a consent agreement with YouTube to settle charges that it had violated COPPA. The FTC’s allegations focused on YouTube’s practice of serving personalized advertising on child-directed content at children without obtaining verifiable parental consent. Although YouTube maintains it is a general audience website and users must be at least 13 years old to obtain a Google ID (which makes personalized advertising possible), the FTC complaint alleges that YouTube knew that many of its channels were popular with children under 13, citing YouTube’s own claims to advertisers. The settlement required YouTube to identify child-directed channels and videos and to stop collecting personal information from visitors to these channels. In response, YouTube required channel owners producing [“made-for-kids”] MFK content to designate either their entire channels or specific videos as MFK, beginning on January 1, 2020. YouTube supplemented these self-designations with an automated classifier designed to identify content that was likely directed at children younger than 13. In so doing, YouTube effectively shifted liability under COPPA to the channel owners, who could face up to $42,530 in fines per video if they fail to self-designate and are not detected by YouTube’s classifier.[58]
The rule change and settlement increased the transaction costs imposed on social-media platforms by requiring verifiable parental consent. YouTube’s economically rational response was to restrict the content creators’ ability to benefit from (considerably more lucrative) personalized advertising. The end result was less content created for children, with competitive effects to boot:
Consistent with a loss in personalized ad revenue, we find that child-directed content creators produce 13% less content and pivot towards producing non-child-directed content. On the demand side, views of child-directed channels fall by 22%. Consistent with the platform’s degraded capacity to match viewers to content, we find that content creation and content views become more concentrated among top child-directed YouTube channels.[59]
This is not the only finding regarding COPPA’s role in reducing the production of content for children. Morgan Reed, president of the App Association, a global trade association for small and medium-sized technology companies, presented extensively at the Federal Trade Commission’s (FTC) 2019 COPPA Workshop.[60] His testimony detailed that the transaction costs associated with obtaining verifiable parental consent did little to enhance parental control, but much to reduce the quality and quantity of content directed to children. It is worth highlighting Reed’s constant use of the words “friction,” “restriction,” and “cost” to describe how the institutional environment of COPPA affects the behavior of social-media platforms, parents, and children. While Reed noted that general-audience content is “unfettered, meaning that you don’t feel restricted by what you can get to, how you do it. It’s easy, it’s low friction. Widely available. I can get it on any platform, in any case, in any context and I can get to it rapidly,” COPPA-regulated apps and content are, he said, all about:
Friction, restriction, and cost. Every layer of friction you add alters parent behavior significantly. We jokingly refer to it as the over the shoulder factor. If a parent wants access to something and they have to pass it from the back seat to the front seat of the car more than one time, the parent moves on to the next thing. So the more friction you add to an application directed at children the less likely it is that the parent is going to take the steps necessary to get through it because the competition, of course, is as I said, free, unfettered, widely available. Restriction. Kids balk against some of the restrictions. I can’t get to this, I can’t do that. And they say that to the parent. And from the parent’s perspective, fine, I’ll just put in a different age date. They’re participating, they’re parenting but they’re not using the regulatory construction that we all understand. …
The COPPA side, expensive, onerous or friction full. We have to find some way around that. Restrictive, fewer features, fewer capabilities, less known or available, and it’s entertaining-ish. …
Is COPPA the barrier? I thought this quote really summed it up. “Seamlessness is expected. But with COPPA, seamlessness is impossible.” And that has been one of the single largest areas of concern. Our folks are looking to provide a COPPA compliant environment. And they’re finding doing VPC is really hard. We want to make it this way, we just walked away. And why do they want to do it? We wanted to create a hub for kids to promote creativity. So these are not folks who are looking to take data and provide interest based advertising. They’re trying to figure out how to do it so they can build an engaging product. Parental consent makes the whole process very complicated. And this is the depressing part. …
We say that VPC is intentional friction. It’s clear from everything we’ve heard in the last two panels that the authors of COPPA, we don’t really want information collected on kids. So friction is intentional. And this is leading to the destruction of general audience applications basically wiping out COPPA apps off the face of the map.[61]
Reed’s use of the word “friction” is particularly enlightening. Mike Munger has often described transaction costs as frictions, explaining that, to consumers, all costs are transaction costs.[62] When higher transaction costs are imposed on social-media platforms, end users feel the impact. In this case, the result is that children and parents receive fewer, and lower-quality, children’s apps and content.
A similar example can be seen in the various battles between traditional media and social-media companies in Australia, Canada, and the EU, where laws have been passed that would require platforms to pay for linking to certain news content.[63] Because these laws raise transaction costs, social-media platforms have responded by restricting access to news links,[64] to the detriment of users and the news-media organizations themselves. In other words, much like with verifiable parental consent, the intent of these laws is thwarted by the underlying economics.
More evidence that imposing transaction costs on social-media companies can have the effect of diminishing the user experience can be found in the preliminary injunction issued by the U.S. District Court in Austin, Texas in Free Speech Coalition Inc. v. Colmenero.[65] The court cited evidence from the plaintiff’s complaint that included bills for “several commercial verification services, showing that they cost, at minimum, $40,000.00 per 100,000 verifications.”[66] The court also noted that “[Texas law] H.B. 1181 imposes substantial liability for violations, including $10,000.00 per day for each violation, and up to $250,000.00 if a minor is shown to have viewed the adult content.”[67]
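The scale of these figures is easier to see with a quick calculation. In the sketch below, only the per-verification price and the statutory fines come from the record quoted above; the monthly traffic volume is a hypothetical chosen purely for illustration:

```python
# Back-of-the-envelope arithmetic on H.B. 1181-style verification costs.
# The per-100,000 price and fine amounts are from the Colmenero record;
# the traffic figure is hypothetical. Integer cents avoid rounding issues.
verification_cost_cents_per_100k = 4_000_000   # $40,000 minimum per 100,000 checks
per_check_cents = verification_cost_cents_per_100k // 100_000
# At least 40 cents per visitor verified.

monthly_visits = 5_000_000                      # hypothetical traffic level
monthly_bill_dollars = per_check_cents * monthly_visits // 100
# A site at that scale faces roughly $2,000,000/month in verification fees
# alone, before any statutory exposure:
daily_fine_per_violation_dollars = 10_000       # per violation, per day
max_fine_if_minor_views_dollars = 250_000       # if a minor is shown the content
```

Even at a fraction of that hypothetical traffic, the compliance bill dwarfs what most publishers could absorb, which is the economic substance behind the court’s finding of substantial burdens.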
Moreover, the transaction costs in this example also include the subjective costs borne by those who actually go through with verifying their age to access pornography. As the court noted, “the law interferes with the Adult Video Companies’ ability to conduct business, and risks deterring adults from visiting the websites.”[68] The court issued a preliminary injunction against the law’s age-verification provision, finding that other means—such as content-filtering software—are clearly more effective than age verification at protecting children from unwanted content.[69]
In sum, transaction costs for age verification and verifiable parental consent are sufficiently high as to prevent an easy bargain from being struck. Thus, which party bears the burden of those costs will determine the outcome. The lessons from COPPA, news-media laws, and online-pornography age-verification laws are clear: if the transaction costs are imposed on the online platforms and apps, it will lead to access restrictions on the speech those platforms provide, almost all of which is protected speech. This is the type of collateral censorship that the First Amendment is designed to avoid.[70]
If transaction costs due to online age-verification and verifiable-parent-consent laws are substantial, the question becomes which party or parties should be subject to the burden of avoiding the harms arising from social-media usage.
It is possible, in theory, that social-media platforms are the best-positioned to monitor and control content posted to their platforms—for instance, when it comes to harms associated with anonymous or pseudonymous accounts imposing social costs on society.[71] In such cases, a duty of care that would allow for intermediary liability against social-media companies may make sense.[72]
On the other hand, when it comes to online age-verification and parental-consent laws, widely available practical and technological means appear to be the lowest-cost way to avoid the negative externalities associated with social-media usage. As NetChoice put it in their complaint against Arkansas’ social-media age-verification law, “[p]arents have myriad ways to restrict their children’s access to online services and to keep their children safe on such services.”[73]
NetChoice also recognizes the subjective nature of negative externalities, stating:
Just as people inevitably have different opinions about what books, television shows, and video games are appropriate for minors, people inevitably have different views about whether and to what degree online services are appropriate for minors. While many minors use online services in wholesome and productive ways, online services, like many other technologies, can be abused in ways that may harm minors.[74]
They then expertly list all the ways that parents can take control and help their children avoid online harms, including with respect to the decisions to buy devices for their children and to set terms for how and when they are permitted to use them.[75] Parents can also choose to use tools from cell-phone carriers and broadband providers to block certain apps and sites from their children’s devices, or to control with whom their children can communicate and for how long they can use the devices.[76] They also point to wireless routers that allow parents to filter and monitor online content;[77] parental controls at the device level;[78] third-party filtering applications;[79] and numerous tools offered by NetChoice members that all allow for relatively low-cost monitoring and control by parents and even teen users acting on their own behalf.[80] Finally, they note that NetChoice members, in response to market demand,[81] expend significant resources curating content to make sure it is appropriate.[82]
The recent response from the Australian government to the proposed “Roadmap for Age Verification”[83] buttresses this analysis. The government pulled back from plans to “force adult websites to bring in age verification following concerns about privacy and the lack of maturity of the technology.”[84] In particular, the government noted that:
It is clear from the Roadmap that at present, each type of age verification or age assurance technology comes with its own privacy, security, effectiveness and implementation issues. For age assurance to be effective, it must: work reliably without circumvention; be comprehensively implemented, including where pornography is hosted outside of Australia’s jurisdiction; and balance privacy and security, without introducing risks to the personal information of adults who choose to access legal pornography.
Age assurance technologies cannot yet meet all these requirements. While industry is taking steps to further develop these technologies, the Roadmap finds that the age assurance market is, at this time, immature.
The Roadmap makes clear that a decision to mandate age assurance is not ready to be taken.[85]
As a better solution, the government offered “[m]ore support and resources for families,”[86] including promoting tools already available in the marketplace to help prevent children from accessing inappropriate content like pornography,[87] and promoting education for both parents and children on how to avoid online harms.[88]
In sum, this is all about transaction costs. The least-cost avoiders of the negative externalities of social-media usage are parents and teens themselves, working together to make marginal decisions about how to use these platforms through widely available practical and technological means.
If the burden of avoiding negative externalities is placed on social-media platforms, the result could be considerable collateral censorship of protected speech. This is because of the transaction costs explained above in Part IV.A. Thus, while one could argue that the externalities imposed by social-media platforms on teen users and their parents represent a market failure, this is not the end of the analysis. Transaction costs help to explain how the institutional environment we create fosters the rules of the game that platforms, parents, and teens follow. If transaction costs are too high and placed incorrectly on social-media platforms, parents’ and teens’ ability to control how they use social media will actually suffer.
As can be seen most prominently in the COPPA examples discussed above,[89] the burden of obtaining verifiable parental consent leads platforms to reallocate investments into the exclusion of the protected class—in that case, children under age 13—that could otherwise go toward creating a safe and vibrant community from which children could benefit. Thus, proposals like COPPA 2.0,[90] which would extend the need for verifiable consent to teens, could yield an equivalent result: greater exclusion of teens. State laws that would require age verification and verifiable parental consent for teens are likely to produce the same result. The irony, of course, is that parental-consent laws would actually reduce the available choices for those parents who see use value in social media for their teenagers.
In sum, the economics of transaction costs explains why age-verification and verifiable-parental-consent laws will not satisfy their proponents’ stated objectives. As with minimum-wage laws[91] and rent control,[92] economics helps to explain the counterintuitive finding that well-intentioned laws can produce the exact opposite of their intended result. Here, that means age-verification and verifiable-parental-consent laws leave parents and teens less able to make meaningful, marginal decisions about the costs and benefits of their own social-media usage.
Bringing this all together, Part V will consider the constitutionality of the enacted and proposed laws on age verification and verifiable parental consent under the First Amendment. As several courts have already suggested, these laws will not survive First Amendment scrutiny.
The first question is whether these laws will be subject to strict scrutiny (because they are content-based) or instead to intermediate scrutiny as content-neutral regulations. There is a possibility that it will not matter, because a court could find—as one already has—that such laws burden more speech than necessary anyway. Part V.A will take up these questions.
The second set of questions is whether, assuming strict scrutiny applies, these enacted and proposed laws could survive the least-restrictive-means test. Part V.B will consider this set of questions and argue that, because parents and teens are the lowest-cost avoiders, their working together through widely available practical and technological means to avoid negative externalities also represents the least-restrictive means of promoting the government’s interest in protecting minors from the harms of social media.
The first important question is whether laws that attempt to protect minors from externalities associated with social-media usage are content-neutral. One argument that has been advanced is that they are simply content-neutral contract laws that shift the consent default to parents before teens can establish an ongoing contractual relationship with a social-media company by creating a profile.[93]
Before delving into whether that argument could work, it is worth considering laws that are clearly content-based to help tell the difference. For instance, the Texas law challenged in Free Speech Coalition v. Colmenero is clearly content-based, because “the regulation is based on whether content contains sexual material.”[94]
Similarly, laws like the Kids Online Safety Act (KOSA)[95] are content-based, in that they require covered platforms to take:
reasonable measures in its design or operation of products and services to prevent or mitigate the following:
1. Consistent with evidence-informed medical information, the following mental health disorders: anxiety, depression, eating disorders, substance use disorders, and suicidal behaviors.
2. Patterns of use that indicate or encourage addiction-like behaviors.
3. Physical violence, online bullying, and harassment of the minor.
4. Sexual exploitation and abuse.
5. Promotion and marketing of narcotic drugs (as defined in section 102 of the Controlled Substances Act (21 U.S.C. 802)), tobacco products, gambling, or alcohol.
6. Predatory, unfair, or deceptive marketing practices, or other financial harms.[96]
While parts 4-6 and actual physical violence all constitute either unprotected speech or conduct, decisions about how to present information covered by part 2 are arguably protected speech.[97] Even true threats like online bullying and harassment are speech subject to at least some First Amendment scrutiny, in that they would require some type of mens rea to be constitutional.[98] Part 1 may be unconstitutionally vague as written.[99] Moreover, parts 1-3 are clearly content-based, in that it is necessary to consider the content presented, which will include at least some protected speech. The same applies to the California Age-Appropriate Design Code (AADC),[100] which places an obligation on covered companies to identify and mitigate speech that is harmful or potentially harmful to users under 18 years old, and to prioritize speech that promotes such users’ well-being and best interests.[101]
In each of these cases, it would be difficult to argue that strict scrutiny ought not apply. On the other hand, some have argued that the Utah and Arkansas laws requiring age verification and verifiable parental consent are simply content-neutral regulations of contract formation, which can be considered independently of speech.[102] Arkansas has argued that Act 689’s age-verification requirements are “merely a content-neutral regulation on access to speech at particular ‘locations,’ so intermediate scrutiny should apply.”[103]
But even in NetChoice v. Griffin,[104] the U.S. District Court for the Western District of Arkansas, while skeptical that the law was content-neutral,[105] proceeded as if it were and still found, in granting a preliminary injunction, that the age-verification law “is likely to unduly burden adult and minor access to constitutionally protected speech.”[106] Similarly, the U.S. District Court for the Northern District of California found that all major provisions of California’s AADC were likely unconstitutional under a lax commercial-speech standard.[107]
Nonetheless, there are strong arguments that these laws are content-based. As the court in Griffin put it:
Deciding whether Act 689 is content-based or content-neutral turns on the reasons the State gives for adopting the Act. First, the State argues that the more time a minor spends on social media, the more likely it is that the minor will suffer negative mental health outcomes, including depression and anxiety. Second, the State points out that adult sexual predators on social media seek out minors and victimize them in various ways. Therefore, to the State, a law limiting access to social media platforms based on the user’s age would be content-neutral and require only intermediate scrutiny.
On the other hand, the State points to certain speech-related content on social media that it maintains is harmful for children to view. Some of this content is not constitutionally protected speech, while other content, though potentially damaging or distressing, especially to younger minors, is likely protected nonetheless. Examples of this type of speech include depictions and discussions of violence or self-harming, information about dieting, so-called “bullying” speech, or speech targeting a speaker’s physical appearance, race or ethnicity, sexual orientation, or gender. If the State’s purpose is to restrict access to constitutionally protected speech based on the State’s belief that such speech is harmful to minors, then arguably Act 689 would be subject to strict scrutiny.
During the hearing, the State advocated for intermediate scrutiny and framed Act 689 as “a restriction on where minors can be,” emphasizing it was “not a speech restriction” but “a location restriction.” The State’s briefing analogized Act 689 to a restriction on minors entering a bar or a casino. But this analogy is weak. After all, minors have no constitutional right to consume alcohol, and the primary purpose of a bar is to serve alcohol. By contrast, the primary purpose of a social media platform is to engage in speech, and the State stipulated that social media platforms contain vast amounts of constitutionally protected speech for both adults and minors. Furthermore, Act 689 imposes much broader “location restrictions” than a bar does. The Court inquired of the State why minors should be barred from accessing entire social media platforms, even though only some of the content was potentially harmful to them, and the following colloquy ensued:
THE COURT: Well, to pick up on Mr. Allen’s analogy of the mall, I haven’t been to the Northwest Arkansas mall in a while, but it used to be that there was a restaurant inside the mall that had a bar. And so certainly minors could not go sit at the bar and order up a drink, but they could go to the Barnes & Noble bookstore or the clothing store or the athletic store. Again, borrowing Mr. Allen’s analogy, the gatekeeping that Act 689 imposes is at the front door of the mall, not the bar inside the mall; yes?
THE STATE: The state’s position is that the whole mall is a bar, if you want to continue to use the analogy.
THE COURT: The whole mall is a bar?
THE STATE: Correct.
Clearly, the state’s analogy is not persuasive.
NetChoice argues that Act 689 is not a content-neutral restriction on minors’ ability to access particular spaces online, and the fact that there are so many exemptions to the definitions of “social media company” and “social media platform” proves that the State is targeting certain companies based either on a platform’s content or its viewpoint. Indeed, Act 689’s definitions and exemptions do seem to indicate that the State has selected a few platforms for regulation while ignoring all the rest. The fact that the State fails to acknowledge this causes the Court to suspect that the regulation may not be content neutral. “If there is evidence that an impermissible purpose or justification underpins a facially content-neutral restriction, for instance, that restriction may be content-based.” City of Austin v. Reagan Nat’l Advertising of Austin, LLC, 142 S. Ct. 1464, 1475 (2022).[108]
Utah’s HB 311 and SB 152 would also seem to suffer from defects similar to those of KOSA and the AADC,[109] though they have not yet been litigated.
Assuming that courts do, in fact, find that these laws are content-based, strict scrutiny would apply, including the least-restrictive-means test.[110] In that case, the caselaw is clear: the least-restrictive means to achieve the government’s interest of protecting minors from social media’s speech and design problems is to promote low-cost monitoring and filtering.
First, however, it is also worth inquiring whether the government would be able to establish a compelling state interest, as the Court discussed in Brown. The Court’s strong skepticism of government paternalism[111] applies equally to the verifiable-parental-consent laws enacted in Arkansas and Utah, as well as to COPPA 2.0. Laws aiding parental consent likely fail to “meet a substantial need of parents who wish to restrict their children’s access”[112] to social media but cannot do so, to use the late Justice Antonin Scalia’s language. Moreover, the “purported aid to parental authority” is likely to be found “vastly overinclusive” because “[n]ot all of the children who are forbidden” to join social media on “their own have parents who care whether” they do so.[113] While such laws “may indeed be in support of what some parents of the restricted children actually want, its entire effect is only in support of what the State thinks parents ought to want. This is not the narrow tailoring to ‘assisting parents’ that restriction of First Amendment rights requires.”[114]
As argued above, Ashcroft is strong precedent that promoting the practical and technological means available in the marketplace, as outlined by NetChoice in its brief in Griffin, is less restrictive than age-verification laws as a means of protecting minors from harms associated with social-media usage.[115] In fact, there is a strong argument that the market has since produced more numerous and more effective tools than were available even then. This makes it exceedingly unlikely that the Supreme Court will change its mind.
While some have argued that Justice Clarence Thomas’ dissent in Brown offers a roadmap to reject these precedents,[116] there is little basis for that conclusion. First, Thomas’ dissent in Brown was not joined by any other member of the Supreme Court.[117] Second, Justice Thomas joined the majority in Ashcroft v. ACLU, suggesting he probably still sees age-verification laws as unconstitutional.[118] Even Justice Samuel Alito issued a concurrence to the majority in that case,[119] expressing skepticism of Justice Thomas’ approach.[120] Third, it seems unlikely that the newer conservative justices, whose jurisprudence has been more speech-protective by nature,[121] would join Justice Thomas in his opinion on the right of children to receive speech. And far from being vague on the issue of whether a minor has a right to receive speech,[122] Justice Scalia’s majority opinion clearly stated that:
[M]inors are entitled to a significant measure of First Amendment protection, and only in relatively narrow and well-defined circumstances may government bar public dissemination of protected materials to them… but that does not include a free-floating power to restrict the ideas to which children may be exposed.[123]
Precedent is strong against age-verification and parental-consent laws, and there is no reason to think the personnel changes on the Supreme Court would change the analysis.
In sum, straightforward applications of Brown and Ashcroft doom these new social-media laws.
This issue brief has offered two main conclusions: one of interest to the scholarship applying law & economics to constitutional law, and the other to the policy and legal questions surrounding social-media age-verification and parental-consent laws.
In conclusion, these online age-verification laws should be rejected. Why? The answer is transaction costs.
[1] See, e.g., Kirsten Weir, Social Media Brings Benefits and Risks to Teens. Here’s How Psychology Can Help Identify a Path Forward, 54 Monitor on Psychology 46 (Sep. 1, 2023), https://www.apa.org/monitor/2023/09/protecting-teens-on-social-media.
[2] See, e.g., Khara Boender, Jordan Rodell, & Alex Spyropoulos, The State of Affairs: What Happened in Tech Policy During 2023 State Legislative Sessions?, Project Disco (Jul. 25, 2023), https://www.project-disco.org/competition/the-state-of-affairs-state-tech-policy-in-2023 (noting laws passed and proposed addressing children’s online safety at the state level, including California’s Age-Appropriate Design Code and age-verification laws in both Arkansas and Utah, all of which will be considered below).
[3] With apologies to Mike Munger for borrowing the title of his excellent podcast, invoked several times in this issue brief; see The Answer Is Transaction Costs, https://podcasts.apple.com/us/podcast/the-answer-is-transaction-costs/id1687215430 (last accessed Sept. 28, 2023).
[4] Steven G. Medema, “Failure to Appear”: The Use of the Coase Theorem in Judicial Opinions, at 4, Dep’t of Econ. Duke Univ., Working Paper No. 2.1 (2019), available at https://hope.econ.duke.edu/sites/hope.econ.duke.edu/files/Medema%20workshop%20paper.pdf.
[5] Fred R. Shapiro & Michelle Pearse, The Most Cited Law Review Articles of All Time, 110 Mich. L. Rev. 1483, 1489 (2012).
[6] R.H. Coase, The Problem of Social Cost, 3 J. L. & Econ. 1 (1960).
[7] See generally Steven G. Medema, The Coase Theorem at Sixty, 58 J. Econ. Lit. 1045 (2020).
[8] Todd J. Zywicki & Edward Peter Stringham, Common Law and Economic Efficiency, Geo. Mason Univ. L. & Econ. Rsch., Working Paper No. 10-43 (2010), available at https://www.law.gmu.edu/assets/files/publications/working_papers/1043CommonLawandEconomicEfficiency.pdf.
[9] See id. at 4.
[10] See id. at 3.
[11] See id. at 10.
[12] See id. at 34.
[13] Medema, supra note 4, at 39.
[14] See, e.g., Matti Vuorre & Andrew K. Przybylski, Estimating the Association Between Facebook Adoption and Well-Being in 72 Countries, 10 Royal Soc’y Open Sci. 1 (2023), https://royalsocietypublishing.org/doi/epdf/10.1098/rsos.221451; Sabrina Cipoletta, Clelia Malighetti, Chiara Cenedese, & Andrea Spoto, How Can Adolescents Benefit from the Use of Social Networks? The iGeneration on Instagram, 17 Int. J. Environ. Res. Pub. Health 6952 (2020), https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7579040.
[15] See Jean M. Twenge, Thomas E. Joiner, Megan L Rogers, & Gabrielle N. Martin, Increases in Depressive Symptoms, Suicide-Related Outcomes, and Suicide Rates Among U.S. Adolescents After 2010 and Links to Increased New Media Screen Time, 6 Clinical Psych. Sci. 3 (2018), available at https://courses.engr.illinois.edu/cs565/sp2018/Live1_Depression&ScreenTime.pdf.
[16] Adam Candeub, Age Verification for Social Media: A Constitutional and Reasonable Regulation, FedSoc Blog (Aug. 7, 2023), https://fedsoc.org/commentary/fedsoc-blog/age-verification-for-social-media-a-constitutional-and-reasonable-regulation.
[17] See Wikipedia, List of Alcohol Laws of the United States, https://en.wikipedia.org/wiki/List_of_alcohol_laws_of_the_United_States (last accessed Sep. 28, 2023); Wikipedia, U.S. History of Tobacco Minimum Purchase Age by State, https://en.wikipedia.org/wiki/U.S._history_of_tobacco_minimum_purchase_age_by_state (last accessed Sep. 28, 2023).
[18] See Wikipedia, Driver’s Licenses in the United States, https://en.wikipedia.org/wiki/Driver%27s_licenses_in_the_United_States (last accessed Sep. 28, 2023).
[19] See Wikipedia, Gambling Age, https://en.wikipedia.org/wiki/Gambling_age (last accessed Sep. 28, 2023) (table on minimum age for lottery tickets and casinos by state). As far as this author is aware, every state and territory requires identification demonstrating the buyer is at least 18 years old to make a retail purchase of a pornographic magazine or video.
[20] See 15 U.S.C. § 6501, et seq. (2018); 16 CFR Part 312.
[21] See infra Part III. See Brown v. Ent. Merch. Ass’n, 564 U.S. 786, 794 (2011) (“California does not argue that it is empowered to prohibit selling offensively violent works to adults—and it is wise not to, since that is but a hair’s breadth from the argument rejected in Stevens. Instead, it wishes to create a wholly new category of content-based regulation that is permissible only for speech directed at children. That is unprecedented and mistaken. ‘[M]inors are entitled to a significant measure of First Amendment protection, and only in relatively narrow and well-defined circumstances may government bar public dissemination of protected materials to them…’ No doubt a State possesses legitimate power to protect children from harm… but that does not include a free-floating power to restrict the ideas to which children may be exposed. ‘Speech that is neither obscene as to youths nor subject to some other legitimate proscription cannot be suppressed solely to protect the young from ideas or images that a legislative body thinks unsuitable for them.’”) (internal citations omitted).
[22] See infra Part V.
[23] See infra Part IV.
[24] 529 U.S. 803 (2000).
[25] Id. at 806.
[26] See id.
[27] See id. at 806-807.
[28] Id. at 811.
[29] Id. at 813 (internal citation omitted).
[30] Id. at 814.
[31] Id. at 815.
[32] Id. at 816.
[33] See id. at 821 (“[M]arket-based solutions such as programmable televisions, VCR’s, and mapping systems [which display a blue screen when tuned to a scrambled signal] may eliminate signal bleed at the consumer end of the cable.”).
[34] See id. at 823 (“The Government also failed to prove § 504 with adequate notice would be an ineffective alternative to § 505.”).
[35] Id. at 825-826.
[36] 542 U.S. 656 (2004).
[37] Id. at 659.
[38] Id. at 661.
[39] Id. at 662.
[40] Id. at 666.
[41] See id. at 667 (“Filters are less restrictive than COPA. They impose selective restrictions on speech at the receiving end, not universal restrictions at the source. Under a filtering regime, adults without children may gain access to speech they have a right to see without having to identify themselves or provide their credit card information. Even adults with children may obtain access to the same speech on the same terms simply by turning off the filter on their home computers. Above all, promoting the use of filters does not condemn as criminal any category of speech, and so the potential chilling effect is eliminated, or at least much diminished. All of these things are true, moreover, regardless of how broadly or narrowly the definitions in COPA are construed.”).
[42] See id. at 667-669.
[43] See id. at 669-670.
[44] 564 U.S. 786 (2011).
[45] See id. at 787.
[46] Id. at 793-795.
[47] See id. at 794-797.
[48] See id. at 796-799.
[49] See id. at 799-802.
[50] Id. at 801.
[51] Id. at 801-804.
[52] See id. at 812 (Alito, J., concurring):
“There is a critical difference, however, between obscenity laws and laws regulating violence in entertainment. By the time of this Court’s landmark obscenity cases in the 1960’s, obscenity had long been prohibited, See Roth v. U.S., 354 U.S. 476, at 484-485, and this experience had helped to shape certain generally accepted norms concerning expression related to sex.
There is no similar history regarding expression related to violence. As the Court notes, classic literature contains descriptions of great violence, and even children’s stories sometimes depict very violent scenes.
Although our society does not generally regard all depictions of violence as suitable for children or adolescents, the prevalence of violent depictions in children’s literature and entertainment creates numerous opportunities for reasonable people to disagree about which depictions may excite “deviant” or “morbid” impulses. See Edwards & Berman, Regulating Violence on Television, 89 Nw. U.L.Rev. 1487, 1523 (1995) (observing that the Miller test would be difficult to apply to violent expression because “there is nothing even approaching a consensus on low-value violence”).
Finally, the difficulty of ascertaining the community standards incorporated into the California law is compounded by the legislature’s decision to lump all minors together. The California law draws no distinction between young children and adolescents who are nearing the age of majority.”
See also id. at 819 (Alito, J., concurring) (“If the technological characteristics of the sophisticated games that are likely to be available in the near future are combined with the characteristics of the most violent games already marketed, the result will be games that allow troubled teens to experience in an extraordinarily personal and vivid way what it would be like to carry out unspeakable acts of violence.”).
[53] The following sections are adapted from Ben Sperry, Right to Anonymous Speech, Part 3: Anonymous Speech and Age-Verification Laws, Truth on the Market (Sep. 11, 2023), https://truthonthemarket.com/2023/09/11/right-to-anonymous-speech-part-3-anonymous-speech-and-age-verification-laws.
[54] See Ben Sperry, Online Safety Bills Will Mean Kids Are No Longer Seen or Heard Online, The Hill (May 12, 2023), https://thehill.com/opinion/congress-blog/4002535-online-safety-bills-will-mean-kids-are-no-longer-seen-or-heard-online; Ben Sperry, Bills Aimed at ‘Protecting’ Kids Online Throw the Baby out with the Bathwater, The Hill (Jul. 26, 2023), https://thehill.com/opinion/congress-blog/4121324-bills-aimed-at-protecting-kids-online-throw-the-baby-out-with-the-bathwater; Przybylski & Vuorre, supra note 14; Mesfin A. Bekalu, Rachel F. McCloud, & K. Viswanath, Association of Social Media Use With Social Well-Being, Positive Mental Health, and Self-Rated Health: Disentangling Routine Use From Emotional Connection to Use, 42 Sage J. 69S, 69S-80S (2019), https://journals.sagepub.com/doi/full/10.1177/1090198119863768.
[55] See generally Michael Munger, Tomorrow 3.0: Transaction Costs and the Sharing Economy, Cambridge University Press (Mar. 22, 2018).
[56] The Future of the COPPA Rule: An FTC Workshop Part 2, Federal Trade Commission (Oct. 7, 2019), available at https://www.ftc.gov/system/files/documents/public_events/1535372/transcript_of_coppa_workshop_part_2_1.pdf.
[57] Garrett A. Johnson, Tesary Lin, James C. Cooper, & Liang Zhong, COPPAcalypse? The YouTube Settlement’s Impact on Kids Content, SSRN (Apr. 26, 2023), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4430334.
[58] Id. at 6-7 (emphasis added).
[59] Id. at 1.
[60] FTC, supra note 56.
[61] Id. at 6 (emphasis added).
[62] See Michael Munger, To Consumers, All Costs are Transaction Costs, Am. Inst. Econ. Rsch. (June 13, 2023), https://www.aier.org/article/to-consumers-all-costs-are-transaction-costs.
[63] See Katie Robertson, Meta Begins Blocking News in Canada, N.Y. Times (Aug. 2, 2023), https://www.nytimes.com/2023/08/02/business/media/meta-news-in-canada.html; Mark Collom, Australia Made a Deal to Keep News on Facebook. Why Couldn’t Canada?, CBC News (Aug. 3, 2023), https://www.cbc.ca/news/world/meta-australia-google-news-canada-1.6925726.
[64] See id.
[65] Free Speech Coal. Inc. v. Colmenero, No. 1:23-CV-917-DAE, 2023 U.S. Dist. LEXIS 154065 (W.D. Tex. 2023), available at https://storage.courtlistener.com/recap/gov.uscourts.txwd.1172751222/gov.uscourts.txwd.1172751222.36.0.pdf.
[66] Id. at 10.
[67] Id.
[68] Id.
[69] Id. at 44.
[70] Geoffrey A. Manne, Ben Sperry, & Kristian Stout, Who Moderates the Moderators?: A Law & Economics Approach to Holding Online Platforms Accountable Without Destroying the Internet, 49 Rutgers Comput. & Tech. L.J. 26 (2022), https://laweconcenter.org/resources/who-moderates-the-moderators-a-law-economics-approach-to-holding-online-platforms-accountable-without-destroying-the-internet; Geoffrey A. Manne, Kristian Stout, & Ben Sperry, Twitter v. Taamneh and the Law & Economics of Intermediary Liability, Truth on the Market (Mar. 8, 2023), https://truthonthemarket.com/2023/03/08/twitter-v-taamneh-and-the-law-economics-of-intermediary-liability; Ben Sperry, The Law & Economics of Children’s Online Safety: The First Amendment and Online Intermediary Liability, Truth on the Market (May 12 2023), https://truthonthemarket.com/2023/05/12/the-law-economics-of-childrens-online-safety-the-first-amendment-and-online-intermediary-liability.
[71] See Manne, Stout, & Sperry, Twitter v. Taamneh and the Law & Economics of Intermediary Liability, supra note 70; Ben Sperry, Right to Anonymous Speech, Part 2: A Law & Economics Approach, Truth on the Market (Sep. 6, 2023), https://truthonthemarket.com/2023/09/06/right-to-anonymous-speech-part-2-a-law-economics-approach; Manne, Sperry, & Stout, Who Moderates the Moderators?: A Law & Economics Approach to Holding Online Platforms Accountable Without Destroying the Internet, supra note 70.
[72] See Manne, Stout, & Sperry, Who Moderates the Moderators?: A Law & Economics Approach to Holding Online Platforms Accountable Without Destroying the Internet, supra note 70, at 28 (“To the extent that the current legal regime permits social harms online that exceed concomitant benefits, it should be reformed to deter those harms, provided it can be done so at sufficiently low cost.”); Sperry, Right to Anonymous Speech, Part 2: A Law & Economics Approach, supra note 71.
[73] See NetChoice Complaint, NetChoice LLC v. Griffin, NO. 5:23-CV-05105, available at 2023 U.S. Dist. LEXIS 154571 (W.D. Ark. 2023), https://netchoice.org/wp-content/uploads/2023/06/NetChoice-v-Griffin_-Complaint_2023-06-29.pdf.
[74] Id. at para. 13.
[75] See id. at para. 14.
[76] See id.
[77] See id. at para. 15.
[78] See id. at para. 16.
[79] See id.
[80] See id. at para. 17, 19-21.
[81] See Ben Sperry, Congress Should Focus on Protecting Teens from Real Harms, Not Targeted Ads, The Hill (Feb. 12, 2023), https://thehill.com/opinion/congress-blog/3862238-congress-should-focus-on-protecting-teens-from-real-harms-not-targeted-ads.
[82] See NetChoice Complaint, supra note 73 at para. 18.
[83] Government Response to the Roadmap for Age Verification, Australian Gov’t Dep’t of Infrastructure, Transp., Reg’l Dev., Commc’ns and the Arts (Aug. 2023), available at https://www.infrastructure.gov.au/sites/default/files/documents/government-response-to-the-roadmap-for-age-verification-august2023.pdf.
[84] See Josh Taylor, Australia Will Not Force Adult Websites to Bring in Age Verification Due To Privacy And Security Concerns, The Guardian (Aug. 30, 2023), https://www.theguardian.com/australia-news/2023/aug/31/roadmap-for-age-verification-online-pornographic-material-adult-websites-australia-law.
[85] See NetChoice Complaint, supra note 73 at 2.
[86] Id. at 6.
[87] See id.
[88] See id. at 6-8.
[89] Supra Part IV.A.
[90] See Children and Teens’ Online Privacy Protection Act, S. 1418, 118th Cong. (2023), as amended Jul. 27, 2023, available at https://www.congress.gov/bill/118th-congress/senate-bill/1418/text (last accessed Oct. 2, 2023). Other similar bills have been proposed as well. See Protecting Kids on Social Media Act, S. 1291, 118th Cong. (2023); Making Age-Verification Technology Uniform, Robust, and Effective Act, S. 419, 118th Cong. (2023); Social Media Child Protection Act, H.R. 821, 118th Cong. (2023).
[91] See David Neumark & Peter Shirley, Myth or Measurement: What Does the New Minimum Wage Research Say About Minimum Wages and Job Loss in the United States? (Nat’l Bur. Econ. Res. Working Paper 28388, Mar. 2022), available at https://www.nber.org/papers/w28388 (concluding that “(i) there is a clear preponderance of negative estimates in the literature; (ii) this evidence is stronger for teens and young adults as well as the less-educated; (iii) the evidence from studies of directly-affected workers points even more strongly to negative employment effects; and (iv) the evidence from studies of low-wage industries is less one-sided.”).
[92] See Lisa Sturtevant, The Impacts of Rent Control: A Research Review and Synthesis, at 6-7, Nat’l Multifamily Hous. Council Res. Found. (May 2018), available at https://www.nmhc.org/globalassets/knowledge-library/rent-control-literature-review-final2.pdf (“1. Rent control and rent stabilization policies do a poor job at targeting benefits. While some low-income families do benefit from rent control, so, too, do higher-income households. There are more efficient and effective ways to provide assistance to lower-income individuals and families who have trouble finding housing they can afford. 2. Residents of rent-controlled units move less often than do residents of uncontrolled housing units, which can mean that rent control causes renters to continue to live in units that are too small, too large or not in the right locations to best meet their housing needs. 3. Rent-controlled buildings potentially can suffer from deterioration or lack of investment, but the risk is minimized when there are effective local requirements and/or incentives for building maintenance and improvements. 4. Rent control and rent stabilization laws lead to a reduction in the available supply of rental housing in a community, particularly through the conversion to ownership of controlled buildings. 5. Rent control policies can hold rents of controlled units at lower levels but not under all circumstances. 6. Rent control policies generally lead to higher rents in the uncontrolled market, with rents sometimes substantially higher than would be expected without rent control. 7. There are significant fiscal costs associated with implementing a rent control program.”).
[93] See Candeub, supra note 16.
[94] Colmenero, supra note 65, at 22.
[95] See Kids Online Safety Act, S. 1409, 118th Cong. (2023), as amended and posted by the Senate Committee on Commerce, Science, and Transportation on July 27, 2023, available at https://www.congress.gov/bill/118th-congress/senate-bill/1409/text#toc-id6fefcf1d-a1ae-4949-a826-23c1e1b1ef26 (last accessed Oct. 2, 2023).
[96] See id. at Section 3.
[97] Cf. Manhattan Community Access Corp. v. Halleck, 139 S. Ct. 1921, 1930-31 (2019):
[M]erely hosting speech by others is not a traditional, exclusive public function and does not alone transform private entities into state actors subject to First Amendment constraints…
If the rule were otherwise, all private property owners and private lessees who open their property for speech would be subject to First Amendment constraints and would lose the ability to exercise what they deem to be appropriate editorial discretion within that open forum. Private property owners and private lessees would face the unappetizing choice of allowing all comers or closing the platform altogether.
[98] See Counterman v. Colorado, 600 U.S. 66 (2023); Ben Sperry (@RBenSperry), Twitter (June 28, 2023, 4:46 PM), https://twitter.com/RBenSperry/status/1674157227387547648.
[99] Cf. Høeg v. Newsom, 2023 WL 414258 (E.D. Cal. Jan. 25, 2023); Sperry, The Law & Economics of Children’s Online Safety: The First Amendment and Online Intermediary Liability, supra note 70.
[100] California Age-Appropriate Design Code Act, AB 2273 (2022), https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202120220AB2273.
[101] See id. at § 1798.99.32(d)(1), (2), (4).
[102] See Candeub, supra note 16.
[103] NetChoice, LLC v. Griffin, No. 5:23-CV-05105, slip op. at 25 (W.D. Ark. Aug. 31, 2023), available at https://netchoice.org/wp-content/uploads/2023/08/GRIFFIN-NETCHOICE-GRANTED.pdf.
[104] Id.
[105] Id. at 38 (“Having considered both sides’ positions on the level of constitutional scrutiny to be applied, the Court tends to agree with NetChoice that the restrictions in Act 689 are subject to strict scrutiny. However, the Court will not reach that conclusion definitively at this early stage in the proceedings and instead will apply intermediate scrutiny, as the State suggests.”).
[106] Id. at 48 (“In sum, NetChoice is likely to succeed on the merits of the First Amendment claim it raises on behalf of Arkansas users of member platforms. The State’s solution to the very real problems associated with minors’ time spent online and access to harmful content on social media is not narrowly tailored. Act 689 is likely to unduly burden adult and minor access to constitutionally protected speech. If the legislature’s goal in passing Act 689 was to protect minors from materials or interactions that could harm them online, there is no compelling evidence that the Act will be effective in achieving those goals.”).
[107] See NetChoice v. Bonta, Case No. 22-cv-08861-BLF (N.D. Cal. Sept. 18, 2023), slip op., available at https://netchoice.org/wp-content/uploads/2023/09/NETCHOICE-v-BONTA-PRELIMINARY-INJUNCTION-GRANTED.pdf; Ben Sperry, What Does NetChoice v. Bonta Mean for KOSA and Other Attempts to Protect Children Online?, Truth on the Market (Sep. 29, 2023), https://truthonthemarket.com/2023/09/29/what-does-netchoice-v-bonta-mean-for-kosa-and-other-attempts-to-protect-children-online.
[108] Id. at 36-38.
[109] See Carl Szabo, NetChoice Sends Veto Request to Utah Gov. Spencer Cox on HB 311 and SB 152, NetChoice (Mar. 3, 2023), https://netchoice.org/netchoice-sends-veto-request-to-utah-gov-spencer-cox-on-hb-311-and-sb-153.
[110] See, e.g., Sable Commc’ns of Cal. v. FCC, 492 U.S. 115, 126 (1989) (“The Government may, however, regulate the content of constitutionally protected speech in order to promote a compelling interest if it chooses the least restrictive means to further the articulated interest.”).
[111] Brown, 564 U.S. at 801 (“California claims that the Act is justified in aid of parental authority: By requiring that the purchase of violent video games can be made only by adults, the Act ensures that parents can decide what games are appropriate. At the outset, we note our doubts that punishing third parties for conveying protected speech to children just in case their parents disapprove of that speech is a proper governmental means of aiding parental authority.”).
[112] Brown, 564 U.S. at 801.
[113] Id. at 803.
[114] Id.
[115] See supra Part IV.B.
[116] See Clare Morell, Adam Candeub, & Michael Toscano, No, Big Tech Doesn’t Have a Right to Speak to Kids Without Their Parents’ Consent, The Federalist (Sept. 21, 2023), https://thefederalist.com/2023/09/21/no-big-tech-doesnt-have-a-right-to-speak-to-kids-without-their-parents-consent (quoting Justice Clarence Thomas’s dissent in Brown: “the ‘freedom of speech,’ as originally understood, does not include a right to speak to minors (or a right of minors to access speech) without going through the minors’ parents or guardians.”).
[117] Brown, 564 U.S. at 821.
[118] Id. at 822.
[119] Id. at 805.
[120] Id. at 813.
[121] See, e.g., Ben Sperry, There’s Nothing ‘Conservative’ About Trump’s Views on Free Speech and the Regulation of Social Media, Truth on the Market (Jul. 12, 2019), https://truthonthemarket.com/2019/07/12/theres-nothing-conservative-about-trumps-views-on-free-speech (noting that Justice Kavanaugh’s majority opinion in Halleck on compelled speech was joined by all the conservative justices; at the time, he and Justice Gorsuch were relatively new Trump appointees). Justice Amy Coney Barrett likewise joined the majority opinion in 303 Creative LLC v. Elenis, 600 U.S. 570 (2023), written by Justice Gorsuch and joined by all the conservative justices, which held that public-accommodations laws are subject to strict scrutiny when they implicate expressive activity.
[122] Clare Morell (@ClareMorellEPPC), Twitter (Sept. 7, 2023, 8:27 PM), https://twitter.com/ClareMorellEPPC/status/1699942446711357731.
[123] Brown, 564 U.S. at 786.
Amazon against the DSA ad database duty: The new Digital Services Act (DSA) includes a duty for very large online platforms (VLOPs) to “compile and make publicly available an advertisement repository.” Amazon challenged this duty before the EU’s General Court, which made a preliminary decision to temporarily suspend the application of that duty to Amazon.
The European Commission late last month published the full list of its “gatekeeper” designations under the Digital Markets Act (DMA). Alphabet, Amazon, Apple, ByteDance, Meta, and Microsoft—the six designated gatekeepers—now have six months to comply with the DMA’s list of obligations and restrictions with respect to their core platform services (CPS), or they stand to face hefty fines and onerous remedies (see here and here for our initial reactions).
In order to promote competition in digital markets,[1] Latin American countries should not copy and paste “solutions” from other jurisdictions, but rather design their own set of policies. In short, Latin American countries—like my own, Peru—should not “put the cart before the horse” and regulate markets that are not yet mature.