Regulatory Comments

Coalition Letter Opposing California SB 1047

We, the undersigned organizations and individuals, are writing to express our serious concerns about SB 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Systems Act. We believe that the bill, as currently written, would have severe unintended consequences that could stifle innovation, harm California’s economy, and undermine America’s global leadership in AI.

Our main concerns with SB 1047 are as follows:

  1. The application of the precautionary principle, codified as a “limited duty exemption,” would require developers to guarantee that their models cannot be misused for various harmful purposes, even before training begins. Given the general-purpose nature of AI technology, this is an unreasonable and impractical standard that could expose developers to criminal and civil liability for actions beyond their control.
  2. The bill’s compliance requirements, including implementing safety guidance from multiple sources and paying fees to fund the Frontier Model Division, would be expensive and time-consuming for many AI companies. This could drive businesses out of California and discourage new startups from forming. Given California’s current budget deficit and the state’s reliance upon capital gains taxation, even a marginal shift of AI startups to other states could be deleterious to the state government’s fiscal position.
  3. The bill’s definition of a “covered model” (a model trained with more than 10^26 floating-point operations at a cost above $100 million) will create confusion, encourage an adversarial relationship between the Frontier Model Division and AI developers, and interfere with industry dynamics in unpredictable ways. First, it is not always straightforward to say what a training run for a model costs. Second, the Frontier Model Division will have an incentive to investigate AI companies’ finances and other records to ensure they are not training covered models, which will create another burden for developers. Finally, it penalizes companies based on the size of their investment in AI: if a company trains a model above the threshold, it will be regulated in perpetuity. Yet because compute costs fall rapidly, a competitor could train a comparable model six months later and be subject to no regulation at all. This is nonsensical.
  4. The bill’s combination of the precautionary principle and liability (both criminal and civil) is incompatible with the way open-source software has been developed and distributed for decades. While this bill would not ban any existing open-source model, it constitutes a gradual legislated phasing out of open-source AI near today’s frontier.

These restrictions on open-source AI models would undermine a key driver of innovation and collaboration in the field. The vast majority of stakeholders, including large tech companies, startups, the broader business community, academia, and civil society organizations like the Center for American Progress, have voiced support for open-source AI development. Open-source AI has also thus far played an essential role in interpretability and safety research; by limiting access to future open-source models, this bill could undermine progress in those vital fields.

We believe that SB 1047, if enacted in its current form, would have a chilling effect on AI research and development in California and potentially across the United States. It could slow down progress in a field that holds immense promise for advancing scientific understanding, improving medicine, and driving economic growth.

While we share the goal of ensuring that AI is developed and deployed responsibly, we urge you to reconsider the approach taken in SB 1047. The bill is also broadly inconsistent with the legislative direction suggested by the United States Senate’s Bipartisan Working Group on AI; if SB 1047 passes, California would likely be an unfortunate outlier in the broader context of American policy stances toward AI.

In conclusion, we respectfully request that you either make substantial changes to SB 1047 to address the concerns outlined above or withdraw the bill entirely. We stand ready to work with you to find a path forward that promotes innovation while also ensuring the safe and responsible development of AI technology.


Neil Chilson, Head of AI Policy, Abundance Institute

Kristian Stout, Director of Innovation Policy, International Center for Law & Economics

Lisa B. Nelson, CEO, ALEC Action

Logan Kolas, Director of Technology Policy, American Consumer Institute

Daniel Castro, Director, Center for Data Innovation

Taylor Barkley, Director of Public Policy, Abundance Institute

Adam Thierer, Resident Senior Fellow, Technology & Innovation, R Street Institute

Vance Ginn, Ph.D., Former Chief Economist, White House Office of Management and Budget

Jessica Melugin, Director, Center for Technology and Innovation, Competitive Enterprise Institute

Nathan Leamer, Executive Director, Digital First Project