MATSUI LEADS COLLEAGUES IN OPPOSING PUSH TO REVIVE AI MORATORIUM IN NDAA
WASHINGTON, D.C. – Today, Congresswoman Doris Matsui (CA-07), Ranking Member of the House Energy and Commerce Subcommittee on Communications and Technology, and Representatives Ted Lieu (CA-36), April McClain Delaney (MD-06), Luz Rivas (CA-29), Don Beyer (VA-08), and Yvette Clarke (NY-09) led a group of 81 lawmakers in sending a letter to congressional leadership opposing any effort to reintroduce a moratorium on state and local artificial intelligence (AI) laws in the National Defense Authorization Act (NDAA). The lawmakers also raised strong concerns about related proposals under consideration for a presidential executive order that would undermine state authority over AI regulation.
“Earlier this year, the Senate rejected this same AI moratorium concept on an overwhelming bipartisan 99-1 vote for H.R. 1, the reconciliation bill,” wrote the lawmakers. “That vote sent a clear, bipartisan message: Congress should not freeze state and local AI safeguards, least of all when there are no meaningful federal protections in place. Trying to revive the same flawed policy in the NDAA, or through executive action, is an attempt to quietly jam through an idea that has already been rejected while sidestepping public debate and bypassing the regular committee process.”
The lawmakers emphasized that these new attempts come at a moment of intensifying AI-related harms and growing bipartisan public demand for safe, trustworthy AI development. States across the country, led by both Democrats and Republicans, are actively creating common-sense guardrails to protect consumers, workers, children, and vulnerable communities. Blocking those safeguards now, while the federal government has yet to enact any comprehensive AI laws, would leave Americans exposed to escalating risks, erode public trust, and undermine U.S. competitiveness.
“This proposal is not only dangerous, but it is also unpopular,” the lawmakers continued. “The American people reject it, state leaders reject it, and experts reject it.”
“The American people want AI to be used in ways that are safe, fair, and accountable. They want innovation they can trust, not a rush to strip away all safeguards,” the lawmakers concluded.
Congresswoman Matsui has been a leading voice in Congress against the AI moratorium. She previously led her California colleagues in opposing the House-passed moratorium in Republicans’ Big Ugly Bill and later led an effort urging the Senate to strike provisions conditioning BEAD funding on state AI preemption. Today’s letter continues that work as similar proposals resurface in the NDAA and executive branch deliberations.
Full text of the letter can be found below.
Dear Speaker Johnson, Minority Leader Jeffries, Majority Leader Thune, and Minority Leader Schumer:
We write to express our strong opposition to any effort in the National Defense Authorization Act (NDAA) that would reintroduce a sweeping moratorium on the ability of states and local governments to enforce their own artificial intelligence (AI) laws and regulations. Similarly, we strongly oppose related proposals under consideration for a presidential executive order that would attempt to preempt duly enacted state AI laws and coerce states, by threatening federal funding, into abandoning enforcement.
Earlier this year, the Senate rejected this same AI moratorium concept on an overwhelming bipartisan 99-1 vote for H.R. 1, the reconciliation bill. That vote sent a clear, bipartisan message: Congress should not freeze state and local AI safeguards, least of all when there are no meaningful federal protections in place. Trying to revive the same flawed policy in the NDAA, or through executive action, is an attempt to quietly jam through an idea that has already been rejected while sidestepping public debate and bypassing the regular committee process.
Proponents of the ban on state AI laws claim it is necessary to protect innovation, but that gets the tradeoff exactly backwards. We strongly support innovation, and it is simply wrong to accept the premise that identifying and addressing AI-specific risks, and setting common-sense guardrails, is incompatible with U.S. leadership in AI. Clear, trusted rules unlock innovation by giving people confidence and certainty, while promoting a fair, open, and competitive playing field.
Our federal system is designed to let states serve as “laboratories of democracy,” as states are closest to the communities already experiencing AI harms—from deepfakes and targeted scams to automated systems that entrench bias, and emerging risks to kids’ online safety. If states are blocked from enforcing their own AI laws without any meaningful federal alternative in place, those harms will deepen, public trust will erode, and U.S. competitiveness will suffer.
States, led by both Democrats and Republicans, are working to find that balance. Last year, Utah enacted the Utah Artificial Intelligence Policy Act to mandate certain disclosure requirements for entities using generative AI tools, and further updated the law this year to address mental health chatbots, showing how states can quickly adapt as new risks emerge. And in October, California enacted a first-of-its-kind law, the Transparency in Frontier Artificial Intelligence Act (SB-53), establishing basic transparency and safety requirements for the largest AI companies. This law is a common-sense framework: it helps protect the public without slowing innovation or overburdening smaller developers, while recognizing the potential to align with future federal frameworks should the federal government adopt national AI standards. Notably, leading AI companies agree this is the right direction. Anthropic endorsed SB-53. OpenAI stated the company was “pleased to see that California has created a critical path toward harmonization with the federal government.” And Meta said it “supports balanced AI regulation,” calling the law “a positive step in that direction.”
By contrast, the federal government has yet to enact, or even seriously discuss, a single comprehensive AI safety or accountability law that would provide meaningful protections for consumers, workers, or our democracy. At the same time, we are hearing growing reports of tragic suicides encouraged by AI chatbots—like that of 16-year-old Adam Raine—and AI-induced psychosis. Against this backdrop, it would be especially irresponsible to tell states they cannot act when there are no meaningful federal protections in place. Many states have been stepping in where the federal government has not, trying to address the documented harms that AI is causing right now in the real world.
In addition to the renewed push for an AI moratorium in the NDAA, we are deeply concerned by reports that President Trump is preparing an executive order that would directly undermine state authority over AI. Specifically, we understand that such an order could:
- Create an AI Litigation Task Force at the Department of Justice and direct it to challenge state AI laws as unconstitutional or preempted, including by invoking the Commerce Clause and interstate commerce as a pretext; and
- Direct federal agencies to identify so-called “onerous” state AI laws and withhold or restrict non-deployment funds under the Broadband Equity, Access, and Deployment (BEAD) program and other discretionary federal funds unless states walk away from these critical AI protections, while urging regulators such as the Federal Communications Commission and Federal Trade Commission to treat certain state safeguards as preempted.
If these reports are accurate, the order would be a troubling attempt to weaponize preemption doctrine and federal funding streams to override the policy decisions of state legislatures and local governments. It would represent an unprecedented, sweeping effort to strip states of their ability to regulate AI and protect their residents from serious harms, setting a dangerous precedent. Congress should reject any attempt to codify this approach in the NDAA or other legislation and be prepared to exercise robust oversight of any such executive action.
We are equally concerned by any effort to condition critical federal programs on a state AI law moratorium. Earlier proposals to tie BEAD funds to state AI preemption were deeply misguided. Conditioning vital federal funding, authorized and appropriated by Congress, on unrelated state concessions about AI regulation would punish communities for insisting on reasonable safeguards, delay essential projects, and inject unnecessary uncertainty into programs that are supposed to be focused on closing the digital divide.
This proposal is not only dangerous, but it is also unpopular. The American people reject it, state leaders reject it, and experts reject it.
A recent Gallup survey found that 80% of U.S. adults believe the government should maintain rules for AI safety and data security—including 88% of Democrats and 79% of Republicans and independents. In other words, there is overwhelming, bipartisan support for common-sense AI safeguards.
Opposition to an AI moratorium is also broad and bipartisan. The earlier version was publicly opposed by a wide coalition, including 40 state attorneys general, 260 state lawmakers, faith leaders, more than 130 advocacy groups, 17 Republican governors, and many child safety experts. Already, both Republican and Democratic governors, lawmakers, and stakeholders have spoken out against this new push for a federal AI moratorium.
As you consider the NDAA, we urge you to oppose any effort to reinsert an AI moratorium barring state and local governments from enforcing their laws.
The American people want AI to be used in ways that are safe, fair, and accountable. They want innovation they can trust, not a rush to strip away all safeguards. We stand ready to work with you on robust federal legislation that meets this moment. But we cannot support efforts to silence states, undermine existing protections, or use must-pass bills like the NDAA to jam through a policy that has already been soundly rejected.
Thank you for your attention to this important matter.
# # #
