Key point: Although the executive order seeks to bring regulatory certainty to the development and deployment of AI in the U.S., it is unlikely, at least in the short term, to alleviate compliance burdens for businesses and may only create more uncertainty.
On December 11, 2025, President Donald Trump signed an executive order titled “Ensuring a National Policy Framework for Artificial Intelligence.” The purpose of the executive order is twofold.
First, the order seeks to create a legal structure to stop states from enacting new state AI laws and from enforcing existing ones. According to the order, the goal of the U.S. must be “[t]o win” a “race with adversaries for supremacy” in AI. However, to do so, “United States AI companies must be free to innovate without cumbersome regulation.” Therefore, the order seeks to prevent “a patchwork of 50 different state regulatory regimes that makes compliance more challenging, particularly for start-ups.” Importantly, the order itself does not attempt to preempt state AI laws. Rather, as discussed below, it just creates a structure for the federal government to try to preempt some of them.
Second, the order states that the Trump administration will work with Congress to enact a “minimally burdensome national standard” that preempts state law and “ensure[s] that the United States wins the AI race, as we must.”
The order follows two prior attempts in Congress to pass a moratorium on states enacting AI laws. Most recently, an attempt to include a moratorium in the National Defense Authorization Act of 2026 failed, creating the impetus for the president to sign the order.
Although the executive order seeks to streamline and reduce AI regulation, it leaves open many questions, including the scope of laws that will be challenged and the likelihood — if not certainty — that states will challenge the order’s legality. It also remains to be seen whether the order slows the passage of new state AI laws and enforcement of existing ones. Indeed, it could ultimately have the unintended consequence of resulting in even more state AI laws. In the article below, we discuss the scope of the order, the state AI laws that could be targeted by the administration, how states have reacted to the order, and takeaways for businesses that are trying to comply with existing and forthcoming state AI laws.
What Does the Executive Order Do?
The executive order essentially does two things — (1) it creates a legal structure for the administration to challenge existing state AI laws and discourage states from passing new laws, and (2) it charges administration officials with preparing a legislative recommendation for a federal AI framework that preempts state AI laws.
With respect to the first topic, the order requires the attorney general to establish an AI Litigation Task Force within 30 days. The sole responsibility of the task force is to challenge state AI laws that are inconsistent with sustaining and enhancing “the United States’ global AI dominance through a minimally burdensome national policy framework for AI.”
In addition, within 90 days of the order, the secretary of commerce must publish an evaluation of existing state AI laws “that identifies onerous laws that conflict with the policy [of U.S. global AI dominance], as well as laws that should be referred to the Task Force.” The secretary of commerce also must issue a policy notice specifying the conditions under which states may be eligible for remaining funding under the Broadband Equity, Access, and Deployment (BEAD) program that was saved through the administration’s “Benefit of the Bargain” reforms. Among other things, the policy notice must provide that states with onerous AI laws identified by the secretary are ineligible for non-deployment funds.
Executive departments and agencies also must assess their discretionary grant programs to determine whether they can condition grants on states not enacting AI laws. For states with existing laws, the executive departments and agencies must assess whether to enter into binding agreements prohibiting states from enforcing such laws while they receive discretionary funding.
Next, within 90 days of the order, the Federal Trade Commission (FTC) chair must issue a policy statement on the application of the FTC Act’s prohibition on unfair or deceptive acts or practices to AI models. “That policy statement must explain the circumstances under which State laws that require alterations to the truthful outputs of AI models are preempted by the Federal Trade Commission Act’s prohibition on engaging in deceptive acts or practices affecting commerce.”
Finally, the special advisor for AI and crypto and the assistant to the president for science and technology are to prepare a “legislative recommendation” for a federal AI framework that preempts state AI laws. However, that recommendation will not propose preempting state AI laws relating to (1) child safety protections; (2) AI compute and data center infrastructure, other than generally applicable permitting reforms; (3) state government procurement and use of AI; and (4) “other topics as shall be determined.”
What State AI Laws Could the Administration Challenge?
As noted, the executive order charges the commerce secretary with creating a list of state AI laws that conflict with the order’s stated policy of sustaining and enhancing U.S. global AI dominance. The order does not define what constitutes a state AI law, which, as discussed below, can include many different types of laws. However, the order does state that the evaluation must identify laws that “require AI models to alter their truthful outputs, or that may compel AI developers or deployers to disclose or report information in a manner that would violate the First Amendment or any other provision of the Constitution.”
The order does not specifically state what it means for a law to require AI models to “alter their truthful outputs.” However, this likely refers to the argument that laws requiring companies to mitigate the risk that their algorithms will discriminate against protected classes, such as women, veterans, minorities, and religious groups, effectively mandate untruthful outputs. To that end, the executive order states that “State laws are increasingly responsible for requiring entities to embed ideological bias within models. For example, a new Colorado law banning ‘algorithmic discrimination’ may even force AI models to produce false results in order to avoid a ‘differential treatment or impact’ on protected groups.”
For reference, we maintain a map and list of the most notable private-sector-focused state AI laws. However, there are many other types of state AI laws, including laws regulating the use of AI in health care, insurance, real estate, elections, and law enforcement; government use and procurement of AI technologies; name, image, and likeness; child sexual abuse material (CSAM) and sexual material; and deepfakes and other criminal issues. In addition, state kids’ safety and social media laws implicate AI, as do state consumer data privacy laws, including the California Consumer Privacy Act (CCPA), under which CalPrivacy recently finalized regulations on automated decision-making technologies. Generally applicable state laws also necessarily regulate AI, including unfair and deceptive acts and practices laws, false advertising laws, criminal laws, and civil rights laws.
The signed executive order refers to only one law: Colorado’s AI Act. The draft version of the executive order leaked in November also referenced California’s Frontier Artificial Intelligence Act (SB 53), but that reference was removed in the final signed version. Discussing that removal, Politico reported that “many tech companies want to include similar language from that law . . . in any federal AI law.” In that same vein, earlier this year, New York lawmakers passed a frontier model bill, the New York Responsible AI Safety and Education (RAISE) Act (A 6453 / S 6953). New York Governor Kathy Hochul is currently considering that bill and reportedly is weighing amendments, backed by tech advocates, that would align it with California’s SB 53.
Although the final order refers only to the Colorado law, President Trump and David Sacks, chair of the president’s Council of Advisors on Science and Technology, specifically referenced laws passed in California, New York, and Illinois while signing the order. Sacks further stated that the administration will not target state children’s safety laws and that it will focus on challenging the most onerous laws. Part 8 of the order also signals that state AI compute and data infrastructure laws and state government procurement and use of AI laws will not be targeted.
With respect to California, as we previously reported, California enacted seven private-sector-focused AI-related laws this year. In addition to SB 53 (discussed above), California enacted AB 853 (amending the California AI Transparency Act (SB 942)), SB 243 (companion chatbots), AB 325 (Cartwright Act violations), AB 723 (real estate: digitally altered images: disclosure), AB 316 (AI: defenses), and AB 489 (health care professions: deceptive terms or letters). In 2024, California enacted AB 2013 (Generative AI Training Data Transparency), which goes into effect January 1, 2026. The new CCPA regulations on automated decision-making technology and risk assessments also could be targeted, as they cover many of the same topics as the Colorado AI Act. It is unclear whether the administration would go so far as to challenge the recently finalized California Civil Rights Council regulations on automated-decision systems. Those regulations do, however, prohibit the use of automated-decision systems that discriminate, so they could fall within the administration’s focus on challenging laws that allegedly require AI models to “alter their truthful outputs.”
With respect to New York, this year’s Algorithmic Pricing Disclosure Act could be targeted by the administration, but that law already is the subject of an ongoing lawsuit based on First Amendment and other grounds. Among other AI-related laws, New York enacted a companion chatbot law, a law regulating the use of algorithmic pricing by landlords, and a law mandating the disclosure of AI-generated synthetic performers in advertisements.
In addition, Hochul is currently considering a bill to amend and broaden last year’s LOADinG Act. That law applies only to the state’s use of automated decision-making systems, so it is presumably outside the scope of the order based on the carve-outs identified in part 8. However, if the administration were to challenge that law, there is no reason Kentucky’s SB 4 would not also be in scope: that law adopts the Colorado AI Act’s principles and applies them to Kentucky’s procurement and deployment of AI for state purposes.
New York City also previously enacted Local Law 144, which may draw the administration’s attention as it requires bias audits for the use of automated employment decision tools to screen candidates for employment. However, Local Law 144 is a city — not state — law and it is unclear whether the order is intended to reach those types of laws.
Moving to Illinois, last year, the state enacted an amendment to its Human Rights Act that, effective January 1, 2026, makes it unlawful for employers “to use artificial intelligence that has the effect of subjecting employees to discrimination on the basis of protected classes.” This year, lawmakers enacted the Wellness and Oversight for Psychological Resources Act, which regulates the use of AI in health care.
Further, it is unclear whether state laws regulating consumer-facing AI applications such as chat features are in play. If so, laws in California, Maine, New Jersey, Nevada, and Utah could be targeted. It also is unclear whether the order is intended to extend to state insurance-related regulatory activities. Algorithmic bias has been a relevant consideration in that industry and one that has drawn attention from regulators such as the New York Department of Financial Services. Arkansas’ law on ownership of GenAI content and model learning and Tennessee’s ELVIS Act appear to fall outside the scope of the order, although the ELVIS Act drew opposition from technology groups before it passed last year.
One state that appears notably absent from the discussion is Texas. This year, Texas was one of the most active states in passing AI-related laws, including HB 149 (the Texas Responsible AI Governance Act, or TRAIGA), SB 1188 (AI in electronic health records), and SB 815 (use of AI systems in utilization reviews). During the signing ceremony, Trump was joined by Texas Senator Ted Cruz, perhaps signaling that the Texas laws will not be challenged.
Nonetheless, TRAIGA prohibits a person from developing or deploying an AI system “with the intent to unlawfully discriminate against a protected class in violation of state or federal law.” In addition, the law creates a safe harbor for entities that substantially comply with the most recent version of the National Institute of Standards and Technology’s (NIST) Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile. One of the risks that the NIST framework discusses is the risk of GenAI systems perpetuating bias based on race, gender, disability, or other protected classes. Therefore, for an entity to enjoy the protection of the TRAIGA safe harbor provision, it would need to extensively consider this issue. In theory, this would make TRAIGA a prime target for challenge by the administration.
Finally, given the complexity of state laws that implicate AI, the laws the commerce secretary chooses to list and the task force chooses to challenge will be crucial, not only for which laws are identified but also for which laws are not. In other words, if the administration does not list TRAIGA, it will signal to the states that TRAIGA is an acceptable state AI law that can be enacted without consequence. If the administration does not challenge chatbot laws, government use of AI laws, or health care AI laws, for example, it will signal to state lawmakers that those are safe areas for state lawmaking. In that respect, the commerce secretary’s list could have the unintended consequence of leading to the passage of even more state AI laws.
How Have the States Reacted?
Although the executive order seeks the passage of a national AI regulatory standard that preempts state laws and provides regulatory clarity to businesses, the order will likely only cause more uncertainty for businesses in the short term as the federal government and states dig in to litigate the lawfulness of the order and federal challenges to any state laws. In response to the order, California Governor Gavin Newsom issued a press release stating, in part: “President Trump today issued an executive order that does little to protect innovation or interests of the American people, and instead protects the President and his cronies’ ongoing grift and corruption.”
In New York, Hochul reacted to the draft order by issuing a statement saying:
“We passed some of the nation’s strongest AI safeguards to protect kids, workers and consumers. Now, the White House is threatening to withhold hundreds of millions of dollars in broadband funding meant for rural upstate communities, all to shield big corporations from taking basic steps to prevent potential harm from AI.
“This is unacceptable. In New York, we protect working families and set the standard for the nation. I will continue to fight to ensure our state remains a global leader in responsible AI.”
In Colorado, Attorney General Philip Weiser promised to challenge the order “to defend the rule of law and protect the people of Colorado.”
It also is unclear whether the executive order will slow the passage of new state AI laws. In late November, a bipartisan group of 280 state lawmakers from across the U.S. sent a letter to Congress expressing “strong opposition” to any attempt to include a state AI law moratorium in the National Defense Authorization Act. In early December, Florida Governor Ron DeSantis announced a legislative proposal to “protect consumers by establishing an Artificial Intelligence Bill of Rights for citizens.” After Trump signed the executive order last week, 50 state lawmakers from 26 states issued a letter stating they “are outraged that the President’s Executive Order attempts to prevent us from fulfilling our responsibility to defend our constituents from the well-known harms of artificial intelligence.”
Nonetheless, as noted, the executive order states that the administration will consider withholding remaining BEAD funding from states with “onerous AI laws.” The order also directs executive departments and agencies to consider conditioning discretionary grant funding on states not enacting or enforcing such laws. Ultimately, the extent to which those actions may impact the willingness of states to pass new laws or enforce existing ones remains to be seen, especially considering that states are likely to sue if they believe funds are being unlawfully withheld. It also should be noted that BEAD funding will be used to build high-speed internet infrastructure in unserved and underserved areas. While funding is allocated to the states, the states use that money to pay private technology companies to build the infrastructure.
What Does This Mean for Compliance?
Although the goal of the executive order is to create a national standard to drive innovation, it is likely that, at least in the short term, it will only create more compliance uncertainty for businesses. For example, some laws that could be challenged are already in effect, such as New York City’s Local Law 144. Businesses already have built their compliance programs around those laws, rendering the executive order’s impact limited, at least from a compliance perspective. Meanwhile, other laws go into effect January 1, 2026, including the Illinois Human Rights Act amendments and California’s GenAI Training Data Transparency Act. TRAIGA also will go into effect January 1, 2026, although, as discussed, it is unclear whether that law will be challenged. The commerce secretary’s list of laws will not be published until March, meaning that businesses will need to comply with those laws in the interim or risk enforcement actions.
Further, after the commerce secretary publishes the list of state AI laws, there will presumably be state AI laws that do not make the list and, therefore, are unaffected by the executive order. And, for laws that make the list, there will inevitably be litigation over the lawfulness of the executive order. Perhaps states will agree to not enforce those laws during the pendency of such litigation, but that remains to be seen. Moreover, even assuming arguendo that the administration has the authority to challenge the state laws, it is possible that courts could reject the administration’s legal theories. Alternatively, courts analyzing different laws could reach different conclusions. For example, one court could find that a state AI law violates the First Amendment while another court could find that a different state AI law does not implicate the First Amendment and is otherwise constitutional. Ultimately, the regulatory certainty that the administration desires and businesses crave can likely only come from one source — the federal government passing a law that preempts state laws. However, that has proven to be an unreachable goal to date and has directly led to states passing their own laws, which is the very problem that the executive order now seeks to address.