
Key point: The law, which went into effect at signing, contains significant design and development requirements, requires independent third-party audits, and can be enforced against officers and employees.

On February 5, 2026, South Carolina Governor Henry McMaster signed the South Carolina Age-Appropriate Design Code Act (H 3431). South Carolina now joins California, Maryland, Nebraska, and Vermont in enacting Age-Appropriate Design Code (AADC) laws, although these laws vary widely in both scope and requirements.

South Carolina’s law has several unique requirements, including a requirement that covered online services undergo independent third-party audits, which are to be publicly posted by the state attorney general. We review these requirements below.

Of further note, the law went into effect upon the governor’s signature and does not contain a right to cure. The law is generally enforceable by the state attorney general, who can seek treble financial damages for violations. The law also specifically provides that officers and employees of covered online services can be held personally liable for willful and wanton violations. In addition, the law’s prohibition against dark patterns is enforceable under the South Carolina Unfair Trade Practices Act, which allows for a private right of action. In the below post, we provide an overview of the new law and more general context on its provisions.

Key point: Businesses subject to the CCPA now must conduct risk assessments for certain types of processing activities and, starting in 2028, must certify to California regulators that they completed the assessments.

The California Consumer Privacy Act’s (CCPA) new regulations went into effect on January 1, 2026. Although the new regulations bring many changes for businesses subject to the CCPA, one of the biggest is a new requirement to conduct risk assessments for processing activities that present “significant risk to consumers’ privacy.” This can encompass many common data processing activities, such as the use of third-party cookies and tracking technologies, the processing of sensitive personal information (e.g., biometric data), and the use of AI for certain employment-related activities. Like the CCPA generally, the risk assessment requirement applies to consumer, employee, and commercial personal information.

Importantly, by April 1, 2028, businesses subject to the CCPA must file a certification with the California Privacy Protection Agency (CalPrivacy) attesting, under penalty of perjury, that they conducted the required risk assessments. The certification must be signed by a member of the business’s executive management team.

In the below article, we provide an overview of this new risk assessment requirement.

Key point: New York becomes the second state — after California — to enact an AI frontier model law, while the governor’s veto of the New York Health Information Privacy Act will be a welcome result for organizations that criticized the bill as unworkable.

In the last two weeks, New York Governor Kathy Hochul took action on numerous bills the New York legislature passed before it adjourned in June. Among those actions, Hochul signed four AI-related bills — including a bill regulating AI frontier models — and vetoed a controversial health data privacy bill. We discuss each of those bills in the article below.

In addition to these bills, earlier this year, New York lawmakers enacted three other AI-related laws — the Algorithmic Pricing Disclosure Act, a companion chatbot law, and a law regulating the use of algorithmic pricing by landlords.

Key point: Although the executive order seeks to bring regulatory certainty to the development and deployment of AI in the U.S., at least in the short term it is unlikely to alleviate compliance burdens for businesses and may only create more uncertainty.

On December 11, 2025, President Donald Trump signed an executive order titled “Ensuring a National Policy Framework for Artificial Intelligence.” The purpose of the executive order is twofold.

First, the order seeks to create a legal structure to stop states from enacting new state AI laws and from enforcing existing ones. According to the order, the goal of the U.S. must be “[t]o win” a “race with adversaries for supremacy” in AI. However, to do so, “United States AI companies must be free to innovate without cumbersome regulation.” Therefore, the order seeks to prevent “a patchwork of 50 different state regulatory regimes that makes compliance more challenging, particularly for start-ups.” Importantly, the order itself does not attempt to preempt state AI laws. Rather, as discussed below, it just creates a structure for the federal government to try to preempt some of them.

Second, the order states that the Trump administration will work with Congress to enact a “minimally burdensome national standard” that preempts state law and “ensure[s] that the United States wins the AI race, as we must.”

The order follows two prior attempts in Congress to pass a moratorium on states enacting AI laws. Most recently, an attempt to include a moratorium in the National Defense Authorization Act of 2026 failed, creating the impetus for the president to sign the order.

Although the executive order seeks to streamline and reduce AI regulation, it leaves open many questions, including the scope of the laws that will be challenged and the likelihood, if not certainty, that states will challenge the order’s legality. It also remains to be seen whether the order slows the passage of new state AI laws and the enforcement of existing ones. Indeed, it could ultimately have the unintended consequence of spurring even more state AI laws. In the article below, we discuss the scope of the order, the state AI laws the administration could target, how states have reacted to the order, and takeaways for businesses trying to comply with existing and forthcoming state AI laws.

Key point: Starting August 1, 2026, registered data brokers will need to access California’s new one-stop-shop deletion platform to process deletion requests or risk significant fines.

Last month, the California Office of Administrative Law (OAL) approved the California Privacy Protection Agency’s (CalPrivacy) regulations further implementing the Delete Act (SB 362). Effective January 1, 2026, the Delete Act makes several changes to California’s data broker law, including charging CalPrivacy with creating a new one-stop shop through which California residents can request that all registered data brokers delete their personal information. California residents can begin submitting requests on January 1, 2026, and data brokers must process requests starting August 1, 2026. Failure to comply carries a $200 fine “for each deletion request for each day the data broker fails to delete information.”
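For a sense of scale (our hypothetical, not a figure from the regulations): a data broker that failed to act on 500 pending deletion requests for 30 days could face fines of $200 × 500 × 30, or $3 million.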

In the below article, we provide a brief background on the Delete Act and summarize the new regulations.

Key point: This is the eighth fine CalPrivacy has issued against an entity for failing to register as a data broker and comes just days after CalPrivacy announced a new Data Broker Enforcement Strike Force and only months before fines will significantly increase under the California Delete Act.

On December 3, 2025, the California Privacy Protection Agency (CalPrivacy) announced its latest fine against an entity for failing to register as a data broker under California’s Delete Act. This is the eighth time CalPrivacy has fined an entity for failing to register as a data broker. The agency issued four fines in each of 2024 and 2025.

The $56,600 fine comes just days after CalPrivacy announced the formation of a Data Broker Enforcement Strike Force, portending even more (and significantly higher) fines against data brokers and unregistered data brokers. This is particularly notable given that the agency’s data broker regulations adopt a broad definition of what constitutes a data broker, one that may encompass entities that do not traditionally consider themselves to be data brokers.

In the below article, we provide a brief overview of the enforcement action. We also discuss the broader context of data broker regulation in California, including the increased risks and requirements on data brokers in 2026.

Key point: The court held that NetChoice’s complaint adequately states constitutional claims against Maryland’s Age-Appropriate Design Code Act and allowed NetChoice’s lawsuit to continue, but did not rule on the merits of the claims or enjoin the law.

On November 24, 2025, Judge Richard Bennett of the U.S. District Court for the District of Maryland denied Maryland’s motion to dismiss a complaint filed by NetChoice challenging Maryland’s Age-Appropriate Design Code Act, commonly referred to as the Maryland Kids Code. NetChoice’s complaint alleges that the Kids Code violates the First Amendment and is preempted by federal law. The decision finds only that NetChoice’s complaint states plausible claims; the court did not rule on the merits and did not enjoin the law. In the below article, we provide a brief overview of the Kids Code and the decision.

Key point: The most recent CCPA enforcement action focuses on the CCPA’s right to opt out of sales and shares and the treatment of minors’ data.

On November 21, 2025, the California Attorney General (AG) announced its latest enforcement action for violations of the California Consumer Privacy Act (CCPA). The complaint alleges that a gaming app developer failed to provide a CCPA-compliant opt-out link or setting in any of its 21 apps or on its website. The complaint also alleges that six of the developer’s apps sold the personal information of consumers between the ages of 13 and 16 without obtaining consent. Pursuant to the final judgment and permanent injunction, the developer agreed to pay $1.4 million, implement corrective measures, and maintain a compliance program.

The settlement is the ninth public CCPA enforcement action, with six brought by the AG and three by the California Privacy Protection Agency. Seven of the nine enforcement actions came this year, a notable increase in enforcement activity. As with prior enforcement actions, this settlement reinforces that businesses should audit their current practices and procedures to ensure that they are, and remain, compliant.

In the below article, we provide a summary of the violations and penalties.

Key point: The California Privacy Protection Agency’s announcement places even more scrutiny on the compliance practices of data brokers.

On November 19, 2025, the California Privacy Protection Agency (now calling itself CalPrivacy) announced the creation of a Data Broker Enforcement Strike Force. The stated goal of the strike force is to review the data broker “industry for compliance with the data broker registration requirement in the Delete Act, as well as for compliance with the state’s comprehensive privacy law, the California Consumer Privacy Act.” Announcing the launch, Michael Macko, CalPrivacy’s head of enforcement, stated, “For decades, strike forces have been a mainstay at U.S. Attorney offices and state Attorney General offices across the United States. We intend to bring the same level of intensity to our investigations into the data broker industry.”