Key point: Five takeaways from March 2026 decisions: (1) Courts diverge on the “purpose” requirement in ECPA’s crime-tort exception; (2) Courts consider the crime-tort exception outside the health care industry; (3) Courts provide guidance on website design to establish consent; (4) Contradictory statements in privacy policies can defeat consent even when tracking tech use is disclosed; and (5) Courts allow negligence claims to proceed but nix negligence per se claims.

Welcome to our monthly update on how courts across the U.S. have handled privacy litigation involving website tools such as cookies, pixels, session replay, and similar technologies. In this post, we cover decisions from March 2026.

In March, federal courts issued approximately 30 decisions in cases with Electronic Communications Privacy Act (ECPA) claims while only issuing three decisions in cases involving California’s trap and trace law and four decisions in cases involving California’s state wiretapping statutes. Time will tell whether this is merely a one-off anomaly or signifies a shift by plaintiffs to test the waters outside California, where Senate Bill 690 remains pending.

Many courts are currently handling data privacy cases across the U.S. Although illustrative, this update is not intended to be exhaustive. If there is another area of data privacy litigation you would like to know more about, please reach out. The contents provided below are time-sensitive and subject to change. If you are not already subscribed to our blog, consider doing so to stay updated. If you are interested in tracking developments between blog posts, consider following us on LinkedIn.

  1. Courts differ in their interpretation of the “for the purpose” language in the ECPA’s crime/tort exception.

March brought a flood of decisions grappling with one of the most contentious issues in ECPA litigation: what does it mean for a defendant to intercept communications “for the purpose of” committing a crime or tort? The answer matters enormously because many defendants in website tracking cases invoke the one-party consent exemption, arguing that, as a party to the communication, they consented to their third-party marketing partners receiving the intercepted communications. Plaintiffs must then establish that the crime-tort exception applies to overcome this defense. Courts issued no fewer than seven decisions in March wrestling with this statutory language, revealing deep divisions over whether defendants must have a criminal or tortious motive, or whether it is enough that their conduct happens to violate criminal or tort law.

Before we get into those decisions, it is important to understand what the crime/tort exception is and how it fits in with the overall structure of the ECPA. Under subsection (1) of 18 U.S.C. § 2511, the ECPA makes it unlawful to “intentionally intercept” any electronic communication. Subsection (2)(d) then provides an exemption where a party to the communication consents to the interception — the reason ECPA is considered a “one-party consent” statute, unlike California’s all-party consent wiretapping law. But § 2511(2)(d) carves out an exception to this exemption: the one-party consent rule does not apply where “such communication is intercepted for the purpose of committing any criminal or tortious act in violation of the Constitution or laws of the United States or of any State.” Courts have fractured over how to read the phrase “for the purpose of.”

Two District of Massachusetts decisions illustrate one approach, which requires plaintiffs to allege the defendant’s purpose in the underlying act was to commit the alleged violation — not merely that a violation occurred as a byproduct. In a March 6 decision, a District of Massachusetts court rejected the defendant’s argument that plaintiffs must plead specific intent to commit a crime or tort, holding instead that “the purpose must be to commit an act, and that act must be criminal or tortious.” But the court nevertheless granted the motion to dismiss after finding the allegations “do not support the inference that Defendants purposefully committed the ‘criminal and tortious acts’ specified by Plaintiffs.” The court emphasized that “it is not enough that a crime or tort [may have been] a . . . side-effect of the interception.” Four days later, the same judge applied identical reasoning and dismissed claims where the plaintiff emphasized the defendant “chose not to opt in [a tracking tool’s] anonymization feature” but failed to plausibly allege the defendant installed tracking tools “for the purpose of acquiring IIHI without authorization.” Under this framework, purposeful interception of communications is insufficient; plaintiffs must show purposeful commission of the underlying violation.[1]

Other courts rejected this interpretation and allowed claims to proceed where the defendant’s conduct violated criminal or tort law, regardless of whether the defendant’s subjective motive was lawful. A Northern District of Illinois court denied a motion to dismiss after holding that “the ‘act’ must be criminal or tortious, but the ‘purpose’ does not need to be criminal or tortious.” The court found it sufficient that the defendant intended to perform the act and the act violated the law, even if the defendant’s overarching purpose was merely commercial. A Central District of Illinois court likewise identified “an independent criminal act” in the two-step process of capturing personal health information through website trackers and then disclosing it to third parties, rejecting the defendant’s argument that a primary financial motive precludes application of the crime-tort exception. Another Northern District of Illinois court agreed, rejecting the defendant’s argument that a monetary motive forecloses the crime-tort exception.

A Southern District of New York decision added another dimension to the analysis by focusing on whether the alleged underlying violation is truly “independent” from the act of interception itself. The court acknowledged the defendant intentionally added tracking tools to its website but found its motivation “primarily commercial.” The court further noted that although the complaint alleged other violations including invasion of privacy through intrusion upon seclusion, the plaintiffs “for good reason” did not argue these triggered the crime-tort exception because “some of those torts . . . are simply not independent from the act of interception itself.” This independence requirement adds a third variable to the analysis: even if the defendant’s conduct violates criminal or tort law, courts may require that the violation be analytically distinct from the interception.

The divergence among courts led a Northern District of Illinois court to certify the question to the Seventh Circuit. In a March 3 decision, the court certified the question: “Whether the crime-tort exception to the one-party consent rule in the Electronic Communications Privacy Act applies when a complaint alleges that the defendant acted with a lawful purpose, and does not allege that a defendant acted with a criminal or tortious purpose.” The certification reflects the practical reality that district courts within the same circuit are reaching opposite conclusions on materially identical facts, creating unpredictability for both plaintiffs and defendants. If the Seventh Circuit accepts certification and issues a published opinion, it could bring clarity to cases in Illinois, Indiana, and Wisconsin — and potentially influence other circuits grappling with the same interpretive question.

For now, defendants should expect courts to continue splitting on whether commercial motivation forecloses the crime-tort exception, whether the exception requires purposeful commission of the underlying violation or merely conduct that violates the law, and whether the alleged crime or tort must be analytically independent from the interception itself. Plaintiffs face strategic choices in how to plead their allegations, particularly in jurisdictions requiring purposeful commission rather than commercial motivation with incidental violations. The Seventh Circuit’s eventual resolution of the certified question, if it accepts certification, will be closely watched.

  2. Courts consider whether alleged “crimes” and “torts” other than HIPAA violations meet the crime/tort exception under the ECPA.

Our second takeaway stays with the crime/tort exception but focuses on the underlying crime or tort that plaintiffs argue satisfies the ECPA’s exception. Most of the published decisions on this issue focus on the health care space, where plaintiffs allege the disclosure of health information to third parties via AdTech tools on websites violates HIPAA.

Two decisions from the Northern District of Illinois in March are illustrative. A March 2 decision considered a defendant’s motion to dismiss the plaintiff’s amended complaint. An earlier decision from the court had dismissed the plaintiff’s ECPA claim after finding “merely browsing and searching [the defendant’s] website does not amount to a disclosure of individually identifiable health information” to the defendant. In her amended complaint, the plaintiff alleged that she visited specific webpages and made searches (via the website search bar) that disclosed specific phrases related to her medical condition, searches related to insurance coverage for diabetes monitoring systems, and “the details of her requests for [the defendant’s] diabetes monitoring products.” The plaintiff alleged she also disclosed to the defendant her name, email address, date of birth, gender, phone number, and zip code. Although the court found the plaintiff plausibly alleged this information qualified as health information under HIPAA, it nevertheless dismissed the ECPA claim after finding the plaintiff failed to allege all this information was transmitted to the third party, explaining “[t]he mere existence of tracking tools on [the defendant’s] website does not in itself establish the disclosure of HIPAA-protected information.”

Four days later, another Northern District of Illinois court issued a decision addressing a motion to dismiss where the plaintiff alleged the defendants intentionally disclosed communications — including requests for appointments, the names of the clinics where those appointments were made, the reason for the appointments, the name of the attending medical provider, their phone numbers, and their insurance providers — via a scheduling tool on the defendant’s website. The defendants argued the plaintiffs inputted the information into a scheduling tool rather than a patient portal. The court denied the motion to dismiss, finding the distinction could not be resolved at the pleading stage.

Other March decisions demonstrate how courts resolve claims against defendants in industries outside health care. A March 30 decision from the Southern District of New York considered whether a financial-industry defendant’s use of AdTech tools on its website violated the ECPA, with the plaintiff invoking Section 6802 of the Gramm-Leach-Bliley Act (GLBA) as the predicate violation. The court distinguished decisions against health care companies, finding that while HIPAA makes disclosure itself a crime, the GLBA does not. The court also noted that the GLBA does not contemplate any private right of action.

In contrast, an earlier March decision from the Northern District of Illinois considered allegations that an auto insurance provider violated the ECPA. To overcome the one-party consent exemption, the plaintiff alleged the third party’s use of the data was “in violation of the right to privacy.” The court allowed the claim to continue, however, after finding the defendant’s activity violated an entirely different law: the Fair Credit Reporting Act. Specifically, the court found the plaintiffs had alleged the defendants transmitted information about the plaintiffs’ driving habits, even when the plaintiffs were merely passengers in a vehicle and not driving.

Collectively, these March decisions reveal that the viability of the crime-tort exception depends heavily on the industry context and the specific statute plaintiffs invoke as the underlying violation. In health care cases, courts have shown a willingness to allow ECPA claims to proceed where plaintiffs plausibly allege the disclosure of individually identifiable health information to third parties, particularly where the information goes beyond generic website activity and includes patient-specific details like appointment requests, medical conditions, or treatment-seeking behavior. The two-step framework courts have adopted — first asking whether the disclosed information qualifies as protected health information, then asking whether plaintiffs have adequately alleged the information was transmitted to third parties — creates a roadmap for both plaintiffs drafting complaints and defendants challenging them.

Outside health care, however, plaintiffs face a higher bar: not all privacy statutes create criminal liability or provide a basis for common-law tort claims, and courts appear reluctant to extend the crime-tort exception to regulatory violations that lack those characteristics. The contrast between HIPAA (criminal statute, exception applies) and the GLBA (no criminal provision, exception denied) suggests defendants in industries governed by civil regulatory schemes may have stronger arguments for dismissal. Defendants in heavily regulated sectors like health care and consumer credit reporting, by contrast, should prepare for closer scrutiny of whether their AdTech implementations violate substantive legal obligations that could satisfy the crime-tort exception.

  3. Courts provide further guidance on what website design is required to establish consent through website agreements.

March decisions underscore that the design details of consent flows matter enormously in privacy litigation — not just for establishing user agreement to forum selection clauses or arbitration provisions, but for demonstrating consent to data collection practices that can defeat wiretapping claims under state and federal law. Three decisions issued in March illustrate the dividing line between enforceable and unenforceable agreements, with courts scrutinizing font size, color contrast, proximity to action buttons, and whether users must click to continue.

A March 6 decision from a Northern District of California court found a financial institution’s account creation flow created an enforceable clickwrap agreement. The court carefully examined the visual presentation: users creating an account first input their last name and Social Security number, then reached a second page that was “white and most of the text is gray apart from ‘Create your . . . account’ and ‘Password Requirements.’” At the bottom of this page appeared a dark blue “Save & Continue” button. Directly above the button, the page stated “By clicking ‘Save & Continue’ you agree to the Terms of Use for [the] website” in a font smaller than most other text but with the “Terms of Use” phrase bolded, italicized, blue, and hyperlinked. The court held this design was sufficient to put users on notice, emphasizing that “the placement of Defendant’s Terms of Use is conspicuous and puts a reasonable user on notice they are agreeing to be bound by the terms.” The court found the notice “conspicuously displayed directly [above] the action button,” and that the statement “clearly denotes that continued use will constitute acceptance of the Terms.” Critically, the page was “uncluttered” and positioned the notice such that “a reader would naturally see the notice before their eyes move to ‘Save & Continue.’”

One week later, a Southern District of California court reached the opposite conclusion in a decision involving a headphone company’s website, finding the terms constituted an unenforceable browsewrap agreement. The court explained that the terms at issue are browsewrap because the Privacy Policy begins by stating, “By visiting the Site or using any of the Services, you consent to [our] collection, use, disclosure, transfer and storage of information relating to you as set forth in this Privacy Policy.” The images of the website provided in the complaint show a small link at the bottom of the homepage stating, “By using our site you agree to our Cookie Policy.”

The court found “there are no allegations that a Website-user must affirmatively agree to the Terms by clicking a button to access the website.” Although the cookie banner appeared at the bottom of the screen, users were not required to interact with it to proceed.

Third, a District of Minnesota decision illustrated the evidentiary challenges defendants face even with seemingly robust consent flows. The defendant’s checkout page displayed a “Complete Order” button with text directly beneath stating: “By completing my order, I agree to Membership with automatic renewal, the terms of service and privacy policy.” The “terms of service” and “privacy policy” phrases appeared in orange to denote hyperlinks, while the rest of the text appeared in black. Despite this design — which appears similar to the clickwrap approved in the Northern District of California decision above — the court declined to find mutual assent as a matter of law. The plaintiffs argued that, in its briefing, the defendant “consciously limit[ed] its presentation to a zoomed in payment interface” rather than showing the entire webpage, preventing the court from evaluating whether the notice was truly conspicuous in the context of the overall site. The court agreed and emphasized that “all allegations are supported by expert descriptions of what plaintiffs would have seen” but “the record contains no testimony from any plaintiff about what she actually saw.” Without actual testimony about what individual plaintiffs perceived, and given “material facts missing from the record” and the “conflicting nature of the material facts included in the record,” the court held it “cannot now determine as a matter of law that mutual assent did or did not exist.”

These decisions create a roadmap for companies seeking to establish valid consent through website design. First, the clickwrap/browsewrap distinction remains dispositive: passive notices at the bottom of pages, even if technically visible, will not establish consent if users can proceed without affirmatively acknowledging the terms. Second, within the clickwrap category, design matters: the consent language should appear directly adjacent to (and ideally directly above) the action button, use contrasting colors to make hyperlinks visible, avoid clutter that could distract from the notice, and use clear language that ties the button click to agreement. Third, defendants must preserve evidence showing what the interface actually looked like at the relevant time — present-tense declarations or screenshots from current versions of a website will not suffice if they cannot establish the design was identical when plaintiffs used the site. Fourth, even well-designed consent flows may not win summary judgment if defendants cannot produce evidence about what individual plaintiffs actually saw and understood. For privacy and cookie policies specifically, these cases suggest the same standards apply: mere availability of a privacy policy through a footer link will not establish consent to data collection, but a clear statement above an action button that clicking constitutes agreement to the privacy policy — with the policy itself accessible through a conspicuous hyperlink — should survive a motion to dismiss and potentially establish consent as a matter of law.
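For teams building or auditing consent flows, the factors these courts weighed can be approximated as a simple design checklist. The sketch below is illustrative only and is not legal advice; the attribute names and the flag wording are our own shorthand for the considerations discussed above, not terms drawn from any opinion, and passing the checklist is no guarantee a court would enforce the agreement.

```python
from dataclasses import dataclass

@dataclass
class ConsentFlow:
    """Design attributes courts examined in the March decisions (our own shorthand)."""
    requires_click_to_proceed: bool      # clickwrap (must click) vs. browsewrap (passive notice)
    notice_adjacent_to_button: bool      # notice directly above/next to the action button
    hyperlinks_visually_distinct: bool   # contrasting color, bold, or underline on terms links
    page_uncluttered: bool               # nothing draws the eye away from the notice
    language_ties_click_to_assent: bool  # e.g., "By clicking X you agree to the Terms"

def design_red_flags(flow: ConsentFlow) -> list[str]:
    """Return the design weaknesses courts have flagged; an empty list means the flow
    resembles the clickwrap designs courts have enforced."""
    flags = []
    if not flow.requires_click_to_proceed:
        flags.append("browsewrap: users can proceed without affirmative assent")
    if not flow.notice_adjacent_to_button:
        flags.append("notice not adjacent to the action button")
    if not flow.hyperlinks_visually_distinct:
        flags.append("terms hyperlinks do not stand out from surrounding text")
    if not flow.page_uncluttered:
        flags.append("cluttered page may distract from the notice")
    if not flow.language_ties_click_to_assent:
        flags.append("notice does not tie the click to acceptance of the terms")
    return flags

# A flow modeled on the enforceable March 6 clickwrap: no red flags.
clickwrap = ConsentFlow(True, True, True, True, True)
# A footer-link browsewrap like the headphone-company site: several red flags.
browsewrap = ConsentFlow(False, False, True, False, False)
```

A checklist like this is only a first-pass screen; as the District of Minnesota decision shows, even a well-designed flow must be backed by evidence of what the page actually looked like, and what plaintiffs actually saw, at the relevant time.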

  4. Courts scrutinize the entire privacy policy, and isolated statements may defeat consent.

Privacy policies increasingly drive outcomes in tracking technology litigation, but March decisions reveal that defendants face risk from conflicting statements buried throughout their policy documents — not just from failures to disclose specific tracking practices. Courts treat privacy policies as integrated contracts where a single contradictory representation can render the entire consent defense implausible at the pleading stage, even when other sections of the policy appear to authorize the challenged conduct.

In a March 27 decision, a Northern District of California court denied a defendant’s motion to dismiss despite the defendant’s argument that its privacy agreements expressly permitted sharing user information with third parties for marketing purposes. The court acknowledged that the defendant’s Consumer Privacy Notice stated it could share customers’ personal information with non-affiliates for marketing purposes, and also disclosed that the defendant could share information with “service providers” and “marketing partners.” But the court held these disclosures did not establish consent as a matter of law because “a reasonable user could interpret the Privacy Contracts in their entirety as disclosing only that [the defendant] collects and shares user information in the aggregate but not information organized by particular users.” The court pointed to contradictory provisions elsewhere in the agreement where the CCPA Disclosure “unequivocally” stated “We do not sell or share personal information for cross-context behavioral advertising” and failed to list “Internet or Other Similar Network Activity” among categories disclosed to third parties. The court also noted that the Terms & Conditions “expressly provide[d] that [the defendant’s] website ‘does not covertly capture information regarding the specific activities of any particular user.’” The court emphasized this last statement — that the website produces reports only “in anonymous or aggregated form” and captures personal information only “specifically submitted” through forms — as directly contradicting the defendant’s interpretation that it could collect and share individualized browsing activity.

Similarly, in a March 19 decision, a District of Massachusetts court denied dismissal after finding the plaintiffs plausibly alleged nonconsent even though the defendant’s privacy policy disclosed sharing with third parties. The defendant’s privacy policy stated the defendant could disclose information to “contractors, service providers, and other third parties [it] use[s] to support [its] business” and “to retailers and other third parties to market to [users].” But the court held these provisions “do not clarify whether that information is individualized or aggregated,” and when read against the privacy policy’s statement that patient information would “only be used to support [the] customer/patient relationship” and would be disclosed only if “required by law” or with “specific written consent,” the policy as a whole did not clearly disclose that the defendant collects and shares information about a particular user’s activity on its website with third parties. The court further noted that the defendant’s privacy and security policy stated it uses information “to report aggregated information to [its] advertisers” and “may disclose aggregated information about [its] users, and information that does not identify an individual” — representations that reinforced the inference that individualized data would not be shared. Because “a reasonable user could have understood the Privacy Contracts as not permitting [the defendant] to collect individualized information from its users,” the court concluded that the plaintiffs did not consent to the collection of that sort of user information.

The same principle defeated consent in a browsewrap context. In a Southern District of California decision also discussed above, the court found the terms constituted unenforceable browsewrap in part because the privacy policy itself began with language treating mere website use as consent: “By visiting the Site or using any of the Services, you consent to [our] collection, use, disclosure, transfer and storage of information relating to you as set forth in this Privacy Policy.”

These March decisions establish that privacy policy consistency matters as much as disclosure completeness. Courts will read privacy policies as integrated documents and hold defendants to every representation within them, not just the sections that authorize data sharing. For companies using tracking technologies, this creates several practical requirements. First, privacy policies must be internally consistent across all documents in the policy suite (privacy policy, terms of service, CCPA disclosures, cookie policies). A statement in one document that contradicts disclosures in another will create ambiguity that can defeat consent. Second, companies must avoid broad negative statements about data collection practices unless they are literally true. Representations like “we do not sell personal information,” “we only collect information you submit through forms,” or “we do not capture information about specific user activities” will be held against defendants even when other policy sections appear to authorize the challenged practices. Third, companies must specifically disclose that they collect and share individualized browsing activity, not just that they collect browsing data “in the aggregate” or use cookies “for analytics.” Generic statements about sharing with “service providers” or “marketing partners” may not suffice if other policy language suggests only aggregated data is shared. And finally, companies defending privacy litigation must review their complete policy history at the relevant time period and identify any statement in any policy document that could be read as contradicting their consent defense — a privacy policy’s greatest litigation risk may come not from what it fails to disclose, but from what it affirmatively represents that turns out to be untrue.
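As a rough engineering aid, the internal-consistency review described above can be partially automated by scanning a policy suite for absolute negative statements that coexist with sharing disclosures elsewhere in the suite. The sketch below is a naive keyword heuristic of our own design, not a substitute for counsel review; the phrase lists are illustrative, and a purely lexical scan will surface false positives that a human must triage.

```python
# Absolute negative representations of the kind courts held against defendants.
NEGATIVE_PHRASES = [
    "do not sell",
    "do not share",
    "does not covertly capture",
    "only in anonymous or aggregated form",
]
# Disclosures that individualized data may in fact be shared.
SHARING_PHRASES = [
    "marketing partners",
    "service providers",
    "third parties",
    "cross-context behavioral advertising",
]

def find_tensions(policy_suite: dict[str, str]) -> list[tuple[str, str, str, str]]:
    """Return (doc_with_negative, negative_phrase, doc_with_sharing, sharing_phrase)
    tuples worth a human reviewer's attention. Purely lexical; cannot read context."""
    tensions = []
    for neg_doc, neg_text in policy_suite.items():
        for neg in NEGATIVE_PHRASES:
            if neg in neg_text.lower():
                for share_doc, share_text in policy_suite.items():
                    for share in SHARING_PHRASES:
                        if share in share_text.lower():
                            tensions.append((neg_doc, neg, share_doc, share))
    return tensions

# A toy two-document suite echoing the contradictions courts flagged in March.
suite = {
    "ccpa_disclosure": "We do not sell or share personal information for cross-context behavioral advertising.",
    "privacy_notice": "We may share your information with marketing partners.",
}
```

Running `find_tensions(suite)` pairs the CCPA disclosure’s “do not sell” statement with the privacy notice’s “marketing partners” disclosure, exactly the kind of cross-document tension the March decisions show can defeat a consent defense at the pleading stage.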

  5. Courts diverge on negligence claims as plaintiffs assert new theories for tracking technology disclosures.

As privacy litigation involving website tracking technologies matures beyond statutory wiretapping claims, plaintiffs increasingly assert common-law negligence theories, arguing that defendants who collect personal information assume a duty to safeguard it from unauthorized disclosure to third parties. March 2026 decisions show courts allowing these negligence claims to survive motions to dismiss, finding plaintiffs adequately alleged duties of care arising from data collection and handling. The decisions reveal growing judicial acceptance that disclosure of sensitive personal or financial information to third parties without consent can constitute both a breach of duty and cognizable harm, even absent a traditional data breach. Courts uniformly rejected negligence per se claims predicated on federal statutes lacking private rights of action, however, holding that neither the Federal Trade Commission (FTC) Act nor the GLBA can supply the predicate violation for state-law negligence per se theories.

A Northern District of Illinois court denied a motion to dismiss where the plaintiffs alleged the defendants assumed a duty to safeguard their health data by collecting and storing it. Although the court acknowledged case authority questioning whether the Illinois Personal Information Protection Act created such a duty, it noted that other courts had denied motions to dismiss in similar data breach cases and followed that approach. The court also rejected the defendant’s economic loss doctrine argument, holding the doctrine “inapplicable here for two reasons: first, plaintiffs plead non-economic damages, including loss of privacy and emotional harm. Second, ‘[t]he economic loss doctrine does not bar recovery in tort for the breach of a duty that exists independently of a contract.’”

A week later, another Northern District of Illinois court similarly denied a motion to dismiss after rejecting the defendant’s argument that “Illinois does not recognize a common law duty to safeguard personal information.” The court found “multiple recent decisions from this District and one decision from the Illinois Appellate Court have rejected defendant’s position.” The court further held the plaintiff adequately alleged damages, specifically that the defendant’s actions “caused her and the class to ‘have their data shared with third parties without their authorization or consent, receive unwanted advertisements that reveal seeking treatment for specific medical conditions, fear, anxiety and worry about the status of their PII and PHI, diminution in the value of their personal data for which there is a tangible value, and/or a loss of control over their PII and PHI.’”

A Northern District of California decision similarly denied dismissal on damages grounds. The court held the plaintiff alleged sufficient injury by claiming the defendant disclosed “his coborrower information, real estate ownership information, and the type of loan he applied for, which qualify as sensitive financial information that, if disclosed, can give rise to ‘a non-economic privacy injury traditionally recognized under the law.’” The court also credited the plaintiff’s allegation that “he lost time trying ‘to mitigate and remediate the effects of the use of [his] information,’” finding this “is not a pure economic loss.” These decisions establish that disclosure of sensitive personal or financial information — even without a traditional data breach — can constitute cognizable harm supporting negligence claims, and that mitigation efforts constitute noneconomic injury.

The most extensive analysis of duty came from a March 31 decision from an Eastern District of Wisconsin court. The court denied dismissal of negligence claims after explaining that under Wisconsin law, “if a person acts or fails to act in a way that a ‘reasonable person would recognize as creating an unreasonable risk of injury or damage to a person or property, he or she is not exercising ordinary care under the circumstances, and is therefore negligent.’” The court rejected the defendant’s argument that Wisconsin precedent held there was no duty to use care in handling and disseminating personal information, and also rejected the defendant’s argument that its contractual relationship with customers precluded negligence claims, noting the defendant “cannot have it both ways”: “If its privacy policy does not act as a contractual limit on the scope of its responsibilities to its customers, [the defendant] cannot invoke that document as a shield against having a general duty of care.” The court similarly rejected application of Wisconsin’s economic loss doctrine, emphasizing the doctrine “is inapplicable to claims for the negligent provision of services” and that the plaintiff alleged “damages that are unrelated to the loss of the benefit of his bargain.”

By contrast, courts uniformly granted motions to dismiss negligence per se claims premised on violations of federal statutes lacking private rights of action. In the Northern District of California decision discussed above, the court granted the motion to dismiss the negligence per se claim where the plaintiff relied on the FTC Act and the GLBA after finding neither provided private rights of action. Similarly, in the Wisconsin decision, the court granted dismissal of the negligence per se claim after finding neither “the statutory text of the FTC Act nor the GLBA creates a private right of action.” These decisions establish that while federal privacy statutes may inform the standard of care in ordinary negligence claims, they cannot serve as the predicate violation for negligence per se unless Congress intended to create a private enforcement mechanism.


[1] Even within the same district, courts do not reach the same result. Another District of Massachusetts court held that “whether [the defendant] was primarily or determinatively motivated to intercept and share [the plaintiff’s] communications with [the third parties] for a lawful purpose (marketing, analytics, and financial gain), an unlawful purpose (violating M.G.L. c. 214, § 1B), or both is a question of fact not fit for resolution at the pleading stage.”