On April 7, 2026, several plaintiffs (“Plaintiffs”) filed a putative class action complaint in the Northern District of California against Sutter Health, Memorial Health Services Inc. and Memorial Care Medical Foundation (the “Providers”). The complaint alleges the Providers illegally recorded Plaintiffs’ confidential medical information by using an AI-powered “ambient clinical documentation” tool to record clinician-patient conversations during medical visits. The suit highlights a growing litigation risk for healthcare systems deploying conversational AI technologies, as well as the processes and patient consents that can mitigate such risks.
Background: Rapid Adoption of AI Scribes in Clinical Care
Healthcare providers, hospitals and health systems have increasingly embraced AI-powered “scribes” and “ambient listening” or “ambient documentation” tools (“AI Scribes”). These are used to help alleviate documentation burdens, clinician burnout, and workflow inefficiencies, while also potentially improving the detail, accuracy, and timeliness of patient documentation. These tools are designed to capture clinician-patient conversations, generate clinical notes in real time, and reduce time spent charting.
One study found AI Scribes “reduced documentation time by a median of 2.6 minutes per appointment and cut after-hours EHR work by 29.3%”. Unsurprisingly, adoption has accelerated as health systems seek scalable ways to improve efficiency and preserve clinician capacity.
But legal frameworks do not disappear simply because they were enacted during the pre-AI age. As these tools move from pilot programs to enterprise-wide deployment, healthcare organizations must evaluate how privacy, consent, and medical confidentiality laws apply when live clinical conversations are transcribed or summarized by third-party AI vendors.
The Allegations
According to the complaint, Provider clinicians used microphone-enabled devices during in-person appointments to record clinician-patient conversations using Abridge AI, Inc.’s (“Abridge”) AI Scribe product. The recordings were allegedly transmitted to external servers, transcribed, and processed by AI to generate draft clinical notes later incorporated into the Providers’ EHR systems.
The Plaintiffs assert that patients were not given “clear notice that their medical conversations would be recorded by an artificial intelligence platform, transmitted outside the clinical setting, or processed through third-party systems.”
Claims Asserted
The complaint asserts a broad range of statutory and common-law claims. The bellwether claims are for alleged violations of (a) California’s “wiretapping” statute, the California Invasion of Privacy Act, (b) the Federal Wiretap Act, and (c) California’s “mini-HIPAA,” the Confidentiality of Medical Information Act (CMIA). The underlying theory behind each of these claims is that the Providers did not have authorization to invite the Abridge AI tool into patient conversations for recording and transcription.
Why This Case Matters
Several aspects of the lawsuit merit close attention for healthcare providers:
- The focus is on the recording itself, not only on data use. The Plaintiffs contend that the legal violation occurs “at the moment of interception,” when live communications are captured and recorded, not only later when data is stored, used, or disclosed.
- Third-party AI vendors are central to the theory of liability. Although Abridge is not named as a defendant, the complaint repeatedly points to the transmission of confidential medical information to external servers, and the retention and processing of that information by an external technology provider, as key components of the alleged legal violations.
- Consent and authorization contents and procedures will be scrutinized. The Plaintiffs’ theory is that general privacy notices, implied consent, or ad hoc clinician disclosures may not be enough to obtain consent to recording, particularly in California, an “all party” consent state where every party to a conversation must agree to be recorded. The complaint alleges that “clear” notice, “meaningful” choice, and “properly documented” authorization are required before recording begins, although it does not define what these terms mean. Still, providers could view this as a call to pay close attention to the formalities of consent and authorization rules. For example, HIPAA and California’s CMIA mandate specific content requirements for authorizations, with CMIA going so far as to prescribe a particular font size.
- Statutory damages can scale quickly. Because damages may be assessed on a per-violation or per-encounter basis, exposure could multiply quickly for health systems using these tools across large patient populations. Plaintiffs have leveraged statutory damages into settlements using similar strategies in the “cookie” and “pixel” litigation that many hospital systems have already faced, and the same wiretapping and CMIA statutes invoked in those cases are now being deployed in “ambient listening” cases.
- HIPAA is not a complete shield. Even where a vendor relationship is structured to comply with HIPAA, the claims asserted arise under statutes that may impose separate consent and privacy obligations.
Practical Implications for Healthcare Organizations
Healthcare providers, medical foundations, and digital health companies using AI scribes or similar tools should assess:
- whether patients and others present at the patient visit (e.g., spouses and caretakers) are clearly informed that recording is taking place and the purpose of the recording;
- how consent is obtained, documented, and honored;
- whether state laws require a separate or specific authorization (e.g., CMIA);
- how vendor agreements address retention, secondary use, and AI model training; and
- whether deployment creates heightened exposure in all-party consent states such as California.
Looking Ahead
The case is still at an early stage, and the allegations have not been tested on the merits. Even so, the filing reflects a broader trend: plaintiffs’ lawyers are increasingly using privacy, wiretapping, and medical confidentiality laws to challenge emerging AI tools in healthcare.
Regardless of how this case is resolved, it serves as a reminder to healthcare organizations to weigh the risks posed by technology vendors against their many benefits. As AI Scribes become more common in clinical settings, courts and regulators are likely to take a closer look at consent, transparency, and patient expectations.
