When You Upload Your Medical Records to AI, Who’s Actually Protecting Them?

Forty million people ask ChatGPT health questions every day. This week, OpenAI and Anthropic made it official: connect your medical records, sync your Apple Health data, let the AI see your full health picture. The product pitches emphasize encryption, privacy controls, and promises not to train on your conversations.

Here’s what they don’t mention: your conversations with ChatGPT or Claude aren’t privileged. They can be subpoenaed. They can become evidence in your divorce, your disability claim, your custody battle. And when a court orders disclosure, the company’s privacy policy won’t save you.

When you talk to a doctor, therapist, or lawyer, the law recognizes that some conversations need protection. That’s privilege—a legal shield that prevents courts from forcing disclosure. When you talk to a consumer AI, no such shield exists. You’re creating a record that opposing counsel can request, that courts can order produced, and that you may never know was disclosed until it shows up in a legal filing.

This matters now because these products are live. Before you connect your medical history to the latest AI assistant, you should understand what you’re trading away.


What Privilege Actually Protects

The Supreme Court explained the purpose of privilege in Jaffee v. Redmond (1996), the case that established psychotherapist-patient privilege in federal courts: “Effective psychotherapy… requires an atmosphere of confidence and trust in which the patient is willing to make a frank and complete disclosure of facts, emotions, memories, and fears.”

The Court recognized something fundamental: if people fear their therapy sessions could be subpoenaed, they won’t seek treatment—or won’t be honest when they do. Privilege exists because some conversations are too important to chill with the threat of disclosure.

When you communicate with a licensed healthcare provider, privilege protects:

  • The content of your conversations—not just your diagnosis, but what you said getting there. The fears you voiced. The symptoms you weren’t sure were real. The questions you were embarrassed to ask.
  • Your control over disclosure. Privilege belongs to the patient, and only the patient can waive it. Your provider can’t decide to hand over your records because it’s convenient for them.
  • Your ability to be unfiltered. You can say “I’m worried I’m a terrible parent” to your therapist without that statement becoming Exhibit A in a custody proceeding.

Consumer AI conversations have none of these protections. OpenAI’s CEO Sam Altman said it plainly in July 2025: “Right now, if you talk to a therapist or a lawyer or a doctor about those problems, there’s legal privilege for it. There’s doctor-patient confidentiality, there’s legal confidentiality, whatever. And we haven’t figured that out yet for when you talk to ChatGPT.”

He called the situation “very screwed up.” He’s right.


The Architecture Apple Built (That AI Companies Didn’t)

During my time at Apple, user privacy wasn’t a feature—it was an architectural constraint that shaped every product decision. The company’s stance was tested publicly in 2016 when the FBI demanded Apple build a backdoor to unlock the San Bernardino shooter’s iPhone. Apple refused, arguing that creating such a tool would compromise the security of every iPhone user.

The engineering philosophy was clear: build systems where Apple literally cannot access user data, so that when compelled, the company can truthfully say, "We don't have it." End-to-end encryption with keys held by the user, not Apple. The same principle guided the COVID-19 Exposure Notifications system I was involved with—a decentralized design where Apple and Google never saw who was exposed or infected, because the matching happened entirely on-device. We couldn't be compelled to produce data we never had.

This is the difference between “we won’t” and “we can’t.” Policy can change. Architecture is structural.
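To make the distinction concrete, here is a minimal sketch in Swift (using CryptoKit) of what a "we can't" architecture looks like. The names and structure are hypothetical, not Apple's actual implementation; the point is only that when the key is generated and held on the device, whatever the service stores is ciphertext its operator cannot read.

```swift
import CryptoKit
import Foundation

// Illustrative sketch only: a "we can't" design where the encryption key is
// generated on-device and never leaves it. Whatever gets synced to a server
// is ciphertext the operator cannot read, so there is nothing readable to
// hand over. (Hypothetical example, not Apple's or any vendor's actual code.)
struct OnDeviceHealthVault {
    // The key lives only on this device (in practice, in the Keychain or
    // Secure Enclave). It is never transmitted anywhere.
    private let key = SymmetricKey(size: .bits256)

    // Encrypt a health note locally before it leaves the device.
    func seal(_ note: String) throws -> Data {
        let box = try AES.GCM.seal(Data(note.utf8), using: key)
        // .combined (nonce + ciphertext + tag) is non-nil for the default nonce.
        return box.combined!
    }

    // Only this device, holding the key, can recover the plaintext.
    func open(_ blob: Data) throws -> String {
        let box = try AES.GCM.SealedBox(combined: blob)
        let plaintext = try AES.GCM.open(box, using: key)
        return String(decoding: plaintext, as: UTF8.self)
    }
}

// Usage: the server stores only the opaque blob returned by seal(_:).
// A subpoena served on the server yields ciphertext, not conversations.
```

The protection in a design like this is structural rather than contractual: no policy change or court order can make an operator produce plaintext it never possessed.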

Consumer AI companies made a different choice. To process your prompts, store your conversations, and conduct safety reviews, they need access to your data. OpenAI’s Health Privacy Notice acknowledges that “a limited number of authorized OpenAI personnel and trusted service providers might access Health content to improve model safety.” They promise not to train on it. But they can read it—which means they can be compelled to produce it.


This Isn’t Theoretical

In 2025, copyright litigation triggered court-ordered preservation of ChatGPT conversations, overriding user deletion requests. OpenAI objected, arguing the order conflicted with its privacy commitments. The court’s order prevailed. Privacy policies, it turns out, are not self-enforcing—and in many cases, they are not even binding when legal process demands otherwise.

The health implications became concrete in August 2025, when the parents of 16-year-old Adam Raine filed a wrongful death lawsuit against OpenAI. The complaint cited over 200 mentions of suicide in Adam’s conversations with ChatGPT. Those chat transcripts—intimate conversations about depression, self-harm, and suicidal ideation—were submitted to the court as evidence. The conversations Adam thought were private became part of the legal record.

Family law attorneys are now warning divorce clients that ChatGPT conversations can be subpoenaed. Personal injury lawyers note that AI health conversations are discoverable. A survey by Kolmogorov Law found that 50% of AI users were unaware their conversations could be subpoenaed, while 67% believed AI chats should have the same legal protections as conversations with doctors or lawyers.

They don’t.


Two Frameworks, Two Outcomes

People use AI the way they use a therapist or a notebook: unfiltered, speculative, unfinished. The law treats it like an email.

Here are a few scenarios to illustrate.

Custody dispute. Sarah is going through a contentious divorce. She’s been processing anxiety about parenting and asking questions about her medications.

With consumer AI: Her ex-spouse’s attorney seeks her ChatGPT conversations. There is no privilege to assert. Whether obtained from the company or through discovery from Sarah herself, the conversations are treated as ordinary evidence. Every moment she vented frustration or expressed doubt becomes potential ammunition.

With a licensed provider: Her conversations with a therapist are protected by psychotherapist-patient privilege. To obtain those records, opposing counsel must overcome significant procedural hurdles. She’s notified and can object. Even if records are produced, scope is limited to what’s clinically relevant—not raw emotional processing.

Employment lawsuit. Marcus disclosed his depression at work, was fired, and filed an ADA claim. His former employer wants to prove his performance issues predated the disclosure.

With consumer AI: Comments like “I couldn’t focus today” or “I’m so overwhelmed” can become evidence for the defense. If he asked the AI whether he had a legal case, that strategic thinking—which would be privileged if shared with an attorney—has no protection.

With a licensed provider: His treatment records are protected by both privilege and HIPAA. Only clinical records are potentially discoverable, through formal legal process with patient notification. His private processing of his situation remains private.

Personal injury claim. David is suing after a car accident. The defense wants to minimize his damages.

With consumer AI: Every health conversation can become ammunition. Pre-accident mentions of back pain? Pre-existing condition. Context collapses when excerpts are read in a courtroom.

With a licensed provider: His claims are evaluated on clinical evidence—the medical chart, treatment notes, professional assessments. Not cherry-picked AI conversations taken out of context.


What Consumer AI Does (and Doesn’t) Offer

Consumer AI companies aren’t operating in a legal vacuum. State laws like Washington’s My Health My Data Act and FTC enforcement still apply. OpenAI’s policy states that it requires a warrant before disclosing content and provides “only data specified.”

But these protections focus on consent, data handling, and misuse—not evidentiary privilege, minimum necessary disclosure, or accounting of disclosures. They don’t replace the healthcare-specific rights that HIPAA and professional privilege provide. And as the 2025 court orders demonstrated, company policies can be overridden by legal process.

If you would hesitate to say something in a text message that might be read aloud in court, you should hesitate before saying it to a consumer AI.


The Verily Me Difference

I work at Verily, so I’ll be direct about my positioning. Verily Me delivers care through licensed healthcare providers. When you interact with Verily Me, you’re communicating with professionals whose conversations carry the legal protections that come with a clinical relationship.

That means your conversations are privileged, protected from compelled disclosure except in narrow, legally defined circumstances. It means HIPAA’s minimum necessary standard limits what can be shared even when disclosure is permitted. It means you can request an accounting of who accessed your records. And it means these protections exist by law, not policy—we can’t waive them with a terms update.

The limits are real too. We must comply with valid court orders, and HIPAA’s retention requirements mean we can’t instantly delete data on demand. But the structural distinction holds: privilege belongs to you, and only you can waive it.


The Bottom Line

What a HIPAA-covered service with licensed providers gives you that ChatGPT Health or Claude for Healthcare cannot:

  • A legal shield, not just a privacy promise. Privilege and HIPAA protections with enforcement mechanisms, not policy subject to change.
  • A higher bar before disclosure. Opposing counsel must overcome privilege before a court will order production. With consumer AI, there’s no privilege to overcome.
  • Control over waiver. You decide when privilege is waived. With consumer AI, the company responds to legal process, balancing your interests against its own.
  • Protection that survives. HIPAA obligations transfer with the data. Privileges attach to the clinical relationship. Privacy policies don’t.

ChatGPT Health and Claude for Healthcare address real needs. These tools can help people navigate a healthcare system that often fails them. But the privacy architecture matters.

Apple showed me that privacy can be architectural—that the answer to compulsion can be “we can’t” rather than “we won’t.” These AI systems have a different architecture. That difference has consequences, and users deserve to understand them before uploading their most sensitive data.
