Sam Altman hardly ever breaks stride when he talks about ChatGPT, yet in a recent podcast he paused to deliver a blunt warning that made my ears perk up. A therapist might promise that what you confess stays in the room, Sam said, but an AI chatbot cannot, at least not under the current legal framework. With roughly 20% of Americans asking an AI chatbot about their health each month (and rising), this is a big deal.
No statute or case law grants AI chats physician-patient or therapist-patient privilege in the U.S., so a court can compel OpenAI to disclose stored transcripts. From a healthcare perspective, Sam’s warning lands with extra weight: millions of people are sharing symptoms, fertility plans, medication routines, and dark midnight thoughts with large language models that feel far more intimate than a Google search, and that intimacy prompts users to reveal details they would never voice to a clinician.
Apple markets its privacy values prominently to consumers, and in my time at Apple I came to appreciate that the company and its leaders backed that public stance with an intense focus on protecting user data, including a specific recognition that health data requires special handling. With LLMs like ChatGPT, vulnerable users risk legal exposure every time they pour symptoms into an unprotected chatbot. Someone in a ban state searching for mifepristone dosing, for example, or a teenager seeking gender-affirming care, could leave a paper trail of prompts and queries that creates real liability.
In the health context, the free consumer ChatGPT operates much like “Dr. Google” does today. Even with chat history turned off, OpenAI retains an encrypted copy of conversations for up to 30 days for abuse review; in the free tier, chats can also inform future model training unless users opt out. And in a civil lawsuit or criminal probe, that data may be preserved far longer, as OpenAI’s fight over the preservation order in the New York Times case shows.
The enterprise version of OpenAI’s service is more reassuring and points toward a more privacy-friendly approach for patients. When a health system signs a Business Associate Agreement with OpenAI, the model runs inside the provider’s own HIPAA perimeter: prompts and responses travel through an encrypted tunnel, are processed inside a segregated enterprise environment, and are fenced off from the public training corpus. Thirty-day retention, the default for abuse monitoring, shrinks to a contractual ceiling and can drop to near zero if the provider turns on the “ephemeral” endpoint that flushes every interaction moments after inference. Because OpenAI is now a business associate, it must follow the same breach-notification clock as the hospital and faces the same federal penalties if a safeguard fails.
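To make that architecture concrete, here is a minimal sketch of how a hospital-side gateway might call such an enterprise deployment. The endpoint URL, the toy identifier scrub, and the model name are illustrative assumptions, and the retention guarantees described above come from the BAA and contract terms, not from any flag in the code.

```python
# Minimal sketch of a hospital-side gateway calling an enterprise LLM endpoint
# under a BAA. The de-identification step and endpoint URL are illustrative
# assumptions; retention behavior is set contractually, not by a code flag.
import re
from openai import OpenAI

# Hypothetical enterprise deployment reachable only inside the provider's network.
client = OpenAI(base_url="https://llm.hospital.example/v1", api_key="...")

# Toy identifier scrub, not a real de-identification pipeline.
MRN_PATTERN = re.compile(r"\bMRN\s*\d{6,}\b")

def ask_clinical_assistant(prompt: str) -> str:
    """Scrub obvious identifiers, send the prompt through the encrypted tunnel,
    and return the answer without writing the transcript to any local store."""
    scrubbed = MRN_PATTERN.sub("[REDACTED-MRN]", prompt)
    response = client.chat.completions.create(
        model="gpt-4o",  # example model name
        messages=[{"role": "user", "content": scrubbed}],
    )
    return response.choices[0].message.content

# The gateway deliberately returns only the text; nothing is logged or cached here,
# so the subpoena window narrows to whatever the EHR chooses to document.
```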
In practical terms, the patient gains three advantages. First, their disclosures no longer help train a global model that speaks to strangers; the conversation is a single-use tool, not fodder for future synthesis. Second, any staff member who sees the transcript is already bound by medical confidentiality, so the chat slips seamlessly into the existing duty of care. Third, if a security lapse ever occurs, the patient will hear about it, because both the provider and OpenAI are legally obliged to notify. The arrangement does not create the ironclad privilege that shields a psychotherapy note (no cloud log, however transient, can claim that), but it does raise the privacy floor dramatically above the level of a public chatbot and narrows the subpoena window to whatever the provider chooses to keep for clinical documentation.
It is also possible that hospitals will steer toward self-hosted open-source models. By running an open-source model inside their own data centers, they eliminate third-party custody entirely; the queries never leave the firewall, and HIPAA treats the workflow as internal use. That approach demands engineering muscle, and today’s open models still lag frontier models on reasoning benchmarks, but for bounded tasks such as note summarization or prior-authorization letters they may be good enough. Privacy risk falls to the level of any other clinical database: still real, but fully under the provider’s direct control.
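For a sense of how simple the self-hosted path can be for a bounded task, here is a sketch of note summarization with an open-source model running entirely on local hardware. The specific model and length limits are illustrative assumptions, not a recommendation.

```python
# A sketch of a fully self-hosted summarization workflow: weights are downloaded
# once, then every inference runs on hospital hardware with no external calls.
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="facebook/bart-large-cnn",  # example open model; a hospital might choose a larger local LLM
    device=-1,                        # CPU; set to a GPU index if one is available
)

def summarize_note(note_text: str) -> str:
    """Condense a clinical note without the text ever leaving the data center."""
    result = summarizer(note_text, max_length=130, min_length=30, do_sample=False)
    return result[0]["summary_text"]
```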
The ultimate shield for health privacy is a Software-as-a-Medical-Device (SaMD) assistant that never leaves your phone. Apple’s newest on-device language model, with about three billion parameters, shows the idea can work: it handles small tasks like composing a study quiz entirely on the handset, so nothing unencrypted lands on an external server that could later be subpoenaed. The catch is scale. Phones must juggle battery life, heat, and memory, so today’s pocket-sized models are still underpowered compared with their cloud-based cousins.
Over the next few product cycles, two changes should narrow that gap. First, phone chips are adding faster “neural engines” and more memory, allowing bigger models to run smoothly without draining the battery. Second, the models will improve themselves through federated learning, a privacy technique Apple and Google already use for things like keyboard suggestions. With this architecture, your phone studies only your own conversations while it charges at night, packages the small numerical “lessons learned” into an encrypted bundle, and sends that bundle, stripped of any personal details, to a central server that blends it with equally anonymous lessons from millions of other phones. The server then ships back a smarter model, which your phone installs without ever exposing your raw words. This cycle keeps the on-device assistant getting smarter instead of freezing in time, yet your private queries never leave the handset in readable form.
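For readers who want to see the mechanics, here is a toy sketch of that federated-averaging loop. The learning rate, clipping threshold, and noise level are illustrative assumptions, and production systems are far more elaborate; the point is that only numerical updates, never raw text, leave the device.

```python
# Toy federated-averaging round: each phone computes a clipped, noised weight
# delta from its own data; the server averages the anonymous deltas.
import numpy as np

def local_update(device_gradient: np.ndarray,
                 lr: float = 0.01, clip: float = 1.0, noise_std: float = 0.01) -> np.ndarray:
    """Runs on the phone: turn a local gradient into a clipped, noised delta."""
    delta = -lr * device_gradient                    # one local learning step
    norm = np.linalg.norm(delta)
    if norm > clip:                                  # clip so no single user dominates the update
        delta = delta * (clip / norm)
    return delta + np.random.normal(0.0, noise_std, delta.shape)  # add privacy noise

def server_round(global_weights: np.ndarray, device_deltas: list[np.ndarray]) -> np.ndarray:
    """Runs on the server: blend anonymous deltas and ship back a smarter model."""
    return global_weights + np.mean(device_deltas, axis=0)
```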
When hardware and federated learning mature together, a phone-based health chatbot could answer complex questions with cloud-level fluency while offering the strongest privacy guarantee available: nothing you type or dictate is ever stored anywhere but the device in your hand. If that day comes, on-device intelligence could become one of Apple’s biggest advantages in healthcare from a privacy standpoint.
For decades, “Dr. Google” meant we bartered privacy for convenience. Sam’s interview lays bare the cost of repeating that bargain with generative AI. Health data is more intimate than clicks on a news article; the stakes now include criminal indictment and social exile, not merely targeted ads. Until lawmakers create a privilege for AI interactions, privacy will rest on technical design: chatbots built so that health conversations stay out of reach in the first place. Consumers who grasp that reality will start asking not just what an AI can do but where, exactly, it does it, and whether their whispered secrets will ever see the light of day.
