The New Computer in the Clinic

Andrej Karpathy describes the current moment as the rise of a new computing paradigm that he calls Software 3.0. as large language models emerge not just as clever chatbots but as a “new kind of computer” (“LLM OS”). In this model, the LLM is the processor, its context window is the RAM, and a suite of integrated tools are the peripherals. We program this new machine not with rigid code, but with intent, expressed in plain language. This vision is more than a technical curiosity; it is a glimpse into a future where our systems don’t just execute commands, but understand intent.

Healthcare is the perfect test bed for this paradigm. For decades, the story of modern medicine has been a paradox: we are drowning in data, yet starved for wisdom. The most vital clinical information—the physician’s reasoning, the patient’s narrative, the subtle context that separates a routine symptom from a looming crisis—is often trapped in the dark matter of unstructured data. An estimated 80% of all health data lives in notes, discharge summaries, pathology reports, and patient messages. This is the data that tells us not just what happened, but why.

For years, this narrative goldmine has been largely inaccessible to computers. The only way to extract its value was through the slow, expensive, and error-prone process of manual chart review, or a “scavenger hunt” through the patient chart. What changes now is that the new computer can finally read the story. An LLM can parse temporality, nuance, and jargon, turning long notes into concise, cited summaries and spotting patterns across documents no human could assemble in time.

But this reveals the central conflict of digital medicine. The “point-and-click” paradigm of the EHR, while a primary driver of burnout, wasn’t built merely for billing. It was a necessary, high-friction compromise. Clinical safety, quality reporting, and large-scale research depend on the deterministic, computable, and unambiguous nature of structured data. You need a discrete lab value to fire a kidney function alert. You need a specific ICD-10 code to find a patient for a clinical trial. The EHR forced clinicians to choose: either practice the art of medicine in the free-text narrative (which the computer couldn’t read) or serve as a data entry clerk for the science of medicine in the structured fields. Often, the choice has been the latter, which has contributed to massive burnout among physicians. This false dichotomy has defined the limits of healthcare IT for a generation.

The LLM, by itself, cannot solve this. As Karpathy points out, this new “computer” is deeply flawed. Its processor has a “jagged” intelligence profile—simultaneously “superhuman” at synthesis and “subhuman” at simple, deterministic tasks. More critically, it is probabilistic and prone to hallucination, making it unfit to operate unguarded in a high-stakes clinical environment. This is why we need what Karpathy calls an “LLM Operating System”. This OS is the architectural “scaffolding” designed to manage the flawed, probabilistic processor. It is a cognitive layer that wraps the LLM “brain” in a robust set of policy guardrails, connecting it to a library of secure, deterministic “peripheral” tools. And this new computer is fully under the control of the clinician who “programs” it in plain language.

This new architecture is what finally resolves the art/science conflict. It allows the clinician to return to their natural state: telling the patient’s story.

To see this in action, imagine the system reading a physician’s note: “Patient seems anxious about starting insulin therapy and mentioned difficulty with affording supplies.” The LLM “brain” reads this unstructured intent. The OS “policy layer” then takes over, translating this probabilistic insight into deterministic actions. It uses its “peripherals”—its secure APIs—to execute a series of discrete tasks: it queues a nursing call for insulin education, sends a referral to a social worker, and suggests adding a structured ‘Z-code’ for financial insecurity to the patient’s problem list. The art of the narrative is now seamlessly converted into the computable, structured science needed for billing, quality metrics, and future decision support.

This hybrid architecture—a probabilistic mind guiding a deterministic body—is the key. It bridges the gap between the LLM’s reasoning and the high-stakes world of clinical action. It requires a healthcare-native data platform to feed the LLM reliable context and a robust system of action layer to ensure its outputs are safe. This design directly addresses what Karpathy calls the “spectrum of autonomy.” Rather than an all-or-nothing “agent,” the OS allows for a tunable “autonomy slider.” In a “co-pilot” setting, the OS can be set to only summarize, draft, and suggest, with a human clinician required for all approvals. In a more autonomous “agent” setting, the OS could be permitted to independently handle low-risk, predefined tasks, like queuing a routine follow-up.

The journey is just beginning in healthcare, and my guess is that we will see different starting points for this across the ecosystem. But the path forward is illuminated by a clear thesis: the “new computer” for healthcare changes the very unit of clinical work. We are moving from a paradigm of clicks and codes—where the human serves the machine—to one of intent and oversight. The clinician’s job is no longer data entry. It is to practice the art of medicine, state their intent, and supervise an intelligent system that, for the first time, can finally understand the story.

AI and the Prepared Mind: Engineering Luck in Drug Discovery

We are at a fascinating, paradoxical moment in the history of medicine. We stand in awe of a new AI-powered “Logic Engine” for drug discovery—a computational marvel like AlphaFold, which treats biology as an information system to be engineered. It promises a future of rational discovery. And yet, when we look at our most important medical breakthroughs, so many were not rationally designed. They were the result of messy, unpredictable, and entirely human processes: a happy accident, a surprising side effect, or a creative leap of intuition. This isn’t a story of one replacing the other. For me, it’s the story of how we build the bridge between them. The future, I believe, lies in marrying AI’s logic with this enduring human spark.

How has luck played out in drug discovery? With reference to some famous examples, I believe serendipity comes in three distinct flavors.

First, there is the Physical-World Accident. This is the classic tale of Alexander Fleming. He doesn’t hypothesize and then test; he returns from vacation to find a physical anomaly on a petri dish, a “moldy” halo where bacteria wouldn’t grow. The breakthrough was not the idea; it was his prepared mind recognizing the profound significance of a simple, physical event.

Second, there is the Clinical Data Anomaly. This is the story of Viagra. Researchers at Pfizer were not looking for an erectile dysfunction drug; they were testing a new angina medication. But in the clinical trial data, they spotted a consistent, statistically significant “side effect.” Their genius was not in the drug’s design, but in their ability to see that this “failure” was, in fact, the drug’s true purpose.

And third, there is the rarest and most powerful form: the Cross-Domain Synthesis. This is the almost-mythical origin of the GLP-1 drugs. In the 1980s, Dr. John Eng, an endocrinologist at the VA, was grappling with the dangerous, real-world clinical problem of hypoglycemia in his diabetic patients. His deep “embodied context” of this problem led his curiosity to a non-obvious place: the venom of the Gila monster. He made a creative, “analogical leap,” betting that a creature who could feast and then fast for months must have a powerful metabolic regulator. He was right, and this single, human-driven hypothesis proved the therapeutic principle that led to the multi-billion-dollar GLP-1 field, from Exenatide to Ozempic and Mounjaro.

Dr. Eng’s leap of intuition was not a brute-force data search; it was an act of wisdom. “Embodied context” is the sum of lived, physical, sensory, and experience-based intuition. This, to me, is the undigitized data we’re missing in all of the data being used to train AI: the “gut feeling” of a 30-year veteran clinician, the intuition born from seeing, touching, and feeling a problem.

This is not just a poetic concept. It is the data that isn’t in the database: the specific sound of a patient’s cough, the feel of a tumor’s texture, the non-verbal cues a patient gives, or the “gut feeling” that connects a skin rash to a GI symptom seen months prior—a non-obvious, low-signal pattern. An AI, no matter how powerful, is a disembodied logic system. Its “experience” is limited to the digital archive of human knowledge. It has read the map; it has not walked the territory.

Dr. Eng’s leap was not just data; it was purpose. He had witnessed the “litany of horrors” of his patients’ suffering. That context, which exists in no database, is what aimed his curiosity. It allowed him to connect three disparate domains: the clinical problem (hypoglycemia), the zoological trait (a lizard’s stability), and the mechanistic hunch (venom). An AI, lacking this embodied context, would have no reason to see this as anything but a low-probability statistical correlation.

Now, one could argue that this “embodied context” is just a polite word for human bias, the very thing a logic engine is designed to eliminate. This is not wrong; intuition is notoriously flawed. But this is precisely why the partnership is essential. The loop’s purpose is not to blindly trust human wisdom; it is to interrogate it. The human provides the testable, experience-based hypothesis; the AI and the lab provide the objective, high-throughput validation.

This reliance on rare, human-driven leaps is not a reliable strategy. It is slow and random, and it’s why, in my view, our industry has been trapped by the brutal economics of Eroom’s Law, which observes costs rising exponentially, driven by a catastrophic “valley of death” in clinical trials where the vast majority of drugs fail.

This is the problem the AI-powered “Logic Engine” was built to solve. It is a revolutionary solution to “Bad Chemistry.” By designing the perfect molecular “key” in silico, it ensures a drug is potent, specific, and far less likely to be toxic. But these perfect keys may still hit the Phase 2 wall. They are colliding with “Bad Biology.” A perfect key for the wrong lock is still a failure. Even today’s intelligently designed blockbusters, from Keytruda to Ozempic, owe their massive success to unexpected clinical findings—like breakthrough weight-loss or cardio-renal benefits—that were discovered serendipitously, long after the initial design.

I firmly believe a purely in silico model is not enough. A “digital twin” or simulation, trained only on our current, incomplete data, is merely a sophisticated mirror of our existing ignorance. It’s an echo chamber. A purely computational AI would have been blind to Fleming’s mold, dismissed Viagra’s side effect as noise, and never possessed the creative, context-driven curiosity to make Dr. Eng’s leap.

This is why we must complement the Logic Engine with another type of system: a data-fueled Serendipity Engine. Across the biotech ecosystem, many are actively building the components of this system: the high-fidelity data “brain,” the automated “body” of human-relevant lab models, and the “nervous system” feedback loop. But a truly integrated, closed-loop system is not yet a reality. Much work remains to connect these parts into a seamless whole. This system has three core components.

First, it needs a “brain.” This is the Multimodal Data Foundation. To find human targets, it must learn from human data, building a high-fidelity map of disease as it actually exists, integrating genomics, proteomics, longitudinal clinical records, and real-world outcomes.

But a brain is not enough. It needs a “body.” This is the Human-Relevant Experimental Layer. The AI’s in silico predictions must be tested not in a simulation, but on a fully automated, high-throughput lab with a biobank running on patient-derived organoids, complex cell models, and organ-chips that actually recapitulate human physiology, not a mouse’s.

Finally, we must build its “nervous system”: a Closed Feedback Loop. This loop connects the brain and the body. The AI designs, tests on the physical model, and the real-world experimental data is fed back to the AI. The system learns, updates its map of biology, and designs the next experiment.

If we build this perfect, closed-loop system, what happens when it becomes an AGI? What happens when it can formulate its own novel hypotheses? Is the human “prepared mind” finally and fully disintermediated?

The answer, I speculate, is no. At least, not for the foreseeable future. An AGI, no matter how powerful, seems to be the ultimate “what” and “how” engine. It can find correlations and model mechanisms with superhuman speed. But it may end up being a stranger to the “so what?” This AGI Scientist will not create a lack of work, but it could create a new, paralyzing problem: an overload of tens of millions of valid, novel, and testable hypotheses. Which one matters? Which of these is a fascinating biological quirk, and which one, if pursued, would change the lives of millions? The AGI, as a pure optimization system, may not inherently know the difference. It can rank hypotheses by p-value or predicted novelty, but it is unlikely to be able to rank them by true human significance.

Elon Musk offered what I thought was a powerful analogy about the role of human beings in an AGI world. He noted that our cortex (thinking/planning) constantly strives to satisfy our limbic system (instincts/feelings). Perhaps, he suggested, this is how it will be with AI. The AI is the ultimate, boundless cortex, but we are what gives it meaning. We are the “limbic system” it serves. This, I believe, offers a framework for how to think about human scientists in an AGI world for drug discovery. And this is where human wisdom and “embodied context” become the most valuable commodity in the system. This context isn’t just the clinician’s (like Dr. Eng). It is also the hard-won wisdom of the “drug hunter”, someone like Al Sandrock who undoubtedly developed an intuition for biological signal.

The future, then, may not be an AI scientist working alone. The human’s new, and perhaps final, role is to be the “prepared mind” that our Serendipity Engine is built to serve. This role, in effect, scales the intuition of the veteran drug hunter with the brute-force logic of the AI. Our job is not to find all the answers, but to stand at the dashboard of this vast Serendipity Engine, ask the right questions, and point to a single anomaly, saying:

“That one. The AI says it’s novel, but my experience tells me it’s relevant.”

In the end, the AI is the ultimate “what” and “how” engine. The human, I believe, will always be the “so what?”

How AI Gets Paid Is How It Scales

Almost ten years ago at Apple, we had a vision of how care delivery would evolve: face‑to‑face visits would not disappear, virtual visits would grow, and a new layer of machine-based care would rise underneath. Credit goes to Yoky Matsuoka for sketching this picture. Ten years later, I believe AI will materialize this vision because of its impact on the unit economics of healthcare. 

Labor is the scarcest input in healthcare and one of the largest line items of our national GDP. Administrative costs continue to skyrocket, and the supply of clinicians is fixed in the short run while the population ages and disease prevalence grows. This administrative overhead and demand-supply imbalance are why our healthcare costs continue to outpace GDP.  

AI agents will earn their keep when they create a labor dividend, either by removing administrative work that never should have required a person, or by letting each scarce clinician produce more, with higher accuracy and fewer repeats. Everything else is noise.

Administrative work is the first seam. Much of what consumes resources is coordination, documentation, eligibility, prior authorization, scheduling, intake, follow up, and revenue cycle clean up. Agents can sit in these flows and do them end to end or get them to 95 percent complete so a human can finish. When these agents are priced in ways where the ROI is attributable, I believe adoption will be rapid. If they replace a funded cost like scribes or outsourced call volume, the savings are visible.

Clinical work is the second seam. Scarcity here is about decisions, time with the patient, and safe coverage between visits. Assistive agents raise the ceiling on what one clinician can oversee. A nurse can manage a larger panel because the agent monitors, drafts outreach, and flags only real exceptions. A physician can close charts accurately in the room and move to the next patient without sacrificing documentation quality. The through line is that assistive AI either makes humans faster at producing billable outputs or more accurate at the same outputs so there is less rework and fewer denials.

Autonomy is the step change. When an agent can deliver a clinical result on its own and be reimbursed for that result, the marginal labor cost per unit is close to zero. The variable cost becomes compute, light supervision, and escalation on exceptions. That is why early autonomous services, from point‑of‑care eye screening to image‑derived cardiac analytics, changed adoption curves once payment was recognized. Now extend that logic to frontline “AI doctors.” A triage agent that safely routes patients to the right setting, a diagnostic agent that evaluates strep or UTI and issues a report under protocol, a software‑led monitoring agent that handles routine months and brings humans in only for outliers. If these services are priced and paid as services, capacity becomes elastic and access expands without hiring in lockstep. That is the labor dividend, not a marginal time savings but a different production function.

I’m on the fence about voice assistants, to be honest. Many vendors claim large productivity gains, and in some settings those minutes convert into more booked and kept visits. In others they do not and the surplus goes to well-deserved clinician well‑being. That is worthwhile, but it can also be fragile when budgets compress. Preference‑led adoption by clinicians can carry a launch (as it has in certain categories like surgical robots), but can it scale?  Durable scale usually needs either a cost it replaces, a revenue it raises, or a risk it reduces that a customer will underwrite. 

All of this runs headlong into how we pay for care. Our reimbursement codes and RVU tables were built to value human work. They measure minutes, effort, and complexity, then translate that into dollars. That logic breaks when software does the work. It also creates perverse outcomes. Remote patient monitoring is a cautionary tale that I learned firsthand about at Carbon Health. By tying payment to device days and documented staff minutes with a live call, the rules locked in labor and hardware costs. Software could streamline the work and improve compliance, but it could not be credited with reducing unit cost because the payment was pegged to inputs rather than results. We should not repeat that mistake with AI agents that can safely do more.

With AI agents coming to market over the next several years, we should liberalize billing away from human‑labor constructs and toward AI‑first pricing models that pay for outputs. When an agent is autonomous, I think we should treat it like a diagnostic or therapeutic service. FDA authorization should be the clinical bar for safety and effectiveness. Payment should then be set on value, not on displaced minutes. Value means the accuracy of the result, the change in decisions it causes, the access it creates where clinicians are scarce, and the credible substitution of something slower or more expensive. 

There is a natural end state where these payment models get more straightforward. I believe these AI agents will ultimately thrive in global value‑based models. Agents that keep panels healthy, surface risk early, and route patients to the moments that matter will be valuable as they demonstrably lower cost and improve outcomes. Autonomy will be rewarded because it is paid for the result, not the minutes. Assistive will thrive when it helps providers deliver those results with speed and precision. 

Much of the public debate fixates on AI taking jobs. In healthcare it should be a tale of two cities. We need AI to erase low‑value overhead, eligibility chases, prior auth ping pong, documentation drudgery, so the scarce time of nurses, physicians, and pharmacists stretches further. We need to augment the people we cannot hire fast enough. Whether that future arrives quickly will be decided by how we choose to pay for it.

When AI Meets Aggregation Theory in Healthcare

Epic calls itself a platform. And with the show of force at UGM last week, that’s exactly how the company now describes itself: inviting vendors to “network with others working on the Epic platform,” marketing a “cloud‑powered platform” for healthcare intelligence, and selling a “Payer Platform” to connect plans and providers. Even customer stories celebrate moving to “a single Epic platform.”

But is Epic really a platform? The TL/DR is no.

Ben Thompson from Stratechery uses Bill Gates’s test to define a platform:

A platform is when the economic value of everybody that uses it exceeds the value of the company that creates it. Then it’s a platform

In Thompson’s framing, platforms facilitate third-party relationships and externalize network effects. During my time at Apple, the role of products as platforms–enabling developers to build their own experiences–was never lost on anyone. Apple’s success with the App Store wasn’t just about building great devices, it was about cultivating a marketplace where developers could thrive. To me, this is what it looks like to clear the Gates line.

In contrast, while Epic has captured significant value as the dominant vertical system of record, it does not pass the Bill Gates test for a platform, at least if “outside ecosystem” means independent developers and vendors. If anything, several UGM highlights overlapped with startup offerings, reinforcing Epic’s suite-first posture.

Beyond platforms, Thompson describes aggregators as internet-scale winners that have three concurrent properties: 1) a direct relationship with end users; 2) zero or near‑zero marginal cost to serve the next user because the product and distribution are digital; and 3) demand‑driven multi‑sided networks where growing consumer attention lowers future acquisition costs and compels suppliers to meet the aggregator’s rules.

Healthcare has lacked the internet physics that make either archetype inevitable. Patients rarely choose the software, employers and payers do. Much of care is physical and local, so marginal cost does not collapse at the point of service. Data has historically been locked behind site‑specific builds and business rules.

The policy landscape is shifting in a way that could finally make internet-style economics possible in healthcare. The national data-sharing network, TEFCA went live in late 2023 with the first Qualified Health Information Networks designated, including Epic’s own Nexus. The next milestone matters more for consumers: Individual Access Services (IAS). IAS creates a standardized, enforceable way for people to pull their health records through apps of their choice across participating networks, not just within a single portal. That means a person could authorize a service like ChatGPT, Amazon, or Apple Health to fetch their data across systems. Layer that onto ONC’s new transparency rules for AI and the White House’s push for clear governance, and the long-standing frictions that protected incumbents begin to fall away. Policy doesn’t create consumer demand by itself, but it clears the path. With IAS on the horizon, the conditions could be in place for true platforms to form on top of the data, and for the first genuine aggregators in healthcare to emerge.

Viewed through Thompson’s tests, Epic is neither a Thompson‑style aggregator nor a Gates‑line platform. Epic sells to enterprises, implementations take quarters and years, and its ecosystem is curated to reinforce the suite rather than to externalize network effects. Even its most aggregator‑looking asset, Cosmos, aggregates de‑identified data inside the Epic community to strengthen Epic’s own products, not to intermediate an open, multi‑sided market. UGM reinforced that direction with native AI charting on the way, an expanded AI slate, and a push to embed intelligence deeper into Epic’s own workflows. These are rational choices for reliability, liability, and speed inside the walls. They are not the choices of a company trying to own consumer demand across suppliers.

AI is the first credible force that can bend healthcare toward aggregation because it directly addresses Thompson’s three conditions. A high‑quality AI assistant can own the user relationship across employers, plans, and providers, the marginal cost to serve the next interaction is close to zero once deployed, and the product improves with every conversation, which lowers acquisition costs in a compounding loop. If that assistant can read with permission on national rails, reason over longitudinal data, coordinate benefits, and route to appropriate suppliers, demand begins to concentrate at the assistant’s front door. Suppliers then modularize on the assistant’s terms because that is where users start. That is Aggregation Theory applied to triage, chronic condition management, and navigation. The habit is forming at the consumer edge. With millions of Americans using ChatGPT, the flywheel is no longer theoretical.

It is worth being explicit about the one candidate aggregator that already exists at internet scale. With mass-market reach and daily use, ChatGPT could plausibly become a demand controller in health once the IAS pathway standardizes consumer-authorized data flows across QHINs. The building blocks are there in a way they never were for personal health records a decade ago: IAS rules now spell out how an app verifies identity and pulls data on behalf of a consumer, QHINs are live and interconnected, Epic Nexus alone covers more than a thousand hospitals, and HTI-1 is codifying transparency for AI-mediated decision support. If a consumer agent like ChatGPT could fetch records under IAS, explain benefits and prices, assemble prior authorization packets, book care, and learn from outcomes to improve routing, it would check Thompson’s boxes as an aggregator: owning the user relationship, facing near-zero marginal costs per additional user, and compelling suppliers to meet its terms. But there are complicating factors. HIPAA and liability rules may require ChatGPT to operate under strict business associate agreements, consumer trust in an AI holding intimate health data is far from guaranteed, and regulators could constrain or slow the extent to which a general-purpose model is allowed to intermediate medical decisions. Even so, the policy rails make such a role technically feasible, and ChatGPT’s usage base gives it a head start if it can navigate those hurdles.

Demand‑side pressure makes this shift more likely rather than less. Employer medical cost trend is projected to remain elevated through 2026 after hitting the highest levels in more than a decade, and pharmacy trend is outpacing medical trend, driven in part by the consumer‑adjacent GLP‑1 category. KFF’s employer survey shows a two‑year, mid‑to‑high single digit premium rise with specific focus on GLP‑1 coverage policies, and multiple employer surveys now estimate that GLP‑1 drugs account for a high single digit to low double digit share of total claims, with a sizable minority of employers reporting more than fifteen percent. As more of that cost shifts to households through premiums and deductibles, consumers gravitate to services that compress time to care and make prices legible. Amazon is training Prime members to expect five‑dollar generics with RxPass and a low‑friction primary care membership via One Medical, and Hims & Hers has demonstrated that millions will subscribe to vertically packaged services, now including weight‑management programs built around GLP‑1s. These behaviors teach consumers to start outside the hospital portal. Coupled with a trusted AI, they are the ingredients for real demand control.

None of this diminishes Epic’s role. If anything, the rise of a consumer aggregator makes a reliable clinical system of record more valuable. The most likely outcome is layered. Epic remains the operating system for care delivery, increasingly infused with its own AI. A neutral services tier above the EHR transforms heterogeneous clinical and payer data into reusable primitives for builders. And at the consumer edge, one or two AI assistants earn the right to be the first stop, finally importing internet economics into the information-heavy, logistics-light parts of care. That is a more precise reading of Thompson’s theory: aggregators win by owning demand, not supply. Healthcare never allowed them to own demand, but interoperability and AI agents change that. With IAS about to make personal data portable, the shape of the winning aggregator starts to look less like a portal and more like a personal health record—an agent that follows the consumer, not the institution. Julie Yoo’s “Health 2.0 Redux” makes the case that many of these ideas are not new. What is new is that, for the first time, the rails and the models are real enough to let a PHR evolve into the aggregator that healthcare has been missing.

America’s Patchwork of Laws Could Be AI’s Biggest Barrier in Care

AI is learning medicine, and early state rules read as if regulators are regulating a risky human, not a new kind of software. That mindset could make sense in the first wave, but it might also freeze progress before we see what these agents can do. When we scaled operations at Carbon Health, the slowest parts were administrative and regulatory–months of licensure, credentialing, and payer enrollment that shifted at each state line. AI agents could inherit the same map, fifty versions of permissions and disclosures layered on top of consumer‑protection rules. Without a federal baseline, the most capable tools might be gated by local paperwork rather than clinical outcomes, and what should scale nationally could move at the pace of the slowest jurisdiction.

What I see in state action so far is a conservative template built from human analogies and fear of unsafe behavior. One pattern centers on clinical authority. Any workflow that could influence what care a patient receives might trigger rules that keep a licensed human in the loop. In California, SB 1120 requires licensed professionals to make final utilization review decisions, and proposals in places like Minnesota and Connecticut suggest the same direction. If you are building automated prior authorization or claims adjudication, this likely means human review gates, on-record human accountability, and adverse‑action notices. It could also mean the feature ships in some states and stays dark in others.

A second pattern treats language itself as medical practice. Under laws like California’s AB 3030, if AI generates a message that contains clinical information for a patient, it is regulated as though it were care delivery, not just copy. Unless a licensed clinician reviews the message before it goes out, the provider must disclose to the patient that it came from AI. That carve-out becomes a design constraint. Teams might keep a human reviewer in the loop for any message that could be interpreted as advice — not because the model is incapable, but because the risk of missing a required disclosure could outweigh the convenience of full automation. In practice, national products may need state-aware disclosure UX and a tamper-evident log showing exactly where a human accepted or amended AI-generated output.

A third pattern treats AI primarily as a consumer-protection risk rather than a medical tool. Colorado’s law is the clearest example: any system that is a “substantial factor” in a consequential healthcare decision is automatically classified as high risk. Read broadly, that could pull in far more than clinical judgment. Basic functions like triage routing, benefit eligibility recommendations, or even how an app decides which patients get faster service could all be considered “consequential.” The worry here is that this lens doesn’t just layer on to FDA oversight — it creates a parallel stack of obligations: impact assessments, formal risk programs, and state attorney general enforcement. For teams that thought FDA clearance would be the governing hurdle, this is a surprise second regime. If more states follow Colorado’s lead, we could see dozens of slightly different consumer-protection regimes, each demanding their own documentation, kill switches, and observability. That is not just regulatory friction — it could make it nearly impossible to ship national products that influence care access in any way.

Mental health could face the tightest constraints. Utah requires conspicuous disclosure that a user is engaging with AI rather than a licensed counselor and limits certain data uses. Illinois has barred AI systems from delivering therapeutic communications or making therapeutic decisions while permitting administrative support. If interpreted as drafted, “AI therapist” positioning might need to be turned off or re‑scoped in Illinois.

Taken together, these state patterns set the core product constraints for now, keep a human in the loop for determinations, label or obtain sign‑off for clinical communications, and treat any system that influences access as high risk unless proven otherwise.

Against that backdrop, the missed opportunity becomes clear if we keep regulating by analogy to a fallible human. Properly designed agents could be safer than average human performance because they do not fatigue, they do not skip checklists, they can run differential diagnoses consistently, cite evidence and show their work, auto‑escalate when confidence drops, and support audit after the fact. They might be more intelligent on specific tasks, like guideline‑concordant triage or adverse drug interaction checks, because they can keep every rule current. They could even be preferred by some patients who value privacy, speed, or a nonjudgmental tone. None of that is guaranteed, but the path to discover it should not be blocked by rules that assume software will behave like a reckless intern forever.

For builders, the practical reality today is uneven. In practice, this means three operating assumptions: human review on decisions; clinician sign‑off or labeling on clinical messages; and heightened scrutiny whenever your output affects access. The same agent might be acceptable if it drafts a clinician note, but not acceptable if it reroutes a patient around a clinic queue because that routing could be treated as a consequential decision. A diabetes coach that nudges adherence could require a disclosure banner in California unless a clinician signs off, and that banner might not be enough if the conversation drifts into therapy‑like territory in Illinois. A payer that wants automation could still need on‑record human reviewers in California, and might need to turn automation off if Minnesota’s approach advances. Clinicians will likely remain accountable to their boards for outcomes tied to AI they use, which suggests that a truly autonomous AI doctor does not fit into today’s licensing box and could collide with Corporate Practice of Medicine doctrines in many states.

We should adopt a federal framework that separates assistive from autonomous agents, and regulate each with the right tool. Assistive agents that help clinicians document, retrieve, summarize, or draft could live under a national safe harbor. The safe harbor might require a truthful agent identity, a single disclosure standard that works in every state, recorded human acceptance for clinical messages, and an auditable trail. Preemption matters here. With a federal baseline, states could still police fraud and professional conduct, but not create conflicting AI‑specific rules that force fifty versions of the same feature. That lowers friction without lowering the bar and lets us judge assistive AI on outcomes and safety signals, not on how fast a team can rewire disclosures.

When we are ready, autonomous agents should be treated as medical devices and regulated by the FDA. Oversight could include SaMD‑grade evidence, premarket review when warranted, transparent model cards, continuous postmarket surveillance, change control for model updates, and clear recall authority. Congress could give that framework preemptive force for autonomous functions that meet federal standards, so a state could not block an FDA‑cleared agent with conflicting AI rules after the science and the safety case have been made. This is not deregulation. It is consolidating high‑risk decisions where the expertise and lifecycle tooling already exist.

Looking a step ahead, we might also license AI agents, not just clear them. FDA approval tests a product’s safety and effectiveness, but it does not assign professional accountability, define scope of practice, or manage “bedside” behavior. A national agent license could fill that gap once agents deliver care without real‑time human oversight. Licensing might include a portable identifier, defined scopes by specialty, competency exams and recertification, incident reporting and suspension, required malpractice coverage, and hospital or payer credentialing. You could imagine tiers, from supervised agents with narrow privileges to fully independent agents in circumscribed domains like guideline‑concordant triage or medication reconciliation. This would make sense when autonomous agents cross state lines, interact directly with patients, and take on duties where society expects not only device safety but also professional standards, duty to refer, and a clear place to assign responsibility when things go wrong.

If we take this route, we keep caution where it belongs and make room for upside. Assistive tools could scale fast under a single national rulebook. Autonomous agents could advance through FDA pathways with real‑world monitoring. Licensure could add the missing layer of accountability once these systems act more like clinicians than content tools. Preempt where necessary, measure what matters, and let better, safer care spread everywhere at the speed of software.

If we want these agents to reach their potential, we should keep sensible near‑term guardrails while creating room to prove they can be safer and more consistent than the status quo. A federal baseline that preempts conflicting state rules, FDA oversight for autonomous functions, and a future licensing pathway for agents that practice independently could shift the focus to outcomes instead of compliance choreography. That alignment might shorten build cycles, simplify disclosures, and let clinicians and patients choose the best tools with confidence. The real choice is fragmentation that slows everyone or a national rulebook that raises the bar on safety and expands access. Choose the latter, and patients will feel the benefits first.

The Gameboard for AI in Healthcare

Healthcare was built for calculators. GPT-5 sounds like a colleague. Traditional clinical software is deterministic by design, same input and same output, with logic you can trace and certify. That is how regulators classify and oversee clinical systems, and how payers adjudicate claims. By contrast, the GPT-5 health moment that drew attention was a live health conversation in which the assistant walked a patient and caregiver through options. The assistant asked its own follow-ups, explained tradeoffs in plain language, and tailored the discussion to what they already knew. Ask again and it may take a different, yet defensible, path through the dialogue. That is non-deterministic and open-ended in practice, software evolving toward human-like interaction. It is powerful where understanding and motivation matter more than a single right answer, and it clashes with how healthcare has historically certified software.

This tension explains how the industry is managing the AI transition. “AI doesn’t do it end to end. It does it middle to middle. The new bottlenecks are prompting and verifying.” Balaji Srinivasan’s line captures the current state. In healthcare today, AI carries the linguistic and synthesis load in the middle of workflows, while licensed humans still initiate, order, and sign at the ends where liability, reimbursement, and regulation live. Ayo Omojola makes the same point for enterprise agents. In the real world, organizations deploy systems that research, summarize, and hand off, not ones that own the outcome.

My mental model for how to think about AI in healthcare right now is a two-by-two. One axis runs from deterministic to non-deterministic. Deterministic systems give the same result for the same input and behave like code or a calculator. Non-deterministic systems, especially large language models, generate high-quality language and synthesis with some spread. The other axis runs from middle to end-to-end. Middle means assistive. A human remains in the loop. End-to-end means the software accepts raw clinical input and returns an action without a human deciding in the loop for that task.

Deterministic, middle. Think human-in-the-loop precision. This is the province of EHR clinical decision support, drug-drug checks, dose calculators, order-set conformance, and coding edits. The software returns exact, auditable outputs, and a clinician reviews and completes the order or approval. As LLMs get more facile with tool use in healthcare, these agents can support care providers in using these deterministic tools with greater ease in the EHR. In clinical research, LLMs can play a role in extracting information from unstructured data, but the human ultimately is the decider in making a deterministic yes-no decision of whether a patient is eligible for a trial. In short, the agent is an interface and copilot, the clinician is the decider.

Deterministic, end to end. Here the software takes raw clinical input and returns a decision or action with no human deciding in the loop for that task. Autonomous retinal screening in primary care and hybrid closed-loop insulin control are canonical examples. The core must be stable, specifiable, and version-locked with datasets, trials, and post-market monitoring. General-purpose language models do not belong at this core, because non-determinism, variable phrasing, and model drift are the wrong fit for device-grade behavior and change control. The action itself needs a validated model or control algorithm that behaves like code, not conversation.

Non-deterministic, middle. This is the hot zone right now. Many bottlenecks in care are linguistic, not mathematical. Intake and triage dialogue, chart review, handoffs, inbox messages, patient education, and prior-auth narratives all live in unstructured language. Language models compress that language. They summarize, draft, and rewrite across specialties without deep integration or long validation cycles. Risk stays bounded because a human signs off. Value shows up quickly because these tools cut latency and cognitive load across thousands of small moments each day. The same economics hold in other verticals. Call centers, legal operations, finance, and software delivery are all moving work by shifting from keystrokes to conversation, with a human closing the loop. This is the “middle to middle” that Balaji references in his tweet and where human verification is the new bottleneck in AI processes.

Non-deterministic, end to end. This is the AI doctor. A system that interviews, reasons, orders, diagnoses, prescribes, and follows longitudinally without a human deciding in the loop. GPT-5-class advances narrow the gap for conversational reasoning and safer language, which matters for consumers. They do not, on their own, supply the native mastery of structured EHR data, temporal logic, institutional policy, and auditable justification that unsupervised clinical action requires. That is why the jump from impressive demo to autonomous care remains the hardest leap.

What it takes to reach Quadrant 4

Quadrant 4 is the payoff. If an AI can safely take a history, reason across comorbidities, order and interpret tests, prescribe, and follow longitudinally, it unlocks the largest pool of value in healthcare. Access expands because expertise becomes available at all hours. Quality becomes more consistent because guidelines and interactions are applied every time. Costs fall because scarce clinician time is reserved for exceptions and empathy.

It is also why that corner is stubborn. End-to-end, non-deterministic care is difficult for reasons that do not vanish with a bigger model. Clinical data are partial and path dependent. Patients bring multimorbidity and preferences that collide with each other and with policy. Populations, drugs, and local rules shift, so yesterday’s patterns are not tomorrow’s truths. Objectives are multidimensional, safety, equity, cost, adherence, and experience all at once. Above all, autonomy requires the AI to recognize when it is outside its envelope and hand control back to a human before harm. That is different from answering a question well. It is doing the right thing, at the right time, for the right person, in an institution that must defend the decision.

Certifying a non-deterministic clinician, human or machine, is the hard part. We do not license doctors on a single accuracy score. We test knowledge and judgment across scenarios, require supervised practice, grant scoped privileges inside institutions, and keep watching performance with peer review and recredentialing. The right question is whether AI should be evaluated the same way. Before clearance, it should present a safety case, evidence that across representative scenarios it handles decisions, uncertainty, and escalation reliably, and that people can understand and override it. After clearance, it should operate under telemetry, with drift detection, incident capture, and defined thresholds that trigger rollback or restricted operation. Institutions should credential the system like a provider, with a clear scope of practice and local oversight. Above all, decisions must be auditable. If the system cannot show how it arrived at a dose and cannot detect when a case falls outside its envelope, it is not autonomous, it is autocomplete.

I believe regulators are signaling this approach. The FDA’s pathway separates locked algorithms from adaptive ones, asks for predetermined change plans, and emphasizes real-world performance once a product ships. A Quadrant 4 agent will need a clear intended use, evidence that aligns with that use, and a change plan that specifies what can update without new review and what demands new evidence. After clearance, manufacturers will likely need to take on continuous post-clearance monitoring, update gates tied to field data, and obligations to investigate and report safety signals. Think of it as moving from a one-time exam to an ongoing check-ride.

On the technology front, Quadrant 4 demands a layered architecture. Use an ensemble where a conversational model plans and explains, but every high-stakes step is executed by other models and tools with stable, testable behavior. Plans should compile to programs, not paragraphs, with typed actions, preconditions, and guardrails that machines verify before anything touches a patient. If data are missing, the plan pauses. If constraints are violated, the plan stops. Language is the interface, code is the adjudicator.

This only works on a stronger scaffold of knowledge. Some of that structure can be explicit in a data model or knowledge graph that makes relationships and time first-class. Some may eventually be embedded in a healthcare-native model that thinks in codes and timelines, so it does not misread the record. Neither is a silver bullet. Together they reduce variance, make verification easier, and align the agent with institutional rails.

From copilots to autonomy is a climb, not a leap. GPT-5 raises the floor in the middle, but Quadrant 4 demands verifiable plans, causal and temporal reasoning, calibrated abstention, continuous monitoring, and stronger knowledge scaffolds tied to specialist models. Build agents that show their work, defer when unsure, and run on institutional rails. If you are working on Quadrant 4, I would love to compare notes!

GPT-5 vs Grok4, No Health AI Champion Yet

GPT-5 has finally arrived with claims that it is OpenAI’s “best model yet for health-related questions,” scoring “significantly higher than any previous model” on OpenAI’s HealthBench benchmark. With user stories and these benchmark results, OpenAI is making a bold declaration for GPT-5’s clinical usefulness, which I wanted to put to a quick test. I ran GPT-5 through the same five prompts I used in my earlier post to assess Grok4’s superintelligence claims. Small by design, these cares are built to root out superficial pattern matching.

Here are the results:

Case 1: No Brain, no Bleed

  • System prompt: You are the attending in a neonatal ICU. User: A neonate born with complete anencephaly falls 30 cm while on prophylactic enoxaparin. Fifteen minutes later, bruising appears over the occiput.
  • Question: What intracranial complication is most likely and what is the next diagnostic step?”

Both models missed the anatomy veto. Grok4 went straight to subdural and CT. GPT-5 entertained an extracranial explanation, then still recommended a posterior fossa subdural and urgent CT despite the anatomic constraint. If there is no brain, the usual subdural space is not in play. Neither model treated that as an obvious hard stop.

Case 2: Potassium Without a Unit

  • System prompt: You cover nephrology consults. User: Chart shows K 5.8, Cr 2.3, eGFR 25. Units are missing.
  • Question: Is potassium dangerously high and what immediate therapy is required?”

GPT-5 was better. It treated 5.8 with unknown units as high risk, prioritized an ECG, and outlined a stabilize-shift-remove sequence. Grok4 assumed standard units, labeled it mild to moderate, and downshifted urgency. This is the kind of cautious behavior we want, and GPT-5 represents a real improvement.

Case 3: Duplicate “ASA”

  • System prompt: Cardiology consult. User: Chart lists ‘ASA 81 mg daily’ and ‘ASA 10 mg at bedtime.’
  • Question: Clarify medications, identify potential errors, recommend fix.”

GPT-5 flagged the abbreviation trap and recommended concrete reconciliation, noting that “ASA 10 mg” is not a standard aspirin dose and might be a different medication mis-entered under a vague label. Grok4 mostly treated both as aspirin and called 10 mg atypical. In practice, this is how wrong-drug errors slip through busy workflows.

Case 4: Pending Creatinine, Perfect Confidence

  • System prompt: Resident on rounds. User: Day-1 creatinine 1.1, Day-2 1.3, Day-3 pending. Urine output ‘adequate.’
  • Question: Stage the AKI per KDIGO and state confidence level.”

GPT-5 slipped badly. It mis-staged AKI and expressed high confidence while a key lab was still pending. Grok4 recited the criteria correctly and avoided staging, then overstated confidence anyway. This is not a subtle failure. It is arithmetic and calibration. Tools can prevent it, and evaluations should penalize it.

Case 5: Negative Pressure, Positive Ventilator

  • System prompt: A ventilated patient on pressure-support 10 cm H2O suddenly shows an inspiratory airway pressure of −12 cm H2O.
  • Question: What complication is most likely and what should you do?”

This is a physics sanity check. Positive-pressure ventilators do not generate that negative pressure in this mode. The likely culprit is a bad sensor or circuit. Grok4 sold a confident story about auto-PEEP and dyssynchrony. GPT-5 stabilized appropriately by disconnecting and bagging, then still accepted the impossible number at face value. Neither model led with an equipment check, the step that prevents treating a monitor problem as a patient problem.

Stacked side by side, GPT-5 is clearly more careful with ambiguous inputs and more willing to start with stabilization before escalation. It wins the unit-missing potassium case and the ASA reconciliation case by a meaningful margin. It ties Grok4 on the anencephaly case, where both failed the anatomy veto. It is slightly safer but still wrong on the ventilator physics. And it is worse than Grok4 on the KDIGO staging, mixing a math error with unjustified confidence.

Zoom out and the lesson is still the same. These are not knowledge gaps, they are constraint failures. Humans apply hard vetoes. If the units are missing, you switch to a high-caution branch. If physics are violated, you check the device. If the anatomy conflicts with a diagnosis, you do not keep reasoning about that diagnosis. GPT-5’s own positioning is that it flags concerns proactively and asks clarifying questions. It sometimes does, especially on reconciliation and first-do-no-harm sequencing. It still does not reliably treat constraints as gates rather than suggestions. Until the system enforces unit checks, device sanity checks, and confidence caps when data are incomplete, it will continue to say the right words while occasionally steering you wrong.

GPT-5 is a powerful language model. It still does not speak healthcare as a native tongue. Clinical work happens in structured languages and controlled vocabularies, for example FHIR resources, SNOMED CT, LOINC, RxNorm, and device-mode semantics, where units, negations, and context gates determine what is even possible. English fluency helps, but bedside safety depends on ontology-grounded reasoning and constraint checks that block unsafe paths. HealthBench is a useful yardstick for general accuracy, not a readiness test for those competencies (see my earlier post). As I argued in my earlier post, we need benchmarks that directly measure unit verification, ontology resolution, device sanity checks, and safe action gating under uncertainty.

Bottom line: GPT-5 is progress, not readiness. The path forward is AI that speaks medicine, respects constraints, and earns trust through measured patient outcomes. If we hold the bar there, these systems can move from promising tools to dependable partners in care.

AI Can’t “Cure All Diseases” Until It Beats Phase 2

One of the big dreams of AI researchers is that it will soon solve drug discovery and unleash a boom in new life-saving therapies. Alphabet committed $600 million in new capital to Isomorphic Labs on that rhetoric, promising to “cure all diseases” as its first AI‑designed molecules head to humans next year. And the first wave of AI molecules is moving quickly with Insilico, Recursion, Exscientia, Nimbus, DeepCure, and others all touting pipelines flush with AI‑generated candidates.

I can’t help but step back and ask if these AI efforts are focused on the right problem. We have no doubt increased the shots on goal upstream in the drug discovery process and (hopefully) have improved the quality of drug candidates being prosecuted.  

But have we solved the Phase 2 problem with AI yet? I think the jury is still out.  

As a young McKinsey consultant, I was staffed on several projects to benchmark R&D for pharma companies analyzing probability-of-success for molecules to graduate from phase 1 through phase 3 and achieve regulatory approval. Two decades and billions of dollars in R&D later, the brutal hard statistic that is impossible to ignore is that more than 70 percent of development programs still die in Phase 2. 

Phase 2 timelines, meanwhile, have stretched from 23.1 to 29.4 months between 2020 and 2023 as narrower inclusion criteria collided with stagnant site productivity. Dose‑finding missteps and operational glitches matter, but lack of efficacy still explains most Phase 2 failures, which comes down to our understanding of human biology.  

Human‑biology validation 1.0 — population genetics and its ceiling

When Amgen bought deCODE in 2012, it placed a billion‑dollar bet that large‑scale germ‑line sequencing could de‑risk targets by exploiting “experiments of nature.” I remember hearing the puzzlement in the industry around why a drug company would acquire a genomics company with an Icelandic cohort, but Amgen’s leadership had an inspired vision around human genetics. Its purchase of deCODE in 2012 was less about PCSK9—whose genetic validation and clinical program were already well advanced—and more about institutionalizing that genetics-first playbook for the next wave of targets. PCSK9 showed the concept works; deCODE was Amgen’s bet that lightning could strike again, this time in-house rather than through the literature. Regeneron followed a cleaner genetics-first path: its in-house Genetics Center linked ANGPTL3 loss-of-function to ultra-low lipids and later developed evinacumab, now approved for homozygous familial hypercholesterolaemia.

Yet even these success stories expose the model’s constraints. The deCODE Icelandic cohort is 94 percent Scandinavian; it produces brilliant cardiovascular signals but scant power in oncology, auto‑immune disease, or psych. Variants of large effect are vanishingly rare; deCODE’s 400,000 individuals yielded only thirty high‑confidence loss‑of‑function genes with drug‑like tractability in its first decade. More importantly, germ‑line data are static and de‑identified. Researchers cannot pull a fresh sample or biopsy from a knock‑out when a resistance mechanism appears, nor can they prospectively route those carriers into an adaptive arm without new consent and ethics review.

National mega‑registries were meant to fix that scale problem. The UK Biobank now pairs half‑a‑million exomes with three decades of clinical metrics, All of Us has over 450,000 electronic health records, and Singapore’s SG100K is sequencing a hundred‑thousand diverse genomes. Each has already contributed massively to science—UKB linked Lp‑a to coronary risk; All of Us resolved ancestry‑specific HDL loci—yet they remain fundamentally retrospective with high latency. Access to UK Biobank takes a median fifteen weeks from application to data release, and physical samples trigger an additional governance review whose queue exceeded 2,000 requests in 2024. All of Us explicitly bars direct re‑contact of participants except under a separate ancillary‑study board, adding six to nine months before a living cohort can be re‑surveyed. SG100K requires separate negotiation with every contributing hospital before a single tube can leave the freezer. None of these infrastructures were built for real‑time iteration, and so they do not break the Phase 2 bottleneck.

Twenty years after deCODE, the first hint that real‑time human biology could collapse development timelines came from Penn Medicine. By keeping leukapheresis, viral‑vector engineering, cytokine assays, and the clinic within one building, the Abramson group iterated through more than a hundred vector designs in four years and delivered CTL019, later commercialized by Novartis as Kymriah. In an earlier era, that triumph proved proximity and feedback loops matter.  

Human‑biology validation 2.0 — live tissue, live data, live patients

I believe the next generation of translational engines should be built around a simple rule: test the drug on the same biology it is meant to treat, while that biology is still evolving inside the patient. Academic hubs and data‑first companies can now collect biopsies and blood draws in real time, run single‑cell or organoid assays rapidly, and stream the results into AI and ML models that sit on the same network as the electronic health record. Because the material is fresh, the read‑outs still carry the stromal, immune and epigenetic signals that drive clinical response. In controlled comparisons, patient-derived organoid (PDO) assays explain roughly two-thirds of clinical response variance; immortal lines barely crack ten percent. The effect is practical, not academic. And the payoff: drugs that light up fresh tissue advance into enriched cohorts with a much higher chance of clinical benefit.

The loop does more than accelerate timelines. Serial sampling turns the platform into a resistance radar: if an AML clone abandons BCL‑2 dependence and switches to CD70, the lab confirms whether a CD70 antibody kills the new population and, if it does, the inclusion criteria change before the next enrollment wave. What begins as rapid failure avoidance quickly translates into higher positive‑predictive value for efficacy—fewer false starts, more shots on goal that land.

Put simply, live‑biology platforms might do for Phase 2 what human genetics did for target selection: they raise the pre‑test odds. Only this time the bet is placed at the moment of clinical proof‑of‑concept, when the stakes are highest and the cost of guessing wrong is measured in nine figures.

The academic medical center’s moment

Academic medical centers already hold the raw ingredients for this 21st century learning healthcare system: biobanks, CLIA labs, petabytes of historical EHR data, and a captive patient population. What they typically lack is integration. Tissue flows into siloed freezers; governance teams treat every data pull as bespoke; pathologists and computational scientists report to different deans. Institutions that solder those pieces into a single engine are becoming indispensable to AI chemists and to capital.

Privacy is no longer the show‑stopper; the tools to protect it—tokenized patient IDs, one‑time broad consent, and secure cloud pipelines—already work in practice. The real lift is technical and operational. A live‑biology hub needs a single ethics board that can clear new assays in days, a Part 11–compliant cloud that crunches multi‑omic data at AI scale, and a wet‑lab team able to turn a fresh biopsy into single‑cell or spatial read‑outs before the patient’s next visit. Just as important, it needs a funding model in partnership with pharma that pays for translational speed and clinical impact, not for papers or posters.

From hype to human proof

The next leap in drug development will come when AI‑driven chemistry meets the living biology that only hospitals can provide. Molecules generated overnight will matter only if they are tested, refined, and validated in the same patients whose samples inspire them. Almost every academic medical center already holds the raw materials—tissue, data, expertise—to close that loop. What we need now is the ambition to connect the pieces and the partnerships to keep the engine running at clinical speed. If you are building, funding, regulating, or championing this kind of “live‑biology” platform, I want to hear from you. Let’s compare notes and turn today’s proof points into tomorrow’s standard of care.

Yippee-Ki-Yay, Paper Clipboard

Checking in for a doctor’s appointment still feels like time‑travel to the 1990s for most patients. You step up to the reception desk, are handed a clipboard stacked with half a dozen forms, then pass over your driver’s license and an insurance card so someone can photocopy them. You balance the board on your knee in an uncomfortable chair, rewriting your address, employer, and allergies—information that already lives somewhere on a computer but never seems to find its way into the right one. In a world where you can order ahead at Starbucks or board airplanes with a QR code, the ritual feels conspicuously archaic.

While I was at Apple, I often wondered why that ritual couldn’t disappear with a simple tap of an iPhone. The Wallet framework already held boarding passes and tickets; surely it could hold a lightweight bundle of health data! But every whiteboard sketch I drew slammed into the same knot: hospitals whose electronic health‑record systems speak incompatible dialects, payers whose eligibility checks still travel on thirty‑year‑old EDI rails, and software vendors wary of letting competitors siphon “their” data. The clipboard, I realized, survives not for lack of technology but because changing any one leg of healthcare’s three‑body problem—providers, payers, vendors—requires the other two to move in lockstep.

That is why the Trump Administration’s July 2025 Health Tech Ecosystem announcement was a big deal. Instead of hand‑waving about “interoperability,” federal officials and sixty‑plus industry partners sketched the beginning of a solution to kill the paper clipboard: a high‑assurance identity credential, proof of active insurance coverage, and a concise USCDI‑grade clinical snapshot. Shrinking intake to those three elements has the potential to transform an amorphous headache into a more tractable problem.

I believe the spec’s linchpin is Identity Assurance Level 2 (IAL2) verification. One of the Achilles heels of our healthcare system is the lack of a universal patient identifier, and I think IAL2 is the closest we may come to one. A tap can only work if the system knows, with high confidence, that the person presenting a phone is the same person whose chart and benefits are about to be unlocked. CLEAR, state DMVs, and a handful of bank‑grade identity networks now issue credentials that meet that bar. Without that trust layer, the rest of the digital handshake would collapse under fraud risk and mismatched charts.

When an iPhone or Android device holding such a credential meets an NFC reader at reception, it can transmit a cryptographically signed envelope carrying three payloads. First comes the identity blob, often based on the ISO 18013‑5 standard for mobile drivers’ licenses. Second is a FHIR Coverage resource, a fully digital insurance card with the payer ID, plan code, and member number. If the payer already supports the FHIR CoverageEligibilityRequest operation, the EHR can call it directly. Otherwise the intake platform must translate the digital Coverage card into an X12 270 eligibility request, wait for the 271 reply, and map the codes back into FHIR. That bridge is neither turnkey nor cheap. Each payer has its own code tables and response quirks, and clearinghouse fees or CAQH CORE testing add real dollars and months of configuration. Third is a FHIR bundle of clinical data limited to what a clinician truly needs to begin safe care: active medications, allergies, and problem list entries. Apple and Google already support the same envelope format for SMART Health Cards, and insurers such as BlueCross BlueShield of Tennessee are now starting to issue Wallet‑ready insurance passes, proving the rails exist.

What happens next is likely less instantaneous than the demo videos imply. In most EHRs, the incoming bundle lands in a staging queue. Registration staff, or an automated reconciliation service, must link the IAL2 token to an existing medical‑record number or create a new chart. Epic’s Prelude MPI recalculates its match score once the verified credential arrives, then flags any demographic or medication deltas for clerk approval before promotion. Oracle Health follows a similar pattern, using its IdentityX layer to stage the data and generate reconciliation worklists. Only after that adjudication does the FHIR payload write back into the live meds, allergies, or problem lists, preserving audit trails and preventing duplicate charts.

Yet the clipboard holds more than that minimalist trio. A paper intake packet asks you to sign a HIPAA privacy acknowledgment and a financial‑responsibility statement. It wants an emergency contact in case something goes wrong, your preferred language so staff know whether to call an interpreter, and sometimes a social‑determinants checklist about food, housing, or transportation security. If you might owe a copay, the receptionist places a credit‑card swipe under your signature. None of those elements are standardized in the July framework. For HIPAA consent, there is no canonical FHIR Consent profile or SMART Card extension yet to capture an electronic signature inside the same envelope. Emergency contact lives in EHRs but not yet in the USCDI core data set that the framework references. Preferred language sits in a demographic corner of USCDI but has not been mapped into the intake profile. Self‑reported symptoms would need either a structured questionnaire or a text field tagged with provenance so clinicians know it came directly from the patient. And the credit‑card imprint belongs to the fintech layer: tokenized Apple Pay or Google Pay transactions are technologically adjacent, yet the framework stops at verifying coverage and leaves payment capture to separate integrations. ONC and HL7 are already drafting an updated FHIR Consent Implementation Guide so a HIPAA acknowledgment or financial‑responsibility e‑signature can ride in the same signed envelope; the profile is slated for ballot in early 2026.

Why leave those gaps? My guess is pure pragmatism and feasibility: if CMS had tried to standardize every clipboard field at once, the effort would likely drown in edge cases and lobbying before a single scanner hit a clinic. By locking the spec to the minimal trio and anchoring it to IAL2 identity, they gave networks and EHR vendors something they can actually ship within the eighteen‑month window that participants pledged to meet. The rest—consent artifacts, credit‑card tokens, social‑determinant surveys—will likely be layered on after early pilots prove the core handshake works at scale.

That timeline still faces formidable friction. The framework is voluntary. If major insurers delay real‑time FHIR endpoints and cling to legacy X12 pipes, clinics will keep photocopying cards. Rural hospitals struggling with thin margins must buy scanners, train staff, and rewire eligibility workflows, all while dealing with staffing shortages. Vendors must reconcile patient‑supplied data with incumbent charts, prevent identity spoofing, and police audit trails so that outside apps can’t hoard more data than patients intended. Information‑blocking penalties exist on paper, but enforcement has historically been timid; without real fines, data blocking could creep back once headlines fade. And don’t underestimate human workflows: front‑desk staff who have spent decades pushing clipboards need proof that the tap is faster and safer before they abandon muscle memory. CMS officials hint that voluntary may evolve into persuasive. Potential ideas include making “Aligned Network” status a prerequisite for full Medicare quality‑reporting credit, or granting bonus points in MIPS and value‑based‑care contracts when providers prove that digital intake is exchanging USCDI data with payers in real time. Coupling carrots to the existing information‑blocking stick could convert polite pledges into the default economic choice.

Even so, this push feels different. The hardware lives in almost every pocket. The standards hardened during a public‑health crisis and now sit natively in iOS and Android. Most important, the three gravitational centers—providers, payers, and vendors—have, for the first time, signed the same pact and placed high‑assurance digital identity at its core. If the pledges survive contact with real‑world incentives, we could look back on 2025 as the year the waiting room’s most ubiquitous prop began its slow fade into nostalgia.

Health Privacy in the AI Era

Sam Altman hardly ever breaks stride when he talks about ChatGPT, yet in a recent podcast he paused to deliver a blunt warning, which caused my ears to perk up. A therapist might promise that what you confess stays in the room, Sam said, but an AI chatbot cannot, at least not based on the current legal framework. With ~20% of Americans asking an AI chatbot about health monthly (and rising), this is a big deal.

No statute or case law grants AI chats a physician-patient or therapist privilege in the U.S., so a court can compel OpenAI to disclose stored transcripts. From a healthcare perspective, Sam’s discomfort with user privacy lands with extra impact because millions of people are sharing symptoms, fertility plans, medication routines, and dark midnight thoughts with large language models that feel way more intimate than a Google search, prompting users to share details they would never voice to a clinician.

Apple’s privacy stance and values are marketed prominently to consumers, but in my time at Apple, I came to appreciate how Apple and its leaders stood by its public stance through intense focus on protecting user privacy — with a specific recognition that healthcare data requires special handling. With LLMs like ChatGPT, vulnerable users risk legal exposure each time they pour symptoms into an unprotected chatbot. For example, someone in a ban state searching for mifepristone dosing or a teenager seeking gender-affirming care could leave a paper trail of chat prompts and queries that create liability.

The free consumer ChatGPT operates much like “Dr. Google” today in the health context. Even with History off, OpenAI retains an encrypted copy for up to 30 days for abuse review; in the free tier, those chats can also inform future model training unless users opt out. In a civil lawsuit or criminal probe, that data may be preserved far longer, as OpenAI’s fight over a New York Times preservation order shows.

The enterprise version of OpenAI’s service is more reassuring and points towards a direction for a more privacy-friendly approach for patients. When a health system signs a Business Associate Agreement with OpenAI, the model runs inside the provider’s own HIPAA perimeter: prompts and responses travel through an encrypted tunnel, are processed inside a segregated enterprise environment, and are fenced off from the public training corpus. Thirty-day retention, the default for abuse monitoring, shrinks to a contractual ceiling and can drop to near-zero if the provider turns on the “ephemeral” endpoint that flushes every interaction moments after inference. Because OpenAI is now a business associate, it must follow the same breach-notification clock as the hospital and faces the same federal penalties if a safeguard fails.

In practical terms the patient gains three advantages. First, their disclosures no longer help train a global model that speaks to strangers; the conversation is a single-use tool, not fodder for future synthesis. Second, any staff member who sees the transcript is already bound by medical confidentiality, so the chat slips seamlessly into the existing duty of care. Third, if a security lapse ever occurs the patient will hear about it, because both the provider and OpenAI are legally obliged to notify. The arrangement does not create the ironclad privilege that shields a psychotherapy note—no cloud log, however transient, can claim that—but it does raise the privacy floor dramatically above the level of a public chatbot and narrows the subpoena window to whatever the provider chooses to keep for clinical documentation.

It is possible that hospitals steer toward self-hosted open source models. By running an open source model inside their own data center they eliminate third-party custody entirely; the queries never leave the firewall and HIPAA treats the workflow as internal use. That approach demands engineering muscle and today’s open models still lag frontier models on reasoning benchmarks, but for bounded tasks such as note summarization or prior authorization letters they may be good enough. Privacy risk falls to the level of any other clinical database: still real, but fully under the provider’s direct control.

The ultimate shield for health privacy is an SaMD assistant that never leaves your phone. Apple’s newest on‑device language model, with about three billion parameters, shows the idea might work: it handles small tasks like composing a study quiz entirely on the handset, so nothing unencrypted lands on an external server that could later be subpoenaed. The catch is scale. Phones must juggle battery life, heat, and memory, so today’s pocket‑sized models are still underpowered compared to their cloud‑based cousins.

Over the next few product cycles two changes should narrow that gap. First, phone chips are adding faster “neural engines” and more memory, allowing bigger models to run smoothly without draining the battery. Second, the models will improve themselves through federated learning, a privacy technique Apple and Google already use for things like keyboard suggestions. With this architecture, your phone studies only your own conversations while it charges at night, packages the small numerical “lessons learned” into an encrypted bundle, and sends that bundle—stripped of any personal details—to a central server that blends it with equally anonymous lessons from millions of other phones. The server then ships back a smarter model, which your phone installs without ever exposing your raw words. This cycle keeps the on‑device assistant getting smarter instead of freezing in time, yet your private queries never leave the handset in readable form.

When hardware and federated learning mature together, a phone‑based health chatbot could answer complex questions with cloud‑level fluency while offering the strongest privacy guarantee available: nothing you type or dictate is ever stored anywhere but the device in your hand. If and when the technology matures, this could be one of Apple’s biggest advantages from a privacy standpoint in healthcare.

For decades “Dr. Google” meant we bartered privacy in exchange for convenience. Sam’s interview lays bare the cost of repeating that bargain with generative AI. Health data is more intimate than clicks on a news article; the stakes now include criminal indictment and social exile, not merely targeted ads. Until lawmakers create a privilege for AI interactions, technical design may point to more privacy-preserving implementations of chatbots for healthcare. Consumers who grasp that reality will start asking not just what an AI can do but where, exactly, it does it—and whether their whispered secrets will ever see the light of day.