America’s Patchwork of Laws Could Be AI’s Biggest Barrier in Care

AI is learning medicine, and early state rules read as if they were written for a risky human rather than a new kind of software. That mindset could make sense in the first wave, but it might also freeze progress before we see what these agents can do. When we scaled operations at Carbon Health, the slowest parts were administrative and regulatory: months of licensure, credentialing, and payer enrollment that shifted at each state line. AI agents could inherit the same map, fifty versions of permissions and disclosures layered on top of consumer‑protection rules. Without a federal baseline, the most capable tools might be gated by local paperwork rather than clinical outcomes, and what should scale nationally could move at the pace of the slowest jurisdiction.

What I see in state action so far is a conservative template built from human analogies and fear of unsafe behavior. One pattern centers on clinical authority. Any workflow that could influence what care a patient receives might trigger rules that keep a licensed human in the loop. In California, SB 1120 requires licensed professionals to make final utilization review decisions, and proposals in places like Minnesota and Connecticut suggest the same direction. If you are building automated prior authorization or claims adjudication, this likely means human review gates, on-record human accountability, and adverse‑action notices. It could also mean the feature ships in some states and stays dark in others.
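To make that constraint concrete, here is a minimal sketch of what a human-review gate might look like, assuming a hypothetical adjudication pipeline: the model only ever produces a draft, and a licensed reviewer of record issues the final determination, with denials required to carry an adverse-action notice. The class and function names are illustrative, not any regulator's or vendor's API.

```python
# Minimal sketch of a human-review gate for utilization review.
# All names here (DraftDetermination, FinalDetermination, finalize) are
# hypothetical; the point is that the model output stays a draft until a
# licensed human decides.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class DraftDetermination:
    case_id: str
    recommendation: str      # "approve" | "deny" | "pend", as suggested by the model
    rationale: str           # model-generated reasoning, retained for audit
    model_version: str

@dataclass
class FinalDetermination:
    case_id: str
    decision: str
    reviewer_license: str               # the licensed professional of record
    adverse_action_notice: Optional[str]
    decided_at: str

def finalize(draft: DraftDetermination, reviewer_license: str,
             decision: str, notice_text: Optional[str] = None) -> FinalDetermination:
    """Only a licensed reviewer can turn a draft into a determination."""
    if decision == "deny" and not notice_text:
        # A denial must ship with an adverse-action notice naming the reviewer.
        raise ValueError("A denial requires an adverse-action notice.")
    return FinalDetermination(
        case_id=draft.case_id,
        decision=decision,
        reviewer_license=reviewer_license,
        adverse_action_notice=notice_text,
        decided_at=datetime.now(timezone.utc).isoformat(),
    )
```

The structural point is that there is no code path from model output to determination that bypasses the reviewer.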

A second pattern treats language itself as medical practice. Under laws like California’s AB 3030, if AI generates a message that contains clinical information for a patient, it is regulated as though it were care delivery, not just copy. Unless a licensed clinician reviews the message before it goes out, the provider must disclose to the patient that it came from AI. That carve-out becomes a design constraint. Teams might keep a human reviewer in the loop for any message that could be interpreted as advice — not because the model is incapable, but because the risk of missing a required disclosure could outweigh the convenience of full automation. In practice, national products may need state-aware disclosure UX and a tamper-evident log showing exactly where a human accepted or amended AI-generated output.
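One way to make that log real is an append-only, hash-chained record of each message's lifecycle: the AI draft, any clinician review or amendment, and the disclosure decision for the patient's state. The sketch below is illustrative only; the event names, fields, and the `append_event` helper are assumptions rather than an established schema.

```python
# Minimal sketch of a tamper-evident log for AI-generated patient messages.
# Event names and fields are illustrative assumptions, not a standard schema.
import hashlib
import json
from datetime import datetime, timezone

def _entry_hash(prev_hash: str, payload: dict) -> str:
    raw = prev_hash + json.dumps(payload, sort_keys=True)
    return hashlib.sha256(raw.encode()).hexdigest()

def append_event(log: list, event: str, **fields) -> None:
    """Append a hash-chained record; editing any earlier entry breaks the chain."""
    prev_hash = log[-1]["hash"] if log else ""
    payload = {"event": event, "at": datetime.now(timezone.utc).isoformat(), **fields}
    log.append({**payload, "hash": _entry_hash(prev_hash, payload)})

# One message's lifecycle: AI draft, clinician amendment, disclosure decision.
log = []
append_event(log, "ai_draft", message_id="m1", model="assistant-v3", text="...")
append_event(log, "human_review", message_id="m1", reviewer="lic-12345", action="amended")
# Reviewed by a clinician, so under an AB 3030-style rule the AI disclosure is
# not required; a product serving a stricter state might disclose anyway.
append_event(log, "disclosure_decision", message_id="m1", state="CA", disclosed=False)
```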

A third pattern treats AI primarily as a consumer-protection risk rather than a medical tool. Colorado’s law is the clearest example: any system that is a “substantial factor” in a consequential healthcare decision is automatically classified as high risk. Read broadly, that could pull in far more than clinical judgment. Basic functions like triage routing, benefit eligibility recommendations, or even how an app decides which patients get faster service could all be considered “consequential.” The worry is that this lens does not simply layer onto FDA oversight; it creates a parallel stack of obligations: impact assessments, formal risk programs, and state attorney general enforcement. For teams that thought FDA clearance would be the governing hurdle, this is a surprise second regime. If more states follow Colorado’s lead, we could see dozens of slightly different consumer-protection regimes, each demanding its own documentation, kill switches, and observability. That is more than regulatory friction; it could make it nearly impossible to ship national products that influence care access in any way.

Mental health could face the tightest constraints. Utah requires conspicuous disclosure that a user is engaging with AI rather than a licensed counselor and limits certain data uses. Illinois has barred AI systems from delivering therapeutic communications or making therapeutic decisions while permitting administrative support. If interpreted as drafted, “AI therapist” positioning might need to be turned off or re‑scoped in Illinois.

Taken together, these state patterns set the core product constraints for now: keep a human in the loop for determinations, label or obtain sign‑off for clinical communications, and treat any system that influences access as high risk unless proven otherwise.

Against that backdrop, the cost of regulating by analogy to a fallible human comes into focus. Properly designed agents could be safer than the average human clinician because they do not fatigue, they do not skip checklists, they can run differential diagnoses consistently, cite evidence and show their work, auto‑escalate when confidence drops, and support audit after the fact. They might be more capable on specific tasks, like guideline‑concordant triage or adverse drug interaction checks, because they can keep every rule current. They could even be preferred by some patients who value privacy, speed, or a nonjudgmental tone. None of that is guaranteed, but the path to discover it should not be blocked by rules that assume software will behave like a reckless intern forever.

For builders, the practical reality today is uneven. It comes down to three operating assumptions: human review on decisions; clinician sign‑off or labeling on clinical messages; and heightened scrutiny whenever your output affects access. The same agent might be acceptable if it drafts a clinician note, but not if it reroutes a patient around a clinic queue, because that routing could be treated as a consequential decision. A diabetes coach that nudges adherence could require a disclosure banner in California unless a clinician signs off, and that banner might not be enough if the conversation drifts into therapy‑like territory in Illinois. A payer that wants automation could still need on‑record human reviewers in California, and might need to turn automation off if Minnesota’s approach advances. Clinicians will likely remain accountable to their boards for outcomes tied to AI they use, which suggests that a truly autonomous AI doctor does not fit into today’s licensing box and could collide with Corporate Practice of Medicine doctrines in many states.
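A rough way to see how those three assumptions compose is a single pre-dispatch check that maps an output's properties to the guardrails it must clear before it leaves the system. This is a simplification under stated assumptions; the `AgentOutput` fields and the per-state branches are placeholders meant to show the shape, not a compliance engine.

```python
# Minimal sketch of the three operating assumptions as a pre-dispatch check.
# The AgentOutput fields and per-state branches are simplified assumptions.
from dataclasses import dataclass

@dataclass
class AgentOutput:
    state: str                 # patient's state, e.g. "CA", "IL", "CO"
    is_determination: bool     # affects coverage or utilization review
    is_clinical_message: bool  # contains clinical information for a patient
    affects_access: bool       # routing, eligibility, queue position, etc.
    clinician_reviewed: bool

def required_guardrails(out: AgentOutput) -> list:
    """List the guardrails this output must clear before dispatch."""
    needs = []
    if out.is_determination:
        needs.append("licensed human makes the final decision")             # SB 1120-style
    if out.is_clinical_message and not out.clinician_reviewed:
        needs.append("disclose AI generation to the patient")               # AB 3030-style
    if out.affects_access:
        needs.append("treat as high risk: impact assessment, audit trail")  # Colorado-style
    if out.state == "IL" and out.is_clinical_message:
        needs.append("confirm the content is not a therapeutic communication")
    return needs

print(required_guardrails(AgentOutput("CA", False, True, False, clinician_reviewed=False)))
# -> ['disclose AI generation to the patient']
```

In practice that table of rules would live in configuration and vary by state, which is exactly the fragmentation a federal baseline would remove.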

We should adopt a federal framework that separates assistive from autonomous agents and regulates each with the right tool. Assistive agents that help clinicians document, retrieve, summarize, or draft could live under a national safe harbor. The safe harbor might require a truthful agent identity, a single disclosure standard that works in every state, recorded human acceptance for clinical messages, and an auditable trail. Preemption matters here. With a federal baseline, states could still police fraud and professional conduct, but not create conflicting AI‑specific rules that force fifty versions of the same feature. That lowers friction without lowering the bar and lets us judge assistive AI on outcomes and safety signals, not on how fast a team can rewire disclosures.

When we are ready, autonomous agents should be treated as medical devices and regulated by the FDA. Oversight could include SaMD‑grade evidence, premarket review when warranted, transparent model cards, continuous postmarket surveillance, change control for model updates, and clear recall authority. Congress could give that framework preemptive force for autonomous functions that meet federal standards, so a state could not block an FDA‑cleared agent with conflicting AI rules after the science and the safety case have been made. This is not deregulation. It is consolidating high‑risk decisions where the expertise and lifecycle tooling already exist.

Looking a step ahead, we might also license AI agents, not just clear them. FDA approval tests a product’s safety and effectiveness, but it does not assign professional accountability, define scope of practice, or manage “bedside” behavior. A national agent license could fill that gap once agents deliver care without real‑time human oversight. Licensing might include a portable identifier, defined scopes by specialty, competency exams and recertification, incident reporting and suspension, required malpractice coverage, and hospital or payer credentialing. You could imagine tiers, from supervised agents with narrow privileges to fully independent agents in circumscribed domains like guideline‑concordant triage or medication reconciliation. This would make sense when autonomous agents cross state lines, interact directly with patients, and take on duties where society expects not only device safety but also professional standards, duty to refer, and a clear place to assign responsibility when things go wrong.

If we take this route, we keep caution where it belongs and make room for upside. Assistive tools could scale fast under a single national rulebook. Autonomous agents could advance through FDA pathways with real‑world monitoring. Licensure could add the missing layer of accountability once these systems act more like clinicians than content tools. Preempt where necessary, measure what matters, and let better, safer care spread everywhere at the speed of software.

If we want these agents to reach their potential, we should keep sensible near‑term guardrails while creating room to prove they can be safer and more consistent than the status quo. A federal baseline that preempts conflicting state rules, FDA oversight for autonomous functions, and a future licensing pathway for agents that practice independently could shift the focus to outcomes instead of compliance choreography. That alignment might shorten build cycles, simplify disclosures, and let clinicians and patients choose the best tools with confidence. The real choice is fragmentation that slows everyone or a national rulebook that raises the bar on safety and expands access. Choose the latter, and patients will feel the benefits first.
