AI and the Prepared Mind: Engineering Luck in Drug Discovery

We are at a fascinating, paradoxical moment in the history of medicine. We stand in awe of a new AI-powered “Logic Engine” for drug discovery—a computational marvel like AlphaFold, which treats biology as an information system to be engineered. It promises a future of rational discovery. And yet, when we look at our most important medical breakthroughs, so many were not rationally designed. They were the result of messy, unpredictable, and entirely human processes: a happy accident, a surprising side effect, or a creative leap of intuition. This isn’t a story of one replacing the other. For me, it’s the story of how we build the bridge between them. The future, I believe, lies in marrying AI’s logic with this enduring human spark.

How has luck played out in drug discovery? Looking at some famous examples, I believe serendipity comes in three distinct flavors.

First, there is the Physical-World Accident. This is the classic tale of Alexander Fleming. He doesn’t hypothesize and then test; he returns from vacation to find a physical anomaly on a petri dish: a clear halo around a stray mold where bacteria wouldn’t grow. The breakthrough was not the idea; it was his prepared mind recognizing the profound significance of a simple, physical event.

Second, there is the Clinical Data Anomaly. This is the story of Viagra. Researchers at Pfizer were not looking for an erectile dysfunction drug; they were testing a new angina medication. But in the clinical trial data, they spotted a consistent, statistically significant “side effect.” Their genius was not in the drug’s design, but in their ability to see that this “failure” was, in fact, the drug’s true purpose.

And third, there is the rarest and most powerful form: the Cross-Domain Synthesis. This is the almost-mythical origin of the GLP-1 drugs. In the 1980s, Dr. John Eng, an endocrinologist at the VA, was grappling with the dangerous, real-world clinical problem of hypoglycemia in his diabetic patients. His deep “embodied context” of this problem led his curiosity to a non-obvious place: the venom of the Gila monster. He made a creative, “analogical leap,” betting that a creature that could feast and then fast for months must have a powerful metabolic regulator. He was right, and this single, human-driven hypothesis proved the therapeutic principle that led to the multi-billion-dollar GLP-1 field, from exenatide to Ozempic and Mounjaro.

Dr. Eng’s leap of intuition was not a brute-force data search; it was an act of wisdom. “Embodied context” is the sum of lived, physical, sensory, and experience-based intuition. This, to me, is the undigitized data missing from everything we currently use to train AI: the “gut feeling” of a 30-year veteran clinician, the intuition born from seeing, touching, and feeling a problem.

This is not just a poetic concept. It is the data that isn’t in the database: the specific sound of a patient’s cough, the feel of a tumor’s texture, the non-verbal cues a patient gives, or the “gut feeling” that connects a skin rash to a GI symptom seen months prior—a non-obvious, low-signal pattern. An AI, no matter how powerful, is a disembodied logic system. Its “experience” is limited to the digital archive of human knowledge. It has read the map; it has not walked the territory.

Dr. Eng’s leap was not driven by data alone; it was driven by purpose. He had witnessed the “litany of horrors” of his patients’ suffering. That context, which exists in no database, is what aimed his curiosity. It allowed him to connect three disparate domains: the clinical problem (hypoglycemia), the zoological trait (the lizard’s metabolic stability), and the mechanistic hunch (venom). An AI, lacking this embodied context, would have no reason to see this as anything but a low-probability statistical correlation.

Now, one could argue that this “embodied context” is just a polite word for human bias, the very thing a logic engine is designed to eliminate. This is not wrong; intuition is notoriously flawed. But this is precisely why the partnership is essential. The loop’s purpose is not to blindly trust human wisdom; it is to interrogate it. The human provides the testable, experience-based hypothesis; the AI and the lab provide the objective, high-throughput validation.

But depending on rare, human-driven leaps is not a reliable strategy. It is slow and random, and it’s why, in my view, our industry has been trapped by the brutal economics of Eroom’s Law: the observation that the inflation-adjusted R&D cost per approved drug has risen exponentially for decades, driven by a catastrophic “valley of death” in clinical trials where the vast majority of drug candidates fail.
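To make that exponential concrete: Eroom’s Law is usually quoted as the number of new drugs approved per inflation-adjusted billion dollars of R&D halving roughly every nine years. Here is a back-of-the-envelope sketch in Python; the 1950 baseline figure is a placeholder for illustration, not a measured value.

```python
# Illustrative arithmetic for Eroom's Law: approvals per inflation-adjusted
# $1B of R&D halving roughly every nine years. The baseline below is a
# placeholder, not a measured value.
HALVING_TIME_YEARS = 9.0

def approvals_per_billion(year, baseline_year=1950, baseline_rate=30.0):
    """Hypothetical approvals per $1B of R&D under a constant halving time."""
    return baseline_rate * 0.5 ** ((year - baseline_year) / HALVING_TIME_YEARS)

for year in (1950, 1980, 2010):
    rate = approvals_per_billion(year)
    print(f"{year}: ~{rate:.1f} approvals per $1B  (~${1000 / rate:,.0f}M per approval)")
```

Even with these toy numbers, sixty years of halving multiplies the cost of a single approval roughly a hundredfold. That is the economic wall the Logic Engine was built to break.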

This is the problem the AI-powered “Logic Engine” was built to solve. It is a revolutionary solution to “Bad Chemistry.” By designing the perfect molecular “key” in silico, it ensures a drug is potent, specific, and far less likely to be toxic. But these perfect keys may still hit the Phase 2 wall. They are colliding with “Bad Biology.” A perfect key for the wrong lock is still a failure. Even today’s intelligently designed blockbusters, from Keytruda to Ozempic, owe their massive success to unexpected clinical findings—like breakthrough weight-loss or cardio-renal benefits—that were discovered serendipitously, long after the initial design.

I firmly believe a purely in silico model is not enough. A “digital twin” or simulation, trained only on our current, incomplete data, is merely a sophisticated mirror of our existing ignorance. It’s an echo chamber. A purely computational AI would have been blind to Fleming’s mold, dismissed Viagra’s side effect as noise, and never possessed the creative, context-driven curiosity to make Dr. Eng’s leap.

This is why we must complement the Logic Engine with another type of system: a data-fueled Serendipity Engine, built from three core components. Across the biotech ecosystem, many are actively building those components: the high-fidelity data “brain,” the automated “body” of human-relevant lab models, and the “nervous system” feedback loop. But a truly integrated, closed-loop system is not yet a reality; much work remains to connect these parts into a seamless whole.

First, it needs a “brain.” This is the Multimodal Data Foundation. To find human targets, it must learn from human data, building a high-fidelity map of disease as it actually exists, integrating genomics, proteomics, longitudinal clinical records, and real-world outcomes.

But a brain is not enough. It needs a “body.” This is the Human-Relevant Experimental Layer. The AI’s in silico predictions must be tested not in a simulation but in a fully automated, high-throughput lab: a biobank of patient-derived organoids, complex cell models, and organ-chips that actually recapitulate human physiology, not a mouse’s.

Finally, we must build its “nervous system”: a Closed Feedback Loop. This loop connects the brain and the body. The AI designs an experiment, the lab runs it on the physical models, and the real-world experimental results are fed back to the AI. The system learns, updates its map of biology, and designs the next experiment.
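To make the three components concrete, here is a minimal, hypothetical sketch in Python of how a brain, body, and nervous system might connect. Every class, method, and threshold below is invented for illustration; it is not a description of any existing platform.

```python
import random
from dataclasses import dataclass

@dataclass
class Hypothesis:
    """A candidate target/molecule pairing proposed by the 'brain'."""
    target: str
    molecule: str
    predicted_effect: float  # in silico prediction

@dataclass
class ExperimentResult:
    """What the automated 'body' actually measured."""
    hypothesis: Hypothesis
    observed_effect: float
    anomaly: bool  # flagged when observation diverges sharply from prediction

class MultimodalBrain:
    """The data foundation: proposes hypotheses and learns from results."""
    def __init__(self):
        self.knowledge = []

    def propose(self, n):
        # Placeholder: a real system would rank candidates against genomics,
        # proteomics, longitudinal clinical records, and real-world outcomes.
        return [Hypothesis(f"target_{i}", f"mol_{i}", predicted_effect=0.5)
                for i in range(n)]

    def update(self, results):
        # Placeholder for retraining on fresh experimental evidence.
        self.knowledge.extend(results)

class HumanRelevantLab:
    """The experimental layer: organoids, complex cell models, organ-chips."""
    def run(self, hypotheses):
        results = []
        for h in hypotheses:
            observed = random.random()  # stand-in for a physical assay readout
            results.append(ExperimentResult(
                h, observed, anomaly=abs(observed - h.predicted_effect) > 0.3))
        return results

def closed_loop(brain, lab, cycles=3):
    """The nervous system: design -> test -> feed back -> redesign."""
    for _ in range(cycles):
        batch = brain.propose(n=10)
        results = lab.run(batch)
        brain.update(results)
        # Surface the surprises for the human "prepared mind" to inspect.
        yield [r for r in results if r.anomaly]

for cycle, anomalies in enumerate(closed_loop(MultimodalBrain(), HumanRelevantLab()), 1):
    print(f"Cycle {cycle}: {len(anomalies)} anomalies worth a human look")
```

The point of the sketch is the shape of the loop, not the internals: the engine’s most valuable output is not its confirmed predictions but the anomalies it surfaces for a human to interrogate.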

If we build this perfect, closed-loop system, what happens when it becomes an AGI? What happens when it can formulate its own novel hypotheses? Is the human “prepared mind” finally and fully disintermediated?

The answer, I speculate, is no. At least, not for the foreseeable future. An AGI, no matter how powerful, seems to be the ultimate “what” and “how” engine. It can find correlations and model mechanisms with superhuman speed. But it may remain a stranger to the “so what?” This AGI Scientist will not leave us short of work; it could instead create a new, paralyzing problem: an overload of tens of millions of valid, novel, and testable hypotheses. Which one matters? Which of these is a fascinating biological quirk, and which one, if pursued, would change the lives of millions? The AGI, as a pure optimization system, may not inherently know the difference. It can rank hypotheses by p-value or predicted novelty, but it is unlikely to be able to rank them by true human significance.
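To illustrate that triage problem, here is a deliberately simplified sketch (the scoring function and weights are invented): the engine can compute statistical strength and novelty on its own, but the relevance term has to be supplied by a human.

```python
import math
from dataclasses import dataclass

@dataclass
class ScoredHypothesis:
    description: str
    p_value: float                # the engine can compute this
    novelty: float                # the engine can estimate this (0 to 1)
    human_relevance: float = 0.0  # only the "prepared mind" supplies this (0 to 1)

def machine_score(h):
    """What the engine can rank by on its own: statistical strength x novelty."""
    return -math.log10(h.p_value) * h.novelty

def triage(hypotheses, top_k=5):
    """Final ranking folds in the human relevance judgment."""
    return sorted(hypotheses,
                  key=lambda h: machine_score(h) * (0.1 + h.human_relevance),
                  reverse=True)[:top_k]
```

The small constant keeps unreviewed hypotheses from vanishing entirely, but the ordering that matters is set by the human term: the machine says it is novel; the human says it is relevant.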

Elon Musk offered what I thought was a powerful analogy about the role of human beings in an AGI world. He noted that our cortex (thinking/planning) constantly strives to satisfy our limbic system (instincts/feelings). Perhaps, he suggested, this is how it will be with AI. The AI is the ultimate, boundless cortex, but we are what gives it meaning. We are the “limbic system” it serves. This, I believe, offers a framework for how to think about human scientists in an AGI world for drug discovery. And this is where human wisdom and “embodied context” become the most valuable commodity in the system. This context isn’t just the clinician’s (like Dr. Eng’s). It is also the hard-won wisdom of the “drug hunter”: someone like Al Sandrock, who undoubtedly developed an intuition for biological signal.

The future, then, may not be an AI scientist working alone. The human’s new, and perhaps final, role is to be the “prepared mind” that our Serendipity Engine is built to serve. This role, in effect, scales the intuition of the veteran drug hunter with the brute-force logic of the AI. Our job is not to find all the answers, but to stand at the dashboard of this vast Serendipity Engine, ask the right questions, and point to a single anomaly, saying:

“That one. The AI says it’s novel, but my experience tells me it’s relevant.”

In the end, the AI is the ultimate “what” and “how” engine. The human, I believe, will always be the “so what?”
