Your Gut is a Value Function

Tim Cook once said the most important lesson he learned at Apple was to listen to his gut. That stuck with me, mostly because I had to learn it the hard way myself.

Early in my career, I thought “trust your gut” was code for “I don’t have the data.” Over time, I learned to listen to it, just as Cook describes doing at pivotal points in his own career. The decisions I regret most are the ones where I didn’t give voice to an uneasy feeling, and my best decisions have often been intuitive ones, made amid ambiguity and imperfect information.

It’s easy to dismiss this gut feeling as mystical, but now I realize it’s computational.

Your gut is a compressed summary of long‑horizon experiences that your conscious mind can’t read yet. In machine‑learning terms, it behaves a lot like a value function, the internal machinery that estimates how good or bad a situation is and where it’s likely to lead.
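To make the analogy concrete, here is a minimal sketch of what I mean by a value function, written as a toy reinforcement-learning object. Everything in it is illustrative and the names are mine; it shows the shape of the idea, not any real system.

```python
from collections import defaultdict

class ValueFunction:
    """Toy tabular value function: V(situation) = expected long-run outcome."""

    def __init__(self, learning_rate=0.1, discount=0.99):
        self.v = defaultdict(float)  # estimated long-run value per situation
        self.lr = learning_rate
        self.gamma = discount

    def update(self, situation, reward, next_situation):
        # Temporal-difference update: nudge the estimate toward the
        # reward we actually got plus the value of where we ended up.
        target = reward + self.gamma * self.v[next_situation]
        self.v[situation] += self.lr * (target - self.v[situation])

    def gut_feeling(self, situation):
        # A single scalar read-out: how good is this, and where does it lead?
        return self.v[situation]
```

The point of the sketch is the interface: years of experience get compressed into one scalar you can read instantly, without re-deriving the pros and cons.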

That idea turns “trust your gut” from self‑help cliché into a serious claim about how intelligence works. It clicked for me while listening to Ilya Sutskever talk about a patient who lost his emotions.

The man who couldn’t decide

In a fantastic recent interview, Ilya Sutskever recounts a famous case from the neurologist Antonio Damasio. A patient suffered damage to the region of the brain that processes emotion. After surgery, his IQ was normal. His memory was perfect. He could list the pros and cons of any option.

But his life fell apart.

He spent twenty minutes deciding which pen to use. He couldn’t prioritize. Without the machinery to feel the difference between “good” and “bad,” he was trapped in an infinite loop of reasoning.

Damasio’s conclusion was that we don’t use logic to value things. We use “somatic markers”—emotional tags attached to past experiences. When a similar situation arises, your body replays a trace of the feeling: regret, relief, shame. That physical response is a shortcut. It solves the stopping problem.
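To see why the scalar matters, here is a caricature in code (loose by design, and certainly not a model of the brain): a cached feeling per option turns an open-ended deliberation into a single pass that halts.

```python
# A caricature of the stopping problem, not a model of the brain.
# A cached scalar "feeling" per option gives deliberation a halting rule:
# score each option once, take the best, stop.

def choose(options, somatic_marker):
    # somatic_marker: option -> scalar, distilled from past outcomes.
    return max(options, key=somatic_marker)

pens = ["black ballpoint", "blue gel", "fountain"]
feelings = {"black ballpoint": 0.2, "blue gel": 0.7, "fountain": 0.4}

print(choose(pens, feelings.get))  # one pass, not twenty minutes
```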

As Sutskever suggests: emotions are the value function. Remove them, and you don’t get a super-rational agent. You get a system that can’t land the plane.

Why AI can’t fake this

This is exactly where the gap between human judgment and current AI lies.

Models read oceans of text to learn patterns. Then we use Reinforcement Learning to tell them what “good” looks like. But the rewards are short-term and dense: Did you solve the puzzle? Did the user like the summary?

That is a completely different animal from the human loop.

Your emotional value function is trained on the messy, long-term reality of your actual life. It integrates feedback that arrives years later—as a broken relationship, a career derailment, or the quiet satisfaction of doing the right thing. It associates tones of voice, deal structures, and clinical smells with outcomes that haven’t happened yet.
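A toy calculation makes the difference in training signal visible (all numbers invented): dense feedback grades every step, while long-horizon feedback is a long run of zeros followed by one outcome, which early steps inherit only through the discount.

```python
def discounted_returns(rewards, gamma=0.99):
    """Discounted return at each step: how much of the eventual
    outcome each moment is credited with."""
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.append(g)
    return list(reversed(returns))

# Dense, short-term feedback: a grade arrives after every step.
dense = [0.8, 0.6, 0.9, 0.7]     # "did the user like the summary?"

# Long-horizon feedback: silence, then one outcome much later.
delayed = [0.0, 0.0, 0.0, -5.0]  # the decision that quietly went bad

print(discounted_returns(dense))    # every step gets an immediate signal
print(discounted_returns(delayed))  # early steps inherit the outcome
                                    # only through the discount
```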

It’s not infallible—it carries bias and trauma—but it is the only model you have that has been trained on reality at scale.

Trying to approximate that with current AI training methods is like trying to learn “good parenting” from a dataset of multiple-choice quizzes. We give models crude rules: don’t be toxic, be helpful. That’s useful. But it’s nowhere near a value system that understands that a decision can be technically correct and still be completely wrong.

The intelligence of the gut

This matters most in domains with slow feedback and high stakes—strategy, medicine, policy.

AI is already a powerful tool for reasoning. It can out-read and out-simulate us, dig through mountains of data, and catch patterns you’d miss. But we shouldn’t confuse reasoning with judgment.

The pattern-recognition parts of our jobs are being automated. The piece that remains scarce is the long-horizon, emotionally anchored sense of what is actually worth doing.

“Trust your gut” isn’t an abandonment of reason; it’s a reminder that there is a layer of intelligence we haven’t yet reproduced in silicon. Your emotional life is a value function continuously trained by reality over years, while today’s AI systems still optimize short‑term, narrow proxies on curated benchmarks. For the decisions that actually shape a life or an organization, that quiet hum in your chest is not something we’re going to outsource anytime soon. It is the distinction between calculating what we can do, and knowing what we should do.