Strategy in the Age of Infinite Slop

“AI is going to replace McKinsey.”

It’s a popular dunk on AI Twitter. The logic is seductive: if a model can generate a Porter’s Five Forces diagram and a perfectly serviceable deck in seconds, why pay millions for a team of human analysts to take six weeks?

I spent more than ten years at McKinsey working on the exact problems assumed to be next on the chopping block: the ultimate open-ended questions of “where to play” and “how to win.” But looking at those problems through the lens of Andrej Karpathy’s concept of “verifiability,” I’ve come to the opposite conclusion: the closer you get to real strategy, the harder it is for AI to replace it.

The closer you get to real strategy, the less it resembles the tasks AI is good at.

Karpathy’s “Software 2.0” thesis is simple: AI mastery relies on a loop. Attempt a task, get a score, reset, repeat. If you can verify the outcome cheaply (Did the code compile? Did the math hold? Did you win the game?), the model can practice its way to superhuman performance.
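The loop is worth making concrete. Here is a toy sketch (my own illustration, not Karpathy’s actual training setup — the `verifier` and `practice` names are hypothetical): when the reward is cheap, instant, and binary, even the dumbest possible learner can grind its way to the right answer.

```python
def verifier(candidate, target):
    # The whole trick of a "high verifiability" domain:
    # a cheap, instant, binary reward. Right or wrong, no ambiguity.
    return 1 if candidate == target else 0

def practice(target, search_space=range(1000)):
    # A brute-force stand-in for a learner: attempt a task,
    # get a score, reset, repeat. No memory needed between attempts,
    # because the verifier tells us exactly when we've won.
    for candidate in search_space:
        if verifier(candidate, target) == 1:
            return candidate
    return None

print(practice(742))  # the loop converges on the verified answer
```

Strategy breaks this loop at the first line: there is no `verifier` you can call on “enter China” — the score arrives years later, tangled with everything else that happened.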

This explains why AI is crushing coding and math. These are “high verifiability” domains. The reward signal is crisp, binary, and instant.

Corporate strategy lives at the opposite extreme.

As a strategy consultant, when you advise a client to enter China, divest a legacy unit, or sell the company, you don’t get a clean error message immediately. You get a noisy stream of signals over five years. A competitor takes share with a new product. The macro environment shifts. A new CEO gets hired.

You cannot reset the world. You cannot run the A/B test. There is only one realized future and a graveyard of unknowable counterfactuals. And from the perspective of an AI training loop, that means the “reward signal” for any one decision is sparse, delayed, and hopelessly entangled with everything else. The pattern recognition behind “good strategy” develops over years, across many engagements and their eventual outcomes.

So, is AI useless in the boardroom?

Absolutely not. While AI cannot verify a strategy, it is unparalleled at generating the raw material for one.

Strategy is fundamentally a game of connecting dots across a massive, messy board. It requires looking at a mountain of proprietary data, market reports, and competitive intelligence, and spotting the pattern that others miss.

This is where modern LLMs shine. They act as a force multiplier for reasoning by analogy. A partner can ask a model to look at a B2B logistics problem and apply the “physics” of a consumer marketplace, or to search for historical parallels for the AI infrastructure buildout in 19th-century rail monopolies.

In this phase, the AI is not an oracle; it is a Disciplined Hallucinator. It provides the expanse: it widens the aperture from three conventional options to twenty wild ones. It does the “grinder” work of synthesis that used to burn out armies of business analysts. A lot of those options will be wrong, implausible, or “slop” in the eyes of critics, but in strategy, exploring wrong futures is often how you discover the few worth betting on.

But options are not decisions.

There is a distinct limit to how far this can go. As AI researchers like Yann LeCun argue, current LLMs are not “World Models.” They predict the next token in a sequence; they do not understand the underlying causal physics of reality. They cannot reason about cause and effect in a chaotic environment because they have no internal representation of how the world actually works.

They can simulate the text of a strategy, but they cannot simulate the reality of its execution.

This means the “Silicon Partner” isn’t arriving anytime soon. Until AI creates a true internal model of the world—one that understands human psychology, political friction, and temporal consequences—it remains a statistical engine, not a strategic one in the strong sense.

The Shift: From Processing to Judgment

As AI automates the verifiable layer of intelligence—the analysis, the synthesis, the slide-making—the value of the remaining bottleneck skyrockets.

That bottleneck is Judgment.

Judgment is the ability to look at the AI’s twenty generated options and intuitively know which three will survive contact with reality. It is the ability to stare down an irreversible decision where the “right” answer is mathematically unknowable—and act anyway.

We aren’t paying consultants to process information anymore. We are paying them to use these new instruments to hallucinate a better future, to have the courage to speak truth to power, and to own the risk of being wrong.