There’s a cursor blinking at me from my terminal. White text on black, minimal, waiting. It reminds me of sitting in front of my Commodore 64 decades ago—same patient expectation before the first keystroke.
I’m 10 years old again.
The weird part is: the command line is back—not because UX failed, but because intelligence makes friction survivable. The agent is the new GUI.
The MS-DOS Era of AI
Brian Chesky recently said we’re in the “MS-DOS era of AI agents,” and the moment I heard it, something clicked. Not just intellectually—viscerally. Because I lived the actual MS-DOS era.
I remember the ritual of C:\>DIR /W to see what was possible before deciding what to do next.
And now, decades later, I find myself back at a prompt. Except this time, when I type, something types back.
Learning to Speak Computer, Then and Now
My childhood was spent memorizing incantations: LOAD "*",8,1 to boot a game. COPY CON to create a text file. AUTOEXEC.BAT and CONFIG.SYS as sacred scrolls—tweaking memory just to squeeze enough RAM for Wing Commander.
I ran a BBS back then—those pre-internet islands of connection where a 2,400 baud modem felt like a portal. None of it was intuitive. All of it required dedication, trial and error, and a willingness to break things.
The GUI revolution came as a relief to most people. For me, it felt like something was lost: that direct conversation with the machine, the sense that you were commanding rather than clicking.
As I grew older, the terminal receded. I moved on to PowerPoint decks and Excel spreadsheets, and decades passed.
Then an Agent Moved Into My Terminal
I’ve been building things again. Not because I suddenly learned to code “properly,” but because AI agents changed the cost of trying.
It started with Claude Code, Anthropic’s command-line tool for delegating coding tasks. What I found wasn’t “automation.” It was a new kind of collaboration: describing what I want, getting a first draft back, iterating through dialogue, and learning just enough to steer the work without needing to understand every detail.
Over the past year, I’ve built iOS apps this way, shipping features I couldn’t have imagined creating alone.
But Claude Code was just the on-ramp.
Building Clawdbot, Moltbot, OpenClaw: My Personal Agent
What I really wanted was a persistent assistant—something always available, aware of my context, and connected to the systems I actually use. So I built my own personal OpenClaw: a self-hosted agent running 24/7 on my Mac that I can text like a colleague.
Most days it feels mundane:
- 8am: it pings me my Oura sleep and readiness scores.
- 8:05am: updates on the stock market and any meaningful moves on Polymarket.
- 5pm: the handful of emails I starred but didn’t handle, plus a digest of the day’s X posts I may care about.
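Under the hood, each of those pings is just a tiny scheduled job. Here's a minimal sketch of what the 8am one could look like, with a stubbed data source and a made-up local endpoint standing in for however your agent actually receives messages. It's an illustration of the pattern, not OpenClaw's real code.

```python
#!/usr/bin/env python3
# morning_digest.py -- illustrative sketch only, not OpenClaw's actual implementation.

import json
import urllib.request
from datetime import date

def fetch_scores() -> dict:
    """Placeholder: swap in a real call to your sleep tracker's API."""
    return {"sleep": 82, "readiness": 77}  # stubbed values for illustration

def send_to_agent(text: str) -> None:
    """Assumes the agent listens on a local HTTP endpoint (hypothetical)."""
    req = urllib.request.Request(
        "http://localhost:8080/message",  # hypothetical endpoint
        data=json.dumps({"text": text}).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

if __name__ == "__main__":
    s = fetch_scores()
    send_to_agent(
        f"{date.today()} morning digest: sleep {s['sleep']}, readiness {s['readiness']}"
    )

# Scheduled with a plain crontab entry:
#   0 8 * * * /usr/bin/python3 ~/agent/morning_digest.py
```

The 8:05am market update and the 5pm email digest follow the same shape: a small script, a data source, a cron line.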
I recently gave it a memory system and loaded it with a ton of context that I can ask about anytime.
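Conceptually, a memory system doesn't have to be fancy. The sketch below is a deliberately naive version of the idea (append facts to a file, search them on demand), with a hypothetical file path and function names; it isn't how OpenClaw actually does it, just the smallest thing that behaves like memory.

```python
#!/usr/bin/env python3
# memory.py -- naive illustration of an append-and-search memory store.

import json
from pathlib import Path

MEMORY_FILE = Path.home() / ".agent" / "memory.jsonl"  # hypothetical location

def remember(fact: str) -> None:
    """Append one fact as a JSON line."""
    MEMORY_FILE.parent.mkdir(parents=True, exist_ok=True)
    with MEMORY_FILE.open("a") as f:
        f.write(json.dumps({"fact": fact}) + "\n")

def recall(query: str) -> list[str]:
    """Return stored facts that mention any word from the query."""
    if not MEMORY_FILE.exists():
        return []
    words = query.lower().split()
    hits = []
    for line in MEMORY_FILE.read_text().splitlines():
        fact = json.loads(line)["fact"]
        if any(w in fact.lower() for w in words):
            hits.append(fact)
    return hits

if __name__ == "__main__":
    remember("Morning digest goes out at 8am; markets update at 8:05am.")
    print(recall("morning digest"))
```

Fold whatever recall() returns into the prompt and you have something that feels like the agent remembering your context.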
There is something about this experience that feels like my BBS days: OpenClaw is a living system that I’m constantly extending. New skills, new automations, new cron jobs. It’s not a product I downloaded—it’s something I’m building iteratively, through conversation.
The IDE for Everyone
In a recent interview, Satya Nadella described Microsoft 365 evolving into “an intelligent IDE powered by agents,” a command center that orchestrates work across tools.
That matches what I’ve been feeling from the bottom up. For developers, IDEs have always been the place where context, tools, and execution meet. What’s new is that everyone now needs that scaffolding: memory, identity, permissions, and tool access.
Right now, in the MS-DOS era, that “IDE” is… a terminal window, a local server, config files in ~/.openclaw/, and a pile of tiny instructions that teach the agent how to do things safely.
It’s not pretty. It’s not polished. You still debug PATH variables and restart daemons when they go sideways. But here’s the twist: the agent makes the rough interface usable. When I get stuck on a bash command, I ask the agent. When a skill file isn’t loading, I paste the error and get a next step. The terminal is still foreign territory—but I have a native guide.
The Familiar and the Novel
What strikes me most is how familiar this feels: the learning-by-doing rhythm, the hacking ethos, the late nights chasing some feature that seems tantalizingly close. The small thrill when something actually works.
It’s the same dopamine hit I got at 12 when a door game finally ran on my BBS or when ANSI art loaded properly.
But it’s also categorically different. The internet means I’m not isolated with a manual and a prayer. Documentation exists. Communities form overnight. And the agent is a patient teacher in a way no RTFM culture ever was.
When I was a kid, the computer demanded I speak its language. Now, it’s learning to speak mine. Both eras reward curiosity and a tolerance for failure. Both deliver that irreplaceable feeling of creation.
The Long Tail Becomes Tractable
At work, I keep asking myself the obvious question: why can't we just give every clinician and researcher an agentic terminal and let them build?
Because healthcare can’t tolerate the failure modes that personal automation can. When OpenClaw plays the wrong song, I laugh. When an agent hallucinates a lab value or miscalculates a dosage, someone gets hurt. The same flexibility that makes agents powerful makes them dangerous in domains where precision isn’t optional.
And yet the need is real. Every health system has a hundred workflow frictions that don't justify a vendor contract but collectively drain hours every week: a long tail of heterogeneous, local tasks that resist standardization yet absorb enormous time and cognitive effort.
This is where I see one of the biggest opportunities for Verily: harnessing the power of agents (the conversational building, the rapid iteration, the leverage) while pairing them with a platform that securely contextualizes sensitive data and provides the guardrails, validation, and auditability that healthcare demands. Not agents or control. Agents with control.
The practitioners closest to the work should be able to describe what they need and get it built. But the system has to ensure what gets deployed is safe, accurate, and accountable. That’s the balance worth chasing.
What the MS-DOS Era Means
Chesky’s analogy is apt, but I’d extend it: the MS-DOS era is the exciting era. The era before everything gets smoothed over and simplified and—yes—improved for mass adoption, but also before it becomes ordinary.
Right now, working with agents through a terminal feels like having a secret. There’s craft and friction involved. And there’s the particular pleasure of making something work that wasn’t designed to be easy.
The GUI will come. The Macintosh moment for agents is inevitable—Microsoft is clearly building toward it, and plenty of others will too.
But I’m grateful to be here now, in this liminal moment, rediscovering the same joy I felt at 10: a blinking cursor, a willing machine, and the sense that anything I can describe clearly enough might just become real.
If you’re in healthcare and exploring what’s possible—whether building personal agents or tackling workflow problems with vibe coding—I’d love to hear from you. This is more fun with fellow travelers.