What If We’re Building AI for the Wrong Decade?
We’re in the middle of a technology shift that feels as significant as the move from the command line to the GUI. And our response? Better command lines that understand natural language. Seriously.
In 1973, engineers at Xerox PARC developed the Alto, the first computer with what would become known as the WIMP interface: Windows, Icons, Menus, and Pointers. It was revolutionary. For the first time, people could interact with computers through visual metaphors rather than memorizing commands.
Over fifty years later, we’re still using that same paradigm. Sure, the graphics are prettier, the animations smoother, and the screens touch-responsive. But the fundamental model—windows containing applications, icons representing files, menus organizing commands, pointers selecting things—hasn’t changed.
Now we’re in 2025, and according to McKinsey’s latest research, 78% of organizations use AI in at least one business function, up from just 55% a year ago. Companies are racing to integrate AI into their products. And how are they doing it?
By bolting a chat box onto the WIMP interface and calling it innovation.
I’ve been sitting with this nagging feeling lately. Every product demo I attend, every “AI-powered” feature launch I read about, follows the same pattern: traditional interface + chat box = revolutionary AI product. And I can’t shake the question—are we actually innovating, or are we just adding natural language processing to a 50-year-old paradigm and hoping no one notices?
The Desktop That Won’t Die
I was in a product demo last week. Startup pitching their “next-generation AI development environment.” The founder was genuinely excited, demonstrating how you could chat with their AI to generate code. Which then appeared in... a traditional text editor. With a file tree on the left. Terminal at the bottom. Tabs across the top.
It looked exactly like VS Code. Except with a chat box.
The technology was impressive—the code generation was solid, context awareness was good. But something felt off. Here we are with the most transformative technology in decades, and we’re using it to make 1995’s interface slightly more conversational.
The WIMP paradigm was designed for a world where computers were single-user machines, storage was measured in kilobytes, networks barely existed, and AI meant “if-then” statements in BASIC programs. We’re in 2025. We have AI that understands context, maintains conversations, and orchestrates complex workflows. And we’re using it to make better autocomplete in our text editors?
What the Kids Are Teaching Us
Pew Research found that 45% of teenagers report being online “almost constantly,” and 95% have access to a smartphone. But what’s interesting is that they don’t think in files and folders. They don’t organize hierarchically. They search. They use natural language. They expect technology to understand context without explicit instructions.
Show a kid a traditional desktop interface and watch their confusion. “Why do I need to remember where I saved something? Why can’t I just ask for it? Why do I have to click five times to do something simple?”
They’re not learning our language. Which raises an uncomfortable question: Why should they?
The Chat Box Delusion
I’ve been in enough architecture reviews this year to spot the pattern. Company adds AI by:
1. Keeping the entire existing interface exactly as-is
2. Adding a chat box (usually in the bottom-right corner)
3. Training the AI to manipulate the existing UI elements
4. Calling it “AI-native”
This is like adding a steering wheel to a horse and buggy and calling it a car.
The chat interface itself is borrowed from another decades-old paradigm: IRC, instant messaging, forums. We’re taking a technology designed for human-to-human communication and repurposing it for human-to-computer interaction. Not because it’s optimal, but because it’s familiar.
What bothers me most is that the chat interface requires users to know what to ask for. It puts the cognitive burden on humans to decompose complex tasks into conversational requests. It’s a friendlier command line, sure, but still fundamentally about humans adapting to how computers want to be talked to.
A truly AI-native interface would do the opposite. It would adapt to how humans actually think and work.
The Missing Interface Generation
Between 1984 (original Macintosh) and 2007 (iPhone), the fundamental interface paradigm barely changed. Better graphics, more colors, higher resolutions. But the basic model of windows containing documents, menus containing commands, pointers clicking things stayed constant.
The iPhone disrupted this, but only partially. Touch replaced the mouse. Apps replaced windows. But we still have icons, menus (now called hamburger menus), and a fundamentally application-centric model.
Now we’re in the AI era. And what are we doing? Adding chat to the iPhone paradigm. Adding chat to the WIMP paradigm. Adding chat to everything.
I’m genuinely curious: Where are the interfaces that feel native to AI capabilities? Not chat. Not “traditional UI but you can talk to it.” Something actually different.
What It Might Actually Look Like
I don’t have the answer. If I did, I’d be building it instead of writing about it. But I’ve been collecting examples of what it might not look like, and the patterns are telling.
It probably doesn’t look like files and folders. When AI can understand context and retrieve information based on semantic meaning, why are we still organizing documents hierarchically? That’s a solution to a storage problem from 1960s mainframes.
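The contrast is easy to sketch. The snippet below is a toy: the paths and document texts are invented, and a trivial bag-of-words cosine similarity stands in for the learned embeddings a real system would use. But it shows the shift in shape: the user asks for what a document is about, not where it lives.

```python
# A toy sketch of "retrieval by meaning" instead of "retrieval by path".
# Real systems would use learned embeddings; bag-of-words cosine
# similarity stands in here purely to illustrate the idea. All paths
# and document texts are made up.
import math
import re
from collections import Counter

documents = {
    "/projects/q3/notes_v2_final.txt": "budget forecast for the third quarter launch",
    "/old/misc/stuff.txt": "grocery list apples bread milk",
    "/archive/2024/mtg.txt": "meeting notes about the product launch timeline",
}

def vectorize(text: str) -> Counter:
    # Lowercase word counts; a crude stand-in for an embedding vector.
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[word] * b[word] for word in a)
    norm = math.hypot(*a.values()) * math.hypot(*b.values())
    return dot / norm if norm else 0.0

def ask(query: str) -> str:
    # The user never names a folder or file; documents are ranked by similarity.
    query_vec = vectorize(query)
    return max(documents, key=lambda path: cosine(query_vec, vectorize(documents[path])))

print(ask("what was our launch budget?"))
# prints /projects/q3/notes_v2_final.txt
```

The point isn’t the scoring function. It’s that the path becomes an implementation detail the user never sees.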
It probably doesn’t look like applications. The app model assumes discrete tools for discrete tasks. But AI excels at orchestrating across contexts. Why do I switch between email, calendar, and project management when AI could maintain context across all three?
It probably doesn’t look like menus. Menus solve an affordance problem: discovering what actions are possible. But if AI understands intent, why navigate a menu tree to find “export as PDF” buried three levels deep?
It probably doesn’t even look like screens. We’re assuming the interface is visual because that’s what we know. But we have natural language, voice, spatial computing, gesture control. Why are we still staring at rectangles?
The Economics of Interface Inertia
Here’s the uncomfortable economics: no major tech company can afford to reimagine interfaces from scratch. They have millions of users who’ve spent years learning the existing interface. Any radical change means retraining users, higher support costs, churn risk, and an opening for competitors.
So they add a chat box and call it innovation.
According to BCG’s 2024 research on AI adoption, only 26% of companies have developed the capabilities to move beyond proofs of concept and generate tangible value from AI. The rest are stuck in what they call the “sandbox”—running pilots, testing features, but not fundamentally reimagining anything.
Those numbers tell you everything. Incremental enhancement, not radical reimagining. It’s the innovator’s dilemma playing out in real-time.
What This Means for CTOs
I’m not suggesting you rebuild your entire product from scratch. That would be insane. But I am curious whether other CTOs are feeling this same cognitive dissonance.
We’re in the middle of a technology shift that feels as significant as the move from command line to GUI. But our response has been to make better command lines that understand natural language, not to imagine what comes after GUIs.
Maybe I’m wrong. Maybe the chat-augmented-WIMP interface is actually the right abstraction for the AI era, and we’ll look back in 20 years wondering what we were worried about.
But I keep thinking about interface paradigms and generational shifts. The WIMP interface was designed for a world that no longer exists. And we’re using AI technology that can understand natural language, maintain context, and orchestrate complex workflows to make that obsolete paradigm slightly more convenient.
The Questions Keeping Me Up
I don’t have answers, but I have questions:
Are we building the right abstraction layer? Right now, AI sits beneath the interface, powering features within the traditional UI. What if AI should be the interface, and the traditional UI should be the implementation detail?
Who’s going to reimagine this? The major tech companies have too much to lose. Consumer AI companies are focused on models and APIs, not interface paradigms. Where does the next generation of human-computer interaction come from?
How long can we coast on WIMP? When does “good enough” become “actively holding us back”?
What are we not building because the interface paradigm constrains our thinking? This is the scary question. How many product ideas never happen because we can’t imagine them within the constraints of windows, menus, and chat boxes?
What I’m Watching For
I’m not waiting for a specific company or product to solve this. I’m watching for patterns. For experiments. For weird prototypes that don’t quite work but feel like they’re pointing somewhere interesting.
I’m watching what happens when companies stop asking “how do we add AI to our product” and start asking “if we were designing this product today, knowing what AI can do, what would it look like?”
I’m watching for the company that has the courage to confuse their existing users in service of building something genuinely new.
And I’m curious what other CTOs are seeing. Are you feeling this same tension? Are you having conversations with your product teams about what comes after chat boxes? Or am I overthinking this, and the chat-augmented interface really is the future?
I genuinely want to know. Because right now, I feel like we’re all building faster horses when someone should be inventing the car.
And I’m not sure if that someone is supposed to be us.


