The CTO’s Extinction Event
The title on a job spec I was drafting started feeling archaic. So I went looking for what the CTOs leveling up are actually doing differently.
I’m drafting an email to a recruiter.
We’re hiring a senior executive leader to lead AI across one of our portfolio companies. Owning the strategy. Building the team. Deciding what we build versus buy versus partner on. Making the call on which products live, which die, and which get rebuilt around the new economics. The kind of mandate that shapes whether a company makes it through the next two years.
I get to the line where I have to type the title.
I pause.
Then I write something else. Chief AI Officer. Head of AI. VP, AI Platform. I cycle through three or four. None of them are the obvious one.
The obvious one is CTO. And I cannot bring myself to type it.
I sit with that for longer than I’d like to admit. I founded 7CTOs. I’ve spent fifteen years of my career arguing that the CTO role is the one that decides whether a technology company lives or dies. And here I am, building a job spec for exactly that role, and the title I built my life around feels archaic. It feels tone-deaf. It feels like writing “Director of Telegraphy” on a 1950s org chart.
This isn’t because I think CTOs can’t do AI work. Most of the best CTOs I know are deep in it. Something else has happened. Something faster than I expected. Almost overnight, the title started carrying a weight it didn’t carry a year ago. A weight of yesterday.
I closed the laptop and didn’t write the email that day. I needed to understand what had shifted.
This piece is what I came back with.
The asymmetry that built the role
For a long time, the CTO was the only person in the C-Suite who could see into the substrate. Not because we had a crystal ball. Because we understood how things actually worked. We knew why latency mattered. We knew why the architecture decision in March would either save the company in November or sink it.
That asymmetric understanding made us valuable. The CEO needed us to translate possibility into plan. The CFO needed us to translate plan into cost. The board needed us to translate cost into risk.
The LLM has taken that conduit and given it to everyone in the room.
Your CEO can now ask Claude what an event-driven architecture is. Your head of sales can ask GPT-5 to explain vector databases. Your CFO can prompt her way to a competent-sounding question about model evaluation. None of them will be experts. They no longer need you to do the basic translation.
This should not be a crisis. It is a crisis only because too many CTOs have responded to it by becoming consumers of the same tool that flattened their advantage.
The parallel that actually fits
I keep thinking about a split that ran through our own field in the seventies and eighties. The split between programmers who understood the machine and programmers who only understood the language.
Both groups wrote code. Both groups shipped product. Both groups used the high-level languages, the compilers, the abstractions everyone else used. None of them refused the new tools. But the ones who knew how memory worked, what the registers were doing, why a cache miss cost what it did, why the kernel scheduled the way it scheduled, those engineers had an asymmetric advantage that compounded for the rest of their careers. They could see why systems failed. They could see where the abstraction would leak. They could design things the application-layer programmers couldn’t even describe.
The same split is forming right now around AI. Most CTOs are choosing the application layer. Some of us are going to choose the substrate. The people who go deeper will run things. The people who only consume the abstraction will be managed by them.
I want us to be the second group.
The data on which way most of us are walking
A 2025 study by Microsoft Research and Carnegie Mellon, presented at the CHI conference, surveyed 319 knowledge workers across 936 real AI-assisted tasks.
Higher confidence in the AI was associated with less critical thinking. Higher self-confidence was associated with more critical thinking.
The people who trusted the AI thought less. The people who trusted themselves thought more. The researchers were direct about why: knowledge workers refrain from critical thinking when they lack the skills to inspect, improve, and guide AI-generated responses.
The people who don’t understand the tool can’t think alongside it. They can only consume it.
Two ways we’re going extinct
I see two patterns in our profession and both lead to the same place. I’ve watched both happen to colleagues I respect. I’ve caught myself drifting toward the first one more times than I’d like to admit.
1. Token Monkey
The first is the Token Monkey. The CTO who has reduced the job to procurement. The week is spent negotiating with Anthropic and OpenAI. We live inside spend dashboards. We run pilots that produce slides. We’ve become AI buyers with a fancy title. When the CEO asks what’s next, we answer with vendor names.
2. Vibe Coding
The second is more dangerous because it looks like work. The CTO who is vibe coding their way through the week, throwing prompts at Cursor, watching agents produce repos, confusing the appearance of building with actual building. We’re shipping artifacts we couldn’t defend in a code review. We’re accumulating output we don’t fully understand. We feel productive. We’re not.
Neither version is going to be replaced by AI. We’re going to be replaced by the next person who walks into the room and demonstrates that they can still think.
The advice we all swallowed
For two decades the industry has told CTOs the same story. Get out of the code. Step away from the implementation. Your job is to build the team, set the vision, optimize the workflow, manage the stakeholders. The technical work is what your engineers are for. The strategic work is what you are for.
I’ve preached versions of this myself. The CTO’s Perfect Week is partly built on it. “Your contribution to the company no longer lies in the code you’re producing” was the whole framing of that piece. I still think it was true for the world I wrote it in.
It is not enough for the world we are in now.
The orthodoxy worked when the technical landscape underneath you was relatively stable. You could afford to ascend, because the substrate didn’t shift faster than you could oversee it. You could trust your engineers to keep up with the substrate while you kept up with the business. The split between technical depth and executive altitude was a clean trade.
That trade is broken. The substrate is now moving faster than any team you build can follow on your behalf. Your principal engineers are using tools that are six months ahead of where they were when you hired them. Your platform team is making model choices that change unit economics. Your product team is shipping AI features whose failure modes nobody on your staff can fully explain to you, because you trained yourself out of being the person who could ask the hard questions.
The CTO who has spent the last five years optimizing the org chart and running quarterly OKRs is not at the top of their game right now. They are exposed. The role they were trained into is the role most at risk.
What I’m asking for is not regression. I’m not asking you to write JIRA tickets again.
I’m asking you to become a technical intellectual at a depth you may not have allowed yourself in fifteen years.
To read the papers your engineers are reading and read them more carefully.
To understand the math behind the model choice.
To argue the architecture at a level your CEO cannot follow you to and isn’t supposed to.
That’s not a step backward. That’s a different kind of seniority. It’s the kind we let go of, and it’s the kind that’s coming back into demand whether or not the industry has admitted it yet.
The CTOs who survive this aren’t the best team-builders. They’re the best technical thinkers who can also build teams. The order of those two things has reversed.
What the ones leveling up are actually doing
I get to watch a lot of CTOs up close. Some of them are pulling ahead right now.
Six habits show up in almost every one of them.
They are reading complete white papers, not just the summaries.
They can name the architectural choices behind the models they use, and defend them.
They are studying the cognitive science underneath, not just the technology.
They write to think, not to publish.
They are in regular, hard conversation with other CTOs about all of it.
When they don’t know something, they sit in the not-knowing instead of reaching for the model.
None of these six are technically hard. All of them have been trained out of us. The rest of this piece is what I’ve learned about each one.
Confidence comes from the taxonomy, not the demo
When you actually understand the taxonomy of AI, something shifts in how you walk into a meeting. You stop using the words wrong. You stop saying “the AI” like it’s a single thing. You know the difference between a foundation model, a fine-tune, a retrieval system, an agent, and an orchestration layer. You know what a context window is, what an embedding is, what a token actually represents. You know what evaluation means in this domain and why eval design is harder than model design.
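Three of those terms are concrete enough to sketch in a few lines. This is a toy illustration only, and everything in it is simplified by assumption: real tokenizers use learned subword vocabularies rather than whitespace splits, and embedding tables are trained, not random.

```python
import numpy as np

# Toy sketch of three taxonomy terms: token, embedding, context window.
text = "the model attends to the context"
tokens = text.split()                      # "tokens": the units the model reads
vocab = {t: i for i, t in enumerate(dict.fromkeys(tokens))}
token_ids = [vocab[t] for t in tokens]     # each token maps to an integer id

d_model = 4                                # embedding dimension (tiny here)
rng = np.random.default_rng(0)
embedding_table = rng.normal(size=(len(vocab), d_model))
embeddings = embedding_table[token_ids]    # "embedding": id -> dense vector

CONTEXT_WINDOW = 8                         # "context window": a hard cap on
assert len(token_ids) <= CONTEXT_WINDOW    # how many token ids fit at once
print(len(tokens), embeddings.shape)
```

The point isn’t the code. It’s that once you’ve seen a token become an id become a vector, “context window” stops being a vendor talking point and becomes a quantity you can reason about.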
That knowledge changes your executive presence in a way nothing else can right now. Your CEO walks out of the meeting saying “my CTO actually knows what they’re talking about.” Your CFO stops asking the AI for second opinions on your recommendations. The board starts inviting you into conversations you weren’t in before.
The confidence is real because it’s earned. It’s not the hollow confidence the Microsoft study warned about, where leaning on the AI made people think less and feel more sure. It’s the opposite. It’s the confidence that comes from sitting with the hard thing until you understand it.
What I want you to learn (and one example of why)
Read the papers. Not the summaries. The papers themselves. When Anthropic publishes circuit-tracing work, when DeepMind drops a technical report, when a Stanford team publishes inference economics, those are maps of the terrain. The CTO who reads the actual map will be looking at a different country than the one reading the AI-generated travel brochure.
Then learn how the brain works. Yes, for a CTO, right now, that matters more than it ever has. Every paper you read on transformer attention is also, properly digested, a paper about human attention. Every paper on memory retrieval will tell you something about your own. Every paper on reinforcement learning will tell you something about how habits form in your team.
When you understand attention heads in transformers, that mechanism where the model learns to weight which tokens to attend to in a context, you suddenly have a framework for what’s happening when one of your engineers is overwhelmed in a sprint. They’re context-saturated. They have no attention budget. The technical concept and the human concept aren’t analogies. They’re the same problem at different levels of abstraction. The CTO who carries that frame leads differently. They redesign the sprint. They change the meeting cadence. They protect attention as a resource because they now understand attention as a resource.
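The mechanism itself is small enough to see whole. Here is a minimal sketch of scaled dot-product attention, the core of what an attention head computes (numpy, toy dimensions, random inputs; real models run many such heads in parallel over learned projections):

```python
import numpy as np

def softmax(x):
    # Subtract the row max for numerical stability before exponentiating.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention(Q, K, V):
    # Each query token scores every key token; softmax converts the scores
    # into weights that sum to 1. The budget is fixed: attending more to
    # one token necessarily means attending less to the others.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = softmax(scores)
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))   # 4 tokens, 8-dim queries: "what am I looking for?"
K = rng.normal(size=(4, 8))   # keys: "what do I offer?"
V = rng.normal(size=(4, 8))   # values: "what do I carry?"
out, w = attention(Q, K, V)
print(w.sum(axis=-1))         # every row sums to 1.0: a finite attention budget
```

The rows summing to one is the whole point of the analogy. Attention, mechanical or human, is zero-sum within a context: there is no way to weight everything highly at once.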
That is what foundational knowledge does. It grows flowers in domains you didn’t expect.
The two other skills
There are two more things I want us to invest in.
Writing
The first is writing. Not LinkedIn posts. Real writing. The kind where you sit down with a question you don’t know the answer to and write your way toward an answer that’s actually yours. I’ve written about this before in The CTO’s Hidden Notebook. Writing is the only practice I’ve found that exposes whether you actually understand something or you’re just pattern-matching on words you’ve heard. AI cannot do this for you. The moment you let AI write your thinking, you stop having thinking to write.
Discussing
The second is talking to other CTOs about hard things. This is why we run the 7CTOs peer groups every month. We pick a paper or a deep idea, and a small group of CTOs sit around it and argue about it for ninety minutes. No slides. No vendor pitches. Brains in real time, working something out together. White papers don’t fully come alive on the page. They come alive in discussion, when somebody else’s interpretation collides with yours and forces you to defend or refine what you actually believe.
If you don’t have a peer group like this, build one. If you want into ours, message me. The hardest thinking I do every month happens in those rooms.
The frontier is the work
The CTOs who come through this well will not be the ones who managed AI spend the best. They will not be the ones who deployed agents fastest. They will be the ones who, when everyone else in the room had outsourced their thinking, were still doing the work.
That’s a small group right now. Smaller than it should be. I’d rather it be a much bigger one, and I’d rather we get there together.
So read the papers. Learn the taxonomy until you can defend the architectural choices yourself. Learn how the brain actually works. Write your way to your own positions. Talk to other CTOs who refuse to be consumers. Stop asking the model what you think.
I still haven’t sent that recruiter email. When I do, I think I’m going to write CTO in the title field. Not because the role is the same as it was last year. Because I want the title to mean something again.
That’s on us.


