The CTO’s Entropy War
The second law of thermodynamics is coming for your codebase. AI just handed it a flamethrower.
I’m standing behind Marcus, one of my best engineers, watching him demo a feature he built in forty minutes using Claude Code. A reporting dashboard. Clean layout. Responsive. Tests passing. He’s beaming.
Hey, my new book CTO Refactor is coming out in May 2026. Head over to ctorefactor.com to get advance copies and some bonus treats. - Etienne
I pull up the codebase. There are three utility functions doing nearly the same date formatting. The error handling pattern doesn’t match anything else in the repo. A new data access layer has been introduced that duplicates logic we already have in our services directory. And Marcus has no idea, because he didn’t write this code. He directed it.
I ask him why the date formatter doesn’t use our existing utility. He scrolls through the file tree. “We have one?” He genuinely doesn’t know. This is an engineer who has been with us for two years.
I close my laptop and walk back to my desk. The dashboard works. It passed review. It will ship. And it will quietly make our system a little harder to understand, a little more fragile, a little less coherent. Not today. Not next week. But the weight is accumulating. I can feel it in the pull requests, in the onboarding conversations, in the growing silence when I ask why something was built this way.
That silence is entropy. And fighting it just became the most important part of my job.
The Universe Wants Your Codebase to Fall Apart
In 1850, Rudolf Clausius articulated what would become the second law of thermodynamics: in any natural process, the total entropy of an isolated system can only increase. Things move from order to disorder unless you put energy into keeping them organized.
Meir Lehman applied this thinking to software in 1974. His second law of software evolution states that the complexity of a software system increases unless work is done to maintain or reduce it. For fifty years, this has been the quiet backdrop to every engineering organization. Codebases rot. Standards drift. Architecture erodes. Entropy wins unless you fight it.
We have always fought it. Code reviews. Refactoring sprints. Architectural standards. Style guides. These are the maintenance rituals that keep disorder at bay. The energy we pour into keeping our systems coherent.
AI just changed the equation. Dramatically.
A recent large-scale study of over 300,000 AI-authored commits across 6,275 GitHub repositories found that AI-introduced technical debt is growing rapidly, climbing from a few hundred surviving issues in early 2025 to over 110,000 by February 2026. About 24% of those issues still persist in current codebases. GitClear’s analysis of over 211 million lines of changed code found copy-pasted code up 48% and refactored code declining 60%. Code churn, the percentage of lines rewritten within two weeks, has doubled in teams heavily using AI tools.
The disorder isn’t slowing down. It’s accelerating. And if you’re a CTO leading 40 to 120 engineers right now, entropy is your fight. Not AI strategy decks. Not model selection. Not prompt engineering workshops. Entropy.
Why AI is the Greatest Complexity Generator We’ve Ever Built
When code was slow and expensive to produce, friction was your friend. Every function a developer wrote required them to understand the system they were modifying. They had to read existing code. They had to navigate the architecture. They had to make choices that fit. The very slowness of writing code was a form of quality control.
AI removed the friction. A junior engineer with Cursor can now produce 500 lines of code that pass lint, follow naming conventions, and look clean on review. But verifying whether that code fits the system, whether it respects architectural boundaries, whether it introduces hidden coupling, requires the senior reviewer to mentally reconstruct the entire logic flow. And they don't. It takes too much energy. When the code looks right, the brain skims. The PR gets approved. The debt gets merged.
Stack Overflow’s 2026 developer survey found that 76% of developers using AI tools reported generating code they didn’t fully understand at least some of the time. Think about that. Three out of four engineers are shipping code into your production systems that they cannot fully explain.
Forrester predicts 75% of technology decision-makers will face moderate to severe technical debt by the end of this year. DORA’s research confirms the pattern: AI adoption links to higher throughput but lower delivery stability. More changes ship faster. Each change is slightly more likely to break something.
Ox Security published a report calling AI-generated code “highly functional but systematically lacking in architectural judgment.” That phrase is worth pausing on. The code works. It passes tests. But it doesn’t know your system. It doesn’t carry the context of why your team chose to handle errors that way, or why the data access layer was separated from the business logic, or why you migrated off that specific library three months ago.
AI treats every prompt as a greenfield project. Your codebase is anything but.
Your Engineers Aren’t Getting Worse. The Problem Is Getting Bigger.
I wrote about this in my recent article on The CTO’s New Engineering Ladder. Output used to be a reasonable proxy for engineering talent. It no longer is. When everyone can ship, what separates your Architects from your Apprentices is judgment. The ability to see downstream consequences. The instinct for what will break. The capacity to reason about a system under pressure.
But judgment depends on understanding the system you’re working within. And the system is getting harder to understand every day that AI-generated code ships without deep review.
This is the entropy spiral. AI produces code faster than your team can comprehend it. Comprehension gaps lead to architectural drift. Drift makes the codebase harder to reason about. And a harder-to-reason-about codebase makes AI-generated code even more dangerous, because the AI doesn’t know what it doesn’t know, and neither does the engineer directing it.
CircleCI’s 2026 State of Software Delivery report, drawn from over 28 million workflows, found that AI-assisted development drove a 59% increase in throughput. But most engineering organizations are leaving the majority of those gains on the table, not because AI isn’t working, but because their validation, review, and delivery systems haven’t caught up.
More code. Fewer releases. That’s the entropy signature.
The CTO as Entropy Fighter
The narrative circulating in boardrooms right now is that CTOs are becoming obsolete. Jack Dorsey announced Block would cut nearly half its workforce, telling shareholders that “100 people + AI = 1,000 people.” The stock jumped 24%. Atlassian split its CTO role in two, explicitly scoped to the AI era. Gartner predicts 40% of enterprise applications will feature task-specific AI agents by end of 2026.
If you listen to this noise, you might think the CTO’s job is shrinking. I think the opposite is true.
When code was expensive, the CTO’s primary job was to make sure enough code got written. When code becomes cheap, the CTO’s primary job is to make sure the system stays coherent. That’s a harder job. The universe is literally working against you.
The old CTO managed output. The entropy-fighting CTO manages order. Order across a codebase that’s being written by humans, AI agents, and hybrid workflows simultaneously. Order across teams that are shipping faster than the architecture can absorb. Order across an organization where product managers, data analysts, and operations leads are all generating code that ends up in production systems.
This is not an abstraction. I coach CTOs who are living this right now. The ones who are thriving have stopped thinking of themselves as accelerators and started thinking of themselves as governors. Not in the bureaucratic sense. In the mechanical sense. The device that prevents an engine from tearing itself apart at high speed.
Three Forces That Create Entropy in AI-Driven Development
Comprehension debt
Addy Osmani coined this term in early 2026, and it captures something that traditional technical debt doesn’t. Comprehension debt accumulates when your team ships code they didn’t write and don’t deeply understand. Every AI-generated feature that gets merged without someone building a genuine mental model of how it works adds to this debt. And unlike traditional debt, you don’t know you’re taking it on.
Architectural amnesia
AI doesn’t remember your design decisions. It doesn’t know why you chose event-driven architecture for that service, or why the auth layer is isolated, or why you deliberately avoided that ORM. Every prompt is a fresh start. Over time, the architectural intent that held your system together gets diluted by thousands of small, individually reasonable decisions that don’t add up to a coherent whole.
Review collapse
When a junior dev submitted 50 lines of code in 2023, a senior could review it meaningfully in ten minutes. When an AI-augmented engineer submits 500 lines that all look syntactically correct, the cognitive cost of genuine review goes through the roof. The natural human response is to skim. Approve. Merge the debt. One API security company found a 10x increase in security findings per month across Fortune 50 companies between December 2024 and June 2025.
These three forces compound each other. Comprehension debt weakens review quality. Weak reviews accelerate architectural drift. Drift makes comprehension harder. The spiral tightens.
Building the Anti-Entropy Machine
If you run an engineering organization of 40 engineers or more, you need what I call an entropy budget. A deliberate, protected allocation of engineering time dedicated not to building new things but to keeping existing things coherent.
1. Make AI your refactoring engine, not just your feature engine
The same tools that generate code at speed can find dead code, consolidate duplicates, generate documentation, and identify architectural drift. Most teams use AI exclusively for production. Flip the ratio. For every four AI-assisted features your team ships, dedicate one cycle to AI-assisted cleanup. Use AI to reduce entropy, not just increase it.
2. Treat AI output like a first draft from a talented stranger
Because that’s what it is. A stranger who doesn’t know your system, your team’s conventions, or your architectural intent. The code might be excellent in isolation. Your job is to make sure it fits the whole.
3. Rebuild your code review around judgment, not syntax
If your reviews are catching formatting issues and variable names, you’re wasting your most expensive engineers on work the linter should handle. Reviews should answer one question: does this change make the system easier or harder to reason about in six months?
4. Instrument your entropy
Track code churn (lines rewritten within two weeks of being written), module coupling over time, and the ratio of new code to refactored code. If new code is climbing and refactored code is declining, your entropy is accelerating. You can see this in GitClear’s data at the industry level. You need to see it at the team level.
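The churn metric above is measurable with nothing more than your version history. Here is a minimal Python sketch of the calculation, assuming a chronological list of line-write events as input (in practice you would derive these from something like `git log --numstat` or `git blame` output; the event format and names here are illustrative, not a standard tool):

```python
from datetime import datetime, timedelta

CHURN_WINDOW = timedelta(days=14)  # "rewritten within two weeks"

def churn_rate(edits):
    """Fraction of line writes that overwrote a line written
    less than CHURN_WINDOW earlier.

    `edits` is a chronological list of (timestamp, path, line_no)
    tuples, one per line written or rewritten.
    """
    last_written = {}  # (path, line_no) -> timestamp of most recent write
    total = churned = 0
    for ts, path, line_no in edits:
        key = (path, line_no)
        if key in last_written and ts - last_written[key] <= CHURN_WINDOW:
            churned += 1  # this write replaced still-fresh code
        last_written[key] = ts
        total += 1
    return churned / total if total else 0.0

# Toy history: two lines written on day 0; one rewritten on day 5
# (inside the window), one rewritten on day 30 (outside it).
day = lambda n: datetime(2026, 1, 1) + timedelta(days=n)
edits = [
    (day(0), "app.py", 10),
    (day(0), "app.py", 11),
    (day(5), "app.py", 10),
    (day(30), "app.py", 11),
]
print(churn_rate(edits))  # 1 churned write out of 4 -> 0.25
```

Trend this number per team, per month. A climbing churn rate alongside a falling refactor ratio is the entropy signature showing up in your own repositories.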
5. Protect the learning pipeline
AWS CEO Matt Garman said it plainly when he heard proposals to replace junior engineers with AI: “That’s like, one of the dumbest things I’ve ever heard.” A Stanford Digital Economy study found employment for software developers aged 22 to 25 has declined nearly 20% from its 2022 peak. If you stop hiring people who are learning to think about systems, you won’t have people who know how to think about systems in five years. Entropy will have won by default.
Your Agentic Future Depends On This
The next chapter of AI development is agentic. Multi-agent systems that plan, execute, and iterate without waiting for a human prompt. Gartner reported a 1,445% surge in multi-agent system inquiries from Q1 2024 to Q2 2025. GitHub’s Agent HQ, announced in February 2026, lets developers run multiple AI agents simultaneously on the same task.
An agentic future running on a high-entropy codebase is a disaster. Agents that can autonomously modify code, run tests, and ship changes will amplify whatever state your system is in. Strong foundations get amplified into faster, more reliable delivery. Weak foundations get amplified into faster, more creative destruction.
The CTO who has spent the last year fighting entropy, keeping architectural boundaries clean, maintaining comprehensible systems, building review processes that catch drift, will have a codebase that agents can navigate and improve. The CTO who chased velocity above all else will have a codebase that agents can barely understand, let alone safely modify.
Your readiness for agentic AI is directly proportional to how well you’ve managed entropy today.
Three Things to Do This Week
Write down the architectural decisions that live only in your head or in the heads of your senior engineers. If your AI tools don’t know why the system is built this way, they can’t maintain it. If your new engineers don’t know, they can’t review the AI’s work. Architectural intent must be documented, not folklored.
Pull your team’s code churn numbers for the last quarter. How much of what was written last month was rewritten this month? If the number is climbing, your entropy is accelerating and no amount of velocity will outrun it.
Pick one senior engineer and give them an explicit mandate: spend 20% of their time next sprint making the codebase more coherent. Not shipping features. Not closing tickets. Reducing entropy. See what happens when someone’s job is to fight disorder instead of produce output.
The CTO isn’t dead. The CTO’s job just got a lot more physical. You’re fighting thermodynamics now. And the engineers who will build your team’s future are watching to see if you care more about speed or coherence.
I know which one keeps companies alive.
Etienne
Etienne de Bruin is the founder of 7CTOs and coaches technology leaders through the complexity of scaling engineering organizations.
Research backing: The article draws on the March 2026 arXiv study of 304K AI-authored commits, GitClear’s 211M lines analysis, CircleCI’s 2026 report (28M workflows), Stack Overflow’s 2026 survey, DORA findings, Ox Security’s “Army of Juniors” report, the Stanford Digital Economy study on junior dev employment, and Gartner’s multi-agent inquiry data. I also wove in Lehman’s Laws (1974) and Clausius for the entropy framing.