Meta just signed a 20-year deal for 1,121 megawatts of nuclear power.
Microsoft is underwriting the $1.6 billion restart of Three Mile Island with a 20-year power deal.
Amazon invested $650 million in a nuclear-powered data center campus, plus an additional $500 million in experimental reactor technology.
Google is locking in 500 megawatts from reactors still in the prototype stage.
The tech press is calling this a sustainability win.
They’re missing the plot entirely.
This isn’t about going green.
This is the most spectacular failure of imagination in tech leadership I’ve seen in two decades.
Let me tell you why I couldn’t sleep last night.
The Confession Hidden in Plain Sight
When Meta signs for the entire output of a nuclear plant, 1,121 megawatts for 20 years, they’re not making a climate statement.
They’re making a confession:
“We have no idea how to make our technology more efficient. So we’re just going to throw a nuclear reactor at the problem.”
Let that sink in.
We are literally splitting atoms because we can’t optimize our code.
The numbers are staggering:
Google and Microsoft each used 24 terawatt-hours of electricity in 2023—more than countries like Iceland or Ghana.
Microsoft’s energy use doubled since 2020.
Meta’s jumped 34% in just one year.
AI workloads now account for 24% of all server electricity demand.
Power density per rack has jumped from 5–10 kW to 60+ kW, with some AI racks hitting 120 kW.
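To put those figures in perspective, here's a back-of-envelope sketch. The 90% capacity factor and the midpoint rack densities are my own illustrative assumptions, not numbers from the contracts.

```python
# Back-of-envelope math on the figures above.
# Assumptions (illustrative, not from the deals): a nuclear plant running
# at a 90% capacity factor, and midpoints of the rack densities cited.

HOURS_PER_YEAR = 8_760

# Meta's contracted plant: 1,121 MW for 20 years
plant_mw = 1_121
capacity_factor = 0.90  # assumed; typical for US nuclear
annual_twh = plant_mw * capacity_factor * HOURS_PER_YEAR / 1e6
print(f"One Clinton-sized plant: ~{annual_twh:.1f} TWh/year, "
      f"~{annual_twh * 20:.0f} TWh over 20 years")

# Google or Microsoft in 2023: 24 TWh each
print(f"24 TWh is ~{24 / annual_twh:.1f}x that plant's annual output")

# Rack density: 5-10 kW then vs. 60-120 kW now
old_racks_per_mw = 1_000 / 7.5   # midpoint of 5-10 kW
new_racks_per_mw = 1_000 / 90    # midpoint of 60-120 kW
print(f"Racks powered per MW: ~{old_racks_per_mw:.0f} then "
      f"vs. ~{new_racks_per_mw:.0f} now")
```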
This is like hiring 1,000 engineers because you can’t debug your code.
Or like buying a supercomputer to run Excel.
Or like using a sledgehammer to hang a picture frame.
The Pattern I Can’t Unsee
We’ve entered the brute-force era of AI development.
I’ve seen this pattern before. I’ve been this pattern.
Build something that barely works but shows promise
Instead of optimizing, scale the infrastructure
When that gets expensive, find cheaper/bigger power
Repeat—until physics or finances force you to rethink the model
Twenty years ago, it was server farms thrown at messy code.
Now it’s nuclear reactors thrown at bloated models.
We’re not building for brilliance.
We’re building for bloat.
And nobody’s asking the most important question:
What if we’re solving the wrong problem?
The $700 Billion Wake-Up Call
Let’s talk about DeepSeek.
A Chinese startup built an AI reasoning model that rivals OpenAI’s best.
They didn’t have infinite GPUs or billion-dollar budgets.
They trained their 671B-parameter base model on just 2,048 H800 GPUs, at a reported compute cost of about $5.5 million for the final training run.
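That figure isn't magic, it's arithmetic: DeepSeek's technical report prices roughly 2.788 million H800 GPU-hours at an assumed $2 per GPU-hour, covering the final run only. A quick sanity check:

```python
# Sanity-checking DeepSeek-V3's reported training cost.
# Figures from the DeepSeek-V3 technical report: ~2.788M H800 GPU-hours,
# priced at an assumed $2 per GPU-hour. Research, staff, and earlier
# experiments are not included.

gpu_hours = 2_788_000
price_per_gpu_hour = 2.00  # assumed rental rate in the report
total_cost = gpu_hours * price_per_gpu_hour
print(f"Estimated compute cost: ${total_cost / 1e6:.2f}M")  # ~$5.58M

# Spread across 2,048 GPUs running in parallel:
gpus = 2_048
days = gpu_hours / gpus / 24
print(f"Wall-clock training time: ~{days:.0f} days on {gpus} GPUs")
```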
Meanwhile, OpenAI and others are spending hundreds of millions.
The result?
DeepSeek’s efficiency sent shockwaves through the market, wiping out roughly $700 billion in US tech market capitalization in a single day.
Nvidia alone lost more than $560 billion in market value.
Berkeley researchers reproduced OpenAI-style reasoning behavior for about $450 in 19 hours of training.
A Stanford team did it for roughly $50 in 26 minutes, using distillation.
$50. 26 minutes. Versus a nuclear reactor.
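And distillation is not a billion-dollar technique. The core idea is to train a small student model to match a big teacher model's output distribution instead of only the hard labels. A minimal sketch, assuming a PyTorch setup; the shapes and hyperparameters are toy values, not the Stanford recipe:

```python
# Minimal knowledge-distillation loss: a small student learns to match
# a large teacher's softened output distribution.
# Illustrative sketch only; not the actual recipe behind the $50 model.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    # Soft targets: KL divergence between temperature-softened distributions
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)
    # Hard targets: ordinary cross-entropy against the true labels
    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * soft_loss + (1 - alpha) * hard_loss

# Toy usage: a batch of 4 examples over a 10-class output space
student_logits = torch.randn(4, 10, requires_grad=True)
teacher_logits = torch.randn(4, 10)          # from a frozen, larger model
labels = torch.randint(0, 10, (4,))
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
print(f"distillation loss: {loss.item():.3f}")
```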
The Question That Changes Everything
What if the next leap in AI isn’t a bigger model or faster chip…
but the realization we’ve been holding the map upside down?
Remember when Instagram served 14M users with just 3 engineers?
Or WhatsApp handled 900M users with 50 engineers, processing 50 billion messages a day?
They didn’t go nuclear.
They went smart.
Instagram’s mantra: “Do the simple thing first.”
WhatsApp’s approach: “Just enough engineering.”
WhatsApp ran on Erlang and FreeBSD while everyone else was scaling Java clusters to infinity; Instagram got there on unglamorous Django and PostgreSQL.
The Lesson: Constraints forced genius.
Instagram didn’t have resources to burn, so they innovated.
WhatsApp couldn’t afford scale for scale’s sake, so they architected for simplicity.
Meta’s deal with the Clinton Clean Energy Center isn’t a power move.
It’s a white flag.
The CTO Dilemma This Exposes
Every technical leader faces the same decision:
Option A
Follow the herd.
Budget for massive infrastructure.
Justify it as “scale.”
Sign your own version of a nuclear deal.
If Microsoft is reviving Three Mile Island, who are you to say no?
Option B
Do the harder thing.
Challenge the assumption.
Ask:
“What would we build if we only had 1/10th the compute?”
Option A gets you promoted.
Option B gets you remembered.
I know which one most will choose.
I also know which one keeps me up at night with excitement.
The Uncomfortable Truth
MIT’s Lincoln Lab has trimmed AI training energy with simple GPU power-capping, at the cost of roughly 2 extra hours of training, and cut it by as much as 80% by stopping underperforming training runs early.
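Power-capping isn't exotic either. On NVIDIA hardware it's one management call per GPU. A minimal sketch using the pynvml bindings; the 250 W target is an arbitrary example, and setting limits typically requires admin privileges:

```python
# Cap GPU power draw via NVIDIA's management library (pynvml).
# The 250 W target is an arbitrary example; pick a cap for your hardware.
# Setting the limit usually requires root/admin privileges.
import pynvml

pynvml.nvmlInit()
target_watts = 250

for i in range(pynvml.nvmlDeviceGetCount()):
    handle = pynvml.nvmlDeviceGetHandleByIndex(i)
    current_mw = pynvml.nvmlDeviceGetPowerManagementLimit(handle)
    print(f"GPU {i}: current limit {current_mw / 1000:.0f} W")
    # NVML works in milliwatts
    pynvml.nvmlDeviceSetPowerManagementLimit(handle, target_watts * 1000)
    print(f"GPU {i}: capped at {target_watts} W")

pynvml.nvmlShutdown()
```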
Groq claims its inference chips deliver 10x the performance at a tenth of the electricity of GPUs.
Startups are reaching near-GPT performance for under $600, thanks to model distillation.
We’re not in an AI race.
We’re in an efficiency race dressed up like an AI race.
The company that wins won’t be the one with the most nuclear plants.
It’ll be the one that figures out how to do with 1 GPU what others need 1,000 for.
Some startup in a garage is looking at these nuclear deals and laughing.
They’re doing with clever code what Big Tech is doing with uranium.
And when they succeed, these billion-dollar reactor contracts will look like Blockbuster’s real estate strategy in 2005.
Your Move
So here’s my challenge:
Before you sign off on more infrastructure…
Before you budget for 10x more compute…
Ask yourself:
“Am I solving the problem… or just making it more expensive?”
Because right now, Big Tech is telling us that the answer to inefficient AI is nuclear fission.
Microsoft is backing a $1.6 billion plan to restart a reactor at Three Mile Island, a name that has been shorthand for nuclear accidents since 1979.
Amazon is betting $500 million on reactors that don’t even exist yet.
By 2030, the IEA projects, data centers will consume about 945 terawatt-hours a year, roughly as much electricity as Japan uses today.
If that doesn’t tell you we’re doing something wrong, we’ve lost the plot.
The future isn’t in bigger power plants.
It’s in better minds, both artificial and human.
What sacred cow in your tech stack needs questioning?
Let’s talk.
Drop your comments below so we can discuss, or email me directly at etienne@7ctos.com, especially if you're staring down your own "nuclear reactor" decision.