Meta's $200 Million Gamble on One Engineer Is Everything Wrong with Big Tech
Why throwing money at individual "geniuses" is the worst way to build artificial general intelligence
In 2001, the Washington Wizards made what seemed like the deal of the century. They signed Michael Jordan out of retirement - arguably the greatest basketball player who ever lived. The team's management believed that adding one transcendent talent would transform their losing franchise. Instead, the Wizards missed the playoffs in both of Jordan's seasons. The superstar who had led the Chicago Bulls to six championships couldn't even drag Washington to the postseason.
This phenomenon isn't limited to sports. As Yahoo struggled to compete with Google, its board cycled through a series of high-profile outside executives, each recruited with a massive compensation package. Despite the star power and hefty price tags, Yahoo's market share continued to decline. The board had confused individual achievement in one domain with the ability to transform an entire organization.
Now Meta is making the same mistake on an unprecedented scale. They're offering compensation packages exceeding $200 million to build their "superintelligence" team, with former Apple engineer Ruoming Pang reportedly receiving a package in the hundreds of millions. The assumption? That you can buy your way to artificial general intelligence by collecting individual geniuses the way you'd collect trading cards.
The $200 Million Man
Ruoming Pang isn't just any engineer. He was a distinguished engineer at Apple in charge of the Apple Foundation Models team, managing roughly 100 people who built the AI that powers Apple Intelligence features like Genmoji, Priority Notifications, and on-device text summarization. He joined Apple from Google in 2021, bringing expertise in a very specific kind of AI: small, efficient models that can run on devices rather than in massive data centers.
This detail matters. While the rest of Silicon Valley races to build ever-larger language models requiring thousands of GPUs, Pang specialized in the opposite challenge: making AI work within the constraints of a phone. It's like hiring Formula One's best fuel efficiency engineer to design your rocket ship. The skills are impressive, but they're solving a different problem.
Apple's AI models haven't exactly been a huge success — they're far less capable than what OpenAI, Anthropic, and even Meta offer. Apple has reportedly even considered tapping third-party AI models to power its forthcoming AI-enabled Siri upgrade. This context makes Meta's astronomical offer even more puzzling. They're paying $200 million for someone whose most recent work has been, by most measures, underwhelming.
But here's the real kicker: sources told Bloomberg that Pang's departure might be the first of many in Apple's troubled AI unit. Several engineers have reportedly told colleagues they plan to leave for Meta or elsewhere in the near future. Meta isn't just buying one person - they're potentially triggering an avalanche that could gut Apple's entire AI effort.
This is the hidden cost of these mega-packages. When you offer one person $200 million, you don't just destabilize the team they're joining. You destabilize every team they're leaving behind. The remaining Apple engineers now know their worth in Meta's eyes. They know their manager just got paid more than most CEOs. How long before they demand their own astronomical packages?
The Myth of the Multiplier Effect
Harvard Business School professor Boris Groysberg spent years studying what happens when star employees switch companies. His research revealed an uncomfortable truth: nearly half of star analysts who changed firms failed to maintain their high performance at their new companies.1 Their performance didn't just dip temporarily - it often remained depressed for years. The portable skills they thought they possessed turned out to be deeply embedded in their previous organization's culture, systems, and relationships.
But here's where it gets interesting. The damage isn't just to the superstar's performance. Teams with newly imported stars often show decreased collaborative output and increased turnover among existing high performers. The very presence of these mega-compensated individuals can create what organizational psychologists call a "talent paradox" - where individual excellence becomes organizational poison.
Research on pay inequality within organizations consistently shows that extreme compensation gaps reduce information sharing and decrease discretionary effort among team members. When one person makes hundreds of times more than their colleagues, it fundamentally changes team dynamics.
Why This Matters Now
The AI talent wars have reached absurd proportions. OpenAI's Sam Altman claims Meta offered his employees signing bonuses as high as $100 million.2 These aren't just big numbers - they're generational fortunes being thrown at individual contributors who've never proven they can build sustainable organizations.
This matters because we're at an inflection point. The companies that win the AI race won't be the ones with the most expensive talent. They'll be the ones who figure out how to make talented people work together effectively. Meta's approach - hiring more than 10 researchers from OpenAI, plus top talent from Anthropic and Google - looks impressive on paper. But they're building a collection of soloists, not an orchestra.
Consider DeepMind's success with AlphaGo. The breakthrough didn't come from hiring away the world's best Go players or even the most celebrated AI researchers. It came from creating an environment where researchers could collaborate intensively, challenge each other's assumptions, and build on each other's work. Their compensation was competitive but not extraordinary. What was extraordinary was their collective output.
The Hidden Cost of Superstar Culture
When you pay one engineer $200 million while others make $200,000, you're not just creating pay disparity. You're establishing a caste system. The psychological research on this is clear: extreme pay gaps within teams damage collaboration and morale.
I've watched this dynamic play out at major tech companies. The highest-paid engineers often became islands of excellence, building systems only they could understand. Their colleagues, demoralized by the pay gap, became less likely to share ideas or go above and beyond. Innovation slowed in teams with the highest concentration of "rockstar" engineers.
The problem compounds when these superstars need to collaborate. Put five people in a room who each think they're the smartest person there, and you don't get five times the intelligence. You get a turf war. Meta's superintelligence team, packed with former chief executives and founders, risks becoming a collection of leaders with no one willing to follow.
What Actually Works
Bell Labs produced the transistor, the laser, and eight Nobel Prize winners without paying anyone $200 million. They did it by creating what they called "controlled chaos" - an environment where talented people at different levels could collide randomly, argue productively, and build on each other's ideas.
The Manhattan Project didn't succeed because Oppenheimer was paid more than everyone else. It succeeded because its leaders created a structure where brilliant individualists learned to function as a team. The pay was modest by today's standards. The results changed the world.
When Moderna developed its COVID vaccine in record time, it wasn't because the company had the highest-paid researchers. It was because it had spent a decade building collaborative systems that allowed information to flow rapidly between teams. That success came from organizational capability, not individual genius.
The Real Path to Superintelligence
Building transformative AI systems isn't about collecting prestigious résumés. It's about creating conditions for collective intelligence to emerge. Here's what actually works:
Flatten the hierarchy. OpenAI's early innovations came from eliminating traditional corporate structures. Everyone from interns to senior researchers could contribute to core decisions. That culture is now at risk as the company gets pulled into Meta's compensation arms race.
Create knowledge commons. Instead of hiring superstars who hoard insights, build systems that make everyone smarter. Google's internal tools like Borg and MapReduce succeeded because they amplified every engineer's capabilities, not because a genius built them in isolation.
Optimize for learning velocity. The team that learns fastest wins, not the team with the highest aggregate IQ. Create rapid feedback loops, encourage productive failure, and measure progress by how quickly ideas evolve, not by who proposed them.
Design for emergence. Complex systems like artificial general intelligence won't spring from any individual mind, no matter how brilliant. They'll emerge from the interactions between minds. Your job is to maximize those productive interactions.
Making the Shift
If you're a CTO watching this talent war, resist the temptation to play Meta's game. You can't outbid them, and even if you could, you shouldn't.
Instead:
Start by identifying your organization's knowledge bottlenecks. Where does information get stuck? Which individuals, if they left tomorrow, would cripple your capabilities? Those are your vulnerability points.
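One concrete place to start is your version-control history. The sketch below is a rough heuristic, not a measure of real expertise: it assumes it's run inside a git repository, and its thresholds (at least five commits, more than 80% of them from a single author) are arbitrary. All it does is flag files that, by commit count alone, look like they have a bus factor of one.

```python
#!/usr/bin/env python3
"""Rough "bus factor" scan: flag files where one author wrote most of the commits."""
import subprocess
from collections import Counter, defaultdict

# Pull per-commit authorship plus the list of files each commit touched.
# Must be run from inside a git repository.
log = subprocess.run(
    ["git", "log", "--no-merges", "--name-only", "--pretty=format:@%an"],
    capture_output=True, text=True, check=True,
).stdout

author = None
commits_per_file = defaultdict(Counter)
for line in log.splitlines():
    if line.startswith("@"):
        author = line[1:]  # start of a new commit: remember its author
    elif line.strip() and author:
        commits_per_file[line.strip()][author] += 1  # a file this commit touched

# Flag files with at least 5 commits where one person wrote more than 80% of them.
for path, counts in sorted(commits_per_file.items()):
    total = sum(counts.values())
    top_author, top_count = counts.most_common(1)[0]
    if total >= 5 and top_count / total > 0.8:
        print(f"{path}: {top_author} authored {top_count}/{total} commits")
```

Commit counts don't tell you who actually understands a system, but the files only one person has ever touched are a reasonable first list of questions to ask.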
Next, create systems that distribute expertise. Pair programming, mob programming, and rotating team leadership aren't just nice-to-have practices. They're insurance against the departure of any individual, no matter how talented.
Build compensation structures that reward collective achievement over individual brilliance. Yes, pay your people well, very well. But tie the biggest rewards to team outcomes, not personal metrics.
Most importantly, hire for collaboration skills, not just technical excellence. The brilliant engineer who can't explain their work to others is a liability, not an asset. The good engineer who elevates everyone around them is worth ten isolated geniuses.
The Trillion-Dollar Question
Meta is betting that superintelligence can be purchased. They're wrong. Intelligence - super or otherwise - isn't a commodity you can buy. It's an emergent property of well-designed systems.
The companies that will win the AI race aren't the ones writing the biggest checks. They're the ones building the best cultures. They're creating environments where talented people want to work, not because of the money, but because of the mission. Where information flows freely, ideas build on each other, and collective intelligence exceeds the sum of individual contributions.
Throwing $200 million at an individual engineer isn't just wasteful. It's counterproductive. It creates the very conditions that prevent breakthrough innovation: ego conflicts, information hoarding, and team dysfunction.
The CTO's job isn't to win bidding wars for talent. It's to create systems where ordinary people can do extraordinary things together. That's not just a nice philosophy. In the age of AI, it's the only strategy that works.
Because at the end of the day, artificial general intelligence won't be built by a single genius, no matter how much you pay them. It will emerge from teams that have learned to think together. And you can't buy that. You have to build it.
1. https://hbr.org/2004/05/the-risky-business-of-hiring-stars
2. https://www.reuters.com/business/sam-altman-says-meta-offered-100-million-bonuses-openai-employees-2025-06-18/