Artificial intelligence just hit another milestone as major players in the tech industry gear up for an AI-powered future. The latest? A wild bet on computing power so massive it involves $125 billion data centers and, yes, even plans to build them in space. At the same time, Ilya Sutskever’s new AI venture, Safe Superintelligence, has grabbed the spotlight with a staggering $5 billion valuation, all aimed at creating something that’s been on humanity’s wishlist for decades: true superintelligence.
The question that looms over all these developments: Are we about to usher in a new era of AI, or are we witnessing one of the riskiest gambles humanity has ever taken?
The $5 Billion Bet on Safe Superintelligence
Ilya Sutskever, a name you probably know if you follow AI closely, is back in the game with his latest project, Safe Superintelligence. Once a leader at OpenAI, Sutskever co-headed its Superalignment team, working on making superintelligent AI systems safe for humanity. Now, just a year after leaving OpenAI, he’s staking everything on a brand-new venture that’s barely three months old. But here’s the kicker: it’s already valued at $5 billion. Yep, $5 billion for a company most people hadn’t even heard of until now.
What’s the plan? According to Sutskever, Safe Superintelligence is trying to crack the code on scaling AI up to true superintelligence. The team is keeping its methods vague, but the pitch boils down to this: give us a couple of years, and we’ll deliver superintelligence in one go. Ambitious? Definitely. But here’s the reality check: the startup currently has just ten employees and a lot of work ahead.
Backed by heavyweight investors like Sequoia Capital and Daniel Gross, Safe Superintelligence has a clear mission for its newly raised $1 billion: computing power. That’s right; they’re not hiring massive teams or working on side projects. Every penny is getting funneled into more machines and more power to scale AI to the extreme.
Sutskever isn’t following the same blueprint as OpenAI. While most of the AI world seems fixated on scaling up language models, he is more skeptical about what exactly people are scaling. His approach appears to focus on doing something different, something he believes could lead to breakthroughs that current methods won’t reach.
But is that enough to ensure success? Or is this just another overhyped AI moonshot?
The Scaling Hypothesis: Will It Work or Crash?
For anyone who hasn’t been following AI closely, the “scaling hypothesis” is the belief that increasing the size of language models by piling on more data and computing power will unlock true artificial intelligence. Proponents of this theory, like OpenAI and other AI labs, are betting big on it. But if the scaling hypothesis is wrong, then all of this—the $5 billion raised by Safe Superintelligence, the massive investments in new data centers, the race for bigger and better models—will be seen as a spectacular waste of money.
The upside? If scaling does work, we could see breakthroughs that change everything about how we live, work, and solve the biggest problems facing humanity today. The idea is simple but bold: more compute equals better models, and those better models will eventually reach human-level reasoning—or beyond.
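The “more compute equals better models” claim is usually pictured as a power law: loss falls smoothly, but with diminishing returns, as compute grows. The sketch below is purely illustrative; the function name `predicted_loss` and the constants `a` and `alpha` are invented for this example, not fitted values from any published scaling-law study.

```python
# Toy illustration of the scaling hypothesis: test loss modeled as a
# power law in training compute, loss ~ a * C^(-alpha).
# The constants a and alpha are made up for this sketch, not fitted.

def predicted_loss(compute_flops: float, a: float = 100.0, alpha: float = 0.05) -> float:
    """Return the toy power-law loss for a given compute budget (in FLOPs)."""
    return a * compute_flops ** -alpha

# Each 100x jump in compute shaves the loss down further, but by less each time.
for c in (1e21, 1e23, 1e25):
    print(f"{c:.0e} FLOPs -> predicted loss {predicted_loss(c):.2f}")
```

The bet, in this picture, is that the curve keeps bending downward at ever-larger budgets instead of flattening out.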
Critics, however, caution against blind optimism. Some believe we might end up scaling AI systems that continue to spit out text and process data but never genuinely “think.” If that’s the case, this whole endeavor could be remembered as an insane misallocation of resources.
Either way, it won’t take long to find out. As companies push to scale their models toward GPT-6-level systems, we’ll soon confront the limits of what computing power alone can achieve.
Computing Power: The New Arms Race
Let’s get one thing straight—computing power is everything right now in AI. From Safe Superintelligence’s $1 billion spending plan to Elon Musk’s xAI team bragging about building the “most powerful AI training system in the world,” everyone is gunning for more compute.
Musk, never one to be outdone, recently claimed that Grok 2, developed by his xAI team, rivals OpenAI’s GPT-4. Not only that, but they’re expecting to train an AI system more powerful than anything we’ve seen by December. His team even has its own monster data center in the works, known as Colossus. Just to give you some perspective, it’s set up to house 100,000 of Nvidia’s state-of-the-art H100 GPUs.
And yet, computing power isn’t just about sheer numbers; how you manage, optimize, and deploy these systems matters just as much. Teams across the board are scrambling to distribute their compute more efficiently, especially now that single-site data centers can’t keep up. Microsoft, for example, found that packing 100,000 GPUs into one region could collapse the local power grid, and has started distributing its systems across multiple locations to avoid the problem.
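A back-of-envelope calculation shows why a 100,000-GPU site worries grid operators. The 700 W figure below is the published maximum power draw of an Nvidia H100 SXM GPU; the 1.5x facility overhead multiplier (cooling, networking, host CPUs) is an assumed rule of thumb for this sketch, not a measured number for any specific data center.

```python
# Rough estimate of the power draw of a 100,000-GPU training cluster.
gpus = 100_000
gpu_watts = 700      # max TDP of an Nvidia H100 SXM GPU
overhead = 1.5       # assumed multiplier for cooling, networking, host CPUs

total_mw = gpus * gpu_watts * overhead / 1e6  # watts -> megawatts
print(f"Estimated facility draw: {total_mw:.0f} MW")
# On the order of 100 MW — roughly a small city's demand — which is why
# concentrating it all in one region can strain the local grid.
```

At that scale, it’s easy to see why operators are splitting clusters across regions rather than asking one utility to absorb the whole load.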
$125 Billion for Supercomputers: Data Centers on Earth… and Space?
It’s not just about making AI smarter; it’s also about where we put the hardware that powers AI models. Over the last week, rumors broke about plans for two $125 billion data centers. And because Earth apparently isn’t big enough, there are even talks of launching data centers into space. Yes, you read that right—into space!
Enter Lumen Orbit, a startup out of Y Combinator that aims to take AI computing to the stars. These space-based data centers would initially consume 4 gigawatts of power, with the potential to scale to 5 gigawatts. These systems could pre-train models as large as GPT-6. The promo video for Lumen Orbit even shows futuristic concepts of data centers hovering among satellites, pulling in energy from the sun.
Sure, it sounds a bit like a sci-fi plot, but it’s not without precedent. Microsoft tried something similar with Project Natick, its underwater data centers built to take advantage of natural cooling. While technically successful, the approach wasn’t practical or cost-effective to maintain, and the project was quietly shelved. If underwater data centers were tricky, maintaining one in space won’t be any easier, no matter how exciting the concept sounds.
There are also the more traditional mega data centers—on land—that are about to open. They’re being designed to handle up to 5 gigawatts of power, meaning AI training runs that dwarf anything we’ve currently done. To put that in perspective, that much energy could take us from GPT-4 level models to something like GPT-6. Yet, even with all this planned capacity, the challenge of managing power consumption looms large.
The Battle for AI Supremacy: OpenAI vs. Musk’s xAI
If you thought the AI space was already competitive, buckle up. While Safe Superintelligence is trying to quietly build revolutionary superintelligence, other companies are taking their battles front and center. Most notably, there seems to be tension building between OpenAI and Elon Musk’s xAI.
Musk’s Colossus system, with its ever-growing number of GPUs, has raised concern inside OpenAI. Sam Altman, CEO of OpenAI, reportedly told Microsoft executives that he worries Musk’s xAI could soon have more computing power than OpenAI. That’s a big statement considering OpenAI has access to Microsoft’s massive computing resources.
Still, Musk’s ambitious claims about his training system shouldn’t be taken lightly. Grok 2, developed by the xAI team, is already said to closely compete with GPT-4, and Grok 3 is just around the corner. As both teams ramp up their efforts, we could be staring down the barrel of a full-on AI rivalry that reshapes the landscape. But whether it’s Grok, Orion (OpenAI’s next project), or Safe Superintelligence, the name of the game remains the same—scale.
Model Scaling and New Techniques: Beyond GPT-4
While everyone’s obsessed with scaling up, there are others trying to move AI forward with new techniques and smarter designs. OpenAI, for instance, is working on what has been called “Strawberry,” also reported as Q* (“Q-star”)—an experimental approach to enhancing reasoning in AI. Early hints suggest Strawberry might help AIs solve more complex problems, like those found in New York Times word puzzles (yes, really). But just how transformative this will be remains to be seen.
Still, there’s a growing sentiment that scaling alone may not get us all the way to true AI reasoning. GPT-4 and even Grok 2 have their limits, despite their massive model sizes. Some researchers believe the secret ingredient might lie in how the models are trained, not just how large they are.
The Real Cost of Scaling: Energy, Environment, and Ethics
But while the AI world focuses on scaling, there’s another glaring issue—energy consumption. Every one of these massive data centers eats up a mind-boggling amount of power. At a time when environmental sustainability is under intense scrutiny, you have to wonder how these projects fit into the bigger picture.
Several tech giants, including Google, Microsoft, and Amazon, have made public pledges to go carbon-neutral by 2030. But with the kind of power being poured into AI, it’s a tall order. Scaling up AI to superintelligence levels may quite literally come at the cost of both our energy resources and climate goals.
If the scaling hypothesis doesn’t pan out, it could turn out that these AI titans don’t just miss the technological target—they might also fail their climate commitments. With power demands growing beyond what local grids can even handle, it’s hard to see a future that balances both AI development and environmental responsibility.
Will the AI Bubble Burst?
Let’s be real—this AI boom is already frothy. Valuations are hitting the stratosphere, companies are raising billions, and the race for superintelligence has all the ingredients of a bubble. What happens if scaling fails to deliver?
If scaling leads to breakthroughs and we nail superintelligence, the returns could be unimaginable. But if companies like Safe Superintelligence don’t pull it off, the massive investment could evaporate. Entire fortunes could go down the drain, triggering a financial bubble reminiscent of the dot-com crash. The higher the hype, the harder the fall—and AI is marching boldly up that hill right now.
Conclusion
All eyes are on the coming months as Safe Superintelligence, Musk’s Grok 3, and OpenAI’s next models take shape. The race toward superintelligence is officially on. Whether it’s mega data centers, distributed systems, or experimental approaches like Strawberry, there’s no stopping the AI machine—literally. Billions of dollars and gigawatts of power will be spent on this bet to scale AI until true intelligence emerges, or doesn’t.
Are we on the brink of the biggest breakthrough in human history? Or are we blindly throwing resources at something we barely understand? Time will tell, but either way, the stakes have never been higher.