Fake Data Centers: When AI Infrastructure Becomes a Real Estate Illusion
- Todd Colpron


The global race to build AI infrastructure has triggered what looks, on the surface, like a data-center supercycle. New projects are announced weekly. Capacity targets grow larger by the month. Megawatts are promised freely and confidently. Yet beneath the headlines, a more uncomfortable reality is emerging: many of these facilities are not real data centers at all.
As Jonathan Ross, CEO of Groq, bluntly put it in a recent interview: “They’re not real data centers… they’re fake data centers.” His remark was not a rhetorical flourish. It was a technical and economic diagnosis of a market that has confused real estate development with mission-critical infrastructure.
Ross’s warning is gaining traction well beyond podcasts. In a recent article by Business Insider Africa, a major Groq investor publicly stated he was “deeply concerned” about the data-center market, citing stress on the system and a growing disconnect between supply claims and operational reality. The concern is not about demand for AI. It is about whether the infrastructure being built can actually support it.
The Misunderstanding at the Heart of the Boom
At the core of the problem is a fundamental misunderstanding of what a data center is—and what it must do in an AI-driven world.
Too many developers, sponsors, and even investors continue to treat data centers primarily as real estate assets: land acquisition, shell construction, marketing, and lease-up narratives. In that framework, success is measured in square footage, headline megawatts, or announced capacity.
But AI compute does not care about any of those metrics. AI workloads—particularly inference workloads—are brutally sensitive to power quality, thermal management, uptime, latency, and reliability. They demand continuous availability, redundant systems, and infrastructure designed from the ground up to handle sustained, high-density compute. A building without hardened power, resilient backup, and properly engineered cooling is not an AI data center, regardless of how it is branded.
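To make that density gap concrete, here is a minimal back-of-envelope sketch. The rack densities and PUE (power usage effectiveness) figures are illustrative assumptions, not measurements from any specific facility:

```python
# Back-of-envelope comparison: legacy colocation vs. AI-grade rack density.
# All figures are illustrative assumptions, not vendor specifications.

def facility_power_mw(racks: int, kw_per_rack: float, pue: float) -> float:
    """Total facility draw in MW: IT load scaled by PUE (power usage effectiveness)."""
    it_load_kw = racks * kw_per_rack
    return it_load_kw * pue / 1000

# A 1,000-rack hall designed for legacy enterprise IT (~8 kW/rack, PUE ~1.6)
legacy = facility_power_mw(racks=1000, kw_per_rack=8, pue=1.6)

# The same hall filled with dense AI compute (~100 kW/rack, PUE ~1.25)
ai = facility_power_mw(racks=1000, kw_per_rack=100, pue=1.25)

print(f"Legacy design: {legacy:.1f} MW")  # ~12.8 MW
print(f"AI design:     {ai:.1f} MW")      # ~125.0 MW
```

Same shell, same rack count, and roughly a tenfold difference in the power and cooling the building must deliver. That order-of-magnitude gap is the difference between a real estate asset and AI infrastructure.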
Ross has been explicit on this point. The problem, as he frames it, is not malicious intent but architectural ignorance. Many facilities are being built with standard, inefficient designs that may work for legacy enterprise IT or basic colocation but are fundamentally misaligned with the economics and physics of AI compute at scale.
Inference Changes Everything
One of the most important—and least appreciated—dimensions of Ross’s critique is his focus on inference economics.
AI training has dominated headlines and capital allocation over the past several years. It is capital-intensive, hardware-heavy, and often margin-rich, with GPUs at the center of the stack. But inference—the continuous, real-time serving of AI models to users and systems—is where long-term demand will concentrate.
Inference is not a high-margin business. It is a high-volume, efficiency-driven one. Power costs matter. Cooling efficiency matters. System simplicity matters. Any infrastructure that cannot deliver inference reliably and cheaply will struggle to remain economically viable, no matter how impressive it looks on paper.
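A rough cost model shows why. The power draw, throughput, and electricity prices below are hypothetical round numbers chosen only to illustrate the sensitivity:

```python
# Rough energy-cost model for serving inference.
# All inputs are assumptions for illustration, not published benchmarks.

def energy_cost_per_million_tokens(accel_watts: float, tokens_per_sec: float,
                                   price_per_kwh: float, pue: float) -> float:
    """Electricity cost (USD) to serve one million tokens on one accelerator."""
    seconds = 1_000_000 / tokens_per_sec
    kwh = accel_watts * pue * seconds / 3_600_000  # watt-seconds -> kWh
    return kwh * price_per_kwh

# Efficient facility: good cooling (PUE 1.2), cheap power ($0.05/kWh)
efficient = energy_cost_per_million_tokens(700, 2000, 0.05, 1.2)

# Inefficient facility: poor cooling (PUE 1.8), expensive power ($0.12/kWh)
inefficient = energy_cost_per_million_tokens(700, 2000, 0.12, 1.8)

print(f"Efficient:   ${efficient:.4f} per 1M tokens")
print(f"Inefficient: ${inefficient:.4f} per 1M tokens")
print(f"Cost ratio:  {inefficient / efficient:.1f}x")  # ~3.6x
```

In this toy scenario, the inefficient facility pays roughly 3.6 times more for electricity per token served. At inference volume, that gap compounds directly into the margin itself.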
This is why Ross has criticized the current wave of speculative development as structurally flawed. The market is overbuilding facilities optimized for the wrong problem, assuming that AI demand alone will paper over architectural inefficiencies. It will not.
The Data Center Problem No One Wants to Price In
Ross often refers to what he calls “the data center problem no one is talking about.” It is not a shortage of ideas or capital. It is the mismatch between existing and planned infrastructure and the real requirements of scalable, energy-efficient AI.
Many projects advertise enormous power targets without secured interconnections, credible timelines, or realistic redundancy planning. Backup generation is treated as optional. Cooling and water requirements are deferred. Hardware availability is assumed rather than contracted. In effect, execution risk is pushed into the future, while valuations are pulled forward.
This behavior made sense in an era of cheap capital and forgiving markets. It does not make sense now. As capital tightens and AI workloads move from experimentation into production, the tolerance for infrastructure failure will drop sharply. Projects that cannot deliver uptime, power stability, and cost efficiency will not fail loudly—they will simply be bypassed.
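The uptime arithmetic illustrates the stakes. Here is a minimal sketch, assuming hypothetical availability figures and independent failure modes:

```python
# Quick availability arithmetic: what backup generation buys in uptime.
# Component availabilities are illustrative assumptions.

HOURS_PER_YEAR = 8766  # 365.25 days * 24 hours

def downtime_hours(availability: float) -> float:
    """Expected hours of downtime per year at a given availability."""
    return (1 - availability) * HOURS_PER_YEAR

# Single utility feed with no backup: assume 99.9% availability
single_feed = 0.999

# Utility feed plus an independent backup generator at 99% availability:
# the site is dark only when both fail at once (assuming independence).
with_backup = 1 - (1 - 0.999) * (1 - 0.99)

print(f"No backup:   {downtime_hours(single_feed):.1f} hours down/year")   # ~8.8 h
print(f"With backup: {downtime_hours(with_backup):.3f} hours down/year")  # ~5 min
```

Under these assumptions, skipping redundant backup does not shave a rounding error off the budget. It turns roughly five minutes of annual downtime into nearly nine hours.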
From Announcements to Accountability
The market is already beginning to self-correct. Operators are becoming more selective. Investors are asking harder questions. Hyperscalers and enterprise buyers are no longer impressed by land holdings or marketing decks. They want proof of execution.
This shift will likely expose a large portion of today’s “AI data center pipeline” as non-viable. Not because AI demand is slowing, but because infrastructure quality matters far more than volume. In this environment, the distinction between real and fake data centers becomes decisive.
Where Eliakim Capital Fits
At Eliakim Capital, this distinction sits at the center of our advisory and infrastructure work. We advise and work alongside data-center suppliers who operate across the full supply cycle and deliver what speculative projects often lack: hardened power systems and real, deployable compute hardware, in one integrated platform.
On the power side, that means turbines, generators, transformers, switchgear, cabling, battery and backup systems, microgrids, and advanced HVAC and cooling solutions engineered for performance, reliability, and rapid deployment. These are not theoretical designs. They are systems built to support continuous AI operation under real-world conditions.
On the compute side, it means access to data-center-grade hardware that is available now, not promised later: GPUs and AI accelerators, HPC servers and racks, AI training platforms, baseboards and SXM systems, FPGA and edge devices, high-speed networking, advanced memory, enterprise storage, and control and management systems. No waitlists. No vaporware. Just infrastructure that can be turned on and used.
By aligning power, compute, and capital discipline from the outset, we help ensure that data centers are built as infrastructure first—not as speculative real estate.
The Quiet End of the Fake Data Center Era
The AI boom is real. The demand is real. But the tolerance for illusion is rapidly disappearing.
AI will not be constrained by ambition or imagination. It will be constrained by megawatts, cooling, uptime, and execution. Facilities that understand this will become foundational assets in the next decade of compute. Those that do not will fade quietly, long before they ever reach full utilization.
As Jonathan Ross’s blunt assessment makes clear, the era of fake data centers is already ending. What comes next will be defined not by announcements, but by infrastructure that actually works.


