Utility permits, EPA filings, and sustainability reports reveal more than most people realize. Select a real facility: its power figures are pre-researched from public records, never entered manually.
We ran the numbers on Google's The Dalles, Oregon facility, one of its oldest and most documented campuses, with a published PUE of 1.09 and ~$1.8B in disclosed investment. Using the formula above, our model estimated facility power. We then independently modeled the same campus in dc-simulator-omega.vercel.app, a third-party data center infrastructure simulator, matching the rack count and density. Here's what happened.
Open dc-simulator-omega.vercel.app → click Hyperscale Cloud preset → set 8 halls at 417 racks each → set rack density to 30 kW/rack in Rack Parameters → check Facility Power in the Inspector panel. You should land at ~122 MW, matching our model.
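The walkthrough's rack math can be sanity-checked directly. The hall, rack, and density figures below come from the steps above; the simulator's internal overhead model is not published, so only the raw IT load is computed and the implied overhead factor is noted as an inference.

```python
# Sanity-check the simulator walkthrough's rack arithmetic.
halls = 8
racks_per_hall = 417   # Hyperscale Cloud preset, per the steps above
kw_per_rack = 30       # rack density set in Rack Parameters

it_load_kw = halls * racks_per_hall * kw_per_rack
print(f"IT load: {it_load_kw / 1000:.2f} MW")  # 100.08 MW

# The simulator reports ~122 MW *facility* power, which implies an
# effective overhead factor of ~1.22 (122 / 100.08). That factor is an
# inference about the simulator's internal model, not a published figure.
```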
| Provider | Type | PUE | Util | Intel | AMD | ARM | Wtd TDP | PUE Source |
|---|---|---|---|---|---|---|---|---|
| Google | CPU-Mix | 1.09 | 85% | 25% | 15% | 60% | 168W | Google 2024 Environmental Report |
| Meta | CPU-Mix | 1.08 | 85% | 40% | 35% | 25% | 251W | Meta 2024 Environmental Data Report |
| AWS | CPU-Mix | 1.15 | 85% | 30% | 15% | 55% | 179W | AWS Sustainability Report 2024 |
| Microsoft Azure | CPU-Mix | 1.16 | 85% | 35% | 30% | 35% | 228W | Microsoft CSR 2024 |
| CoreWeave | GPU-Centric | 1.35 (est.) | 77% | 50% | 40% | 10% | 285W | Not disclosed, modeled as mid-colo |
| Nebius | GPU-Centric | 1.10 | 77% | 50% | 40% | 10% | 285W | Nebius SEC 6-K FY2024, Finland |
| Scaleway | CPU-Mix | 1.30 (est.) | 68% | 55% | 35% | 10% | 282W | 1.37 fleet avg / 1.25 AI cluster 2024 |
| Equinix (Colo) | CPU-Heavy | 1.45 (est.) | 65% | 55% | 30% | 15% | 258W | Not disclosed, modeled as colo average |
| Digital Realty (Colo) | CPU-Heavy | 1.48 (est.) | 62% | 58% | 28% | 14% | 265W | Not disclosed, modeled as colo average |
| Oracle Cloud | CPU-Mix | 1.40 (est.) | 70% | 60% | 25% | 15% | 262W | Not disclosed, modeled from facility type |
| Applied Digital (AI) | GPU-Centric | 1.30 (est.) | 80% | 50% | 40% | 10% | 285W | Not disclosed, GPU-centric AI cloud |
| AT&T / Verizon (Telco) | CPU-Heavy | 1.45 (est.) | 57% | 65% | 25% | 10% | 283W | Verizon best site 1.28 (2017) |
| Reliance Jio (India) | GPU-Centric | 1.30 (est.) | 75% | 45% | 35% | 20% | 264W | Nvidia partnership announced 2025 |
| OVHcloud (EU) | CPU-Mix | 1.35 (est.) | 65% | 55% | 35% | 10% | 281W | Water cooling focus, est. from reports |
| Hetzner (EU) | CPU-Mix | 1.30 (est.) | 70% | 60% | 30% | 10% | 276W | Green energy, est. from sustainability page |
| Enterprise / Colo | CPU-Heavy | 1.56 | 50% | 60% | 30% | 10% | 282W | Uptime Institute 2024 global average |
Each provider's forecast uses its own reported or guided CapEx from SEC filings and earnings calls. GPU share of server spend is sourced from Dell'Oro Group and Goldman Sachs. CPU budget is the non-GPU server spend remaining after facility, networking, and memory allocations are removed.
Sources: Dell'Oro Group Q3-Q4 2024 (40% accelerated share); Goldman Sachs 2026 estimate ($180B GPU out of $450B AI infra); CreditSights (75% of 2026 hyperscaler CapEx is AI).
| Year | Own CapEx ($B) | DC IT CapEx ($B) | GPU Share | CPU Budget ($B) | New CPU Sockets | Retired (20%) | Cumulative Fleet | Notes |
|---|---|---|---|---|---|---|---|---|
New CPU sockets = CPU Budget / blended CPU cost. Cumulative fleet adds new deployments and retires 20% annually on a 5-year cycle.
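The fleet arithmetic above reduces to a simple recurrence: each year's budget buys new sockets, and 20% of the installed base retires. The CapEx figures and the $6,000 blended CPU cost in the example are illustrative placeholders, not values from any filing; only the budget-to-sockets division and the 20% retirement rule come from the text.

```python
# Sketch of the fleet recurrence: new sockets enter from the CPU budget,
# 20% of the installed base retires each year (a 5-year cycle).
def project_fleet(cpu_budgets_usd, blended_cpu_cost, start_fleet=0, retire_rate=0.20):
    fleet = start_fleet
    rows = []
    for year, budget in enumerate(cpu_budgets_usd, start=1):
        new_sockets = budget / blended_cpu_cost   # New CPU sockets = budget / blended cost
        fleet = fleet * (1 - retire_rate) + new_sockets
        rows.append((year, int(new_sockets), int(fleet)))
    return rows

# Illustrative numbers only -- the budgets and $6,000 cost are hypothetical.
for year, new, total in project_fleet([3e9, 3.5e9, 4e9], blended_cpu_cost=6000):
    print(year, new, total)
```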
Data centers do not publish server counts. But they leave public traces in utility permits, EPA generator filings, sustainability reports, and analyst coverage. This model sources power figures from those records. Power is never entered manually. Three variables then convert sourced power into an estimated CPU count.
PUE values come directly from official sustainability reports and SEC filings; changing them would contradict primary source data. Utilization rates come from the Lawrence Berkeley National Lab 2024 report and Uptime Institute 2024 Global Survey. CPU mix ratios come from ARM Holdings SEC filings and Dell'Oro Group analyst reports. All values are sourced, not estimated, where public data exists. Hover over any assumption on the model page to see the exact source and reasoning.
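One plausible reading of how the three variables combine, as a sketch: facility power is reduced to IT power by PUE, scaled by utilization, then divided by weighted TDP. This treats all utilized IT power as CPU draw, which overstates sockets if the formula referenced earlier allocates only a CPU share; the 122 MW input is the simulator-matched estimate for The Dalles, and the other inputs are the Google row's figures.

```python
# Convert sourced facility power into an estimated CPU socket count.
# Assumed pipeline: sockets = (facility_power / PUE) * utilization / weighted_TDP
def estimate_cpus(facility_power_w, pue, utilization, weighted_tdp_w):
    it_power_w = facility_power_w / pue       # strip cooling and power overhead
    cpu_power_w = it_power_w * utilization    # average draw under reported utilization
    return cpu_power_w / weighted_tdp_w

# The Dalles: ~122 MW facility power, PUE 1.09, 85% utilization, 168 W weighted TDP.
sockets = estimate_cpus(122e6, 1.09, 0.85, 168)
print(f"~{sockets:,.0f} CPU sockets")
```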
CPU-centric providers (Google, AWS, Meta, Azure) use CPUs as the primary compute unit. GPU-centric providers (CoreWeave, Nebius, Applied Digital, Reliance Jio) use GPUs for AI; CPUs exist only as host controllers at roughly 1 CPU per 2 GPU sockets. For GPU-centric facilities, the model applies an 85% GPU IT power correction before estimating host CPUs.
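For GPU-centric facilities, the pipeline gains one step, sketched below. The 85% GPU share of IT power and the 1 host CPU per 2 GPU sockets come from the text above; the 700 W GPU TDP is an illustrative assumption (roughly an H100-class part), not a sourced figure.

```python
# Host-CPU estimate for a GPU-centric facility.
def estimate_host_cpus(it_power_w, gpu_share=0.85, gpu_tdp_w=700):
    gpu_power_w = it_power_w * gpu_share   # the 85% GPU IT power correction
    gpu_sockets = gpu_power_w / gpu_tdp_w  # assumed 700 W per GPU (hypothetical)
    return gpu_sockets / 2                 # ~1 host CPU per 2 GPU sockets

# Illustrative: a 50 MW IT load at a GPU-centric provider.
print(f"~{estimate_host_cpus(50e6):,.0f} host CPUs")
```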
Minimum assumes all CPUs run at 500W TDP (Intel Xeon Granite Rapids 6980P). Maximum assumes 75W (all ARM). Point estimate uses provider-specific weighted TDP. A plus or minus 5% error target is achievable when power data comes from a utility permit rather than a press release.
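The bounding logic can be made explicit as a sketch. The 500 W and 75 W endpoints come from the text above; the utilized-power input and the division by weighted TDP for the point estimate are assumptions about how the upstream pipeline feeds this step.

```python
# Bracket the socket estimate between the two TDP extremes.
def socket_bounds(utilized_cpu_power_w, weighted_tdp_w):
    lo = utilized_cpu_power_w / 500           # fewest sockets: all 500 W Granite Rapids 6980P
    hi = utilized_cpu_power_w / 75            # most sockets: all 75 W ARM parts
    point = utilized_cpu_power_w / weighted_tdp_w  # provider-specific weighted TDP
    return lo, point, hi

# Illustrative: ~95 MW of utilized CPU power at Google's 168 W weighted TDP.
lo, point, hi = socket_bounds(95e6, 168)
print(f"{lo:,.0f} / {point:,.0f} / {hi:,.0f} sockets (min / point / max)")
```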