Opinion

Data Centers in Space: The Thermodynamics of Hype

The Pitch: Unlimited Solar Power and Free Cooling. The Physics: Vacuum Is the Worst Coolant in the Universe.

Tech leaders from SpaceX to Google are pouring resources into orbital data centers. The vision is seductive: move AI infrastructure to space, tap unlimited solar power, let the cold vacuum handle cooling, and free up land and energy on Earth. There is one problem. Vacuum does not cool things. It insulates them. The same physics that keeps your coffee hot in a thermos makes space one of the worst possible environments to reject waste heat from processors. One company is honest about this. The rest are selling narrative over thermodynamics.

April 30, 2026

The Setup

In January 2026, SpaceX shared plans to launch up to one million satellites to form an orbital data center constellation. That number is not a typo. There are roughly 15,000 satellites in low Earth orbit today. Elon Musk is proposing to multiply that by 67.

He is not alone. Google published a peer-reviewed feasibility study for its "Suncatcher" project in November 2025. Starcloud (formerly Lumen Orbit), a startup backed by NVIDIA's Inception Program, plans to launch orbital data center hardware in 2026 and has announced intentions to mine Bitcoin in space. The European Space Agency's ASCEND program is studying orbital data centers for carbon-neutral computing. China launched 12 satellites for a space computing constellation in May 2025, the first batch of a proposed 2,800-satellite fleet.

The motivation is understandable. AI data centers are consuming extraordinary amounts of energy. The International Energy Agency projects that global data center electricity demand could more than double, from 460 TWh in 2022 to over 1,000 TWh by 2026, roughly the entire electricity consumption of Japan. Communities like Loudoun County, Virginia, home to more than 250 data center facilities, are straining under the load. If you could move that infrastructure to orbit, powered by uninterrupted solar energy, the appeal is obvious.

The question is not whether the appeal is real. It is. The question is whether the physics supports the pitch. And this is where the narrative and the thermodynamics diverge sharply.

The Pitch vs. The Physics

The standard pitch for orbital data centers rests on two claims: unlimited solar power and natural cooling from the cold vacuum of space. The first claim is legitimate. In certain sun-synchronous orbits (specifically the dawn/dusk plane), satellites experience near-constant sunlight, providing reliable solar energy without the intermittency challenges of terrestrial solar installations. This is a real advantage.

The second claim is not just wrong. It is the opposite of reality.

The background temperature of deep space is approximately 3 Kelvin, or roughly -270°C. That sounds like the ultimate cooling system. But temperature and cooling capacity are fundamentally different things. A thermos keeps coffee hot for hours precisely because the vacuum between its walls prevents heat transfer. Space works the same way. The vacuum of orbit is a near-perfect insulator.

On Earth, data centers reject heat through convection: air or liquid absorbs thermal energy from processors and carries it away. Fans push hot air out. Chillers circulate coolant. The atmosphere itself acts as an essentially infinite heat sink. None of this works in space. There is no air. There is no water. There is no medium to carry heat away through convection. The only remaining mechanism is thermal radiation: emitting infrared photons into the void.

Thermal radiation is governed by the Stefan-Boltzmann law, which states that radiated power scales with the fourth power of absolute temperature. This means hotter surfaces radiate heat dramatically more efficiently, but server hardware does not operate at glowing-hot temperatures. At the operating temperatures of computing electronics, radiation is a slow and inefficient process that requires enormous surface area to reject meaningful amounts of heat.
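To make the Stefan-Boltzmann constraint concrete, here is a back-of-envelope sketch of the radiator area needed to reject a hyperscale heat load purely by radiation. The heat load, radiator temperature, and emissivity are illustrative assumptions (not figures from any of the studies discussed here), and the calculation optimistically ignores solar and Earth-shine heating of the radiator:

```python
# Back-of-envelope radiator sizing via the Stefan-Boltzmann law:
# P = epsilon * sigma * A * T^4  =>  A = P / (epsilon * sigma * T^4)

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiator_area_m2(heat_load_w, temp_k, emissivity=0.9):
    """Radiator area needed to reject heat_load_w watts purely by
    radiation to deep space. Ignores solar and Earth-shine absorption,
    so this is an optimistic lower bound."""
    return heat_load_w / (emissivity * SIGMA * temp_k ** 4)

# Illustrative assumptions: a 100 MW hyperscale heat load and a radiator
# running at 330 K (~57 degrees C), warm for an electronics coolant loop.
area = radiator_area_m2(100e6, 330.0)
print(f"Required radiator area: {area:,.0f} m^2")  # on the order of 165,000 m^2
```

Even under these generous assumptions, the answer comes out to more than 160,000 square meters of radiator surface, before accounting for the structure, coolant loops, and attitude control needed to keep it aimed at deep space rather than the Sun.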

The U.S. Government Accountability Office published a Science and Technology Spotlight on space data centers on April 28, 2026 (two days before this article), stating plainly that space-based data centers "generate excess heat, but space does not cool computing hardware efficiently" and calling this "a major engineering challenge." The GAO noted that the solar arrays needed would be "larger than any launched and assembled in space" and that cooling solutions at data center scale "are also unproven."

The Cooling Problem Nobody Wants to Talk About

To understand the scale of this constraint, consider the International Space Station. The ISS uses an ammonia-loop cooling system with eight billboard-sized radiator panels, weighing several tons, to dissipate approximately 70 kilowatts of heat. That is enough to cool a modest residential building.

A single modern AI server equipped with eight NVIDIA H100 GPUs draws 10 to 15 kilowatts. A meaningful AI training cluster (thousands of GPUs) requires megawatts of cooling capacity. A hyperscale data center facility can consume 100 megawatts or more. That is roughly 1,400 times the heat rejection capacity of the entire International Space Station.
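The scale mismatch is simple arithmetic, using the figures quoted above:

```python
# Ratio of a hyperscale facility's heat load to the ISS's
# heat rejection capacity, using the figures quoted in the text.
iss_heat_rejection_w = 70e3  # ~70 kW via the ammonia-loop radiator panels
hyperscale_load_w = 100e6    # ~100 MW facility

ratio = hyperscale_load_w / iss_heat_rejection_w
print(f"ISS-equivalents of cooling capacity needed: {ratio:,.0f}")  # ~1,429
```

In other words, cooling one hyperscale facility in orbit would require the heat-rejection equivalent of roughly fourteen hundred International Space Stations.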

SatNews described this in March 2026 as "The Physics Wall," noting that the vacuum of space acts as a near-perfect insulator and that heat generated by AI chips "cannot be whisked away by fans or liquid convection" but instead "must be radiated away as infrared light, requiring massive, complex thermal management systems."

The engineering implications are severe. Radiators must be oriented to face deep space rather than the Sun or Earth (both of which radiate heat back), adding complexity to spacecraft attitude control. The radiator panels must be physically enormous. Their mass increases launch costs. And the entire thermal architecture must work without the active maintenance that terrestrial data centers rely on daily.

Voyager Space's CEO, who operates computing hardware on the ISS and has firsthand experience with orbital thermal challenges, told CNBC in February 2026 that the cooling problem is not a minor engineering detail. The assessment was direct: all heat dissipation in space must happen via radiation, which requires radiators pointing away from the Sun, and that constraint is fundamental, not incremental. He estimated that viable solutions are not arriving in two or three years.

The thermal management problem also compounds internally. Without air, heat transfer inside the spacecraft enclosure itself is harder. On Earth, server rooms use forced airflow to move heat from processors to the building's cooling infrastructure. In a sealed orbital module, there is no natural convection, and even the internal heat transport from chip to radiator requires engineered solutions (liquid metal loops, heat pipes, phase-change materials) that add weight, cost, and failure modes.

The Launch Cost Fantasy

Even if the thermal problem were solved, the economics of getting data center hardware to orbit remain prohibitive for anything resembling current timelines.

SpaceX's Falcon 9, the world's most-flown orbital rocket with over 400 cumulative launches, charges approximately $74 million per dedicated launch as of early 2026, translating to roughly $3,000 per kilogram to low Earth orbit. Google's November 2025 feasibility study concluded that orbital data centers could become cost-competitive with terrestrial energy costs if launch prices fell to $200 per kilogram. That is a 15x reduction from current commercial pricing.
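The size of that gap falls straight out of the numbers above. A quick sketch, where the 1,000-tonne payload mass is a hypothetical illustration chosen for round numbers, not a figure from either company:

```python
# Launch-cost gap: current Falcon 9 pricing vs Google's break-even target.
current_usd_per_kg = 3_000  # approximate Falcon 9 price to LEO, per the text
target_usd_per_kg = 200     # Google's cost-competitiveness threshold

reduction_needed = current_usd_per_kg / target_usd_per_kg
print(f"Required price reduction: {reduction_needed:.0f}x")  # 15x

# Hypothetical illustration: orbiting 1,000 tonnes of data center hardware.
payload_kg = 1_000_000
print(f"At current pricing: ${current_usd_per_kg * payload_kg / 1e9:.1f}B")
print(f"At target pricing:  ${target_usd_per_kg * payload_kg / 1e9:.1f}B")
```

At today's prices, the launch bill alone for that hypothetical payload runs to $3 billion; at the break-even target it falls to $200 million. The entire business case hinges on closing that 15x gap.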

The path to $200/kg runs through Starship, SpaceX's super heavy-lift vehicle designed for full reusability. The theoretical economics are compelling: if Starship achieves full reuse at high flight rates, SpaceX has projected eventual costs below $100/kg. But Starship's current status reveals the gap between projection and reality.

As of April 2026, Starship has flown 11 times, with 6 successes and 5 failures. It has reached space and near-orbital velocities, but has never completed a commercial orbital mission or deployed a payload to orbit. The V3 variant, which would underpin any serious space infrastructure program, has not yet flown. SpaceX targeted 25 Starship launches in 2025 and completed 5, missing the target by a factor of five.

A detailed analysis of SpaceX's public milestone track record, covering 32 dated commitments from 2006 through early 2026, found that the company delivered roughly 16% of its promises on time, 31% late, and left 53% undelivered as of April 2026. Hardware engineering milestones eventually arrive, but typically 2 to 5 years behind schedule (Falcon Heavy was 5 years late, Crew Dragon 3 years late). Every dated Mars prediction since 2011 has been missed, and every replacement date has also been missed.

None of this means Starship will never work. SpaceX has a demonstrated pattern of eventually delivering on engineering challenges, just on dramatically longer timelines than announced. The relevant question for space data centers is not whether Starship will eventually achieve $200/kg, but when. Google's estimate: approximately 2035, assuming 180 Starship launches per year by then. Betting on Musk's timeline means betting against a 53% non-delivery rate and a mean forecast error exceeding three years.

The Adult in the Room

In November 2025, Google published what is arguably the most honest and rigorous assessment of space-based computing to date. The Suncatcher project feasibility study, authored by Google Research, did something unusual in an industry saturated with hype: it acknowledged the constraints.

Google's researchers tested their Trillium v6e Cloud TPU (the latest-generation AI accelerator) in a 67 MeV proton beam to simulate the radiation environment of low Earth orbit. The results were cautiously encouraging: the high-bandwidth memory subsystems began showing irregularities after a cumulative dose of 2 krad(Si), roughly three times the expected shielded five-year mission dose. That suggests the hardware can survive, at least for medium-duration missions, but raises questions about long-term reliability.

On inter-satellite communication, Google acknowledged that large-scale machine learning workloads require distributing tasks across thousands of accelerators with high-bandwidth, low-latency connections. Delivering performance comparable to terrestrial data centers would require links between satellites supporting tens of terabits per second, achievable only with satellites flying in extremely close formation (kilometers or less). This is an unprecedented orbital engineering challenge with no demonstrated precedent.

Most importantly, Google stated the conclusion clearly: "Our initial analysis shows that the core concepts of space-based ML compute are not precluded by fundamental physics or insurmountable economic barriers. However, significant engineering challenges remain, such as thermal management, high-bandwidth ground communications, and on-orbit system reliability."

That is the key sentence. Not precluded by fundamental physics is a carefully chosen phrase. It does not say feasible. It does not say economical. It says: the laws of nature do not make it strictly impossible. That is a very different statement from "we should build this now."

Google's next step is a learning mission in partnership with Planet Labs, launching two prototype satellites by early 2027. This is the scientific method: small-scale experiment, gather data, evaluate, then decide whether to scale. Contrast that with SpaceX's announcement of one million data center satellites, or Starcloud's plan to mine Bitcoin in space. One approach is engineering. The other is marketing.

The distinction matters for investors. When Google says "maybe by 2035," that is a projection from a company that built the infrastructure behind Search, YouTube, Cloud, and the world's largest AI training clusters. When a startup says "by 2027," that is a fundraising timeline, not an engineering one.

The Sustainability Paradox

One of the most prominent selling points for orbital data centers is environmental sustainability: remove energy-hungry facilities from Earth, power them with solar energy in space, and reduce carbon emissions. The narrative is tidy. The data is not.

Researchers at Saarland University in Germany published a paper titled "Dirty Bits in Low-Earth Orbit" that calculated the full lifecycle emissions of an orbital data center. Their finding: a solar-powered space data center could produce roughly an order of magnitude greater emissions than a terrestrial data center, once you account for rocket launch emissions and the atmospheric reentry of spacecraft components. The majority of those excess emissions come from burning rocket stages and hardware during reentry.

Starcloud, one of the more visible orbital data center startups, estimates that its space-based infrastructure could achieve 10 times lower carbon emissions compared to a land-based data center powered by natural gas. The Saarland researchers reached the opposite conclusion using a more comprehensive emissions accounting that includes the manufacturing, launch, and disposal of orbital hardware.

The discrepancy is telling. It reflects a pattern common in emerging technology narratives: the proponents measure only the operational phase (solar-powered computing in orbit produces zero emissions) while ignoring the full lifecycle (building, launching, maintaining, and de-orbiting the infrastructure produces substantial emissions). This is the same accounting error that plagued early electric vehicle emission claims before lifecycle analysis became standard.

None of this is to say that sustainability concerns about terrestrial data centers are invalid. They are real and growing. But the honest answer may be terrestrial solutions: nuclear power, advanced geothermal, offshore or underwater data centers (Microsoft's Project Natick demonstrated this concept in 2018), or simply building data centers in regions with abundant renewable energy. Moving the problem to orbit does not eliminate the emissions. It may multiply them.

The Latency Tax

Even if the thermal, launch cost, and sustainability challenges were all resolved, orbital data centers would face a constraint that no amount of engineering can eliminate: the speed of light.

Low Earth orbit sits roughly 500 to 2,000 kilometers above the surface. Round-trip latency for ground-to-space communication at these altitudes ranges from approximately 5 to 20 milliseconds, depending on orbit altitude and ground station location. For batch AI training workloads (processing large datasets over hours or days), this latency is manageable. For latency-sensitive applications (real-time inference, interactive cloud services, financial trading, autonomous vehicle decision loops), it is disqualifying.

This means that space data centers, even in their most optimistic form, would serve only a subset of computing workloads. The highest-value, fastest-growing applications in AI (real-time inference, edge computing, interactive agents) require latencies measured in single-digit milliseconds. Orbital infrastructure cannot deliver that.

There is also the bandwidth constraint. Transmitting the volumes of data required for AI training between Earth and orbit, or between satellites in a constellation, requires communications infrastructure that does not exist at the necessary scale. Google's feasibility study noted that inter-satellite links need to support tens of terabits per second, achievable only through advanced optical systems operating at ranges of kilometers, not the thousands of kilometers typical of current satellite constellations.

What Could Make This Work

A fair analysis requires acknowledging the scenarios where orbital data centers could eventually prove viable.

Starship could deliver on its cost targets. SpaceX has a demonstrated pattern of eventually solving hard engineering problems, even if timelines slip dramatically. If Starship achieves full reusability and high flight cadence, launch costs below $200/kg by the mid-2030s are plausible. That would remove the largest economic barrier to orbital infrastructure of any kind.

Thermal engineering could advance faster than expected. Active thermal control systems, including space-rated heat pumps that boost radiator temperatures to increase dissipation efficiency, are expected to mature by 2027. Liquid metal cooling, advanced phase-change materials, and deployable radiator systems could meaningfully improve the heat rejection equation, even if they cannot fully overcome the fundamental vacuum constraint.

Batch AI training is a legitimate use case. Unlike real-time inference, model training can tolerate higher latency and operate asynchronously. A constellation of orbital compute nodes, powered by continuous solar energy, could process training workloads during periods when terrestrial energy demand peaks. This is a narrow but real application.

The energy crisis on Earth is real and growing. If AI electricity demand truly doubles to 1,000+ TWh by 2026 as the IEA projects, and terrestrial power generation cannot keep pace, the relative economics of orbital compute improve by default. The worse things get on the ground, the more attractive space becomes, even with its constraints.

Defense applications provide anchor demand. The U.S. Space Development Agency's Proliferated Warfighter Space Architecture and the Golden Dome program both envision space-based data processing for real-time targeting and surveillance. Government contracts could fund early infrastructure that commercial applications later leverage.

These are not trivial possibilities. The long-term trajectory of technology tends to make the impossible merely expensive, and the expensive eventually affordable. The question, as always, is timeline, and the gap between where the engineering sits today and where the promises suggest it will be tomorrow.

The Bottom Line

Data centers in space are not impossible. Google's researchers said as much: the concept is "not precluded by fundamental physics." That is an important statement, and it deserves respect. But "not precluded by physics" and "viable in the near term" are separated by a decade of unsolved engineering, unproven economics, and the hard constraint of a universe that does not provide free cooling in vacuum.

The pitch is seductive: unlimited solar power, no land constraints, AI infrastructure beyond the reach of terrestrial energy bottlenecks. But the physics is stubborn. Vacuum insulates. Radiators require mass. Mass requires launches. Launches cost money and produce emissions. Every link in the chain is governed by constraints that marketing budgets cannot repeal.

The most credible voice in this space, Google, projects that the economics might work around 2035, contingent on Starship achieving a launch cadence and cost profile that SpaceX has not yet demonstrated. The least credible voices are announcing million-satellite constellations and orbital Bitcoin mining, backed by a track record where 53% of dated promises remain undelivered and the mean forecast error exceeds three years.

For investors, the framework is familiar. We have seen it in quantum computing, where real science was packaged in unreal valuations. We have seen it in eVTOL, where a working aircraft was priced as though the business model were already proven. The pattern repeats: legitimate technology, premature economics, and a narrative that travels to the destination years before the engineering arrives.

At Wealth Engine Pro, the approach is to evaluate what exists, not what is promised. What exists today in orbital computing is a handful of toaster-oven-sized experiments, an unsolved thermal management problem at scale, launch costs 15 times higher than the break-even threshold, and a very good feasibility study from Google that says "check back in 2035." That is not a reason to invest. It is a reason to keep watching, and to be deeply skeptical of anyone who tells you the future is closer than the physics allows.

Evaluate the Data, Not the Demo

Wealth Engine Pro scores 5,500+ stocks across financial health, trend strength, and intrinsic value. Our tools help you separate real progress from promotional timelines. Data over narrative.

This article represents the opinions of the author and is not financial advice. The views expressed are based on publicly available information and publicly reported financial data. Always do your own research before making investment decisions.