
OpenAI has signaled a significant shift in its long-term financial and operational strategy, informing investors of a revised target to spend approximately $600 billion on compute infrastructure by 2030. This figure represents a sharp reduction from the eye-watering $1.4 trillion projection previously floated by CEO Sam Altman. The recalibration comes as the company finalizes a historic funding round expected to exceed $100 billion and outlines an ambitious path to generating $280 billion in annual revenue within the same timeframe.
This strategic pivot suggests a move from unbridled, "growth-at-any-cost" expansion toward a more grounded, albeit still massive, capital deployment plan. As the artificial intelligence sector matures, OpenAI is positioning itself to balance the exorbitant costs of training frontier models with a clear roadmap for monetization and profitability, likely in preparation for a highly anticipated initial public offering (IPO).
The decision to cap compute spending at $600 billion through 2030 marks a pivotal moment in the AI arms race. Previously, Altman had engaged in talks regarding infrastructure investments that reached into the trillions, sparking debates about the sustainability of such capital expenditures. The revised $600 billion target, while still dwarfing the capex of most sovereign nations, indicates a focus on efficiency gains and a more realistic assessment of hardware scaling laws.
Sources close to the matter indicate that the reduction is not a retreat from AGI (Artificial General Intelligence) ambitions but rather a reflection of improved model architectures and hardware efficiencies. The spending plan is now explicitly designed to align capital outlays with expected revenue growth, ensuring that the company’s burn rate does not outpace its ability to monetize its technology.
A key driver behind this recalibration is the changing economics of AI operation. Reports indicate that OpenAI’s inference costs—the computing power required to run models rather than train them—quadrupled in 2025. This surge in operational overhead caused adjusted gross margins to compress from 40% to roughly 33%. By tempering the top-line infrastructure spend, OpenAI appears to be acknowledging the need to optimize current architectures before committing to the next order of magnitude in hardware spending.
To justify a $600 billion infrastructure buildout, OpenAI has presented investors with a staggering revenue projection: $280 billion annually by 2030. For context, the company reported approximately $13.1 billion in revenue for 2025, beating its internal target of $10 billion.
The projected growth relies on a dual-engine strategy, with revenue expected to be split nearly evenly between the company's two core segments: consumer and enterprise.
Achieving this target would require OpenAI to sustain a compound annual growth rate (CAGR) that few companies in history have managed. However, the annualized revenue run rate, which reportedly topped $20 billion entering 2026, suggests the momentum is currently in its favor.
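As a back-of-the-envelope check, the growth rate implied by the article's own figures (roughly $13.1 billion in 2025 to a $280 billion target in 2030, a five-year span) can be computed directly:

```python
# Implied compound annual growth rate (CAGR) from the reported figures.
revenue_2025 = 13.1   # USD billions, reported 2025 revenue
revenue_2030 = 280.0  # USD billions, 2030 target
years = 5             # 2025 -> 2030

# CAGR formula: (ending / starting) ** (1 / years) - 1
cagr = (revenue_2030 / revenue_2025) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # Implied CAGR: 84.5%
```

An annual growth rate of roughly 85%, sustained for five consecutive years at this revenue scale, underscores why the target is described as one few companies have ever managed.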
The following table contrasts OpenAI's previous trajectory with its current targets for the end of the decade.
| Metric | Previous Estimates / 2025 Actuals | 2030 Target |
|---|---|---|
| Compute Spending | $1.4 Trillion (Previous Estimate) | $600 Billion |
| Annual Revenue | $13.1 Billion (2025 Actual) | $280 Billion |
| Gross Margins | ~33% (2025 Actual) | Targeting Expansion |
| Revenue Split | Majority Consumer (Current) | 50% Consumer / 50% Enterprise |
Concurrent with these revised forecasts, OpenAI is in the final stages of closing a funding round that could shatter private market records. The round is expected to raise over $100 billion, potentially valuing the company at more than $850 billion post-money. This influx of capital is critical for funding the $600 billion compute roadmap without relying solely on operating cash flow.
The composition of this funding round highlights the deep entanglement between OpenAI and the broader AI hardware ecosystem. Strategic investors are reportedly contributing the lion's share of the capital, cementing long-term partnerships essential for infrastructure access.
The funding round is not merely about cash; it is about securing the supply chain. The following investors are reportedly central to the deal:
| Investor | Estimated Commitment | Strategic Alignment |
|---|---|---|
| Nvidia | ~$30 Billion | Ensuring priority access to next-gen GPUs (Blackwell and Rubin architectures). |
| SoftBank | ~$30 Billion | Masayoshi Son's continued aggressive bet on the AI singularity. |
| Amazon | Up to $50 Billion | Potential expansion of compute on AWS and usage of Trainium chips to diversify hardware. |
| Microsoft | Participation Confirmed | Maintaining the core "Stargate" partnership and Azure integration. |
Note: Specific investment figures are based on reported discussions and may be subject to final adjustments.
The revised $600 billion figure inevitably intersects with the "Stargate" supercomputer project, a joint initiative with Microsoft. While the original scope of Stargate fueled the trillion-dollar rumors, the current budget suggests a more phased approach to building these massive data centers.
The involvement of Amazon in the funding round adds an interesting dynamic. OpenAI has historically been tethered to Microsoft Azure, and diversifying into Amazon's ecosystem (possibly via Trainium chips) could be a tactic to reduce reliance on Nvidia's pricing power and Microsoft's cloud dominance. This "infrastructure agnosticism" may be the key to keeping the $600 billion budget from ballooning back into the trillions.
This strategic reset is widely interpreted as the groundwork for an IPO, potentially as early as late 2026. Public market investors demand a credible path to profitability, not just infinite scaling. By cutting the capital expenditure forecast by nearly 60% while maintaining aggressive revenue goals, Sam Altman is presenting a business model that—while expensive—is mathematically viable.
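The "nearly 60%" figure follows directly from the two headline numbers cited earlier in the article:

```python
# Size of the capital expenditure cut implied by the reported figures.
previous_capex = 1_400  # USD billions ($1.4 trillion previous estimate)
revised_capex = 600     # USD billions (revised 2030 target)

reduction = 1 - revised_capex / previous_capex
print(f"Capex forecast reduction: {reduction:.0%}")  # Capex forecast reduction: 57%
```

A 57% cut rounds to the "nearly 60%" reduction described above.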
For the broader AI industry, this serves as a signal that the "experimental" phase of the generative AI boom is ending. The focus has shifted squarely to unit economics, margin preservation, and sustainable scaling. If OpenAI can execute on its $280 billion revenue promise, it will not only justify its own valuation but potentially validate the entire generative AI economy. Conversely, failure to contain its surging inference costs could leave the company with a $600 billion infrastructure bill and a difficult path to solvency.