
For the past two years, the global semiconductor narrative has been dominated by a single acronym: GPU. As the generative AI boom ignited, hyperscalers and enterprises scrambled to secure NVIDIA hardware, positioning Graphics Processing Units (GPUs) as the undisputed engine of the AI revolution. However, a new report from Morgan Stanley suggests that the industry is on the verge of a significant architectural shift. As Agentic AI matures—moving from simple chatbot interfaces to autonomous task-performing systems—the demand for computing power is poised to expand into a broader ecosystem of hardware, revitalizing the market for CPUs (Central Processing Units) in particular.
According to analysts at Morgan Stanley, this transition represents more than just a marginal change in procurement; it is a fundamental reconfiguration of data centers. While GPUs remain vital for the massive parallel processing required to train Large Language Models (LLMs), the deployment of autonomous "agents" requires a different computational profile—one that heavily relies on the versatile, latency-sensitive capabilities of traditional processors.
The transition from "generative" to "agentic" AI is a shift toward systems that can plan, execute, and iterate on complex workflows with minimal human oversight. These agents require constant, rapid decision-making and interaction with disparate databases, APIs, and real-time environment data.
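The plan–execute–iterate loop described above can be sketched in a few lines of Python. Everything here is illustrative: the function names, the inventory "database," and the hard-coded planner (which a real agent would replace with an LLM call) are assumptions for the sake of the example, not part of any actual framework.

```python
def lookup_inventory(item):
    """Stand-in for the database or API call an agent would make."""
    stock = {"widget": 3, "gadget": 0}
    return stock.get(item, 0)

def plan(goal):
    """Break a goal into ordered steps (a real agent would use an LLM here)."""
    return [("check_stock", item) for item in goal["items"]]

def run_agent(goal):
    """Execute the plan step by step, acting on intermediate results."""
    results = {}
    for action, item in plan(goal):
        if action == "check_stock":
            # Latency-sensitive I/O and branching logic: classic CPU-bound work.
            count = lookup_inventory(item)
            results[item] = "in stock" if count > 0 else "reorder"
    return results

report = run_agent({"items": ["widget", "gadget"]})
print(report)  # {'widget': 'in stock', 'gadget': 'reorder'}
```

The point of the sketch is that almost none of this loop is matrix math: it is control flow, data lookup, and decision-making, which is exactly the workload profile the report associates with CPUs.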
Morgan Stanley projects that this paradigm shift could add up to $60 billion in incremental revenue to the CPU market by 2030, driven by the distinct computational demands that agentic workloads place on data-center hardware.
To visualize how the hardware landscape is diversifying, we have outlined the core differences in workload affinity between the evolving categories of AI deployment.
| Hardware Type | Primary AI Function | Responsibility in Agentic Workflow | Hardware Emphasis |
|---|---|---|---|
| GPU | Model Training & Massive Inference | Parallel processing of massive neural node calculations | High-throughput HBM memory & Tensor cores |
| CPU | Task Orchestration & Data Pre-processing | Managing agent logic, data routing, and API calls | High clock speed & low-latency I/O bandwidth |
| ASIC/NPU | Domain-Specific Acceleration | Power-efficient inference at the edge | High performance-per-watt efficiency |
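The division of labor in the table can be sketched as a toy orchestration loop: a CPU-side coordinator handles parsing, batching, and routing, while an accelerator stub stands in for the heavy numerical work. All names and the scoring function are hypothetical, chosen only to make the split visible.

```python
def preprocess(request):
    """CPU-side work: parse and normalize an incoming request."""
    return [float(x) for x in request["values"]]

def accelerator_infer(batch):
    """Stand-in for a GPU/ASIC kernel; here just a per-row average."""
    return [sum(row) / len(row) for row in batch]

def orchestrate(requests):
    """CPU-side coordinator: batch, dispatch, then route results back."""
    batch = [preprocess(r) for r in requests]      # data pre-processing (CPU)
    scores = accelerator_infer(batch)              # massive parallel math (GPU/ASIC)
    return {r["id"]: s for r, s in zip(requests, scores)}  # routing (CPU)

out = orchestrate([{"id": "a", "values": ["1", "3"]},
                   {"id": "b", "values": ["2", "4"]}])
print(out)  # {'a': 2.0, 'b': 3.0}
```

In a production system the accelerator call would dominate raw FLOPs, but every request still passes through the CPU-side code twice, which is why agentic traffic scales CPU demand alongside accelerator demand.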
The infusion of $60 billion into the CPU sector signifies that the "AI data center" of 2030 will look fundamentally different from the GPU-heavy clusters of 2024. Industry architects are currently redesigning server racks to balance the massive heat dissipation of high-wattage GPUs with the high-bandwidth connectivity and processing agility of sophisticated CPUs.
This evolution is a positive signal for the semiconductor sector at large. By diluting the hyper-concentration of spending on a single hardware category, the market is achieving a more sustainable growth trajectory. As Morgan Stanley analysts pointed out, this diversification reduces the risk profile of the broader technology supply chain, ensuring that compute resources are matched to the specific characteristics of the software running on them.
For observers at Creati.ai, the implications of Morgan Stanley’s research are clear: the AI boom is entering its "implementation phase." While the "training phase" was an era defined by GPU scarcity and capital expenditure hyper-growth, the "agentic phase" will be defined by integration, optimization, and balanced, heterogeneous computing hardware.
Companies historically sidelined by the market's intense focus on GPU vendors now find themselves back in the spotlight. Leading CPU manufacturers and specialized compute providers are expected to play as critical a role in the future of the AGI (Artificial General Intelligence) value chain as the traditional GPU hardware leaders.
As we look toward 2030, the $60 billion in projected CPU growth is not merely a statistical forecast; it is a roadmap for how the industry anticipates the future of productivity. The autonomous agents of tomorrow will rely on a holistic architecture, proving that in the race to build the intelligence of the future, the versatile CPU remains an indispensable piece of the infrastructure puzzle.