Memo No. 2 — On Thematic Intelligence

The Largest Infrastructure Buildout
in Human History.
Has Your Advisor Mentioned It?

McKinsey and JPMorgan project cumulative AI infrastructure investment of $5–7 trillion by the end of the decade. The bottlenecks are not in software — they are in energy, chip packaging, memory fabrication, and the physics of moving data. If your portfolio has "AI exposure," the question is which layers you actually own — and why.
CIOffice  ·  April 2026  ·  Independent Wealth Intelligence

At the World Economic Forum in January 2026, Jensen Huang — founder of the world's most valuable semiconductor company — sat beside BlackRock's Larry Fink and described artificial intelligence not as a technology but as a five-layer industrial stack: energy at the base, then chips, infrastructure, models, and applications at the top. Every successful application, he said, pulls on every layer beneath it, all the way down to the power plant that keeps it alive.

This was not a product pitch. It was a description of a capital expenditure cycle without modern precedent. The five largest hyperscalers alone plan to spend approximately $690 billion in 2026 — nearly tripling their 2024 outlay. Add the upstream supply chain — semiconductor fabrication, memory manufacturing, energy infrastructure — and the annual figure is substantially larger. McKinsey and JPMorgan project cumulative AI infrastructure investment of $5–7 trillion by the end of the decade.

This matters to anyone with a portfolio. Not because you should pick stocks in any of these layers, but because this buildout is restructuring the economy beneath your existing holdings in ways that most advisory chains are not equipped to see — let alone position for.

"AI is infrastructure. Every country should treat it like electricity or roads."
— Jensen Huang, WEF Davos, January 2026
Exhibit 1 — The Five-Layer Stack
Five layers. Five bottlenecks. One investment thesis.
Every AI application depends on every layer beneath it. Each layer has different economics, different physical constraints, and different risk — and different capital flowing through it.
The AI Infrastructure Stack — With Active Constraints
Layer 5 · Applications · Revenue lags investment
Where economic value is ultimately created — drug discovery, robotics, autonomous systems, software development. But revenue from AI applications has not yet caught up with the infrastructure being built to support them. The application layer must eventually justify every layer beneath it.
Layer 4 · Models · Commoditising rapidly
Free, open-source AI models now perform almost as well as the most expensive proprietary ones. The cost of using AI as a service — what companies charge per query — has dropped roughly 97% in two years. Building a great model is no longer enough to guarantee pricing power.
Layer 3 · Infrastructure · Copper hitting physical limits
Data centres, cooling systems, and the networking that connects thousands of processors into a single machine. The critical shift: data currently travels through copper wires, but copper can no longer handle the speed and density required. The industry is replacing it with laser light through glass fibre — a transition that is capital-intensive, technically complex, and invisible to most advisory chains.
Layer 2 · Chips & Compute · Packaging & memory constrained
The market focuses on the GPU designer. But the actual constraint on AI chip production has shifted to what happens after the chip is designed: the advanced packaging that connects logic to memory, and the specialised high-bandwidth memory (HBM) itself. Both are sold out through 2026. Memory shortages may persist through 2027.
Layer 1 · Energy · Binding constraint
Intelligence generated in real time requires power generated in real time. A single hyperscale AI facility can require five gigawatts of power — more than the entire state of New Hampshire. The electricity grid cannot deliver fast enough. Hyperscalers are contracting directly with nuclear plants, restarting decommissioned reactors, and building their own power generation on-site.
2026 Capital Expenditure — Across the Stack
Hyperscalers (Layers 2–3: buying chips, building data centres)
Amazon · $200B
Alphabet · $175–185B
Microsoft · $120B+
Meta · $115–135B
Oracle · $50B
Hyperscaler Subtotal · ~$690B
~75% directed at AI infrastructure · Up from $256B in 2024
2026 Annual — Upstream Supply Chain (Layers 1–2)
Semiconductor fabs (TSMC, Samsung, SK Hynix, et al.) · ~$200B
Energy, grid & power infrastructure · $100B+
Committed Pipeline — Beyond the Hyperscalers
Stargate (SoftBank / OpenAI / Oracle), through 2029 · $500B
Gulf sovereign buildouts (Saudi, UAE, Qatar) · $100B+
EU AI Continent Action Plan · €200B
xAI (Musk), 2026 · $30B+
Cumulative AI Infrastructure Through 2030 (McKinsey / JPMorgan) · $5–7T
Company guidance Q4'25/Q1'26 · Goldman Sachs (2025) · Semiconductor Intelligence (2026) · McKinsey · JPMorgan · Morgan Stanley (2026)
Layer 1 — Energy: The Binding Constraint

Every AI computation requires electricity — not stored, not batched, but delivered in real time. US data centre power demand currently sits below 15 gigawatts. Lawrence Berkeley National Laboratory projects annual US data centre electricity consumption reaching 325–580 terawatt-hours by 2028, up from 176 TWh in 2023. Gartner projects global data centre electricity consumption doubling to 980 TWh by 2030.
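Those projections can be sanity-checked with back-of-envelope arithmetic. A minimal sketch, using only the LBNL figures quoted above (the 2023 baseline and the 2028 range); the compounding assumption is ours, for illustration:

```python
# Implied annual growth rates of US data-centre electricity consumption,
# derived from the LBNL figures quoted in the memo. All values in TWh/year.

base_2023 = 176                  # LBNL estimate, US data centres, 2023
low_2028, high_2028 = 325, 580   # LBNL projection range for 2028

years = 2028 - 2023
cagr_low = (low_2028 / base_2023) ** (1 / years) - 1
cagr_high = (high_2028 / base_2023) ** (1 / years) - 1

# Prints: Implied US growth rate, 2023-2028: 13% to 27% per year
print(f"Implied US growth rate, 2023-2028: {cagr_low:.0%} to {cagr_high:.0%} per year")
```

Even the low end of the range implies sustained double-digit annual growth in electricity demand from a single industry, which is the arithmetic behind the grid-connection queues described below.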

The grid cannot keep pace. In Virginia — home to the largest concentration of data centres in the world — Dominion Energy proposed its first base-rate electricity increase since 1992, driven directly by data centre load. Hyperscalers are not waiting: they are signing direct contracts with nuclear plants, restarting decommissioned reactors, and building behind-the-meter gas generation because permitting new grid connections takes three or more years. Energy is not a supporting player in the AI story. It is the foundation — the layer that determines how much intelligence the system can produce.

Layer 2 — Chips: The Bottleneck Has Moved

The dominant GPU designer captures approximately 90% of AI accelerator spend. That is a remarkable franchise. But the constraint on AI chip production is no longer chip design — it is what happens after the design is finished.

To build a working AI chip, you need three things: the logic die (the chip itself), high-bandwidth memory (HBM — a specialised memory that stacks vertically and feeds data to the processor at extraordinary speed), and advanced packaging (a process called CoWoS that bonds these components together). The packaging and memory steps are where production bottlenecks now sit. Four companies consume roughly 90% of global advanced packaging capacity and HBM supply, while using only about 12% of advanced chip fabrication capacity. The entire AI hardware buildout depends on a sliver of global manufacturing — which is why the constraints are structural, not cyclical, and unlikely to resolve quickly.

HBM is particularly constrained: producing a gigabyte of high-bandwidth memory requires three to four times the silicon wafer area of standard memory. Every wafer allocated to HBM is a wafer not available for conventional chips — which is why memory prices are rising across the entire electronics industry, from servers to smartphones. This is a supply chain effect that extends far beyond AI, and it is one that very few thematic products account for.

Layer 3 — Infrastructure: From Copper to Light

An AI data centre is not a server farm. It is a factory for producing intelligence — requiring liquid cooling at industrial scale, power delivery systems designed for constant load, and networking architecture that connects tens of thousands of processors into a single machine.

Here is where a critical and largely invisible transition is underway. Inside these facilities, data currently moves through copper wires. But at the speeds now required — 224 gigabits per second per lane — copper loses signal integrity within a single metre. The wires must become thicker and shorter, and even then, they consume enormous amounts of energy: interconnects alone now account for nearly 30% of total data centre power consumption.

The industry's solution is to replace copper with light. Silicon photonics — embedding laser-based data transmission directly into processor packages — allows data to travel further, faster, and with dramatically less energy loss. In March 2026, one company committed $4 billion to the photonics supply chain in a single announcement. An industry alliance of chip designers, hyperscalers, and networking companies has formed specifically to develop optical standards for AI clusters. This is not a future technology. It is being deployed now, at scale, and it represents one of the most capital-intensive transitions in the stack.

Most wealth advisors have never heard of silicon photonics. The companies enabling this transition — in optical transceivers, laser fabrication, and co-packaged optics — are not household names. They are also not in most thematic AI funds.

Exhibit 2 — Five Questions for Your Advisor
One question per layer. Most advisors can't answer all five.
The layers with the hardest physical constraints — energy, packaging, photonics — are where supply scarcity creates pricing power. The layers with the lowest barriers — models and applications — are where competition destroys it. A thematic "AI fund" concentrated in familiar software names may be positioned in exactly the wrong part of the stack.
The Five-Layer Diagnostic
L1 — Energy
What is our exposure to the power generation and grid infrastructure that the AI buildout physically requires?
If blank — they haven't thought about the foundation.
L2 — Chips & Compute
Beyond the dominant GPU name, do we own the supply chain — the memory, the advanced packaging, the chip fabrication — or just the headline?
If they name one company — they're seeing the surface, not the structure.
L3 — Infrastructure
Who builds, cools, powers, and connects these facilities? Can they explain the transition from copper to optical interconnects — and what that means for our positioning?
If they can't explain what's replacing copper inside these facilities — they're not seeing the infrastructure layer.
L4 — Models
Open-source AI models are now nearly free and performing at frontier levels. If our "AI bet" depends on a model provider maintaining pricing power, what happens as costs approach zero?
If they haven't heard of the open-source price collapse — they're two years behind.
L5 — Applications
The five companies spending $690 billion are not yet earning it back from AI. If our "AI exposure" is the same names already at the top of the index — what exactly are we paying a thematic premium for?
The companies with pricing power today are the suppliers — in energy, memory, and photonics. The hyperscalers are the customers.
The Capital Intensity Problem
Combined hyperscaler capex now exceeds combined free cash flow. AI assets depreciate at roughly 20% per year — meaning the top five spenders face annual depreciation charges approaching $400 billion, more than their combined 2025 profits. Capital intensity has reached 45–57% of revenue, levels without historical precedent in technology. None of this means the buildout is wrong. It means the application layer must eventually generate returns that justify the infrastructure beneath it. That justification is a matter of conviction, not certainty.
Epoch AI (2026) · TSMC CEO Q3'25 earnings call · IEEE Spectrum (2025) · Stanford HAI AI Index (2025) · Sequoia Capital (2025) · Futurum Group (2026)
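The depreciation claim above reduces to one line of arithmetic. A sketch using only the memo's own figures (the ~20% rate and the ~$400 billion annual charge); the implied asset base is a derived, approximate number, not a reported one:

```python
# Rough check of the capital-intensity arithmetic. A 20%/yr depreciation
# rate corresponds to roughly a five-year useful life for AI hardware.

depreciation_rate = 0.20     # per year, as quoted in the memo
annual_charge_bn = 400       # approximate combined charge, top five spenders, $bn

# Under straight-line depreciation, charge = rate * asset base,
# so the implied depreciable asset base is charge / rate.
implied_asset_base_bn = annual_charge_bn / depreciation_rate

# Prints: Implied depreciable AI asset base: ~$2 trillion
print(f"Implied depreciable AI asset base: ~${implied_asset_base_bn / 1000:.0f} trillion")
```

A $400 billion annual charge at 20% per year implies roughly $2 trillion of installed assets, which is consistent with only a few years of spending at the ~$690 billion 2026 run-rate.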
Layer 4 — Models: The Price of Free

In January 2025, a Chinese research lab released an open-source AI model that matched frontier performance at a reported training cost of $5.6 million — a fraction of what Western labs spend on comparable systems. Markets erased $600 billion in technology market capitalisation in a single trading session.

Since then, the gap between free, open-source models and the most expensive proprietary ones has narrowed to under 2% on key benchmarks. The cost of accessing AI as a service — what the industry calls "API pricing," essentially the per-query fee that companies like OpenAI charge — has fallen by approximately 97% in two years. A query that cost $3 per million tokens two years ago now costs under ten cents from the most competitive providers.
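The quoted decline is easy to verify from the two price points above. A minimal sketch; the $3.00 and $0.10 figures are the memo's, and the comments' dates are approximate:

```python
# The per-million-token price decline implied by the memo's figures.

old_price = 3.00   # USD per million tokens, roughly two years ago
new_price = 0.10   # USD per million tokens, most competitive providers today

decline = 1 - new_price / old_price
print(f"Price decline: {decline:.1%}")   # 96.7%
```

A fall from $3.00 to $0.10 is a 96.7% decline, which is the "approximately 97%" figure used throughout this memo.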

This is commoditisation at a speed the technology industry has rarely seen. The implication for your portfolio: if your AI exposure depends on a company maintaining pricing power at the model layer, the structural economics are working against it. The value is migrating — away from the models themselves and toward the infrastructure that runs them and the applications that deploy them.

Layer 5 — Applications: Customers, Not Winners

Drug discovery platforms. Industrial robotics. Autonomous vehicles. Legal and financial copilots. Software that writes software. The application layer is where economic value will ultimately be created — and it is where the $690 billion in annual infrastructure investment must eventually find its return.

It has not yet. Several hyperscalers now spend more on AI infrastructure than they generate in free cash flow. They are funding the gap with debt issuance — transforming historically cash-funded businesses into leveraged ones for the first time. The bet is that AI-powered services will produce trillions in revenue over the next decade. That may prove correct. But it is a forward-looking bet, not a settled fact.

Here is the distinction that most thematic positioning misses: the five companies spending $690 billion are not the AI suppliers. They are the AI customers. They must buy the energy, the memory, the chips, the packaging, and the photonics from companies that most advisors have never heard of — and those suppliers are sold out, raising prices, and generating real profits today. Not in a forward model. Now.

Buying the five largest hyperscalers as "AI exposure" is like buying JPMorgan in 2015 as a "fintech play" — because it was the largest bank and the safest-sounding name adjacent to the theme. But JPMorgan was not fintech. It was the incumbent that fintech was aimed at. Owning it gave you exposure to the company that had to respond to the disruption, not the company creating it. The same structural question applies here: are you owning the theme, or are you owning the incumbents who are spending to survive it?

THE CIOffice PROPOSITION
The Intelligence Layer. Architecture Alpha.
This memo does not recommend buying or selling anything. It demonstrates a structural analysis that sits above any individual advisor, fund, or product — and that most advisory chains are not built to deliver.
Consider a concrete example. Your bank puts you into a thematic AI fund. It holds the same ten mega-cap names that already dominate your core portfolio. Nobody noticed, because nobody consolidated. That is not just a cost problem. That is a performance problem. You are paying for diversification you do not have — and you are absent from entire layers of the buildout where real scarcity creates real pricing power.
CIOffice does not replace your advisors. We do not pick stocks. We provide the intelligence layer — the consolidated architectural view that makes every advisor across your structure more effective. We don't just help you stop losing. We help you win.
Architecture Alpha — the performance improvement that comes from better instructions to better-positioned advisors, working from a consolidated view of your entire capital structure. Not stock-picking alpha. The return that only becomes visible when you finally have the complete picture.
About This Memo
This memo is provided for educational and informational purposes as a demonstration of CIOffice's thematic research perspective. It does not constitute investment advice, a recommendation, or an offer to buy or sell any security. CIOffice provides independent governance and capital architecture — all investment execution should be conducted through your regulated broker or bank.