As several companies continue to brawl for AI dominance, megacap tech entered the earnings confessional.
The results were yet another affirmation of what CEOs have been telling us: they would rather overspend than get left behind.
While the goalposts keep shifting on what it costs to fund the next industrial revolution, the chorus calling this a bubble keeps growing louder.
Both bears and bulls have conviction on their sides of the story, but the truth lies somewhere in between.
Each technological revolution is born from periods of mass euphoria. Companies will fail. But those that are able to capture lightning in a bottle stand to change the world.
I believe the companies that succeed will all have one thing in common: vertical positioning in the tech stack.
TL;DR: Here are three trends to watch with the AI infrastructure buildout:
- Big Tech’s CapEx arms race is reshaping AI infrastructure: Microsoft, Google, Meta, and Amazon are spending at unprecedented levels to secure GPU supply, expand data centers, and vertically integrate AI capabilities from chips to cloud.
- The “overbuilt” debate remains unresolved: While some warn of an AI infrastructure bubble, current capacity constraints and accelerating usage suggest the market isn’t yet saturated. The next few years will hinge on GPU pricing, contract renegotiations, and real-world AI demand.
- Vertical integration determines the winners: Each megacap tech company has its own strengths, and all of them are building moats through scale, data, and integration.
Cloud Boom and the CapEx Arms Race
One theme rang loud across all these earnings: an unprecedented build-out of AI infrastructure.
Big Tech is engaged in an arms race to stockpile computing power. They are expanding data centres, buying chips, and laying fibre at a pace never before seen. This CapEx explosion is both a response to surging AI demand and a preventative measure to secure future dominance.
Microsoft spent $34.9B on CapEx in a single quarter, roughly 1.5x year-ago levels, split among GPUs, servers, networking, and long-term data centre commitments. It plans to lift AI capacity by about 80% in fiscal 2026 and nearly double its data centre footprint within two years, acknowledging tight capacity through next June.
Alphabet lifted its 2025 capex plan from $85B to $91–93B after strong Q3 cloud demand, nearly double 2024’s spend and about four times recent years, with significant investment in proprietary TPUs to lower unit costs and support their search business, YouTube, and Google Cloud. Google’s cloud backlog jumped nearly $50B in a single quarter.
Meta guided 2025 capex to $70–72B and said 2026 will be notably larger, channelling funds into AI supercomputers and new AI-first campuses while accepting near-term free cash flow pressure to be ready if superintelligence arrives sooner than expected.
Amazon is adding capacity rapidly, noting that AWS's power capacity has doubled since 2022, with elevated 2024–2025 investment and strategic spend tied to Anthropic. The Q3 results showed AWS's resurgence, and Amazon is adding data centre capacity as fast as components can be procured.
Bottom line: Taken together, this will be one of the largest capex cycles in history. Leaders argue that the AI platform shift demands it and that being under-capacity would be costlier than spending now.
Are We Overbuilt?
AI will change the world. But when it comes to the infrastructure buildout, two things can be true at once:
- We are still early on AI adoption, and
- We may be approaching overcapacity in the next three to four years.
Here is the constant tug-of-war that goes on in my head as I try to reason through both sides of the overbuilt debate: unlike the dot-com era, today’s CapEx is largely funded by operating cash flow and fortress balance sheets at the mega caps.
But wait! There are cautions. Meta structured an SPV that moves what should be on-balance-sheet obligations off balance sheet, with terms that pull inventory risk back to Meta. It also issued a $25-billion bond that drew roughly five times that in demand. NVIDIA's proposed $100-billion "investment" in OpenAI is effectively vendor financing that pulls demand forward.
At the same time, every hyperscaler says they are capacity-constrained. Token usage is rising quickly, and we are still early. Roughly $3 trillion of near-term AI infrastructure spend is lifting the broader U.S. economy.
The risk is depreciation. NVIDIA now upgrades its GPU stack roughly every 18 months, compressing useful life. Model economics are improving fast, which could reduce the hardware footprint required per unit of output.
But wait! Jevons paradox suggests that cheaper AI increases consumption and can expand aggregate demand, sustaining infrastructure needs.
Market sentiment is cycling: every four to five months, investors conclude we are overbuilt, then earnings re-anchor the demand story. Today, there are no clear signs of implosion, but this can change gradually, then all at once.
Here’s what I am tracking to try to determine if we are overbuilt:
- Used GPU pricing: A sharp, broad decline signals faster-than-modelled depreciation and pressure on neo-cloud balance sheets.
- Contract renegotiations: As multi-year deals season, watch for price givebacks, minimum-commit relief, or capacity take-downs.
- Power delivery delays: Facilities without timely interconnects stall revenue, freeze follow-on spend, and tighten financing.
- Revenue realization and cost savings: Track model-provider revenue ramps and enterprise cost take-out. OpenAI guided to a $20-billion run-rate, and mega caps are already expanding margins through operating leverage.
- End-user demand: Management teams continue to report undersupply with accelerating usage. If that reverses, the thesis weakens.
Bottom line: On these measures, I don't believe we are overbuilt yet. Keep monitoring all five. While the overbuilt debate rages on, some resilience is being built into business models. This is what I'm focusing on now…
Who’ll Own the Stack
"When the product isn’t yet good enough… You had to do everything in order to do anything. When the way you compete is to make better products, there is a big competitive advantage to being integrated." — Clayton M. Christensen.
When thinking about who wins in the end, I have asked myself the following questions:
- How important is it to build a state-of-the-art model in-house?
- Is the core business being cannibalized or optimized?
- What exact parts of the value chain do you need to own?
With the quarterly numbers digested, our focus turns to the long game. Each of these tech giants is maneuvering to “verticalize” AI; in other words, to own as much of the AI value stack as possible from silicon to software to the end-user interface. The ultimate prize is a durable competitive advantage – a moat – in the age of AI.
So, which companies are best positioned on unit economics, pricing power, data moat, and long-term defensibility?
Google is the closest to a full stack: It designs chips with TPUs, builds frontier models, runs the world’s largest consumer distribution in Google Search, YouTube, and Android, and sells AI through Google Cloud. That end-to-end control lets it co-design hardware and models, deploy at web scale, and learn from billions of interactions.
The data advantage is massive and compounds. Verticalization should support better unit costs and differentiated features that lift ads, search quality, and cloud wins. The risk is a fast shift to agentic workflows that bypass search. Execution in Q3 suggests the model is holding.
Microsoft is less vertically integrated on core tech, but dominant in go-to-market: It rents best-in-class models through OpenAI while embedding AI into Office, Windows, Dynamics, GitHub, and LinkedIn. The ability to price Copilot at a premium across a huge installed base shows commercial power.
Near-term margins carry the cost of NVIDIA hardware and OpenAI usage, but scale and eventual in-house silicon can improve economics. Defensibility sits in the enterprise workflow depth, and the fact that many AI builders run on Azure and GitHub. Consumer search remains a relative weakness.
Amazon’s edge is infrastructure: AWS controls chips like Trainium and Inferentia, custom networking, and a vast data centre footprint. That stack can lower the cost to train and serve models, keeping AWS price competitive while protecting margin.
Partnerships such as the one with Anthropic also broaden choice for customers. Its data is more commerce and operations oriented than social, which suits retail ads, logistics, and Alexa.
The strategy is to be the default platform regardless of model style or deployment pattern. Risks include customer portability and strong rivals, but momentum has improved.
Meta integrates to serve itself: Unique social data and top research talent fuel better ranking and ads. Heavy capex targets AI supercomputers and AI-first campuses. Monetization flows through more effective ad auctions and product engagement rather than subscriptions. The bet is large, but the flywheel is working.
It’ll be fascinating to watch if one approach clearly outruns the others in 2026. But from this quarter’s lens, each of the Big Tech players is seeing some success with their chosen playbook.
The diversity of strategies also means the AI landscape remains competitive and innovative, rather than one company cornering everything (for now).
Strong Convictions. Loosely Held.
— Nick Mersch, CFA
The content of this document is for informational purposes only and is not being provided in the context of an offering of any securities described herein, nor is it a recommendation or solicitation to buy, hold or sell any security. The information is not investment advice, nor is it tailored to the needs or circumstances of any investor. Information contained in this document is not, and under no circumstances is it to be construed as, an offering memorandum, prospectus, advertisement or public offering of securities. No securities commission or similar regulatory authority has reviewed this document, and any representation to the contrary is an offence. Information contained in this document is believed to be accurate and reliable; however, we cannot guarantee that it is complete or current at all times. The information provided is subject to change without notice.
Commissions, trailing commissions, management fees and expenses all may be associated with investment funds. Please read the prospectus before investing. If the securities are purchased or sold on a stock exchange, you may pay more or receive less than the current net asset value. Investment funds are not guaranteed, their values change frequently, and past performance may not be repeated. Certain statements in this document are forward-looking.
Forward-looking statements (“FLS”) are statements that are predictive in nature, depend on or refer to future events or conditions, or that include words such as “may,” “will,” “should,” “could,” “expect,” “anticipate,” “intend,” “plan,” “believe,” “estimate” or other similar expressions. Statements that look forward in time or include anything other than historical information are subject to risks and uncertainties, and actual results, actions or events could differ materially from those set forth in the FLS. FLS are not guarantees of future performance and are, by their nature, based on numerous assumptions. Although the FLS contained in this document are based upon what Purpose Investments and the portfolio manager believe to be reasonable assumptions, Purpose Investments and the portfolio manager cannot assure that actual results will be consistent with these FLS. The reader is cautioned to consider the FLS carefully and not to place undue reliance on the FLS. Unless required by applicable law, it is not undertaken, and specifically disclaimed, that there is any intention or obligation to update or revise FLS, whether as a result of new information, future events or otherwise.
