
Posted by Nicholas Mersch on Jan 13th, 2026

Build Once, Sell Forever? Not Anymore

For the better part of two decades, the magic trick in technology was simple: build software once, then sell it indefinitely. That led to high gross margins, near-zero marginal distribution costs, recurring subscriptions, and operating leverage that looked like a cheat code. It’s the business model that helped create the most valuable companies in the world.

AI is changing that model.

The more the industry pushes from chatbots to reasoning, and from reasoning to agents, the more “intelligence” starts to look metered. Every additional unit of capability requires additional inference. Every extra second the model “thinks” requires more compute, more networking, more memory bandwidth, more power, and more cooling. This isn’t just software distribution at scale; it’s a utility. The customer is not only buying a product, but also consuming a resource.

That framing matters because it changes where the economics accrue. Utilities can be fantastic businesses, but they tend to look nothing like classic SaaS. They are capital-intensive, sensitive to financing conditions, and intensely competitive in the middle of the stack. And when capacity is overbuilt, prices fall, and narratives shift.

With that setup, the AI infrastructure boom starts to rhyme with some of the most important capex cycles in modern history. So let’s take a look at historic examples to see what we can learn.

The AI Infrastructure Boom: Lessons from Railroads, Fibre, Shale, and Airlines

The buildout of AI infrastructure, from massive GPU data centres to advanced networking, has reached a scale that demands historical comparisons. For 2026, forecasts are calling for $500B in incremental spend across chips, data centres, and cloud infrastructure. It’s warping capital allocation decisions, corporate strategy, and in some quarters, even the composition of macro growth.

Here’s the tension: the investment pace is undeniable, but the current revenue pool still looks small relative to the implied “earnings power” needed to justify the capex on traditional timelines. That mismatch is where boom-bust analogies become useful, not because history repeats cleanly, but because it gives us a playbook for how capital cycles tend to end when spending runs ahead of monetization.

Let’s dig in.

1) Railroads: Build-Now, Profit-Later, Then Somebody Consolidates

Railroads were the high-tech infrastructure of their era. They were transformative, essential, and brutally capital-intensive. The U.S. laid track on optimistic assumptions, often funded with debt, and then repeatedly discovered that financing can disappear long before the cash flows arrive.

AI has similar fingerprints. Hyperscalers and startups are racing to secure capacity, signing long-duration agreements, pulling forward build schedules, and layering debt throughout the ecosystem, including data centre financing, GPU leases, and vendor financing. If AI monetization does not ramp fast enough, the risk is not that the technology fails – it’s that credit conditions tighten, refinancing becomes expensive, and marginal players get forced into restructuring or fire sales.

The other railroad lesson is the most uncomfortable one: the infrastructure can be indispensable, and early equity holders can still get wiped out. The assets survive. Ownership changes. Profit pools get redistributed.

One key difference: railroads lasted decades. AI hardware obsolescence is measured in product cycles. That compresses the window for returns to show up and increases the odds of abrupt repricing when utilization falls short.

What railroads teach: The tech can win, while capital providers still lose if financing outruns cash flow.

2) Fibre Optics: Overbuild Capacity, Then Prices Collapse, Then the Assets Become Cheap Gold

The late-1990s telecom and fibre buildout is the most mechanically similar “boom-bust” template to AI infrastructure. Telecom operators borrowed aggressively to lay fibre for traffic that was expected to explode. The demand did grow, but not fast enough relative to capacity. Bandwidth prices collapsed, balance sheets snapped, and equity holders got vaporized. Later, the fibre became immensely valuable, but only after consolidation and a long, painful reset.

Translate “dark fibre” into AI terms, and you get “dark silicon”: expensive compute sitting underutilized. The risk is not theoretical. When supply moves from scarcity to abundance, pricing power flips quickly. If compute becomes plentiful, the implied “price” of AI workloads can fall sharply, compressing margins for anyone without structural advantages.

The critical difference compared to fibre is utilization. While fibre saw extended periods of “dark” capacity, we’re not seeing the same dynamic unfold with AI data centres: facilities are put to use as soon as they’re spun up, and the industry remains supply-constrained. How long that lasts is another question.

What fibre teaches: Capacity gluts destroy pricing, and consolidators buy world-class assets for cents on the dollar.

3) Shale: Abundance Without Profit, and Only the Lowest-Cost Survivors Win

If you want the cleanest “AI is working, but returns are uneven” analogy, you can find it in shale.

It’s the story of technology unlocking massive supply. It reshaped geopolitics, improved energy security, and created enormous downstream benefits. It also produced long stretches of disappointing returns for many investors because success created oversupply, oversupply crushed pricing, and pricing weakness punished anyone without a structural cost advantage.

AI maps surprisingly well onto that dynamic. The buildout creates a surge in “supply” of intelligence and compute. That abundance can be hugely beneficial to users and the broader economy while compressing unit economics for suppliers. As model capabilities diffuse and inference becomes cheaper, the system trends toward commoditization in the middle of the stack.

Shale also gives you the competitive endgame: a “few survive” market structure. The best balance sheets, lowest costs, and best strategic positioning consolidate the field. Everyone else becomes an acquisition target or a cautionary tale.

What shale teaches: Abundance can be real, transformative, and still hostile to producer margins.

4) Airlines: Essential Service, Massive Consumer Surplus, Historically Weak Economics

Airlines are the mature-state warning label for AI.

Air travel changed the world. It created enormous consumer surpluses. It also struggled to generate durable profits for long stretches because the service became broadly available, competition stayed intense, and price sensitivity forced margin pressure. Consolidation helped stabilize parts of the industry, but cyclicality and shock vulnerability remained.

AI services could drift toward that same equilibrium. If Big Tech bundles “good enough” AI into platforms as a defensive feature, then standalone AI functionality becomes commoditized. Value accrues to users and to downstream businesses that deploy AI for measurable ROI – just not necessarily to the providers selling tokens.

What airlines teach: The end user often captures the surplus, and the middle layer fights for basis points.

So Which Analogy Fits Best?

Each analogy captures a real facet of the current cycle:

  • Railroads: Financing risk before profits show up
  • Fibre: Overcapacity leading to price collapse
  • Shale: Abundance that transforms the world while pressuring producer returns
  • Airlines: Commoditization and surplus captured by users

If I had to pick one, shale is the best single fit, with an important caveat: the most realistic outcome is a shale-to-airlines hybrid.

Shale explains the supply surge and margin compression. Airlines explain the mature-state outcome if AI becomes ubiquitous and bundled into platforms. The likely path looks something like:

Boom → overshoot → shakeout/consolidation → utility-like equilibrium.

This is the part investors routinely underestimate: the industry can be early, exciting, and growing, while the incremental returns for many participants still get competed away.

Winners and Losers: Where Value Could Accrue Over the Next 12–18 Months

If AI is moving toward a utility-like consumption model, the market will increasingly discriminate based on three things:

  1. Cost of production: Compute cost, power cost, utilization
  2. Distribution: Who owns customers and workflow
  3. Bottlenecks: Who controls scarce inputs when everyone is building

So based on these factors, who comes out ahead, and who could end up cutting their losses?

Possible Winners

1) Picks and shovels in bottleneck categories

This category breaks down into three groups:

  • Accelerators: NVIDIA remains the default compute layer while ecosystem lock-in and product cadence stay intact. AMD is the credible second source, and share gains matter disproportionately when the total pie is expanding.
  • Memory and bandwidth: High-performance memory remains structurally linked to scaling, especially as inference grows and workloads become more throughput and latency-sensitive.
  • Networking and interconnect: The less sexy parts of the AI stack often become the most important. AI clusters are data-movement machines. Firms positioned in switching, interconnect, and high-performance networking benefit when “moving data” becomes the constraint, not “doing math.”

2) Hyperscalers and AI Supermajors
The hyperscalers can fund the buildout, own distribution, and bundle AI defensively. Even if AI is margin-dilutive in the near term, these companies can monetize through platform control, enterprise relationships, and cross-sell.

3) Power and Grid-Facing Infrastructure
If AI is a utility, then power is the meta-utility. Companies that can secure interconnection, deliver reliable generation, and sign long-duration contracts in constrained regions can benefit from attractive economics. This is where “AI is digital” collides with the physical world.

4) Software Incumbents with Distribution and a Path to Consumption Pricing
Classic SaaS economics were built on seat counts. AI pressures that model as workflows compress and teams get leaner. The winners are the platforms that can shift toward usage-based, outcome-based, or agent-based pricing while staying embedded as systems of record. In this scenario, every company in application software will need to adapt or slowly fade into irrelevance.

5) Downstream Adopters with Measurable ROI
In most capex booms, the biggest winners end up being the users. If AI meaningfully reduces costs, improves conversion, or compresses cycle times, companies that apply it effectively can outperform even if they aren’t “AI companies.”

Possible Losers

1) Over-Levered Capacity Builders
Independent GPU clouds and infrastructure pure-plays that funded growth with expensive debt or lease obligations, and depend on a small customer set, are exposed to utilization shocks. In a metered world, utilization is destiny.

2) Moatless AI Software with Narrative-Heavy Valuations
If the product is easily replicated and the buyer can switch cheaply, the risk is that margins collapse as the market gets crowded or bundled away.

3) Middlemen that are Vulnerable to Bundling
If platform incumbents can give away “good enough” AI features at low incremental cost, intermediaries without proprietary distribution or defensible workflow integration get disintermediated.

4) Any Segment Exposed to Overcapacity-Driven Price Compression
If compute supply outruns the paying demand, inference pricing falls. That’s great for adoption and terrible for anyone whose business model assumes scarcity.

Closing Thoughts

This is one of the largest capex waves of our era, and it’s redefining what “technology” means as a business model. The old playbook was intangible scale and software-like margins. The new reality is increasingly metered consumption, capital intensity, and a value chain where bottlenecks, balance sheets, and distribution dictate who earns the returns.

AI will be essential. The infrastructure will be built. The hard part is the profit pool.

The question is not whether AI matters. It will. The question is who gets paid.

Strong Convictions. Loosely Held.

— Nicholas Mersch, CFA


The content of this document is for informational purposes only and is not being provided in the context of an offering of any securities described herein, nor is it a recommendation or solicitation to buy, hold or sell any security. The information is not investment advice, nor is it tailored to the needs or circumstances of any investor. Information contained in this document is not, and under no circumstances is it to be construed as, an offering memorandum, prospectus, advertisement or public offering of securities. No securities commission or similar regulatory authority has reviewed this document, and any representation to the contrary is an offence. Information contained in this document is believed to be accurate and reliable; however, we cannot guarantee that it is complete or current at all times. The information provided is subject to change without notice.

Commissions, trailing commissions, management fees and expenses all may be associated with investment funds. Please read the prospectus before investing. If the securities are purchased or sold on a stock exchange, you may pay more or receive less than the current net asset value. Investment funds are not guaranteed, their values change frequently, and past performance may not be repeated. Certain statements in this document are forward-looking.

Forward-looking statements (“FLS”) are statements that are predictive in nature, depend on or refer to future events or conditions, or that include words such as “may,” “will,” “should,” “could,” “expect,” “anticipate,” “intend,” “plan,” “believe,” “estimate” or other similar expressions. Statements that look forward in time or include anything other than historical information are subject to risks and uncertainties, and actual results, actions or events could differ materially from those set forth in the FLS. FLS are not guarantees of future performance and are, by their nature, based on numerous assumptions. Although the FLS contained in this document are based upon what Purpose Investments and the portfolio manager believe to be reasonable assumptions, Purpose Investments and the portfolio manager cannot assure that actual results will be consistent with these FLS. The reader is cautioned to consider the FLS carefully and not to place undue reliance on the FLS. Unless required by applicable law, it is not undertaken, and specifically disclaimed, that there is any intention or obligation to update or revise FLS, whether as a result of new information, future events or otherwise.

Nicholas Mersch, CFA

Nicholas Mersch has worked in the capital markets industry in several capacities over the past 10 years, including private equity, infrastructure finance, venture capital and technology-focused equity research. He is currently an Associate Portfolio Manager at Purpose Investments focused on long/short equities.

Mr. Mersch graduated with a Bachelor of Management and Organizational Studies from Western University and is a CFA charterholder.