Artificial intelligence has quietly crossed a threshold. What began as a productivity enhancer has become embedded infrastructure. Banks rely on AI models for fraud detection and credit scoring. Logistics firms use it to route fleets and manage inventory. Governments deploy it for surveillance, benefits administration, and cybersecurity. In many sectors, AI is no longer optional. It is mission-critical.
That shift has triggered an uncomfortable realization among regulators. The global AI stack is extraordinarily concentrated. A small number of firms dominate advanced chips, foundation models, and cloud delivery. This is not just a competition issue. According to recent analyses by central banks and the Bank for International Settlements, it is a systemic risk issue.
The concern is straightforward. If one hyperscaler, chip supplier, or model provider fails or even falters, the disruption would not stay contained. It could cascade across finance, logistics, healthcare, and public services. What looks today like a valuation boom may, under stress, reveal itself as an AI dependency bubble.
The monoculture problem in the AI stack
Modern AI systems depend on three tightly coupled layers: compute, models, and cloud infrastructure.
At the compute layer, advanced AI workloads rely overwhelmingly on a narrow class of accelerators. Industry analyses cited by multiple central banks show that a single vendor controls roughly four fifths of the global market for data center AI accelerators. The manufacturing of these chips is itself concentrated in a handful of fabrication facilities located in geopolitically sensitive regions.
At the model layer, training frontier foundation models requires capital expenditure measured in the billions. This has limited viable competitors to a small circle of firms closely aligned with major cloud providers. Access to the most capable models is often contractually tied to specific clouds, further reinforcing dependency.
At the cloud layer, market structure is even more explicit. According to data referenced by regulators and competition authorities, the top three cloud providers account for around two thirds of global cloud infrastructure spending. These same providers host most large AI models and control the tooling, pricing, and scaling pathways that downstream users depend on.
This is the definition of monoculture. Diversity exists at the application layer, but the foundations are narrow, centralized, and tightly coupled.
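One way to make "narrow and centralized" concrete is the Herfindahl–Hirschman Index (HHI), the standard concentration measure used by competition authorities: the sum of squared market shares, where values above 2,500 are conventionally treated as highly concentrated. The shares below are illustrative placeholders loosely patterned on the figures cited above (one accelerator vendor near four fifths; three clouds near two thirds with a fragmented tail), not authoritative market data.

```python
def hhi(shares):
    """Herfindahl-Hirschman Index: sum of squared market shares (in percent).

    Values above 2,500 are conventionally treated as highly concentrated;
    1,500-2,500 as moderately concentrated.
    """
    return sum(s ** 2 for s in shares)

# Illustrative shares only, loosely based on figures cited in the text.
accelerators = [80, 10, 5, 5]            # one vendor near four fifths
cloud = [32, 22, 12] + [2] * 17          # top three near two thirds, long fragmented tail

print("accelerators HHI:", hhi(accelerators))  # 6550: far into "highly concentrated"
print("cloud HHI:", hhi(cloud))                # 1720: moderately concentrated
```

Note the asymmetry the sketch surfaces: under these assumed shares the chip layer is several times more concentrated than the cloud layer, which is consistent with the article's point that the narrowest chokepoint sits at the bottom of the stack.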
What central banks and the BIS are warning about
Central banks are not known for alarmism. Yet in recent financial stability reviews, AI infrastructure concentration has appeared alongside topics like clearinghouses and payment rails.
The Bank for International Settlements has highlighted three specific risks.
First, operational fragility. Concentration increases the likelihood that technical failures, cyber incidents, or supply disruptions propagate widely. Cloud outages already provide a preview. When a single hyperscaler experiences downtime, thousands of unrelated firms are affected simultaneously.
Second, pricing power and procyclicality. When compute supply tightens, dominant vendors can raise prices or ration access. During periods of market stress, this can amplify downturns as firms lose access to critical AI services precisely when they need them most.
Third, opaque interdependencies. Financial institutions increasingly rely on the same AI tools for risk assessment and trading. If those tools share common models or infrastructure, they may fail or misbehave in correlated ways, undermining diversification assumptions that underpin financial stability.
These concerns echo earlier warnings about cloud concentration, but AI raises the stakes. AI systems increasingly make or inform automated decisions at machine speed, leaving little room for human intervention when failures occur.
From efficiency to systemic exposure
The irony is that concentration emerged because it was efficient. Training large models benefits from scale. Centralizing compute lowers unit costs. Standardized tooling accelerates adoption. These dynamics mirror earlier phases of financial globalization and just-in-time manufacturing.
History offers cautionary parallels. The global financial crisis exposed how reliance on a small number of systemically important financial institutions created hidden fragility. The pandemic revealed how supply chain efficiency could become a liability when shocks hit.
AI combines both dynamics. It is informational infrastructure layered on top of physical supply chains and financial systems. When many institutions depend on the same models hosted on the same clouds using the same chips, shocks become synchronized.
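The synchronization point can be shown with a toy Monte Carlo. Under assumed, illustrative numbers (100 firms, a 5 percent annual outage probability per provider), the chance that more than half of all firms are disrupted at once is dramatically higher when every firm sits on one provider than when firms are spread across ten. This is a sketch of the mechanism, not a calibrated model.

```python
import random

def tail_risk(n_firms, n_providers, p_outage, trials=100_000, seed=1):
    """Monte Carlo estimate of the probability that more than half of all
    firms are disrupted simultaneously in a given year.

    Firms are spread evenly across providers; a firm is disrupted iff its
    provider suffers an outage. All numbers here are illustrative assumptions.
    """
    random.seed(seed)
    tail = 0
    for _ in range(trials):
        down = [random.random() < p_outage for _ in range(n_providers)]
        hit = sum(down[i % n_providers] for i in range(n_firms))
        if hit > n_firms / 2:
            tail += 1
    return tail / trials

# Same per-provider outage probability, different concentration.
print(tail_risk(100, 1, 0.05))   # everyone on one provider: roughly the 5% outage rate
print(tail_risk(100, 10, 0.05))  # spread across ten providers: near zero
```

The per-provider failure rate is identical in both runs; only the concentration changes. That is the sense in which monoculture converts ordinary operational risk into systemic risk.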
This is why some economists now speak of an AI dependency bubble. Valuations assume uninterrupted access to ever cheaper and more powerful AI services. They rarely price in the risk of systemic disruption.
Stress testing is lagging reality
One striking feature of the current debate is how little formal stress testing exists.
Banks conduct stress tests for credit risk, liquidity risk, and increasingly climate risk. Yet AI concentration risk remains largely outside these frameworks. Few institutions can map their indirect dependencies on specific cloud regions, chip supply chains, or model providers.
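Mapping indirect dependencies is, at its core, a transitive-closure problem over a supplier graph. The sketch below uses entirely hypothetical institutions and suppliers to show how a few lines of analysis can reveal that a chip vendor no institution contracts with directly is nonetheless a shared upstream exposure for all of them.

```python
from collections import defaultdict

# Hypothetical dependency edges, downstream -> direct upstream suppliers.
# All names are illustrative placeholders, not real exposure data.
depends_on = {
    "bank_a":        ["fraud_model_x"],
    "bank_b":        ["fraud_model_x", "cloud_2"],
    "insurer_c":     ["risk_model_y"],
    "fraud_model_x": ["cloud_1"],
    "risk_model_y":  ["cloud_1"],
    "cloud_1":       ["chip_vendor"],
    "cloud_2":       ["chip_vendor"],
}

def transitive_suppliers(node, graph):
    """All upstream suppliers reachable from `node`, direct or indirect."""
    seen, stack = set(), list(graph.get(node, []))
    while stack:
        supplier = stack.pop()
        if supplier not in seen:
            seen.add(supplier)
            stack.extend(graph.get(supplier, []))
    return seen

# Count how many institutions ultimately depend on each supplier.
institutions = ["bank_a", "bank_b", "insurer_c"]
exposure = defaultdict(int)
for inst in institutions:
    for supplier in transitive_suppliers(inst, depends_on):
        exposure[supplier] += 1

print(dict(exposure))  # chip_vendor and cloud_1 reach all three institutions
```

In this toy graph, no institution buys chips directly, yet every one of them transitively depends on the single chip vendor. Real supervisory mapping is far harder because the edges themselves are not disclosed, which is precisely the transparency gap the article describes.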
Regulators are beginning to respond. Some supervisory authorities are asking financial institutions to inventory critical third-party AI dependencies. Others are exploring operational resilience requirements that extend beyond traditional IT outsourcing.
However, these efforts are fragmented and slow relative to the pace of AI adoption. In many cases, responsibility falls between regulators of finance, technology, and competition, creating gaps that systemic risks can slip through.
Why a single stumble could cascade
To understand the danger, consider a scenario that is plausible rather than purely speculative, grounded in past events.
A major GPU supplier experiences a manufacturing disruption due to geopolitical tensions or a natural disaster. Supply tightens sharply. Cloud providers prioritize their largest customers and internal projects. Smaller firms and public sector users face degraded service or higher prices.
Banks relying on AI-driven fraud detection see increased false positives or delayed processing. Logistics firms lose optimization capabilities, causing shipping delays. Government agencies experience backlogs in automated services. None of these failures is catastrophic alone. Together, they reinforce each other, slowing economic activity and eroding confidence.
This is not a prediction. It is an extrapolation from documented outages and supply shocks that have already occurred in adjacent sectors.
Toward a more resilient AI ecosystem
Avoiding this outcome does not require abandoning scale or innovation. It requires acknowledging AI as critical infrastructure and governing it accordingly.
Several policy directions are gaining traction.
One is diversification. Encouraging multi-cloud strategies and interoperability standards reduces single points of failure. Another is transparency. Firms should disclose, and be equipped to assess, their exposure to concentrated AI suppliers, much as they do with financial counterparties.
A third is public investment. Governments and multilateral institutions can support open models, alternative hardware architectures, and regional compute capacity to reduce dependence on a narrow set of providers.
Finally, stress testing must catch up. Scenario analysis that incorporates AI infrastructure failures should become part of financial and operational resilience planning.
Conclusion: Before the bubble is tested for us
AI has earned its place as a general-purpose technology. But general purpose does not mean risk-free. The same forces that have driven rapid adoption have also created a fragile monoculture at the core of the AI supply chain.
Central banks and the Bank for International Settlements are right to sound the alarm. The question is whether policymakers, firms, and investors will act before a real-world shock forces the issue.
The history of financial and technological crises suggests a familiar pattern. Risks that are visible but inconvenient tend to be ignored until they become unavoidable. The AI dependency bubble is forming in plain sight. The window to stress test and strengthen the system is still open, but it will not remain so indefinitely.