AI is pushing technology companies to rethink the foundations of cloud computing. As more businesses adopt AI tools, the industry is spending heavily on the hardware and data centres needed to run them.
Much of the current spending is focused on infrastructure rather than software. Chips, networking equipment, power systems, and large data centres are becoming the main priorities as companies prepare for the computing demands of AI systems.
The scale of investment is growing quickly. US technology companies including Alphabet, Amazon, Meta, and Microsoft are expected to spend about US$650 billion on AI-related infrastructure in 2026, according to an analysis cited by Reuters. That figure would be a sharp increase from roughly US$410 billion in 2025.
These investments are reshaping how cloud platforms are built. The challenge is no longer only about creating new software. It is also about building the physical systems that allow AI models to run at scale.
AI workloads are increasing demand for cloud computing power
Running large AI models demands huge amounts of processing power. Training and operating these systems often requires thousands of graphics processors working together across distributed data centres.
This need is driving new investments in networking and data-transfer technology.
Nvidia recently announced plans to invest US$2 billion each in photonics companies Lumentum and Coherent, according to Reuters. The investments aim to improve the interconnect technology used inside AI data centres, speeding up communication between processors as AI systems move data between chips.
Photonics technology sends data using light instead of electrical signals. That approach can move information faster and use less power than traditional connections. As AI clusters grow larger, faster connections between processors become increasingly important.
These kinds of investments show how the bottleneck in AI development is shifting. For many companies, the main constraint is no longer software development but the infrastructure needed to support AI workloads.
Enterprise adoption is driving cloud demand
Demand for AI infrastructure is also rising because large organisations are beginning to use AI tools across their operations.
Businesses are adopting AI systems for tasks such as data analysis, customer support automation, and internal productivity tools. Many of these applications require access to powerful computing resources that most companies do not operate themselves.
Instead, organisations rely on cloud platforms that provide access to large clusters of GPUs and specialised AI hardware.
Reuters reports that companies across the technology sector are forming partnerships and long-term agreements to secure the computing capacity needed for these workloads. These deals often involve multi-year commitments worth billions of dollars as firms compete for access to AI infrastructure.
As a result, cloud providers and hardware manufacturers are investing heavily in data centres and the supply chains needed to build them.
The growing cost of AI infrastructure
Building AI infrastructure requires far more than installing additional servers.
AI data centres consume large amounts of electricity and require advanced cooling systems to keep processors operating safely. Networking equipment must also handle huge volumes of data moving between thousands of processors.
The broader investment trend reflects the rapid growth of the AI sector.
According to the Stanford AI Index Report, global private investment in generative AI reached US$33.9 billion in 2024, an increase of 18.7 per cent from the previous year.
While that figure focuses on startup and private investment, spending by large technology firms on infrastructure is much larger. Much of the current capital is being directed toward building new data centres, securing energy supplies, and developing the networking technology required for large AI systems.
Several large infrastructure projects are also emerging to support future AI demand. One example is the Stargate project, a joint initiative backed by OpenAI, SoftBank, and Oracle that plans to invest up to US$500 billion in AI infrastructure in the United States over several years, according to widely reported industry coverage.
Projects of this scale highlight how AI computing is becoming one of the largest infrastructure investments in the technology sector.
What AI means for enterprise cloud strategy
For enterprise technology teams, the surge in infrastructure spending signals a shift in how cloud computing is evolving.
Cloud providers are now focusing more on specialised hardware, large GPU clusters, and high-speed networking systems designed for AI processing. Access to these resources may become a key factor for organisations that plan to deploy AI tools at scale.
This shift may also change how companies evaluate cloud providers. Data centre location, availability of AI hardware, and long-term computing costs are becoming more important considerations.
The rapid growth of AI infrastructure suggests that the next phase of cloud computing will be shaped as much by physical capacity as by software innovation.
For enterprises exploring AI adoption, access to the right computing infrastructure may soon become one of the most important parts of their cloud strategy.
See also: Amazon plans huge AWS investment to meet AI cloud demand

CloudTech News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.