Article | Adaptive Spaces
Navigating Data Centres: As demand for AI surges, what does this mean for data centres?
August 22, 2024

Most businesses are exploring AI in one form or another – whether to stimulate innovation and improve competitiveness or, the elephant in the room, to manage costs and enhance bottom-line performance. Whether you are an established multinational or a start-up, it is clear that AI offers tremendous potential across sectors, including real estate.
As the AI revolution gathers pace, so too does demand for AI-ready data centres. Training and delivering AI requires a significant amount of computing power and data storage – which many existing data centres are unable to accommodate. As demand for AI products increases, this presents both challenges and opportunities for data centre investors, developers, operators and hyperscalers alike.
So what exactly is ‘AI’ and how does it impact data centres? While the term ‘AI’ has been widely adopted, most of the processes underpinning today’s AI technology technically fall under ‘machine learning’ – a process that leverages large amounts of data first to ‘train’ a model, which is then used to make predictions on (new) data the model has never seen before. These steps, known as (1) ‘training’ and (2) ‘inference’, place very different demands on data centre types, locations, specifications and characteristics, which we will explore in this article.

- Training: The process of teaching a model to recognise patterns or perform a task. It involves feeding large amounts of data into the AI model and using algorithms to train the model to learn and make predictions or decisions based on that data.
- Inference: The use of a trained AI model to make predictions based on new (unseen) data.
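To make the two phases concrete, here is a minimal, hypothetical sketch in Python using a toy one-variable linear model – a deliberate simplification of the large-scale neural-network training and inference described above:

```python
# Illustrative only: a toy 'train then infer' workflow. Real AI training
# fits billions of parameters over vast datasets on specialised hardware.

def train(xs, ys):
    """'Training': fit slope and intercept by least squares on known data."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept  # the 'trained model'

def infer(model, x_new):
    """'Inference': apply the trained model to data it has never seen."""
    slope, intercept = model
    return slope * x_new + intercept

# Training phase: compute-heavy, needs the full dataset close at hand.
model = train([1, 2, 3, 4], [2, 4, 6, 8])

# Inference phase: lightweight, needs only the model parameters.
print(infer(model, 5))  # -> 10.0
```

The asymmetry visible even in this toy – training touches all the data, inference only the finished model – is what drives the differing data centre requirements discussed below.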
Data Centre Location & Latency: Data centres have typically been situated in proximity to large population centres in order to deliver a seamless end-user experience with low latency (think high-speed financial trading, streaming services or online shopping). However, workloads for AI ‘training’ do not have such latency constraints. From a latency perspective, AI ‘training’ can technically be undertaken anywhere – which opens the door to alternative locations with potentially more advantageous characteristics (e.g. sites with renewable energy, access to power, cheaper land and attractive risk and climate conditions).
AI ‘inference’, on the other hand, is generally more latency dependent, although it does not need to be close to the large volume of training data. This allows inference computing to be deployed in more strategically located co-location or edge data centres – or even housed within the device or application itself (e.g. mobile phones, robotics, autonomous cars).
Power Infrastructure: AI training relies on high-performance computing (HPC) to process complex algorithms with close access to huge amounts of training data. Accordingly, AI-ready data centres rely on a significant and stable power supply. While modern co-location data centres have typically ranged anywhere from 5-50 MW, future AI-capable data centres will comprise ‘campuses’ of 200-300 MW (and beyond). Navigating the complexities of sourcing and delivering power at this scale continues to be one of the major hurdles today.
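As a rough, illustrative calculation based on the ranges quoted above (taking the midpoint of the co-location range as a stand-in for a ‘mid-sized’ facility – an assumption for illustration):

```python
# Back-of-envelope comparison of the power scales quoted in the article.
COLO_RANGE_MW = (5, 50)          # typical modern co-location data centre
AI_CAMPUS_RANGE_MW = (200, 300)  # future AI-capable 'campus'

# Midpoint of the co-location range as a notional mid-sized facility.
mid_colo_mw = sum(COLO_RANGE_MW) / 2  # 27.5 MW

# A single 300 MW campus draws as much power as ~11 such facilities.
equivalent_sites = AI_CAMPUS_RANGE_MW[1] / mid_colo_mw
print(f"A {AI_CAMPUS_RANGE_MW[1]} MW campus draws as much power as "
      f"~{equivalent_sites:.0f} mid-sized co-location facilities")
```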
Rack Density: Not only is the power supply to the site extremely large, but so too is the power density within the data centre itself. Optimal AI training requires high compute density, with chips placed as close together as possible to minimise latency within the data centre architecture. While data centres have typically accommodated rack densities of around 3-5 kW per rack (for traditional co-location/cloud services), this has increased to around 10 kW per rack. For context, NVIDIA’s DGX H100 system, a popular choice for AI players, can consume around 10 kW in a single system alone. Stack a few of these systems together and densities of 40-120 kW per rack will not be uncommon. Processors will also only become more powerful in the future, which brings us to the next challenge: cooling.
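The rack-density arithmetic above can be sketched directly; the ~10 kW per-system figure is the approximation quoted in the text, and the systems-per-rack counts are illustrative:

```python
# Rough rack-density arithmetic using the figures quoted in the article.
SYSTEM_KW = 10           # approximate draw of one DGX H100-class system
TRADITIONAL_RACK_KW = 5  # upper end of a traditional co-location rack

for systems_per_rack in (1, 4, 8, 12):
    rack_kw = systems_per_rack * SYSTEM_KW
    multiple = rack_kw / TRADITIONAL_RACK_KW
    print(f"{systems_per_rack} systems/rack -> {rack_kw} kW "
          f"({multiple:.0f}x a traditional {TRADITIONAL_RACK_KW} kW rack)")
```

Even four such systems in a rack already implies an eight-fold jump over a traditional 5 kW rack, which is why density, not just total site power, reshapes facility design.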
Cooling: Of course, the higher the computational processing power, the greater the heat released. This places sizeable demands on data centre cooling infrastructure. While traditional air cooling can accommodate rack densities of roughly 1-20 kW per rack, it is insufficient for the latest NVIDIA chips, which require more advanced cooling technologies. In the race to capture demand, data centre developers and operators are turning to liquid, immersion and direct-to-chip cooling technologies, which offer more efficient heat-transfer capabilities better suited to the AI chips of the future.
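The mapping from rack density to cooling approach might be sketched as follows; the ~20 kW air-cooling ceiling comes from the text, while the 60 kW direct-to-chip threshold is an assumed figure for illustration, as real thresholds vary by facility and vendor:

```python
# Illustrative mapping from rack density to cooling approach.
# The 20 kW air-cooling ceiling is from the article; the 60 kW
# direct-to-chip threshold is an assumption for illustration.

def cooling_for(rack_kw: float) -> str:
    if rack_kw <= 20:
        return "air cooling"
    elif rack_kw <= 60:
        return "direct-to-chip liquid cooling"
    else:
        return "immersion cooling"

for kw in (5, 15, 40, 120):
    print(kw, "kW/rack ->", cooling_for(kw))
```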
Looking ahead, the proliferation of AI and machine learning will continue to drive demand for computational capacity and data centres more broadly, and investors, developers and operators should consider exploring entry or expansion strategies in this space. Partnering with an experienced service provider can unlock value and opportunities across the data centre lifecycle, from site selection and due diligence, all the way through to leasing and operation.