
Global data centre capital expenditure is forecast to approach $1 trillion by 2028, with accelerated servers (GPU-powered systems) accounting for a significant share of this growth (delloro.com). This trajectory underscores a worldwide shift toward AI-focused data centres.

 

NVIDIA’s Next-Gen AI Data Centre Architecture: From Chips to “AI Factories”

NVIDIA’s latest announcements at GTC 2025 outline a bold vision for the future of AI data centres. CEO Jensen Huang highlighted that NVIDIA has evolved from being “just a chip vendor” to providing fully integrated, rack-scale solutions encompassing compute, networking, and even cooling (delloro.com). In essence, NVIDIA is constructing massive “AI factories” – entire data centre units optimized to produce AI insights. These AI factories treat data centres as production lines for intelligence, where the inputs are electricity and data and the outputs are AI models and predictions (tokens) (foundationcapital.com). This represents an architectural shift: instead of traditional data centres running fixed software, an AI factory continuously generates software (AI models and decisions) on the fly (foundationcapital.com). NVIDIA’s approach integrates everything from silicon to software into one cohesive “assembly line,” providing not just GPUs but the full stack – high-speed interconnects, storage, and AI software frameworks – needed to transform raw data into AI-driven outcomes (foundationcapital.com).

At the heart of this vision is NVIDIA’s new Blackwell GPU architecture, which delivers leaps in AI performance. Blackwell GPUs excel at both training deep neural networks and performing inference and reasoning, a critical capability for modern generative AI and large language models (delloro.com). Unlike straightforward question-answering AI, advanced reasoning models use iterative “thinking tokens” to refine their answers, dramatically increasing computational workload (delloro.com). NVIDIA’s platform is optimized for this emerging class of AI, ensuring that future hyperscale AI workloads can be handled efficiently. In short, NVIDIA sees AI data centres evolving into AI factories – tightly integrated systems where GPUs, high-speed networks, and software work in unison to rapidly churn out intelligence.
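To make the workload increase concrete, here is a back-of-envelope sketch using the common rule of thumb that generating one token with a dense P-parameter model costs roughly 2·P FLOPs. The model size and token counts below are illustrative assumptions, not figures from NVIDIA:

```python
# Back-of-envelope inference cost: generating one token with a dense
# P-parameter model costs roughly 2 * P FLOPs (a common approximation).
def inference_flops(params: float, answer_tokens: int, thinking_tokens: int = 0) -> float:
    """Approximate FLOPs to generate a reply, including hidden 'thinking' tokens."""
    return 2 * params * (answer_tokens + thinking_tokens)

P = 70e9  # illustrative 70B-parameter model (assumption)
direct = inference_flops(P, answer_tokens=500)
reasoning = inference_flops(P, answer_tokens=500, thinking_tokens=10_000)
print(f"direct answer:  {direct:.2e} FLOPs")
print(f"with reasoning: {reasoning:.2e} FLOPs ({reasoning / direct:.0f}x more compute)")
```

Even with these made-up numbers, a model that “thinks” through ten thousand intermediate tokens before answering does roughly twenty times the work of one that answers directly – which is why reasoning models multiply data centre demand.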

Rack-Scale Supercomputers and Architectural Shifts

To enable AI factories, the scale of computing is expanding from single servers to entire racks operating as one supercomputer. NVIDIA traced the evolution of accelerated computing from single GPU accelerators, to integrated AI servers like the DGX, and now to rack-scale units such as the NVIDIA GB200 NVL72 system (delloro.com). The current-generation NVL72 packs 72 GPUs in a rack, and the forthcoming “Vera Rubin Ultra” platform will cram an astonishing 576 GPUs into a single rack (delloro.com). This rack-scale design essentially turns a whole rack into a unified AI engine. However, pushing to this level of density and performance introduces significant architectural challenges that must be addressed (delloro.com):

  • Density and Interconnect: With hundreds of GPUs forming a coherent compute fabric, GPUs are packed tightly and must communicate extremely fast. NVIDIA’s 72-GPU NVL72 rack already uses liquid cooling to manage heat in this dense configuration (delloro.com). As racks grow, the length of connections between GPUs increases – raising the question of whether traditional copper cables can carry the data or whether a shift to optical fibre interconnects will be required to maintain bandwidth and low latency (delloro.com). NVIDIA has hinted that future designs may adopt optical or photonic interconnects to overcome copper’s limitations, despite current cost and power trade-offs (delloro.com). Indeed, NVIDIA recently introduced a silicon photonics switch to begin reducing interconnect power consumption in these systems (delloro.com).

  • Multi-Die GPU Architectures: To keep up with demand, GPUs have grown in size, but they are hitting the manufacturing reticle limit – essentially the maximum chip size. NVIDIA’s roadmap therefore points to multi-die GPU designs, splitting the GPU into multiple chips interconnected at close range (delloro.com). This shift will boost performance but also increases the physical footprint of each GPU package and adds complexity to GPU-to-GPU communication on the module (delloro.com). Future AI systems must accommodate these larger, multi-chip modules and ensure they communicate as seamlessly as monolithic GPUs.

  • Power and Cooling: The power density of these AI racks is soaring to unprecedented levels. NVIDIA’s current 72-GPU rack (NVL72) draws about 132 kW, and the next-gen 576-GPU rack is projected to consume ~600 kW on its own (delloro.com). For context, a typical large data centre might have 30–50 MW of total power capacity; a single 600 kW rack is an enormous power load, and fewer than 100 such racks could saturate a 50 MW facility (delloro.com). This power density far exceeds what air cooling can handle – hence the move to liquid cooling, and in the future likely even more advanced cooling like two-phase immersion. The challenge for data centre operators is to deliver high power and cooling to each rack safely and cost-effectively. Power distribution, backup power, and heat removal all need reimagining to handle racks that behave more like industrial equipment than traditional servers (delloro.com).

  • Geographically Distributed “Clusters”: Because a single site may not be able to supply enough power or space for limitless racks, NVIDIA envisions linking multiple data centres together into one giant virtual AI centre. Disaggregating AI compute across sites means splitting a massive AI workload across racks in different physical locations (delloro.com). The trick is connecting those locations with network links fast enough to make separated racks behave like they’re next to each other. This will likely require coherent optical networks and cutting-edge low-latency switching. NVIDIA suggests that co-packaged optics and photonics-based networking will be necessary to connect distributed AI factories over distance (delloro.com). In fact, NVIDIA’s silicon photonics switch is a step in this direction, aiming to cut interconnect power and lay the groundwork for linking GPUs across data centres (delloro.com). Additional innovation in wide-area data centre interconnect architectures will be needed so that a cluster spread over miles can function as one seamless AI supercomputer.
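The power arithmetic behind these constraints is easy to verify. The sketch below uses the ~132 kW and ~600 kW rack figures cited above; the 1.3 PUE (cooling and distribution overhead) is an illustrative assumption, not a figure from the text:

```python
# How quickly high-density racks consume a facility's power budget.
# Rack power figures come from the text; the PUE overhead is an assumption.
def max_racks(facility_mw: float, rack_kw: float, pue: float = 1.3) -> int:
    """Number of racks a facility can host once overhead (PUE) is deducted."""
    it_budget_kw = facility_mw * 1000 / pue  # kW left for actual IT load
    return int(it_budget_kw // rack_kw)

for rack_kw in (132, 600):
    print(f"{rack_kw} kW racks in a 50 MW facility: {max_racks(50, rack_kw)}")
```

With overhead included, a 50 MW site holds only a few dozen 600 kW racks – consistent with the observation that fewer than 100 such racks saturate the facility, and a stark contrast with conventional data centres housing thousands of servers.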

Addressing these challenges is central to NVIDIA’s strategy. Solutions are emerging in the form of advanced hardware like the NVLink Switch, which extends NVIDIA’s high-bandwidth NVLink interconnect beyond a single server. The NVLink Switch is the first rack-level switch chip capable of fully connecting up to 576 GPUs in a non-blocking fabric, giving every GPU an astounding 1.8 TB/s of bandwidth for all-to-all communication (nvidia.com). In practical terms, this means dozens of AI servers can be fused into one giant “GPU” across a rack or even multiple racks. Technologies like the NVLink Switch, along with InfiniBand and next-gen Ethernet with built-in photonics, form the accelerated networking backbone of AI data centres. They ensure that as systems scale to hundreds or thousands of GPUs, data can move freely and quickly to where it’s needed. NVIDIA also pairs these hardware innovations with in-network computing capabilities (for example, the NVLink Switch has engines for optimized reduction operations in AI workloads) to squeeze maximum efficiency from the network (nvidia.com).
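To see why per-GPU bandwidth and in-network reduction matter, consider the cost of synchronizing gradients across a cluster. In a classic ring all-reduce, each GPU sends and receives roughly 2·(N−1)/N times the gradient size regardless of cluster size. The link speed below is the 1.8 TB/s NVLink figure from the text; the gradient size is an illustrative assumption:

```python
# Time for a classic ring all-reduce across N GPUs: each GPU transfers
# about 2*(N-1)/N of the data over its link. Gradient size is an assumption.
def ring_allreduce_seconds(bytes_per_gpu: float, n_gpus: int, link_bps: float) -> float:
    """Seconds to all-reduce a tensor of the given size, bandwidth-bound model."""
    traffic = 2 * (n_gpus - 1) / n_gpus * bytes_per_gpu
    return traffic / link_bps

GRAD = 140e9     # ~70B parameters in FP16 ≈ 140 GB of gradients (assumption)
NVLINK = 1.8e12  # 1.8 TB/s per GPU, from the NVLink figure above
for n in (8, 72, 576):
    t = ring_allreduce_seconds(GRAD, n, NVLINK)
    print(f"{n:4d} GPUs: {t * 1000:.1f} ms per all-reduce")
```

Because per-GPU traffic approaches twice the data volume as N grows, raw link bandwidth sets a hard floor on synchronization time – and a switch that performs the reduction in-network (as described above) can roughly halve that traffic, which is exactly the kind of gain in-network computing targets.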

Equally important is the software stack that binds the system. NVIDIA’s full-stack approach means the company provides not only the GPUs and switches but also the libraries, drivers, and management software to orchestrate this “AI factory.” The NVLink Switch and NVLink interconnect come with support in NVIDIA’s software (like CUDA and communication libraries) so that developers can treat a cluster of GPUs as a single resource (nvidia.com). NVIDIA’s AI Enterprise software suite, along with its NGC catalogue of optimized AI models and frameworks, is a crucial element of the AI data centre model (nvidia.com). The company even announced a new AI Data Platform at GTC, integrating high-performance storage with its accelerated computing stack to enable AI systems (or “AI agents”) to access data and deliver real-time business insights seamlessly (delloro.com). In short, the role of software is pivotal: the best hardware can only reach its potential with an equally powerful software ecosystem. This includes AI development frameworks, training/inference servers, cluster scheduling tools, and data management – all tuned to leverage the underlying GPUs and high-speed network fully. NVIDIA’s vision is clearly to provide a complete AI factory blueprint, where software-defined infrastructure manages thousands of GPUs, allocating them to tasks and maintaining high utilization and efficiency.
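A toy sketch makes the “software-defined infrastructure” idea tangible: a scheduler owns a pool of GPUs, admits jobs while capacity remains, and tracks utilization. The class, names, and greedy admission policy here are purely illustrative – this is not how any NVIDIA product works, just the shape of the problem:

```python
# Toy model of software-defined GPU allocation: a scheduler hands out GPUs
# from a shared pool and tracks utilization. Entirely illustrative.
from dataclasses import dataclass, field

@dataclass
class GpuPool:
    total: int
    allocated: dict = field(default_factory=dict)  # job name -> GPU count

    @property
    def free(self) -> int:
        return self.total - sum(self.allocated.values())

    def submit(self, job: str, gpus: int) -> bool:
        """Greedily admit a job if enough GPUs remain free."""
        if gpus <= self.free:
            self.allocated[job] = gpus
            return True
        return False

    def utilization(self) -> float:
        return sum(self.allocated.values()) / self.total

pool = GpuPool(total=72)                 # one NVL72 rack's worth of GPUs
pool.submit("train-llm", 64)
pool.submit("batch-inference", 8)
admitted = pool.submit("fine-tune", 16)  # rejected: the pool is full
print(f"utilization: {pool.utilization():.0%}, fine-tune admitted: {admitted}")
```

Real cluster schedulers add queues, priorities, preemption, and topology awareness (placing a job’s GPUs on the same NVLink fabric), but the core job is the same: keep expensive GPUs allocated and utilization high.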

The Opportunity for Mauritius: Becoming an AI Infrastructure Hub

NVIDIA’s AI data centre paradigm is not just relevant to tech giants – it presents an opportunity for forward-looking nations like Mauritius. Mauritius has expressed strong ambitions to transform into a regional leader in AI and technology, as evidenced by its national AI Strategy aiming to make AI a pillar of its economy (dig.watch). By aligning with the architectural shifts defined by NVIDIA, Mauritius can leapfrog into the future and build a new economic pillar around AI and GPU-powered infrastructure.

Embracing the AI factory model could significantly support Mauritius’s economic development. Firstly, developing advanced AI data centres domestically would create a host of high-value jobs – not only in the construction and maintenance of these facilities, but in running AI services, data engineering, and software development atop this infrastructure (dig.watch). An AI data centre campus can become a magnet for technology talent, encouraging skilled Mauritian professionals (as well as expats) to work on cutting-edge AI projects at home. Secondly, it would stimulate the local tech ecosystem: startups and entrepreneurs would have access to world-class computing power, enabling them to build AI solutions in areas like finance, healthcare, and agriculture without leaving the country. This aligns with Mauritius’s goal to use AI as an enabler of innovation and productivity across sectors (dig.watch).

Furthermore, positioning Mauritius as an AI infrastructure hub can help attract substantial foreign investment and international technology firms. Global cloud providers and AI companies are investing tens of billions in expanding their data centre capacity for AI – for example, Microsoft recently announced an unprecedented $80 billion investment in AI data centres for a single year (za.investing.com). If Mauritius can offer a favourable environment (stable power, connectivity, skilled workforce, and pro-business policies), even a fraction of those global investments could flow into establishing regional AI infrastructure on the island. The country has a track record of leveraging a friendly business climate to draw in foreign direct investment, and AI is the next frontier. By building state-of-the-art GPU data centres (potentially in partnership with companies like NVIDIA or large cloud service providers), Mauritius could invite tech firms to set up regional AI development labs or cloud zones in the country. Such investments would not only inject capital but also transfer knowledge and globally relevant skills to the Mauritian workforce.

Mauritius’s geographic and economic positioning offers unique advantages for this strategy. The nation already “punches above its weight” in the data centre space – studies have noted that Mauritius has an outsized number of data centres relative to its population, outpacing many larger African countries in available hosting capacity (datacenterdynamics.com). This existing base indicates a developed ICT backbone and experience in data facility operations. Mauritius is well-connected via multiple submarine fibre optic cables, ensuring high-bandwidth internet connectivity to Africa, Europe, and Asia, which is vital for an AI hub. Its time zone and location (in the Indian Ocean between Africa and Asia) make it a convenient nexus for serving clients on multiple continents with minimal communication hurdles. Additionally, Mauritius consistently ranks at the top in Africa for ease of doing business and economic freedom, providing political stability, strong rule of law, and investor-friendly regulations. These factors are highly attractive for companies looking to establish critical infrastructure abroad, as they need assurance of stability and protection for their capital-intensive facilities.

The AI factory model also aligns with Mauritius’s vision of a sustainable and innovation-driven economy. AI data centres could become a new pillar alongside tourism, finance, and manufacturing – a pillar grounded in the digital economy of the future. They can enable Mauritius to export AI services and expertise regionally. For instance, a Mauritius-based AI supercomputing centre could offer AI-as-a-service to businesses in Africa and South Asia that lack local access to such advanced infrastructure, thus generating export revenue. It would turn Mauritius into a regional hub for AI R&D and deployment, enhancing its international profile in the tech industry.

Aligning Infrastructure and Policy for an AI-Driven Future

To capitalize on this opportunity, Mauritius will need to align its infrastructure investments and policies with the requirements of next-generation AI data centres. This involves several strategic actions:

  • Invest in Advanced Data Centre Infrastructure: Building or upgrading facilities to accommodate high-density GPU racks is a priority. Mauritius should encourage the development of at least one flagship AI data centre campus equipped with the latest in power and cooling technology – for example, liquid cooling systems ready for 100+ kW racks, and robust electrical supply capable of delivering tens of megawatts reliably. Exploring sustainable energy options is key: partnering with renewable energy projects (solar, wind, or even ocean thermal energy) can help supply the enormous power needs of AI clusters while keeping the carbon footprint in check. Innovative cooling techniques, such as using deep-sea water for cooling (given the island geography) or two-phase immersion cooling, could give Mauritius an edge in operational efficiency and attract environmentally conscious investors.

  • Deploy Cutting-Edge Networking: To truly adopt NVIDIA’s model, the data centre network in Mauritius must be state-of-the-art. This means installing accelerated networking hardware – such as NVIDIA’s Quantum InfiniBand or Spectrum Ethernet switches – to enable ultra-low latency and high throughput between servers. The NVLink Switch technology that connects GPUs into a unified fabric should be incorporated for any large GPU deployment, so that computing clusters in Mauritius can match the performance of those in Silicon Valley (nvidia.com). Additionally, Mauritius can leverage its telecom strengths to ensure redundant high-bandwidth connections between any on-island AI data centre and major internet exchange points in Africa/Europe. Embracing upcoming photonic network technologies early (like co-packaged optics in switches) would signal that Mauritius’s infrastructure is ready for the future of distributed AI computing.

  • Foster a Supportive Policy Environment: Government policy will play a crucial role. Mauritius can provide incentives such as tax breaks, accelerated depreciation for high-tech equipment, and streamlined permitting for companies investing in AI data centres or purchasing GPU clusters for local use. Given the power requirements, policies that facilitate investment in power generation and grid upgrades for data centres are vital – for example, special energy tariffs for renewable energy used in IT infrastructure or public-private partnerships to build dedicated power facilities for an AI park. The government should also ensure regulatory clarity around data and AI, including strong data protection laws and cybersecurity measures, to give confidence to companies that their data-intensive AI operations can be safely hosted in Mauritius. Moreover, encouraging public-private collaboration is important: the state can partner with industry leaders (NVIDIA, hyperscale cloud companies, and local telecom providers) to design training programs and innovation labs around the AI data centre. This way, the hardware comes with an ecosystem of expertise.

  • Skilled Workforce and Training: While not focusing on academic institutions, Mauritius can still cultivate the necessary human capital through professional training initiatives, certifications, and on-the-job skill development. Operating an AI factory requires specialists in AI model development, data centre operations, networking, and system integration. The government and private sector should sponsor programs to train engineers in GPU programming (CUDA, AI frameworks), data centre management (including cooling and power systems for HPC), and AI project implementation. By developing local expertise, Mauritius ensures that the benefits of an AI data centre industry (like employment and innovation) are captured domestically. International experts may be brought in initially, but knowledge transfer to Mauritians will create a sustainable talent pool. This emphasis on skills aligns with the national AI strategy’s goal of talent development and helps embed the AI pillar deeply into the economy (dig.watch).
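For the renewable-energy option raised above, a rough sizing exercise shows the scale involved. The ~600 kW rack figure comes from earlier in the article; the 20% solar capacity factor and 1.3 PUE below are illustrative assumptions for planning purposes, not measured Mauritian values:

```python
# Rough nameplate solar capacity needed to cover the average load of
# high-density AI racks. Capacity factor and PUE are assumptions.
def solar_mw_needed(rack_kw: float, n_racks: int, pue: float = 1.3,
                    capacity_factor: float = 0.20) -> float:
    """Solar MW (nameplate) to match average load, ignoring storage losses."""
    avg_load_mw = rack_kw * n_racks * pue / 1000
    return avg_load_mw / capacity_factor

print(f"10 racks at 600 kW need ~{solar_mw_needed(600, 10):.0f} MW of solar")
```

Even a modest ten-rack deployment implies tens of megawatts of nameplate renewable capacity once overhead and intermittency are accounted for – which is why dedicated generation, grid upgrades, and storage belong in the policy conversation from day one.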

By taking these steps, Mauritius can create an environment in which the most advanced AI infrastructure thrives. Imagine a scenario in the late 2020s where Mauritius hosts a flagship “AI Factory” data centre complex, powered by racks of Blackwell-generation GPUs or beyond. Global companies might deploy their AI workloads to Mauritius just as readily as they do to Frankfurt or Singapore, attracted by competitive costs and a robust setup. Locally, businesses and startups could rent time on these GPU clusters through cloud platforms, fueling an explosion of AI solutions made in Mauritius – from fintech algorithms for the banking sector to smart agriculture models for local farmers, all running on-island. The presence of such infrastructure could also spur foreign tech firms to open offices or regional headquarters in Mauritius, knowing they have nearby compute power and a supportive government partner. In turn, this would create a cluster effect: an AI innovation ecosystem building around the infrastructure.

Conclusion: Positioning Mauritius at the Vanguard of AI Infrastructure

NVIDIA’s vision of AI data centres — as tightly integrated AI factories with unprecedented compute density, networking speed, and intelligent software — represents the next inflection point in computing. Countries that recognize and adopt this model early will reap significant advantages. For Mauritius, a nation already oriented towards services and innovation, aligning with this paradigm shift offers a chance to establish a new economic pillar centered on AI and high-performance computing. By investing in cutting-edge GPU-centric infrastructure and enacting forward-thinking policies, Mauritius can attract global investment and technology firms looking for new locales to expand their AI capabilities.

Crucially, this strategy dovetails with Mauritius’s broader goals of economic diversification and moving up the value chain. It leverages the island’s strengths — political stability, strategic location, and a skilled, multilingual workforce — and addresses its future needs by injecting a high-tech, high-growth sector into the economy. The transformation will not happen overnight; it requires commitment and collaboration between government, industry, and technology partners. But the path is clear. The future of AI is being built now in the form of advanced data centres and AI factories (delloro.com), and Mauritius has the opportunity to be not just a consumer of that future, but a builder and beneficiary of it. By embracing NVIDIA’s AI data centre model and tailoring it to its unique context, Mauritius can position itself at the vanguard of the AI revolution, securing long-term economic growth and a leadership role in the digital economy of Africa and the Indian Ocean region.

In summary, the evolution of GPU-centric compute architecture is more than a tech trend – it’s a blueprint for the next era of industry. If Mauritius stakes a claim in that era by developing AI data centre infrastructure, it could become a key regional hub for AI, drive significant economic development, and truly realize the vision of AI as a core pillar of its future prosperity (dig.watch). The convergence of NVIDIA’s cutting-edge technology with Mauritius’s development vision creates a compelling narrative: a small island nation harnessing world-class AI infrastructure to leap into the forefront of innovation. This is a bold endeavour, but with strategic execution, Mauritius can indeed transform into an “AI island” – a place where the most advanced AI workloads run, where global and local companies alike come to build the future, and where the benefits of the AI age are felt widely across the economy.

ABOUT

How Mauritius can leverage NVIDIA’s next-generation AI data centre architecture to build a new economic pillar around high-performance AI infrastructure and attract global tech investment.

 

#Mauritius #NVIDIA #AIFactory #DataCenter #GPUComputing #DigitalEconomy #Blackwell #NVLink #TechInfrastructure #AIHub #FDI #LiquidCooling #PhotonicNetworking