AI-Driven Server Technologies

Rackzar was fortunate to attend Computex 2025, held from 20 to 23 May in Taipei, Taiwan, under the theme “AI Next.” With 1,400 exhibitors from 34 countries, the event showcased advancements in enterprise, server, and networking technologies. Our team benefited immensely from direct engagement with vendors, staying abreast of the latest server developments shaping AI infrastructure and data centres worldwide. Here are a few photos of interesting products we encountered during the event.

The GB300 NVL72 delivers 1.5x more AI performance than the NVIDIA GB200 NVL72 and increases the revenue opportunity for AI factories by 50x compared with those built on NVIDIA Hopper.

The NVIDIA GB300 NVL72

The NVIDIA GB300 NVL72 is a fully liquid-cooled, rack-scale platform that integrates 72 NVIDIA Blackwell Ultra GPUs and 36 Arm®-based NVIDIA Grace™ CPUs, designed to optimize test-time scaling inference. This advanced system, when paired with NVIDIA Quantum-X800 InfiniBand or Spectrum™-X Ethernet and ConnectX®-8 SuperNICs, delivers a 50x increase in reasoning model inference output compared to the NVIDIA Hopper™ platform.

  • Tailored for hyperscalers and research institutions.
  • Designed to process demanding AI workloads such as DeepSeek’s R1 model (1,000 tokens/sec).
  • Estimated at $3.7 million to $4 million per unit.

The NVIDIA HGX B300 NVL16 delivers 11x faster inference on large language models, 7x more compute, and 4x larger memory than the Hopper generation, enabling breakthrough performance for the most complex workloads, such as AI reasoning.

ASUS 5U HGX B300 Server (Liquid Cooling)
  • Dual Intel Xeon 6 processors
  • 32 RDIMM slots
  • 16x NVIDIA Blackwell GPUs
  • 2x NVLink Switch chips for ultra-fast inter-GPU communication
  • 105 PFLOPS FP4 performance for dense AI inference
  • 2.3 TB of HBM3e memory

High Density Compute

The MiTAC C2820Z5 is an OCP-based, high-density 2OU 4-node dual-socket server that delivers high-performance computing while reducing energy consumption and acoustic noise and improving power-utilization efficiency.

  • Direct liquid cooling
  • 2OU 4-node dual-socket high-density with DC-SCM
  • Dual AMD EPYC 9005 CPUs up to 500W

Gigabyte B683 Blade Server
These blades feature direct-to-chip liquid cooling (DLC) for both CPU and memory, dramatically improving heat dissipation. 

  • Fits 20 CPUs in a 6U chassis
  • High-density compute with modular cooling
  • Optimized for AI workloads at large scale

GIGABYTE Intel Xeon Rackmount

Gigabyte high-density Intel server: four nodes of dual-socket Intel Xeon 6900 Series CPUs in 3U.

Using the high-density E-core Xeon CPUs, one could reach 2,304 cores in 3U (4 nodes x 2 sockets x 288 cores), which is an incredible density.
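For context, that headline figure is straightforward multiplication, assuming the top 288-core Intel Xeon 6900-series E-core SKU (a spec-sheet value, not something we measured at the show):

```python
# Core density of a 3U, 4-node, dual-socket chassis.
# Assumes the 288-core Xeon 6900-series E-core part (spec-sheet figure).
nodes = 4
sockets_per_node = 2
cores_per_cpu = 288

total_cores = nodes * sockets_per_node * cores_per_cpu
cores_per_u = total_cores // 3  # chassis occupies 3 rack units

print(total_cores)  # 2304 cores in 3U
print(cores_per_u)  # 768 cores per rack unit
```

At 768 cores per rack unit, a single 42U rack of these chassis would, in principle, exceed 32,000 cores before accounting for networking and power.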

Foxconn modular-design server with numerous expansion slots, focused on large-scale compute.

Dual Intel Xeon 6 Processors – 32 RDIMM Slots – 10 x PCIe Slots LP/FHHL – Air cooling

Foxconn AI-Enhanced 8-GPU Server Built on the Intel® Gaudi® 3 AI Platform

Supports 8 Intel® Gaudi® 3 AI accelerators on a universal baseboard
Supports 20x PCIe 5.0 slots and 32x NVMe drive bays

Supermicro MicroCloud
The latest revision of the popular MicroCloud. This version offers 10 nodes in the same 3U, or 5 nodes with dual-slot GPUs installed.

10-node version with 2x NVMe or SATA SSD storage

AIS800-64O – Edgecore 64-Port 800G Switch

1.6T OSFP high-speed optics

Datacentre Cooling

As data centres tackle rising demands from AI and cloud computing, cooling technologies are evolving rapidly. Liquid cooling systems are gaining traction, offering superior efficiency for high-density servers. These systems, including immersion and direct-to-chip cooling, transfer heat more effectively than air, reducing energy costs. Heat exchangers, such as liquid-to-liquid and air-to-liquid units, are also trending, enabling precise temperature control and waste-heat reuse for facility heating.

Rackmount liquid-to-liquid (L2L) CDU supporting the cooling demands of 3 to 5 racks.

Large “side-car” cooling racks. (Rear)
Large “side-car” cooling racks. (Front)
Rittal Coolant Distribution Unit
Delta showcasing their 80 kW Liquid to Air Coolant Distribution Unit
Datacentre Design Example

GPUs

NVIDIA B300
1.1 Exaflops of FP4 Compute, 288 GB HBM3e Memory, 50% Faster Than GB200.
~1,400 W power draw per GPU.
AMD MI350
3 nm process, 185 billion transistors, 288 GB HBM3E memory, FP4 & FP6 support. The MI355X is 35x faster than the MI300 and 2.2x faster than Blackwell B200. Roughly 1,000 W per GPU air-cooled, 1,400 W liquid-cooled.

Taiwan’s Tech Prowess and Global Impact

Beyond the gadgets, Computex 2025 underscored Taiwan’s critical role in the global tech supply chain. With keynotes from industry leaders like Qualcomm and Foxconn, the event highlighted Taiwan’s influence in AI, robotics, and future mobility. From AI-powered PCs to bold prototypes and next-gen chips, Computex 2025 proved why it’s the epicentre of tech innovation.

Taipei delivered a thrilling glimpse into the future of computing, blending cutting-edge hardware with creative flair. As we await these game-changing devices to hit the market, one thing’s clear: the future is bright, and it’s powered by the ideas unveiled at Computex.