Gigabyte G291-Z20 Review
January 16, 2020

The Gigabyte G291-Z20 (SHOP HERE) is powered by a single AMD EPYC processor and supports up to 8x dual-slot GPU cards. If you're not very familiar with Gigabyte products, it's probably because Dell, HPE, and Lenovo have been very effective at brand marketing, or because you haven't had a need for the High-Performance Computing applications in which Gigabyte excels. This is a 2U system with a single AMD processor that promises to deliver 2-socket performance for HPC applications in the datacenter.
As a single-socket system, the Gigabyte G291-Z20 has the potential to save you thousands of dollars while still delivering the goods, thanks to AMD's EPYC processors. So, what's the Gigabyte G291-Z20 good for? Your basic high-performance computing applications: real-time analytics, scientific simulation and modeling, engineering, visualization and rendering, and data mining, to name a few. It has 8 drive bays up front and 2 slots for M.2 storage. You can also install up to 8 double-wide GPUs supported by that single AMD processor. Keeping this platform cool is no mean feat either; there's definitely some sort of sorcery happening here.
Physical System
On the front of the system, right in the middle, you'll find 8x 2.5-inch drive bays with 2 high-performance fans to either side cooling the front GPU stacks. A control panel in the left server ear has a few LEDs for LAN1, LAN2, hard drive activity, and system status, along with the power ON button, a system identification button, and a reset button.
On the back of the Gigabyte G291-Z20 you'll also find 2 fans to either side, with dual redundant power supplies located in the lower portion of the chassis. There are several ports above that, including a VGA port, a dedicated server management LAN port, a power button, an ID button, a reset button, a few LEDs, and then 2x 10GbE LAN1 and LAN2 ports next to a pair of USB 3.0 ports. The 10GbE ports can be switched out for two 25GbE SFP28 ports, and you have additional options using PCIe cards. Two hot-swap 2200W 80 PLUS Platinum power supplies are required to power all those GPUs. There are also 2 more PCIe slots, but we'll get to those in a minute. That little RJ-45 management port on the back of the system provides access to Gigabyte Server Management, or GSM for short.
Management
GSM management software delivers at-chassis and remote management of the system, and the best part is it's free. GSM is also compatible with other management options, including the Intelligent Platform Management Interface (IPMI) and the Redfish API. It offers a few subprograms to monitor your network, including GSM Server, which provides an intuitive browser-based user interface to monitor and manage multiple Gigabyte and compatible systems through a standard baseboard management controller (BMC).
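Since the BMC speaks standard Redfish, you can also poll it from a script without opening the GSM interface at all. Here's a minimal sketch in Python; the address and credentials are placeholders, and the exact resource layout can vary slightly by firmware.

```python
import requests

# Placeholder BMC address and credentials -- substitute your own.
BMC = "https://10.0.0.50"
AUTH = ("admin", "password")

# /redfish/v1/Systems is a standard Redfish collection; most BMCs expose
# the host as its first member. Self-signed certs are common, hence verify=False.
systems = requests.get(f"{BMC}/redfish/v1/Systems", auth=AUTH, verify=False).json()
system_uri = systems["Members"][0]["@odata.id"]

# Pull basic power and health state for that system.
system = requests.get(f"{BMC}{system_uri}", auth=AUTH, verify=False).json()
print("Power state:", system.get("PowerState"))
print("Health:", system.get("Status", {}).get("Health"))
```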
GPU Configuration
Once you pop the cover panel off, you'll see it looks very familiar, yet very different from typical server configurations. The motherboard runs down the center of the chassis with 2 PCIe riser slots to either side for the GPU risers. There are 3 distinct cooling lanes (or really 5, if you count the vents in the side of the chassis that specifically cool the stacked GPUs towards the rear). The front set has GPUs 7 and 8 stacked on the left behind one fan, and GPUs 3 and 4 positioned behind the other fan. Similarly, in back, there are two stacks of GPUs with fans to either side pulling cool air through the perforated vents on the side of the chassis and over GPUs 1 and 2 on the left and 5 and 6 on the right. A blocking shield between the stacked GPUs on each side directs that DRAM-scented air from the front set of stacked GPUs over the CPU and memory modules, then out the back. The central cooling lane starts at the front with the 8x hard drives, where two high-performance fans push fresh air over the CPU and memory modules, then past the two PCIe slots and out the back of the system. The design looks like a very elegant solution for keeping all of those GPUs cool, which are notorious for running hot.
The only GPUs listed as compatible with this system, at least in the parts list, are the Nvidia P40 with Pascal architecture and the Nvidia V100 with Volta architecture and Tensor Cores. StorageReview ran the system with AMD's Radeon Instinct MI25, which is one of the world's fastest accelerators. The remaining two PCIe slots on the system board at the back of the chassis can be used to augment those 10GbE or 25GbE ports with more network bandwidth. You can also install more M.2 storage using an optional PCIe card.
AMD Over Intel
It sounds like AMD has something very similar to Nvidia's NVLink, which offers advanced peer-to-peer GPU communications, letting data be shared much faster than a traditional GPU-to-GPU transfer over PCIe 3.0 at up to 32GB/s. Using an NVLink board with the newer SXM2 form factor, GPUs achieve anywhere from 300GB/s to 600GB/s. On the other hand, AMD has Infinity Fabric Link technology that can directly connect up to 2 GPU hives of 4 GPUs each in a single server at up to 5.75 times the speed of PCIe 3.0. It might not be as fast as Nvidia's 300GB/s to 600GB/s connection, but 184GB/s is still damn impressive.
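If you're curious where that 184GB/s number comes from, the arithmetic is simple, taking the ~32GB/s PCIe 3.0 x16 figure cited above as the baseline:

```python
# Back-of-the-envelope check on AMD's Infinity Fabric Link claim.
pcie3_x16_gb_s = 32        # GB/s, the GPU-to-GPU PCIe 3.0 figure cited above
if_link_speedup = 5.75     # AMD's claimed multiple over PCIe 3.0

print(f"Infinity Fabric Link: ~{pcie3_x16_gb_s * if_link_speedup:.0f} GB/s")  # ~184 GB/s
```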
AMD's technology is kicking Intel's butt right now on the CPU front, and there are reasons for that, which are also the same reasons this system can compete with a dual-processor system. First off, up to 32 cores compared to just 28 cores with Intel Xeon Scalable CPUs, and that's just Gen 1 AMD EPYC. And yes, we're ignoring the second-generation Intel Xeon Scalable 9200 processors with 56 cores, given they are not available in off-the-shelf x86 systems like this anyway. Besides, it's only 56 cores. These AMD processors are truly EPYC, and just wait for that Rome processor with up to 64 cores.
The AMD EPYCs have up to 128 PCIe lanes, compared to only 48 PCIe 3.0 lanes with Intel Xeon Scalable processors, and that includes Intel's second generation. And yes, the second-generation EPYCs are PCIe 4.0 compatible. There are also eight memory channels per processor, compared to only 6x memory channels per CPU with Intel Scalable processors. Add support for up to 2TB of memory across the entire EPYC product line, plus integrated security-on-a-chip, which likely also gives the EPYC processors some additional resiliency that the Intel processors seemed to be lacking, but let's not dwell on the past.
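To put the extra memory channels in perspective, here's a rough peak-bandwidth comparison at DDR4-2666, assuming one DIMM per channel and ignoring real-world efficiency:

```python
# Theoretical peak memory bandwidth per socket at DDR4-2666.
# A DDR4 channel is 64 bits (8 bytes) wide and moves 2666 million transfers/second.
per_channel_gb_s = 8 * 2666e6 / 1e9   # ~21.3 GB/s per channel

print(f"EPYC, 8 channels: ~{8 * per_channel_gb_s:.0f} GB/s")   # ~171 GB/s
print(f"Xeon, 6 channels: ~{6 * per_channel_gb_s:.0f} GB/s")   # ~128 GB/s
```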
The EPYC processors also have what's called a System-on-a-Chip design, with the chipset functionality integrated into the silicon. Intel Xeon Scalable processors rely on a separate chipset on the system board for that additional functionality. As a result, AMD's System-on-a-Chip design also makes it less expensive to build a system than using Intel Scalable processors.
Memory & Storage
With 8 memory channels supported, each memory module gets its very own channel for maximum performance. Only registered or load-reduced memory modules are supported, at memory speeds of up to 2666MHz. At least, that's what Gigabyte says; AMD says this processor family supports DDR4 memory at speeds of up to 3200MHz. I'm not sure how many memory modules support that speed, but let's move on.
Eight storage bays on the front of the Gigabyte G291-Z20 can be outfitted with 2.5-inch SATA III drives with native support directly on the system board. SAS drives can be installed, but require an HBA/RAID controller offered by Gigabyte or other manufacturers. The two M.2 slots on the system board can deliver 32Gb/s data transfer speeds using the PCIe bus and M.2 NVMe drives, and they can also be RAIDed together for redundancy.
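That 32Gb/s figure is just the raw speed of a PCIe 3.0 x4 link, which is what each M.2 slot provides. The quick math, including the 128b/130b line encoding, lands at roughly 4GB/s per drive:

```python
# Rough throughput of a PCIe 3.0 x4 M.2 NVMe slot.
lanes = 4
gt_per_s = 8e9              # 8 GT/s per PCIe 3.0 lane
encoding = 128 / 130        # 128b/130b line-encoding overhead

gbit_per_s = lanes * gt_per_s * encoding / 1e9
print(f"~{gbit_per_s:.1f} Gb/s (~{gbit_per_s / 8:.2f} GB/s)")   # ~31.5 Gb/s, ~3.94 GB/s
```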
Conclusion
The Gigabyte G291-Z20 is surely capable of some extreme computing with all of those GPUs. Add a 64-core, 128-thread AMD Rome processor and we're talking some screaming performance. The chassis we used for this review will be used by one of the local studios in a render farm of some sort, but we can also put one together for you.
If you’re interested in purchasing this server, click here! Or, if you’re interested in other servers or components, click here for IT Creations’ homepage.