Cisco UCS B480 M5 Blade Server Review
February 20, 2020
This is our first Cisco system review, the Cisco UCS B480 M5 Blade Server. This 4-socket server node supports up to 18TB of memory and is designed for the Cisco UCS 5108 Blade Server chassis, part of Cisco’s Unified Computing System. M5 is Cisco’s latest server generation and supports 1st and 2nd generation Intel Xeon Scalable processors. According to Cisco, this new design can reduce administrative costs by up to 63% and accelerate delivery of new application services by up to 83%. That sounds impressive!
Cisco UCS 5108 Blade Server Chassis
Cisco’s 6U UCS 5108 Blade Server Chassis can house up to four of these full-width Cisco UCS B480 M5 Blade Servers, each a little under 1U tall. It can also support up to eight half-width nodes like the UCS B200, or a mix of blade form factors. Cisco’s Unified Computing System simplifies server deployment with a unified fabric and fabric-extender technology, reducing the number of physical components, improving energy efficiency, and drastically reducing the need for independent management.
It’s also designed to reduce total cost of ownership by supporting several generations of server nodes, from the M1 generation through to the current M5 generation. The chassis removes the need for dedicated chassis management, blade switches, and excessive cabling, and can easily scale to 20 chassis. So, what’s the Cisco UCS B480 M5 server node good for? Mission-critical enterprise applications, plus extreme virtualization and database workloads, but there are many more options available depending on your choice of server node.
Cisco UCS B480 M5 Blade Server
The 5108 blade server chassis provides the power, cooling, management, and network connections. Up to four rows of server nodes occupy the upper portion, with 4x 2500W power supplies with integrated LEDs occupying the lower tier, and they are massive. Power can be provided on a non-redundant basis with just two PSUs, or with N+1 redundancy with all four installed. From a design standpoint it looks a little strange with the PSUs tacked onto the bottom of the chassis, but it does isolate the PSUs, a major source of heat buildup, from the main portion of the chassis.
On the back of the chassis there are two sets of four large fans separated by either a Fabric Extender module or a Fabric Interconnect module. The chassis will take a maximum of two of these fabric modules, for aggregating bandwidth or for redundancy, but you can install just one. The server nodes connect to the Fabric Interconnects or Fabric Extenders through a midplane. The Fabric Extenders replace the switches at the chassis, reducing complexity and cabling. The Fabric Extender we have here is the 2204XP, featuring 4x SFP+ 10GbE Unified Fabric ports and associated LEDs for network communications status. There are two other Fabric Extender options: one has 8x SFP+ 10GbE ports, and another supports QSFP+ ports at up to 40Gb/s using fiber optic cables. There is a single Fabric Interconnect option, the UCS 6324, which combines a Fabric Extender with a Fabric Interconnect for direct connection to an external switch. All of these options require transceivers, and there are quite a few options there too.
Like I said earlier, the 5108 chassis will support up to four of these full-width UCS B480 M5 server blades. It also supports the B480 M4 version of this blade, but only two, so apparently some improvements were made. If you are installing other form factors in the chassis, then Cisco recommends placing the full-width units in the lower tier of slots. Our UCS B480 M5 blade server has four 2.5-inch storage bays on the front of the system, with a control panel on the lower right. Reading left to right, there is an on/off button with integrated LED, a network link status LED, a blade health LED, a connection for a crash cart, a reset button, and a locator button/LED.
Management of the system is accomplished through Cisco Unified Computing System Manager, or just the UCS Manager for short. There are several other applications under the Unified Computing Systems umbrella including Cisco’s Auto Discovery capability, which automatically recognizes and configures new chassis added to the network. Additional applications provide more granular control of the system’s virtual and physical hardware assets.
Once we pull out the server node, you can see the entire motherboard. The UCS B480 M5 can support up to four Intel Xeon Scalable processors from either the first or second generation, with up to 28 cores each. Each of the four processors supports six memory channels with two modules per channel, for 12 memory module slots per CPU. With all four processors installed there is a total of 48 active memory module slots. You could also go with a 2-processor configuration, but you will only get half the memory capacity, and mezzanine slots 2 and 3 will not be active. The CPU, memory module type, and configuration determine the memory speed.
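The slot count above works out as follows. This is a quick sketch of the arithmetic using only the figures quoted in this review:

```python
# DIMM slot arithmetic for the UCS B480 M5, per the figures above.
cpus = 4                 # four Xeon Scalable sockets
channels_per_cpu = 6     # six memory channels per processor
dimms_per_channel = 2    # two modules per channel

slots_per_cpu = channels_per_cpu * dimms_per_channel
total_slots = cpus * slots_per_cpu

print(slots_per_cpu)     # 12 slots per CPU
print(total_slots)       # 48 slots with all four processors installed
print(total_slots // 2)  # 24 active slots in a 2-processor configuration
```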
In general, first-generation processors support a maximum memory speed of 2666MHz, while second-generation Intel Xeon Scalable processors support 2933MHz. Second-generation Gold and Platinum processors also support the most memory, at a little over 18TB, using 24x 512GB Intel Optane memory modules paired with 24x 256GB 3DS Registered memory modules. Using just DDR4 Registered or Load-Reduced memory modules will provide a maximum memory capacity of up to 12TB. For both of these maximum memory deployments you will need processors with an “L” suffix that support 4.5TB each, for a total of up to 18TB. Gold and Platinum processors are recommended given they are the only ones that support three Ultra Path Interconnect (UPI) links for vastly improved CPU-to-CPU communications; other processors offer only two UPI paths. The system also supports through-silicon via (TSV) DIMMs, the die-stacking technology behind high-capacity modules like the 3DS parts mentioned above.
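Both maximum-capacity configurations can be checked with a little arithmetic. This sketch uses the module counts quoted above:

```python
GB = 1            # work in gigabytes
TB = 1024 * GB    # binary terabytes, as capacities are usually quoted

# Optane configuration: 24x 512GB Optane plus 24x 256GB 3DS RDIMMs
optane_config = 24 * 512 * GB + 24 * 256 * GB

# DDR4-only configuration: all 48 slots filled with 256GB modules
ddr4_config = 48 * 256 * GB

print(optane_config / TB)  # 18.0 TB
print(ddr4_config / TB)    # 12.0 TB
```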
Now, back to the drive bays. Cisco definitely has its own way of doing things, with drive trays that integrate a RAID controller and install in one of the two front mezzanine connectors. There are three drive trays to choose from: two FlexStorage trays, one with 2GB of flash-backed write cache and one without, and a third FlexStorage tray with passthrough for NVMe drives. The FlexStorage drive tray/RAID controller we have here supports one or two SAS SSDs or HDDs and plugs into one of the two mezzanine connectors on the front of the blade. Those same connectors can be used for the supported GPUs. It offers 12Gb/s SAS support and RAID 0 and 1 configurations. Another blade server configuration has no local storage, with the server module used purely for compute, perhaps as part of a SAN environment.
The B480 M5 blade has five mezzanine slots, with two in front and three in the rear of the server node. Mezzanine slot 1 in the rear supports the LAN on Motherboard function and is dedicated to the 1340 Virtual Interface Card. Mezzanine slots 2 and 3 support other options like the other Virtual Interface Card we have here, plus a Port Expander Card. The Virtual Interface Card, or VIC, provides an extremely flexible 40Gb interface to create multiple network interface controllers (NICs) and host bus adapters (HBAs).
It also supports up to 256 separate and unique PCIe adapters and interfaces, plus virtual machine visibility from the physical network. We have both the UCS VIC 1380 and the VIC 1340, the latter providing LAN on Motherboard functionality for blade servers. The Port Expander Card enables an additional four ports on the VIC 1340.
A mini-storage connector on the motherboard can be outfitted with two SD cards or two M.2 storage devices to host your OS in a hardware RAID. Each requires its own modular adapter, but it’s either-or: the two cannot be installed together. An internal USB device is also supported. Lastly, you can install up to four NVIDIA P6 GRID GPUs using the two front mezzanine connectors and rear slots 2 and 3. Strangely, the GPUs that mount in the front mezzanine connectors have a different form factor from those supported in the rear.
At 6U, you can stuff up to 4x full-width servers like our B480 M5 into the chassis for up to 112 cores of processing power per node, or mix it up with a few half-width servers. With support for several generations of blades, you definitely have options with the UCS 5108 blade enclosure.
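To put that density claim in perspective, here is the core count worked out. The per-node figure comes straight from the review; the per-chassis total is my own extrapolation from those same numbers:

```python
blades_per_chassis = 4   # full-width B480 M5 blades per 5108 chassis
sockets_per_blade = 4    # 4-socket server node
cores_per_socket = 28    # top-end 28-core Xeon Scalable parts

cores_per_blade = sockets_per_blade * cores_per_socket
cores_per_chassis = blades_per_chassis * cores_per_blade

print(cores_per_blade)    # 112 cores per node
print(cores_per_chassis)  # 448 cores in a fully loaded chassis
```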