HPE ProLiant DL380 Gen11 Server Review
May 7, 2024

The HPE ProLiant DL380 Gen11 (SHOP HERE) server is one of HPE's workhorse multi-environment platforms. It sits at 2U and offers more storage and expansion options than its 1U sibling, the DL360 Gen11. It is outfitted with dual 4th generation Intel Xeon Scalable processors, code name Sapphire Rapids, for improved performance across the board compared to the previous 3rd generation Intel Xeon Scalable Ice Lake CPUs. There are so many storage options on this 2U chassis that it does get a little confusing.
This platform can be used for highly virtualized environments, cloud, Big Data, AI applications, and a host of other workloads. New to this system is support for the NVIDIA H100, L40, and L4 GPUs.
It is also the world record holder for TPC-H at the 10,000GB scale factor, non-clustered. TPC-H is a transaction processing and database benchmark specific to decision support. At the 10,000GB scale factor, that is a serious amount of data for one of these babies to chew through for analytics. It also achieved world record performance at the 2-processor level and, on another note, is the first to surpass 2 million QphH, the composite Queries-per-hour metric TPC-H uses to express database performance.
HPE also goes on to say there is a 42.79% increase in performance for this platform compared to a previous-generation Dell server that won't be named at this time. Fine, it's the Dell PowerEdge R940xa, a 4U platform with 2x or 4x sockets depending on whether you add the CPU mezzanine for processors 3 and 4.
We find the comparison somewhat lacking, as that is an older system running 2nd generation Intel Xeon Scalable processors with up to 28 cores each and up to 6TB of memory; not exactly the "new" hotness, so to speak. The DL380 Gen11, by contrast, runs 4th generation Intel Xeon Scalable processors with support for DDR5 memory (a claimed 1.6x performance increase), PCIe 5.0, and the improved Ultra Path Interconnect 2.0. Specifically, the DL380 Gen11 test platforms were running Intel Xeon Platinum 8490H processors with 60 cores per socket to achieve those numbers, whereas the 2nd gen CPUs topped out at 28 cores. Impressive? Yes, but we find these performance claims so cryptic as to be somewhat unenlightening. Let's take a look at the hardware.
The HPE ProLiant DL380 Gen11 Server offers a number of different storage options with front, mid-tray, and rear drive cages. There is a 3.5-inch (LFF) chassis with 12x drive bays, EDSFF versions with 12x or 20x bays, and a 2.5-inch (SFF) chassis with either 8x or 24x bays. The 24-bay version grows to 30x bays if you add 6 drives at the rear of the chassis, and you can even add 8 more in the mid-tray for up to 38x drive bays.
Of course, rear drive cages can be installed on the front 8-bay version too, bringing the total to 14x 2.5-inch drive bays. Drive types supported include SATA, SAS, NVMe U.3, and EDSFF high-density storage devices. There is an 8-bay LFF chassis, but that one cannot be upgraded to the 12-bay LFF version, at least on the front. There are various Tri-Mode bays, depending on the backplane, that will support any drive type in that particular bay. There might be one, two, or four Tri-Mode bays per storage box, or you can just load up on a specific drive type, again depending on the backplane. Our system doesn't have any of the good stuff, unfortunately, so no show-and-tell on all these drive cage options. We do still have our imagination. Kidding! We're using Photoshop for this exercise! Did we mention there is an optional bezel…
On the front of the chassis there might be a universal media bay, but only on the 8-bay 2.5-inch chassis. One thing that is pretty consistent is the control panel on the right server ear of the chassis. Well, maybe not if we look at the 8-bay LFF chassis, but let's ignore that for now. Starting at the top: Power On/Standby button and LED, Health status LED, Network Interface Controller (NIC) status LED, and Unit ID button and LED. Right next to those is an integrated Lights-Out (iLO) service port at the top with a USB 3.0 port below it. On the left server ear there is a serial number pull tab and a drive support label, though that drive support label really doesn't provide much information. If your unit comes with the media bay, there is an optional optical drive and dual USB 2.0 ports. Our unit came with two blank storage boxes and one 8-bay 2.5-inch storage box.
On the back of the server there are two OCP card slots, one of which we outfitted with a storage controller, specifically the HPE MR408i-o OCP 3.0, a Gen11 controller. HPE says this storage controller is ideal for data center environments, supporting either RAID or HBA/pass-through operation. It is designed for PCIe 4.0 lanes and comes in two form factors: a standard PCIe half-height, half-length card or the OCP NIC 3.0 form factor like this one. This one has 4GB of cache and can support SATA at 6Gb/s, SAS at 12Gb/s, or NVMe at 16GT/s across x8 lanes brought out over SlimSAS connectors. There is even a 24Gb/s SAS option listed in the QuickSpecs, but we have not seen any of those yet.
You may have noticed there are no embedded network interfaces on this system. That being the case, the other OCP slot holds an HPE Broadcom BCM5719 OCP 3.0 Network Interface Controller with 4x 1GbE RJ45 ports. This is a low-power option, ideal for virtualization; not really much else to say about that. That OCP slot will take several other OCP cards offering 10Gb, 10/25Gb, 100Gb, and even 200Gb speeds, each with various port counts and connector options, including Small Form-factor Pluggable (SFP), SFP+, and SFP28, and, for that 100Gb card, dual Quad Small Form-factor Pluggable 28 (QSFP28) cages, where each of the four channels signals at roughly 28Gb/s for about 100Gb/s of usable throughput once encoding overhead is taken out. There are more options if you go with a PCIe-based network card.
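To make that "round it off a bit" arithmetic concrete, here is a quick back-of-the-envelope calculation in plain Python; the numbers come from the generic 100GbE/QSFP28 signaling spec rather than anything specific to these HPE cards:

```python
# Rough arithmetic behind the "4 x 28Gb/s = 100Gb/s" QSFP28 figure.
LANES = 4                   # QSFP28 carries four electrical/optical lanes
SIGNALING_GBD = 25.78125    # per-lane signaling rate, the "28G" class
ENCODING = 64 / 66          # 64b/66b line encoding overhead

per_lane_data_gbps = SIGNALING_GBD * ENCODING   # ~25 Gb/s of payload per lane
aggregate_gbps = LANES * per_lane_data_gbps     # ~100 Gb/s total

print(f"Per lane:  {per_lane_data_gbps:.2f} Gb/s")
print(f"Aggregate: {aggregate_gbps:.2f} Gb/s")  # 100.00 Gb/s for 100GbE
```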
Squeezed in between the OCP card slots are the Unit ID indicator LED, two USB 3.0 connectors, a dedicated iLO port for managing the system remotely, and an optional serial port, with a VGA port next to the dual redundant PSUs. Above the PSUs, an optional NS204i-u boot device can be installed; it supports dual M.2 NVMe drives in a hardware-mirrored RAID 1 configuration. There are other options for booting the system, but this one has rear-accessible M.2 drives, which can be useful. There are still options for using an SD card or a USB device to boot the system, but going forward there has been a change in the way VMware can be installed.
Beyond VMware ESXi 7.0, an M.2 or other local persistent storage device will be required, as SD cards and USB devices are no longer recommended as standalone boot media. PSUs supported include 1600W, 1000W, and 800W Platinum or Titanium options. There are several PCIe 5.0 slots above, and those slots can also be given over to optional drive cages, which would take out a number of the PCIe slots.
That integrated Lights-Out port, or iLO port, on the back of the system enables administrators to access the system remotely using a standard web browser. This platform is embedded with iLO 6 Standard with Intelligent Provisioning to monitor, configure, and update the system from anywhere; that covers managing this single system. If this system is part of a cluster or a whole enclosure, then there is HPE OneView, which is also available in a Standard edition as a download. For more features with either of these management suites, you can purchase Advanced versions, which do require a license.
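Since iLO 6 speaks the industry-standard Redfish REST API, here is a minimal sketch of what pulling basic system info over that dedicated port can look like; the hostname and credentials are placeholders, and a real deployment would handle certificates properly rather than disabling verification:

```python
# Minimal Redfish query against an iLO management port (sketch only).
import requests

ILO_HOST = "https://ilo.example.local"  # placeholder iLO address
AUTH = ("admin", "password")            # placeholder credentials

# /redfish/v1/Systems/1/ describes the server itself (model, power, health).
resp = requests.get(
    f"{ILO_HOST}/redfish/v1/Systems/1/",
    auth=AUTH,
    verify=False,   # iLO ships with a self-signed certificate by default
    timeout=10,
)
resp.raise_for_status()
system = resp.json()

print("Model:      ", system.get("Model"))
print("Power state:", system.get("PowerState"))
print("Health:     ", system.get("Status", {}).get("Health"))
```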
With the cover removed, you can see just how pared down this particular system is: only a single riser with 3x slots, and that's pretty much it for PCIe. There are options for the risers as well. The primary riser comes with either 2x x8 slots and a single x16 slot, or 3x x16 slots; both options use x16 physical connectors. An optional secondary riser goes in the slot right next to the primary riser. That secondary riser also supports all PCIe 5.0 slots, with either all x16 slots or a combo of dual x8 and a single x16 slot, again all with x16 physical connectors. There is also a tertiary riser that installs just above the PSUs, and that one has 2x slots with either a PCIe 5.0 and a PCIe 4.0 x16 slot, or all PCIe 5.0 with a combo of x8 and x16 slots. Again, if rear drive cages are installed, they will take out a few of those PCIe slots.
Supporting the storage up front is a single drive backplane for the 8x drive bays, and behind that a row of fans. If you install any GPUs, any mid-tray drive cages, performance NVMe drives, 3x front drive cages, or CPUs with a TDP greater than 205W, then you will need the high-performance fan kit. For reference, this platform features the Intel C741 chipset and can be outfitted with CPUs with a thermal design power of up to 350W. Most of those processors will require a high-performance heatsink, while "Q" processors require a "maximum performance" cooling solution. And now you are wondering, what are "Q" processors? Those particular SKUs basically require liquid cooling, so we can safely assume that maximum performance cooling means liquid cooling. There are suffixes on many of the processors that indicate better performance for specific workloads.
The platform supports Platinum, Gold, a few Silver varieties, and even one Bronze-level CPU. Memory support covers Registered DDR5 modules only, and HPE quotes up to a 60% performance increase for DDR5 over DDR4. Memory speeds of up to 4800MT/s are supported, and with all 32 slots filled with 256GB modules you get up to 8TB of memory. However, we will note that memory speed is dictated by the processor: that lone Bronze processor only supports memory speeds of 4000MT/s, which is still faster than DDR4 speeds, which top out at 3200MT/s, and some of the Silver and Gold CPUs also run memory slower than 4800MT/s.
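For a rough sense of where those transfer-rate numbers land, the per-DIMM arithmetic is simple enough to sketch below; note this covers raw transfer rate only, so the larger gains HPE quotes presumably also factor in channel count and other DDR5 improvements:

```python
# Peak theoretical bandwidth per DIMM = transfers/s x 8 bytes per transfer (64-bit data bus).
def peak_bandwidth_gbs(mts: int) -> float:
    return mts * 8 / 1000  # GB/s

ddr4 = peak_bandwidth_gbs(3200)   # 25.6 GB/s for DDR4-3200
ddr5 = peak_bandwidth_gbs(4800)   # 38.4 GB/s for DDR5-4800

print(f"DDR4-3200: {ddr4:.1f} GB/s per DIMM")
print(f"DDR5-4800: {ddr5:.1f} GB/s per DIMM")
print(f"Ratio:     {ddr5 / ddr4:.2f}x")  # 1.50x on transfer rate alone
```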
This system can be outfitted with an Intel Virtual RAID on CPU (VROC) key for support of SATA or NVMe drive types. VROC offers software RAID or can be used for HBA-style direct-connected drives, and the RAID solution is designed specifically for SSDs. An embedded controller will support up to 14 SATA storage bays with limited RAID options. Several other Smart Array or Tri-Mode controllers provide additional options for managing the crazy storage options on this system, and you may need two depending on the storage/RAID configuration you are trying to achieve. Of course, you could also go with the embedded storage controller augmented with a Smart Array controller. Controllers with a cache will require a Smart Storage Battery and cable kit, like the one we have installed on this system.
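On the software RAID side, VROC volumes on Linux are typically assembled with mdadm using IMSM metadata. The sketch below is only an illustration with hypothetical device names, and it assumes the VROC key is installed and the NVMe drives sit behind the Intel VMD domain:

```python
# Sketch of building a VROC (mdadm/IMSM) RAID 1 volume on Linux.
# Device names are placeholders; run as root on a system with VROC enabled.
import subprocess

NVME_DRIVES = ["/dev/nvme0n1", "/dev/nvme1n1"]  # hypothetical member drives

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Create an IMSM container that groups the member drives.
run(["mdadm", "--create", "/dev/md/imsm0", "--metadata=imsm",
     "--raid-devices=2", *NVME_DRIVES])

# 2. Carve a RAID 1 volume out of that container.
run(["mdadm", "--create", "/dev/md/vol0", "--level=1",
     "--raid-devices=2", "/dev/md/imsm0"])
```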
The platform will support a maximum of 3x NVIDIA H100 80GB, L40 48GB, A16 64GB, or A100 80GB PCIe accelerators. Alternatively, you can install up to 8x NVIDIA L4 24GB PCIe accelerators. The only one designed for PCIe 5.0 is the H100 80GB accelerator, and if you have three installed you will need the direct liquid cooling kit; that configuration is not even supported with the 24 SFF or 12 LFF chassis. The H100 is designed for transformational AI training. In a nutshell, that GPU accelerates pretty much anything and everything for high-performance computing, AI, machine learning, scientific analysis, and whatever else you can throw at it. It can also be partitioned with second-generation Multi-Instance GPU (MIG) technology to support up to 7x separate instances. There is a laundry list of notes associated with the installation of GPUs regarding fans, PCIe slots, cable kits, storage options, and other details.
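As a rough illustration of that partitioning on Linux, the sketch below shells out to nvidia-smi; the GPU index and the 1g.10gb profile name are examples only, since the profiles actually available depend on the GPU model and driver:

```python
# Sketch of enabling MIG and carving a GPU into instances via nvidia-smi.
# Run as root; check `nvidia-smi mig -lgip` for the profiles your GPU exposes.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run(["nvidia-smi", "-i", "0", "-mig", "1"])   # enable MIG mode on GPU 0
run(["nvidia-smi", "mig", "-i", "0",
     "-cgi", "1g.10gb", "-C"])                # create a GPU instance plus its compute instance
run(["nvidia-smi", "mig", "-lgi"])            # list the resulting GPU instances
```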
Tons of storage options. Support for dual 4th generation Intel Xeon Scalable CPUs and, with that, DDR5 memory and PCIe 5.0. Oh, and also GPU support with either 3x double-wide high-performance accelerators or up to 8x single-wide GPU accelerators. This system has a lot to offer. But if it's a bit too much, then there's always the HPE ProLiant DL360 Gen11 at 1U.
We have the ProLiant DL380 Gen11 in stock, and that DL360 Gen11 too! It's OK to do a little window shopping for other gear you might need as well, so if you're interested, contact IT Creations.