Review: Dell EMC PowerEdge MX840c Server Sled

August 5, 2021 By Lorena Mejia

We have another server sled for the Dell EMC MX7000 chassis! You know, the one that’s replacing the Dell M1000e enclosure. Dell says it’s not really a replacement for that chassis, but later on they position it as exactly that. To put an end to your doubts, we’re going to look at another server node for the MX7000, the Dell EMC PowerEdge MX840c (SHOP HERE).



We did the single-width Dell MX740c a few weeks back and you can see that here. In that video it was hard to escape the fact that it was housed in an enclosure. Is this a blade server? Yes, it is. And so far there are only three modules for this chassis:

-The MX740c, a dual-socket server sled 

-The MX5016s storage sled, for direct-attached storage with 16x 2.5-inch HDDs or SSDs that can be shared across server sleds

-Our Dell EMC PowerEdge MX840c compute sled, which has a double-width form factor and up to 4x Intel Xeon Scalable processors.

This modular system is a great fit for software-defined storage, database, and dense virtualization workloads. So far there are no AMD processors or GPUs supported, but those are definitely in the works. The key takeaway here is flexibility: Dell built this system not only to support today’s technology but also future developments, including the next three generations of processors.

First, a little about the MX7000 chassis. It’s a 7U chassis, and on the front of the system we have 6x 3000W PSUs at the bottom plus four large fans vertically aligned down the center of the chassis. The server or storage sleds are also stacked vertically, with room for four single-width nodes or two double-width nodes on either side of the hot-swappable cooling fans. You can place up to 8x single-width server sleds, up to 4x double-width sleds, or a combination of the two. You could even go all storage sleds on this system, but you will need at least one compute module, and it must be mapped to the storage.

The far left-hand chassis ear has a control panel, and there are three options for that. One is an LED panel option that includes a few telltale LEDs: system health, temperature, I/O health, fan health, whether the system is part of a stack or group, a general status bar for at-a-glance health, and a system ID button. The other two have liquid crystal displays, and one has optional QuickSync for at-chassis management of the system using a tablet or smartphone; the other does not. But keep in mind that if you want QuickSync, you will need to order it at time of purchase: QuickSync is not a field upgrade. The right side has a control panel too. That one is a little simpler and offers a power ON button, 2x USB 2.0 ports, and a mini-DisplayPort, plus a management port squeezed in between.

On the back of the chassis you’ll notice everything is horizontally arranged, and that’s so the mezzanine cards on the back of the server and storage sleds can connect with Fabrics A, B, and C. Working our way down from the top, there are two slots for Fabric A I/O modules, followed by a row of 5x large fans. Below those are two I/O module slots for Fabric B, and below those are the two Fabric C modules for storage. Then come two slots for the management modules, plus the power inlet connections for those massive PSUs, which include LEDs for at-a-glance health status.

Dell PowerEdge MX7000 management modules

The MX7000 chassis has no midplane, so the server and storage nodes connect directly to the I/O fabric modules on the back of the chassis through the mezzanine cards on the server and storage sleds.

Dell PowerEdge MX7000 chassis no backplane

This is an upgrade, as removing the midplane connection to the server and storage sleds means only the I/O modules on the chassis, and perhaps the mezzanine cards in the blades, need to be switched to accommodate updated or newer technologies. While the MX7000 chassis supports up to 4x fabric modules for A and B, you can definitely get away with just one of each, because the other two are for redundancy. The third fabric, Fabric C, has two modules of its own and connects the compute modules to the direct-attached storage, AKA the MX5016s storage module, through mezzanine cards on the compute modules.

Utilizing the MX5016s storage module requires the MX5000s I/O module for Fabric C, but if you plan on connecting to that storage module from one of your compute blades you will need to install a mini mezzanine card to access Fabric C, either the HBA330 MMZ or the PERC MX745P.

If the Dell PowerEdge MX840c were an actual blade, it would cut deeply. A little Forged in Fire reference if you too have been sucked into that series like we have.

The 8x storage bays on the front of the Dell PowerEdge MX840c can be virtualized along with those in the MX5016s storage module. There is also a power ON button, and a USB port on the top right next to the iDRAC port for at-chassis management. Below the ON button is a system status LED (solid blue is good), and then a blue button with a release latch to slide the sled out of the chassis.

Dell PowerEdge MX840c panel

The MX840c is like two of those MX740cs stacked on top of each other, with two processors and associated memory modules on the top and two on the bottom. There is what can only be described as another latch, which secures the top system board, or Processor Expansion Module (PEM), to the system board below through up to 4x UPI connectors on the lower system board. Once the PEM is removed, you can see the lower set of CPUs and associated memory modules.

You can install 2x or 4x of Intel’s second-generation Xeon Scalable processors in this baby, with up to 28 cores each for up to 112 cores and 224 threads with all four processors installed. Of course, the new Ice Lake Intel Xeon Scalable processors will support up to 40 cores each and PCIe 4.0, but let’s not go there yet. Dell isn’t either. If you go with only two processors you will lose access to about half the memory and PCIe lanes, not to mention half the compute, but that was kind of a given.
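If you want to sanity-check that math, here it is as a quick Python sketch. We’re assuming top-bin 28-core chips; the actual core count depends on the SKU you pick:

    # Quick core/thread math for the MX840c, assuming top-bin
    # 28-core second-generation Xeon Scalable CPUs.
    cores_per_cpu = 28
    threads_per_core = 2  # Hyper-Threading

    for sockets in (2, 4):
        cores = sockets * cores_per_cpu
        threads = cores * threads_per_core
        print(f"{sockets} CPUs: {cores} cores / {threads} threads")

    # Output:
    # 2 CPUs: 56 cores / 112 threads
    # 4 CPUs: 112 cores / 224 threads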

Dell PowerEdge MX840c compatible CPUs

Each processor supports 12 memory slots for a total of 48 active memory module slots with all 4x processors installed. With 6x memory channels per processor, you can load two memory modules per channel, but for best performance you would install only a single module in each channel. Supported memory includes registered (RDIMM) and load-reduced (LRDIMM) modules, plus NVDIMM-N in up to 12x slots, and up to 24x slots can be filled with data-centric persistent memory modules, also known as Intel Optane, at least until other manufacturers start knocking out DCPMMs.

Dell MX840c motherboard

Using standard registered DIMMs, the spec sheet lists a maximum capacity of 3TB, or 6TB using load-reduced memory modules. If you need a bit more resiliency, you can install up to 192GB of non-volatile DIMMs. NVDIMMs do require a battery so that data being processed in your memory modules is safely stored in the event of a power failure. DCPMMs will definitely provide the highest capacity and lowest latency at up to 15.36TB, and they do not require a battery.

Only 12.2TB of that is actually supplied by the DCPMMs, or PMems, while the rest is supplied by registered or load-reduced memory modules, which work in conjunction with the DCPMMs. There are a lot of acronyms…These are not to be confused with the ROUSs, or rodents of unusual size, which inhabit the dreaded Fire Swamp…
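If you’re wondering how those spec-sheet numbers fall out of the slot counts, here’s the arithmetic as a quick Python sketch. The module sizes are our assumptions (64GB RDIMMs, 128GB LRDIMMs, 16GB NVDIMM-Ns, and 512GB Optane DCPMMs), but they reproduce the listed maximums exactly:

    # Memory math for the MX840c, assuming the largest common module
    # sizes of this generation: 64GB RDIMM, 128GB LRDIMM, 16GB NVDIMM-N,
    # and 512GB Optane DCPMM. All figures in GB.
    total_slots = 48

    rdimm_max = total_slots * 64    # 3,072 GB: the "3TB" figure
    lrdimm_max = total_slots * 128  # 6,144 GB: the "6TB" figure
    nvdimm_max = 12 * 16            # 192 GB of NVDIMM-N

    # DCPMM layout: 24 slots of Optane paired with 24 slots of DRAM.
    dcpmm = 24 * 512                # 12,288 GB: the "12.2TB" persistent portion
    dram = 24 * 128                 # 3,072 GB of LRDIMMs alongside it
    dcpmm_total = dcpmm + dram      # 15,360 GB = 15.36TB

    print(rdimm_max, lrdimm_max, nvdimm_max, dcpmm_total)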

The two management modules (MMs) on the back of the chassis work with the OpenManage Enterprise Modular application to manage the system. It basically keeps track of your SAS storage subsystem, health status, inventory, event logs, plus drive and enclosure assignments. Think of OpenManage Enterprise as the overlord or king of the servers, with each server’s integrated Dell Remote Access Controller (iDRAC) with Lifecycle Controller acting as a knight or vassal in the feudal pyramid. Does that mean we are the peasants?
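Since the management modules also expose a standard Redfish REST API, you can pull much of that same health and inventory data with a short script instead of clicking through the web console. Here’s a minimal sketch using the standard DMTF /redfish/v1/Chassis collection; the IP address and credentials below are placeholders for your own environment:

    # Minimal sketch: list chassis health via the management module's
    # Redfish API. The host and credentials below are placeholders.
    import requests

    MM_HOST = "https://192.168.0.120"  # hypothetical management module IP
    AUTH = ("admin", "password")       # placeholder credentials

    # Management interfaces commonly ship with self-signed certs,
    # hence verify=False for this sketch.
    resp = requests.get(f"{MM_HOST}/redfish/v1/Chassis", auth=AUTH, verify=False)
    resp.raise_for_status()

    for member in resp.json()["Members"]:
        chassis = requests.get(f"{MM_HOST}{member['@odata.id']}",
                               auth=AUTH, verify=False).json()
        status = chassis.get("Status", {})
        print(chassis.get("Id"), status.get("Health"), status.get("State"))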

At the back of the Dell PowerEdge MX840c compute sled are 4x PCIe 3.0 x16 slots for the I/O mezzanine cards, which connect to I/O Fabrics A and B. Each mezzanine card has two connections, and you can install two mezzanine cards for each of the fabric connectors A and B. There are also PCIe 3.0 x16 mini mezzanine card slots for storage Fabric C on both the PEM board and the system board below.

The Dell PowerEdge MX840c system board has a few more dedicated PCIe slots: one for an iDRAC card, plus another for either a Boot Optimized Storage Solution (BOSS) or an Internal Dual SD Module (IDSDM) to support a hypervisor. The BOSS has 2x M.2 storage devices for redundancy, and the IDSDM has two SD card slots on one side that can be used in mirror mode for redundancy, plus another flash card slot on the back for use by iDRAC. Lastly, there’s a dedicated slot for a PowerEdge RAID Controller (PERC) card if you plan on installing SAS drives or just want more control over your storage. You also have the option of using an internal USB key (towards the back of the sled near the mezzanine card slots) as a boot device, security key, or for general storage.

This is only the beginning for the MX7000 platform. Dell has plans for not only compute sleds with AMD processors, but also GPUs and FPGAs! If you are in the market for a high-density system, this one is capable of delivering the same performance in 7U that a bunch of individual servers might reproduce in 8U, and that doesn’t even take into account the I/O modules, which would take up even more space.

If you’re interested in purchasing this server, click here! Or, if you’re interested in other servers or components, click here for IT Creations’ homepage.