Dell PowerEdge XR2 Server Review
May 5, 2020
The Dell PowerEdge XR2 server (SHOP HERE) is the new generation of rugged PowerEdge XR enclosures and the successor to the PowerEdge R420XR.
This may come as a surprise to you, but not every server room is in a building.
These are developed for rugged environments and made to withstand shock, vibration, dust, humidity, and electromagnetic interference. Whether that extends to an EM burst I don’t know, but these are used by the military and for maritime applications, and in other hot, humid, and generally disagreeable environments. In a nutshell, it supports two Intel Xeon Scalable processors, 8x drive bays, and up to 30TB of storage!
We found that it looks very much like a standard 1U server. However, for such a small size it feels quite heavy, likely due to a thicker gauge of steel in the chassis surround. Similar to the HPE Edgeline systems, this ruggedized enclosure is for field applications in remote areas where communication with a data center may not be available.
It will operate in temperatures up to 45 degrees Celsius, or, for those of us who haven’t made the conversion to the metric system, 113 degrees Fahrenheit. It will also handle 55 degrees Celsius, which is 131 degrees Fahrenheit, for up to 8 hours. Hmmm…what happens after 8 hours…?
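Those Fahrenheit figures are just the standard conversion formula; a quick sketch in Python (purely illustrative, not anything from Dell):

```python
def celsius_to_fahrenheit(c: float) -> float:
    """Standard Celsius-to-Fahrenheit conversion: F = C * 9/5 + 32."""
    return c * 9 / 5 + 32

# The XR2's rated ceilings quoted above:
print(celsius_to_fahrenheit(45))  # 113.0 -- continuous operation
print(celsius_to_fahrenheit(55))  # 131.0 -- up to 8 hours
```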
This 1U server has a short depth of 20 inches for space-constrained environments. It features a nondescript bezel with a control panel on the right for system health, which our unit doesn’t have. Apparently, it has an integrated filter for dusty environments. Underneath that bezel, you’ll find eight 2.5-inch drive bays, with an on/off button, USB 3.0 port, iDRAC Direct port, and iDRAC LED on the right, plus a control panel on the left.
Next to the control panel is the system health status button, which helps isolate failed hardware components with LED icons on the left for drive, temperature, electrical, memory, and PCIe system status. The status button may or may not be integrated with an optional Quick Sync button for at-the-chassis management of the system using smartphones and tablets. Another panel between the drives and the control panel offers a VGA port, eSATA port, Common Access Card reader, and information tag.
Common Access Cards (CACs) provide another layer of authentication for data encryption. They allow employees access to company buildings, databases, and other facilities. The Common Access Card is also used by the Department of Defense and provides multi-factor authentication for those serving in the military and for civilian contractors who need access to military hardware. The card stays in the card reader as long as the system is being used by the card holder. Once removed, the system is inaccessible until a valid CAC is inserted.
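That card-present behavior amounts to a simple gate: the session is usable only while a valid card sits in the reader. A hypothetical sketch of the logic (class and method names are mine, not Dell’s):

```python
class CacGatedSession:
    """Hypothetical model of CAC card-present session gating."""

    def __init__(self) -> None:
        self.unlocked = False

    def insert_card(self, card_is_valid: bool) -> None:
        # The system only unlocks for a card that authenticates successfully.
        self.unlocked = card_is_valid

    def remove_card(self) -> None:
        # Pulling the card immediately locks the system again.
        self.unlocked = False


session = CacGatedSession()
session.insert_card(card_is_valid=True)
print(session.unlocked)  # True: usable while the card is in the reader
session.remove_card()
print(session.unlocked)  # False: inaccessible until a valid card is inserted
```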
On the back of the system, starting on the left, you’ll see a system identification button, a system status indicator cable port, then a VGA port with a serial port above it. Next, there’s a dedicated iDRAC port, 2x USB 3.0 ports, and two Ethernet ports, followed by two low-profile PCIe slots with a slot for the LAN on Motherboard riser just below, for more network interface options. On the far end are the dual redundant power supplies, which in this case are limited to 550W Platinum AC units.
Under the cover are two sockets for one or two Intel Xeon Scalable processors, limited to a 150W thermal design power (TDP) to reduce heat buildup. The first processor provides access to the most memory: the PowerEdge XR2 has a rather unconventional arrangement in which four of the six memory channels support two memory modules while the other two channels support only a single module. CPU 2, on the other hand, supports six memory modules, each in its own memory channel.
The guys at AnandTech reviewed this system about two years ago. At that time, it only supported 512GB using memory modules certified for better resistance to temperature variation and moisture, and only 32GB certified modules were available. However, Dell now states the system will support a maximum of 2TB, which would mean 128GB memory modules in all slots. Either the 32GB limit on certified modules has been lifted, or Dell is fine with standard Registered or Load-Reduced modules; we won’t know the answer until they release a new spec sheet! Either way, the capacity currently listed at 2TB has increased by a factor of four. Memory speed runs at 2666 MT/s using first-generation Intel Xeon Scalable processors.
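Both capacities line up with the slot count implied by the channel layout described above. A quick back-of-the-envelope check (assuming 16 DIMM slots total, as the layout suggests):

```python
# DIMM slots implied by the XR2 memory layout described above.
cpu1_slots = 4 * 2 + 2 * 1   # four channels with two modules, two with one
cpu2_slots = 6 * 1           # six channels, one module each
total_slots = cpu1_slots + cpu2_slots

print(total_slots)           # 16
print(total_slots * 32)      # 512 (GB) -- the original certified-module ceiling
print(total_slots * 128)     # 2048 (GB) -- the 2TB maximum Dell now lists
```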
There are actually two different storage configurations for this chassis. The chassis we have here supports 8x 2.5-inch hot-plug SAS or SATA SSDs. The other is a high-performance chassis that supports 4x 2.5-inch SAS or SATA SSDs plus 4x hot-plug NVMe drives. The SATA and NVMe drives are supported by a hybrid backplane. With a single processor, an NVMe cable connects the optional mini-PERC controller to one of the riser slots on the system board, supporting two NVMe drives. With two CPUs, all four NVMe drives can be installed, which requires the previously mentioned connection plus a second cable from the backplane to the SATA III/NVMe hybrid port on the motherboard.
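The cabling rules above reduce to a simple function of CPU count; a sketch (the function name is mine, purely illustrative):

```python
def nvme_bays_usable(cpu_count: int) -> int:
    """NVMe bays usable in the high-performance chassis, per the
    cabling rules described above (illustrative, not a Dell API)."""
    if cpu_count >= 2:
        return 4  # second backplane cable to the SATA III/NVMe hybrid port
    if cpu_count == 1:
        return 2  # single NVMe cable via the mini-PERC riser path
    return 0

print(nvme_bays_usable(1))  # 2
print(nvme_bays_usable(2))  # 4
```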
vROC, or Virtual RAID on CPU, is supported for SSD RAID configurations and is available with three options depending on your choice of vROC key. If you decide to go with SAS drives, there is a dedicated PCIe riser for an optional PERC controller, which will also give you more control over your storage. RAID controllers supported on this system include the mini-PERC H330, the H730P for performance, or the HBA330. One of the more interesting options is an OEM de-branded or rebranded bezel featuring something near and dear to all companies: their logo.
For hypervisor support, an optional Internal Dual SD Module (IDSDM) supports one or two micro SD cards on one side with an optional flash memory card on the other. The second micro SD card is dedicated to redundancy, while the flash memory card can be used by iDRAC for storage. The IDSDM plugs into a dedicated slot on the motherboard.
There’s also an integrated Boot Optimized Storage Solution, or BOSS, that supports two M.2 modules and can be used to run the OS in mirror mode for redundancy or as additional super-fast storage. The dedicated port on the back of the system connects to the Intelligent Platform Management Interface.
The system is IPMI compliant, and just like Dell’s other next-generation servers, this one has iDRAC 9 with Lifecycle Controller for at-the-chassis and remote management of the system. You can also connect directly to the system using the optional Quick Sync 2 wireless module. The Quick Sync button allows you to temporarily pair a Bluetooth-enabled smartphone or tablet to the server through Dell’s OpenManage Mobile (OMM) app, which is available for both iOS and Android.
Three PCIe 3.0 slots support optional PCIe cards, either full-height half-length or low-profile half-length. There are two integrated 1Gb/s NIC ports on the back panel, but you can use an optional LOM riser card for a multitude of connection options. By the way, I’m not sure why this is not referred to as the Network Daughter Card like on all of their other chassis, but let’s move on. The system features an integrated M.2 module that supports two M.2 drives in a hardware RAID for your operating system. You can also take up one of those PCIe slots with a BOSS card that likewise supports two M.2 SSDs in a hardware RAID, for additional super-fast storage or for the OS. Just one 75W single-wide NVIDIA Tesla T4 GPU is supported, and only in riser 2. If you will not be installing the GPU, leave the plastic bracket in place.
These ruggedized chassis have several key features that differentiate them from your run-of-the-mill servers: the ability to withstand extreme temperatures and elevations, and a compact 20-inch depth designed for tight spaces. If you need to load that server into your van or take it around the world in your paddle boat, look to the Dell EMC PowerEdge XR2.