Dell PowerEdge R750 Server Review
November 16, 2021
The Dell PowerEdge R750 server is one of the new 15th generation Dell PowerEdge servers. It offers hybrid storage and several other modern features. This system takes over from Dell’s previous generation R740 server. It supports 3rd generation Intel Xeon Scalable processors with up to 40 physical cores in a standard 2U chassis. It may look a lot like the previous version, but much has changed on the inside to make this system stronger, faster, and better.
There are three things we’re loving about this platform. First, all-NVMe U.2 drives as an option up front. Second, a dual hot-swap M.2 drive caddy accessible from the rear of the chassis. Third, the Dell network daughter card, or NDC, has now been firmly replaced with an OCP card for flexible I/O. The I/O with the NDC was flexible, but there are just more options now using OCP. We forgot to include the 3rd generation Intel Xeon Scalable processors, which probably should be at the top of the list. Oh, and PCIe 4.0 compatibility. Then again, maybe we shouldn’t have limited it to three things… There are four different chassis options for this platform.
PowerEdge R750 Models
A 24x 2.5-inch drive bay version, an 8x 2.5-inch drive bay chassis, a 12x 3.5-inch drive bay chassis, and our platform today with 16x all-flash NVMe drive bays lining the front of the chassis. We should also mention that two other R750 platforms are available, the R750xa and the R750xs. Maybe that makes 6x platforms? The R750xa supports up to 4x full-width, full-length GPUs, the good ones, or up to 6x single-width cards like the Nvidia T4 for distributed environments. That chassis is also a bit longer to support those GPUs.
On the other hand, the R750xs offers the least performance of the new SKUs. Perhaps the XS model is for “extra small” because it will only support CPUs with lower core counts, fewer memory modules, and fewer storage options. But just like the porridge Goldilocks lifted from those bears out for a morning powerwalk, the Dell PowerEdge R750 server hits the midrange between the two other options and may be “just right” for your workload.
For the control panel on the left server ear, you have two options: one with and one without QuickSync. Ours does not have QuickSync. The panel includes several status LEDs for system drives, temperature, electrical, memory, and PCIe expansion slots. The big one serves a dual function as a system ID button and health status indicator that blinks and changes colors based on system health. The QuickSync version is very similar, with the same icons on the left plus two buttons where the system ID and status button sits on the other panel. One is for QuickSync, which temporarily pairs your smartphone or tablet using Bluetooth. Just to be clear, QuickSync 2 is not available on all configurations, but it is a nice feature for at-chassis management of the system. It provides aggregate hardware and firmware inventory, plus system-level diagnostic and system error information.
All of that information is at your fingertips when using the smartphone in your pocket or a tablet outfitted with the OpenManage Mobile app, freely available from the app store. On the right-hand server ear, you have the customary control panel for a crash cart if you won’t be using QuickSync 2. It includes a power-on button, a USB port, an iDRAC Direct micro USB port and an iDRAC status LED.
Management of the Dell PowerEdge R750 server goes through the Integrated Dell Remote Access Controller 9, also known as iDRAC9. A dedicated RJ45 port on the back of the system provides a Gb LOM access point for remote management of the server using a standard browser. At-chassis management also goes through iDRAC and is accessible from the front using a crash cart or the QuickSync 2 wireless module and the OpenManage application.
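For the scripting-inclined, iDRAC9 also exposes the industry-standard Redfish REST API over that same management port. As a rough sketch (the host address and the abridged sample response below are made-up placeholders; `System.Embedded.1` is the conventional iDRAC identifier for the server’s Redfish System resource):

```python
import json

# Hypothetical iDRAC address for illustration only.
IDRAC_HOST = "192.168.0.120"

def system_health_url(host):
    # Redfish APIs root at /redfish/v1; iDRAC exposes the server itself
    # as the "System.Embedded.1" system resource.
    return f"https://{host}/redfish/v1/Systems/System.Embedded.1"

def summarize_health(payload):
    """Pull the rolled-up health status out of a Redfish System response."""
    status = payload.get("Status", {})
    return {
        "model": payload.get("Model"),
        "power": payload.get("PowerState"),
        "health": status.get("HealthRollup", status.get("Health")),
    }

# Abridged, made-up sample of what such a response might contain:
sample = json.loads("""{
  "Model": "PowerEdge R750",
  "PowerState": "On",
  "Status": {"Health": "OK", "HealthRollup": "OK"}
}""")

print(system_health_url(IDRAC_HOST))
print(summarize_health(sample))
```

In practice you would GET that URL with your iDRAC credentials (for example with `curl -k -u user:pass`) and feed the JSON body to a parser like the one above.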
Dell’s OpenManage application also plays well with a number of other mainstream and open-source apps for more granular control of the system, including Microsoft System Center, Red Hat Ansible Modules, plus VMware vCenter and vRealize Operations Manager, to name a few…
If you go with all NVMe drives on the 16-bay chassis, then there are sufficient ports on the system board to support all of them, including the optional rear drive cage! That means you get to use all of those PCIe 4.0 expansion slots in back to support other things, like high-performance I/O modules. However, if you go with the 24-bay version and want all NVMe, then the optional rear drive cage will only support SATA or SAS drives, and a PCIe mounted card will be required to support the 8 additional front mounted NVMe drives. Still, not a bad tradeoff considering the previous generation R740 didn’t even support NVMe; you had to go with the R740xd for that. So, a definite PCIe bandwidth improvement with Intel’s 3rd generation Xeon Scalable processors.
We also have a small front mounted H745 PowerEdge RAID Controller, or PERC. They actually refer to it as the fPERC in the documentation, presumably because it mounts in the front. An additional “Adapter” PERC can be installed in either riser 1 or riser 2.
The H745 is kind of a transitional 10th generation card, but it delivers the goods for SAS and SATA implementations. You can install 2x of these front mounted PERCs using dedicated connectors on the backplane, and that includes the H755N PERC, an 11th generation RAID controller. It’s PCIe 4.0 compatible and specifically designed to support up to 8 NVMe drives per controller with a full range of RAID options. Other 11th generation controller options are available, like the HBA355i and HBA355e host bus adapters.
Rear Ports on Chassis
The back of the system is where it gets interesting. One of the major differences is the placement of the PSUs, which are now on either side of the chassis for better air flow. You may have also noticed the smaller power supply unit. A PSU adapter is needed in this case to make it fit, but if you have a standard 86mm PSU then the adapter is not required. Everything else looks pretty similar to the old one.
However, one new feature on the Dell PowerEdge R750 server is that little slot for a hot-swap BOSS-S2. You know, a boot optimized storage subsystem to boot the system using M.2 drives, in this case 2x of them. The controller board offers non-RAID and RAID 1, which mirrors the drives for redundancy. These M.2 drives can be removed directly from the back of the server without removing the cover panel, a definite improvement over the previous BOSS, which was embedded in the system.
Aside from that, our system doesn’t have the optional rear storage cage with integrated fans that supports either 2x or 4x 2.5-inch drives, whether SATA, SAS or NVMe. The 4-bay rear drive cage will occupy risers 1 and 3, while the 2-bay drive cage will take out riser 3. Both have the M.2 cage incorporated into the bracket. It’s important to mention that NVMe in the rear cage is not an option with 24x NVMe U.2 drives up front. Then there are the standard PCIe expansion slots.
LOM, OCP, I/O
The OCP card is right in the middle of the chassis, and on either side of it are two other removable circuit boards: the LOM card and the I/O board. The LOM card includes a dedicated Gb Ethernet management port to access the integrated Dell remote access controller (iDRAC). The LOM card can also be switched out to support a liquid cooling I/O board. The I/O board includes a few NIC ports. These removable boards make upgrading the I/O and management features quick and easy!
The liquid cooling option on the Dell PowerEdge R750 server supports increased power and thermal requirements and is available as an in-the-field upgrade. The kit consists of liquid cooling modules that dissipate heat from the CPUs, a few tubes, and another LOM card module that serves as the access point for the liquid cooling hardware.
Intel Xeon CPUs
The new “Ice Lake” CPUs deliver up to 40 physical cores and 80 threads, plus more PCIe lanes: 64 per CPU compared to only 48 with the earlier 2nd generation Intel Xeon Scalable processors. We might have mentioned this in one of the sections above, but PCIe 4.0! That means twice the bandwidth compared to PCIe 3.0. Not to mention, 128x PCIe lanes with both processors installed, which is how all of the NVMe storage is supported directly on the system board. This particular chassis came in a minimalist configuration with dual processors, a single 480GB SATA3 SSD in front, and two 16GB 2400MHz memory modules, one for each CPU.
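The lane budget works out neatly. A back-of-the-envelope sketch (the roughly 2 GB/s per-lane figure for PCIe 4.0 is an approximation we’re assuming, and x4 is the usual link width for a U.2 NVMe drive):

```python
# Rough PCIe lane budget for a dual-CPU R750 (figures approximate).
LANES_PER_CPU_GEN4 = 64    # 3rd Gen "Ice Lake" Xeon Scalable
GBPS_PER_LANE_PCIE4 = 2.0  # ~2 GB/s per lane, roughly twice PCIe 3.0

total_lanes = 2 * LANES_PER_CPU_GEN4  # both sockets populated
nvme_lanes = 16 * 4                   # 16x U.2 NVMe drives at x4 each

print(total_lanes)   # 128
print(nvme_lanes)    # 64, half the budget, leaving the rest for risers and OCP
print(total_lanes * GBPS_PER_LANE_PCIE4)  # ~256.0 GB/s aggregate
```

That is why the 16x NVMe chassis can hang every front drive directly off the system board and still leave the expansion slots free.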
The Intel Xeon Scalable Gold 6348 processors operate at a base frequency of 2.6GHz with a maximum boost of 3.5GHz. Each CPU has 28 physical cores and 56 virtual threads, plus significantly more cache at 42MB compared to the top Gold offerings from the 2nd generation.
These processors also support more memory, including Intel Optane 200 series persistent memory modules. These were formerly known as data center persistent memory modules (DCPMM), but now go by the even shorter acronym PMem. Memory speeds of up to 3200MHz are also supported, compared to only 2933MHz prior. At maximum capacity, this system will support up to 12TB of memory using a combination of 8TB of Optane persistent memory paired with 4TB of load reduced memory modules.
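That 12TB ceiling is straightforward DIMM slot arithmetic. A minimal sketch, assuming the 32 slots are split evenly between PMem and LRDIMMs and assuming 512GB PMem and 256GB LRDIMM module sizes (the per-module capacities are our assumption, not stated in the review):

```python
# Illustrative max-memory math; module capacities are assumed.
DIMM_SLOTS = 32                   # 16 slots per CPU
pmem_modules = 16                 # Optane PMem 200 series modules
pmem_gb = pmem_modules * 512      # 512GB each -> 8192 GB (8TB)
lrdimm_modules = DIMM_SLOTS - pmem_modules
lrdimm_gb = lrdimm_modules * 256  # 256GB each -> 4096 GB (4TB)
total_tb = (pmem_gb + lrdimm_gb) / 1024

print(pmem_gb, lrdimm_gb, total_tb)  # 8192 4096 12.0
```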
With the CPU update you have a multitude of expansion options. There are 4x risers possible in this system with several options to choose from based on your potential workload and a few other dedicated PCIe connections coming directly off the motherboard. Our system does not have Riser 2 and there is a blank where Riser 3 would pop in.
Using a special GPU kit, you can install 2x full height, full length 300W GPUs, or 4x single-width 150W GPUs, or up to 6x 75W accelerators like the Nvidia T4. The kit includes a different air shroud, a GPU air shroud filler, potentially a few cables, T-type heat sinks for CPUs 1 and 2, and high-performance fans from either the gold or silver tier.
If installing GPUs, then you will be using the 6x PCIe 4.0 x16 slot option instead of the 8x PCIe 4.0 x8 configuration. This system will also support up to 6x PCIe mounted SSDs for more and faster storage. As already mentioned, there are the removable circuit boards at the back of the chassis with the OCP card, I/O board, and LOM card. Other items with their own dedicated slots include the M.2 caddy and an internal dual SD card module, or IDSDM, though that slot can alternatively support an internal USB key, so it’s one or the other.
The IDSDM provides dual SD cards for support of a hypervisor, plus a flash card on the other side for storage of firmware updates and the like. Lastly, the H745 RAID controller has its own little perch on the backplane. In fact, Dell calls this their rear mounting, front PERC module. It features 16x internal ports to support those 16x front drive bays. This particular card is PCIe 3.0 compatible and does not support NVMe drives, but it does have flash-backed cache, or NVCache, technology to protect data in the event of power loss. For support of NVMe drives in a RAID, administrators can install the brand spanking new H755N controller.
The Dell PowerEdge R750 server features all sorts of new technology. They even updated the fans to be more efficient, and this chassis also supports optional liquid cooling for higher-end systems with GPUs, maximized memory, and storage. This is Dell’s workhorse system that can be applied to a number of different workloads, including database and analytics, high-performance computing, virtual desktop infrastructure, and many others. There is a lot packed into this system and we probably missed or glossed over a few new features, so if you have any questions just post them in the comment section below. If you made it this far, you are truly a masochist!