Dell PowerEdge R570 Rack Server Review

April 28, 2026 By Lorena Mejia

We have another one of Dell’s 17th-generation servers, the Dell PowerEdge R570 Rack Server. It may not be as densely packed as, say, the R770 or R670, but it is still a more than worthy candidate to power your business needs. Still packed with options, it offers a single Intel Xeon 6500- or 6700-series processor paired with ample storage up front in either 2.5-inch or EDSFF E3.S drive options.

With only a single processor, this system provides power efficiency for the data center, cloud-scale web services, medium-density virtualization, scale-out databases, and software-defined storage, to name a few. It can also be outfitted with up to 3x 400W double-wide GPUs. Like its siblings, the Dell PowerEdge R570 can be configured with front or rear I/O.

As an option, you can install a security bezel. The front I/O configuration places some of the risers, as well as the I/O panel you would ordinarily find in back, on the front of the chassis. There are basically two storage layouts for the front I/O configuration: one with 8x EDSFF E3.S drive bays and the other with up to 16x EDSFF drive bays.

To either side are 3x PCIe slots. Below those slots on the right is the primary OCP NIC card slot. On the other side is a secondary OCP 3.0 mezzanine card slot that can be outfitted with a NIC or a BOSS-N1 DC-MHS module. Next to that slot is a serial COM port along with a dedicated integrated Dell Remote Access Controller (iDRAC) Ethernet port for remote management.

The server ears also have small control panels. On the right is the primary control panel. It features a power button, a USB Type-C port, a host status LED, and a combined system health and System ID indicator, that long blue LED.

On the left is another control panel. This one can be a blank module with no ports or LEDs. Another option has a USB 2.0 port and, below that, a mini DisplayPort. Lastly, there is an option with a Quick Sync 2.0 button for at-chassis management using a smartphone or tablet.

You have options for managing the system. Sure, you can use a smartphone or tablet with the Quick Sync feature on the front, but if you decide not to go with that option, you can still plug into the optional mini DisplayPort and connect a laptop or crash cart. There’s also the dedicated iDRAC port, ordinarily found on the back of the chassis, which enables remote management of the system. With this generation, Dell has updated to iDRAC 10. Other management utilities include OpenManage. Both iDRAC and OpenManage have a few different licensing levels depending on your needs, and there are a lot of supported utilities!
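If you prefer scripting over a web console, iDRAC also exposes the industry-standard Redfish REST API. Here is a minimal sketch of pulling basic system health over Redfish; the address and credentials are placeholders you would replace with your own.

```python
# Minimal sketch: query model, power state, and health from iDRAC's
# Redfish REST API. System.Embedded.1 is the usual iDRAC system ID.
import requests

IDRAC_HOST = "192.168.0.120"   # placeholder iDRAC address
AUTH = ("root", "calvin")      # placeholder credentials

resp = requests.get(
    f"https://{IDRAC_HOST}/redfish/v1/Systems/System.Embedded.1",
    auth=AUTH,
    verify=False,  # lab-only: iDRAC ships with a self-signed certificate
    timeout=10,
)
resp.raise_for_status()
system = resp.json()

print("Model: ", system.get("Model"))
print("Power: ", system.get("PowerState"))
print("Health:", system.get("Status", {}).get("Health"))
```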

With EDSFF drives you get low latency, increased density, and better airflow and cooling, as they don’t connect through a traditional vertical backplane; the connector sits flat instead. They are also hot-swappable. Enterprise-class EDSFF drives use a PCIe 5.0 interface, with capacities listed at up to 61.44TB at the top end, for roughly 1.9 petabytes of raw storage with all drive bays loaded in the 32x bay chassis.
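The math behind that figure is straightforward (using decimal units, as drive vendors do):

```python
# Back-of-the-envelope check on the maximum raw capacity quoted above.
bays = 32
tb_per_drive = 61.44

total_tb = bays * tb_per_drive   # 1966.08 TB
total_pb = total_tb / 1000       # ~1.97 PB
print(f"{total_tb:.2f} TB ≈ {total_pb:.2f} PB raw")
```

That works out to just under 2PB, which lines up with the roughly 1.9PB figure above.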

Chassis configurations with EDSFF E3.S drives include that front I/O option and a few more with a rear I/O configuration. With the more traditional rear I/O configuration, you can get either 8, 16, or 32x EDSFF bays. With that last one, the central bay is just a perforated panel for airflow.

Next, a number of 2.5-inch drive bay options. For 2.5-inch SATA and SAS drives there are 8, 16, or 24x bays. There is also an LFF chassis with 12x 3.5-inch drive bays. You will need a PERC, or PowerEdge RAID Controller, if you plan on installing SAS drives or just want more control over your storage. There are options for a front-mounted storage controller as well as ones that install in the PCIe slots. With the LFF chassis, you also have the option of installing 4x EDSFF drive bays in back.
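If you want to confirm which storage controller a given system actually shipped with, the same Redfish API from earlier can enumerate the controllers; again, the host and credentials below are placeholders.

```python
# Sketch: list storage controllers (e.g., a PERC) via iDRAC's Redfish API.
import requests

IDRAC_HOST = "192.168.0.120"   # placeholder iDRAC address
AUTH = ("root", "calvin")      # placeholder credentials
BASE = f"https://{IDRAC_HOST}/redfish/v1/Systems/System.Embedded.1"

storage = requests.get(f"{BASE}/Storage", auth=AUTH,
                       verify=False, timeout=10).json()
for member in storage.get("Members", []):
    # Each member is a link to one controller resource; fetch its details.
    ctrl = requests.get(f"https://{IDRAC_HOST}{member['@odata.id']}",
                        auth=AUTH, verify=False, timeout=10).json()
    print(ctrl.get("Id"), "->", ctrl.get("Name"))
```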

Quite a few options for the back of the chassis too. With the front I/O configuration, there are blanks installed over the PCIe slots, which are available in the front of the system, and covers over the OCP mezzanine card slots, as those too are located in front. However, you still have the PSUs to either side, which, again, come with more options, from 800W all the way up to 1800W. There is also a small I/O panel on the bottom next to the left PSU. It features a VGA port, 2x USB 3.0 ports, and the dedicated iDRAC RJ45 port.

With the rear I/O configuration, the upper portion of the chassis has 6x PCIe slots below the I/O panel on the left, which is the same as that of the front I/O chassis, and there are 2x OCP 3.0 slots on the right. The middle slot can be outfitted with a BOSS-N1 card to boot the system. The other will accept a number of different ports and connection speeds for network communications. If this were the 12x LFF bay configuration, 4x EDSFF drives could be installed in back, but they would take up the two PCIe slots on the left above the PSU.

You can see our chassis has a rear I/O configuration with the risers in back. It definitely looks a little different with the front I/O configuration, as the main risers are located in front, as mentioned earlier. We don’t have that one, so moving on… Those risers can be outfitted with a number of optional expansion devices, including NICs, storage controllers, and GPUs. The system can be outfitted with up to 6x PCIe slots with x16 connectors, 4 of which are PCIe 5.0. However, the number and type of connectors depend on the installed risers. That does not include the OCP 3.0 slots, each of which also has an x16 link.

Also supported are up to 2x NVIDIA BlueField-3 DPUs, or data processing units, which can act as a high-performance NIC in standard mode or as a SuperNIC using the integrated Arm processor. The NVIDIA BlueField-3 units can take pressure off the CPU by offloading, accelerating, and isolating networking, storage, and security for AI, cloud, and HPC workloads.

GPUs vetted for this system include the NVIDIA L4 with 24GB, the NVIDIA L40S with 48GB, and the NVIDIA H100 NVL with 94GB. Up to 3x double-wide 400W GPUs can be installed, or up to 4x single-width GPUs, which includes the NVIDIA L4. Of these, only the NVIDIA H100 NVL reaches the 400W maximum, and it draws additional power from an 8-pin power connector.
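Once cards are installed, a quick way to confirm what the OS sees, along with each card’s board power limit, is NVIDIA’s NVML bindings. A small sketch, assuming the nvidia-ml-py package and NVIDIA drivers are installed:

```python
# Sketch: list detected GPUs and their power limits via NVML.
from pynvml import (
    nvmlInit, nvmlShutdown, nvmlDeviceGetCount,
    nvmlDeviceGetHandleByIndex, nvmlDeviceGetName,
    nvmlDeviceGetPowerManagementLimit,
)

nvmlInit()
try:
    for i in range(nvmlDeviceGetCount()):
        handle = nvmlDeviceGetHandleByIndex(i)
        name = nvmlDeviceGetName(handle)
        if isinstance(name, bytes):      # older bindings return bytes
            name = name.decode()
        limit_w = nvmlDeviceGetPowerManagementLimit(handle) / 1000  # mW -> W
        print(f"GPU {i}: {name}, power limit {limit_w:.0f}W")
finally:
    nvmlShutdown()
```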

The H100 NVL does not support DirectX 11 or 12, features usually reserved for gaming cards; the other two do. Also, note that the system has minimum memory requirements tied to the number of GPUs installed, as well as power requirements.

A single Intel Xeon 6700- or 6500-series CPU with either Performance cores (P-cores) or Efficiency cores (E-cores) is supported on this system. With E-cores, a processor can have up to 144 cores and 144 threads, as E-cores do not support Hyper-Threading. A CPU with P-cores delivers up to 86 physical cores and 172 threads.
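The thread counts follow directly from SMT support, two threads per P-core versus one per E-core; the top-end core counts below match the figures above:

```python
# Thread count = cores x threads-per-core (2 with Hyper-Threading, else 1).
configs = {
    "E-core part (up to 144 cores)": (144, 1),  # no Hyper-Threading
    "P-core part (up to 86 cores)": (86, 2),    # Hyper-Threading enabled
}
for name, (cores, tpc) in configs.items():
    print(f"{name}: {cores * tpc} threads")
```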

With 16x memory module slots, the system supports up to 4TB of DDR5 registered memory. One thing we will mention: unlike the other 17th-generation systems, the PowerEdge R770 and R670, the R570 does not support CXL 2.0 devices for scalable memory pooling. If you would like to learn more about that, check out the Dell PowerEdge R670 video we did a little while back.
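That 4TB ceiling implies 256GB RDIMMs in every slot, an assumption on our part, but the arithmetic is easy to check:

```python
# Sanity check on maximum memory: 16 DIMM slots x 256GB per RDIMM.
slots = 16
gb_per_dimm = 256  # assumes 256GB RDIMMs, the largest implied by the 4TB figure
total_gb = slots * gb_per_dimm
print(f"{total_gb} GB = {total_gb // 1024} TB")
```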

Once again, the Dell PowerEdge R570 has entirely too many options to fully discuss, but we do think we hit the main features. For data centers, cloud VDI, analytics, and a number of other applications, Dell has your back for performance and energy efficiency. If you want more information on this, or any other system, contact us today!