Dell PowerEdge R760 Server Review
May 21, 2024 | By Lorena Mejia
The 2U Dell PowerEdge R760 server is the big brother to the 1U PowerEdge R660 and supports more storage and expansion options! Featuring the latest 4th generation Intel Xeon Scalable CPUs with up to 56 cores at the high end, you also get support for DDR5 memory and PCIe 5.0. It will take 2x double-wide GPUs or up to 6x single-wide units and offers optional direct liquid cooling. There is also an impressive list of options for this platform.
The 2U Dell PowerEdge R760 server is definitely one of Dell’s workhorse platforms. Featuring dual 4th gen processors, it can have up to 112 physical cores and 224 threads thanks to Intel’s Hyper-Threading technology. With support for DDR5 memory modules and PCIe 5.0 expansion slots, the R760 delivers some serious performance gains over the previous generation. Depending on the CPUs, storage, and addition of GPUs, it can handle high-performance computing applications like AI and machine learning, general data center applications, virtualization and cloud, VDI (virtual desktop infrastructure), database analytics, and duty as a software-defined storage node, just to name a few.
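If you want to see how those numbers add up, here is a minimal sketch. It assumes a dual-socket build with the top-bin 56-core parts and Hyper-Threading enabled, and shows how you might sanity-check the count from an installed OS:

```python
# Rough core/thread math for a dual-socket R760, assuming 56-core CPUs
# with Hyper-Threading enabled (2 threads per core).
import os

SOCKETS = 2
CORES_PER_SOCKET = 56      # top-bin 4th-gen Xeon Scalable part (assumed)
THREADS_PER_CORE = 2       # Hyper-Threading

physical_cores = SOCKETS * CORES_PER_SOCKET          # 112
logical_threads = physical_cores * THREADS_PER_CORE  # 224
print(f"Expected: {physical_cores} cores / {logical_threads} threads")

# On the running system, os.cpu_count() reports logical processors,
# so on this configuration it should line up with the 224-thread figure.
print(f"OS reports: {os.cpu_count()} logical processors")
```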
With twice the volume of the R660, there are a bunch of storage options for this system, not to mention additional storage options at the back of the chassis. On the front, you can choose from 12x 3.5-inch bays or 24x 2.5-inch bays for SAS, SATA, and 3rd or 4th generation NVMe SSDs or HDDs. Of course, the 24-bay 2.5-inch chassis is also available populated with 8x or 16x bays, and that is where scalability comes in, letting you add more storage later up to the full 24 bays.
Not mentioned in the technical guide, but definitely found in the spec sheet, is support for up to 16x NVMe E3.S drives. That is the same number of EDSFF drives available on the R660. We were expecting more on this system, but again, there are only so many PCIe lanes available, especially if you want to add some GPUs or high-performance NICs to the mix. That configuration is slated for release in the 3rd (which is now) or 4th quarter of this year. The E3.S drives will provide the highest storage density of the available options.
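To see why the drive count tops out where it does, here is a rough, back-of-the-envelope lane budget. The 80-lanes-per-socket figure for 4th-gen Xeon Scalable and the x4 link per NVMe drive are assumptions for illustration; the actual allocation depends on Dell's riser and backplane design:

```python
# Back-of-the-envelope PCIe lane budget for a dual-socket R760.
# Assumes 80 PCIe 5.0 lanes per 4th-gen Xeon socket and x4 per NVMe drive;
# real systems also reserve lanes for risers, OCP, the BOSS card, and chipset links.
LANES_PER_SOCKET = 80
SOCKETS = 2
LANES_PER_NVME = 4

total_lanes = LANES_PER_SOCKET * SOCKETS            # 160
e3s_drives = 16
lanes_for_drives = e3s_drives * LANES_PER_NVME      # 64

remaining = total_lanes - lanes_for_drives          # 96 left for GPUs, NICs, etc.
print(f"{lanes_for_drives} of {total_lanes} lanes on drives, {remaining} remaining")

# Two double-wide GPUs at x16 plus a pair of x16 NICs would eat another
# 64 lanes, which is why drive, GPU, and NIC counts trade off against each other.
```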
A control panel on the left houses the system health and system ID button and may also have an iDRAC QuickSync 2 wireless indicator for use with Dell’s OpenManage mobile app—or not, depending on configuration. The OpenManage mobile app allows you to use an iOS or Android based tablet or smartphone to access iDRAC for status, inventory, troubleshooting, remediation, and a number of other administrator tasks.
Beside that are status LED indicators for drives, temperature, electrical, memory, and PCIe slots. On the other side you get the power-on button with LED indicator, a USB 2.0 port, a micro-AB USB iDRAC Direct port, which you might need if you don’t go with the QuickSync 2 option on the other side, and a VGA port to connect a monitor should you desire a larger screen than your iPhone or tablet.
Looking at the back, you can see the PSUs stacked to either side of the chassis. There is a range of PSU options depending on how the system is configured: hot-swap units start at 700W on the low end and run all the way up to 2800W at the high end, with various levels in between. We have the 1400W PSUs installed on this system. The back of the server can be outfitted with additional storage options too; choose from 2x or 4x 2.5-inch rear drive cages.
Ours has a minimal configuration with blanks in place of the PCIe slots just above each of the PSUs, corresponding to Riser 1 and Riser 4. In the middle section, Riser 2 is on the bottom and Riser 3 is on top. In total, the system can support up to 8x PCIe slots, but we’ll get to those.
Along the bottom between the PSUs there is an I/O board that can be removed and replaced if need be. It has a system identification button with an integrated LED, one RJ45 port to access the integrated Dell Remote Access Controller (iDRAC) for remote management of the system, 2x USB ports (one USB 2.0 and one USB 3.0), plus a VGA port. If this system were equipped with liquid cooling, the tubes would exit right where that VGA port is located. An OCP 3.0 card slot in the middle provides more options for Network Interface Controllers (NICs) that won’t use up any of those PCIe slots; ours has dual SFP28 (Small Form-Factor Pluggable) transceiver slots offering 25GbE. Next to the PSU on the left is another slot for an optional LAN on Motherboard (LOM) card with two NIC ports that connects directly to the system board. One more thing: there is an optional boot device located in the Riser 1 carriage that supports either one or two M.2 drives.
iDRAC9 Express comes standard on this system. However, it can be upgraded to Datacenter, Enterprise, or Enterprise Advanced licensing, which offer additional features to manage the system; of course, there is an additional licensing fee associated with the enhanced management features. Dell’s OpenManage is also an option for managing multiple servers and comes in several licensing levels too, not to mention the Dell OpenManage mobile app, which connects to the optional QuickSync 2 module on the front of the system using a smartphone or tablet.
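Beyond the GUI and mobile app, iDRAC9 also exposes a Redfish REST API, which is handy for scripted monitoring. The snippet below is a minimal sketch; the iDRAC address and credentials are placeholders, and disabling TLS verification is strictly a lab shortcut:

```python
# Minimal sketch: poll basic system health from iDRAC9 over Redfish.
# The address and credentials below are placeholders for illustration only.
import requests

IDRAC = "https://192.0.2.10"          # hypothetical iDRAC address
AUTH = ("root", "calvin")             # replace with your own credentials

resp = requests.get(
    f"{IDRAC}/redfish/v1/Systems/System.Embedded.1",
    auth=AUTH,
    verify=False,                     # lab-only: skips TLS certificate checks
    timeout=10,
)
resp.raise_for_status()
system = resp.json()

print("Model: ", system.get("Model"))
print("Power: ", system.get("PowerState"))
print("Health:", system.get("Status", {}).get("Health"))
print("Memory:", system.get("MemorySummary", {}).get("TotalSystemMemoryGiB"), "GiB")
```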
With the cover removed you can see that boot device in Riser 1. The BOSS-N1 monolithic controller supports dual 480GB NVMe M.2 drives, handles RAID 0 and RAID 1, and is used specifically to boot the system. The unit has a PCIe Gen 3 x4 host interface. The M.2 drive caddies are accessible from the back of the chassis for easy replacement in the event one of them goes south. We only have one drive in this unit, with a blank in the other slot. With a single drive you can use RAID 0 (nope, not going into how that works with one drive), but with two drives you can use RAID 1 for mirroring. There are no additional PCIe slots in our specific riser, or lack of riser, but there are optional risers available with two slots corresponding to slots 1 and 2 on the back of the chassis. Riser 2 goes just underneath Riser 3 in the middle, with Riser 4 above the other PSU. We also have a standard air shroud directing fresh air over the CPUs and memory module slots from just behind the fans, then out the back through the PCIe slots.
There is another air shroud for GPUs, plus 3x different fan types depending on how the system is configured. Gold (VHP) fans are recommended for systems outfitted with optional GPUs. A full kit for installing GPUs includes Gold fans, a different air shroud, heat sinks, and specific risers. Several NVIDIA GPUs are supported on this system, including the A2, A30, A40, A16, A100, A800, and H100. Only the entry-level A2 doesn’t require any additional cables; it draws just 40-60W, which is within the 75W a standard PCIe slot can supply, so it can be powered directly from the slot. The others require an 8-pin cable, or a 12-pin cable if installing those H100 GPUs. The H100 is definitely not an entry-level card and is designed for AI training applications, like AI chatbots and AI recommendation engines that anticipate your needs—creepy but useful in a business!
Removing the air shroud exposes the CPUs with heatsinks and the 32x memory module slots. If this system had Direct Liquid Cooling (DLC), you would see the tubes coming in under Riser 4 and connecting to the two CPUs. Each CPU has 16 memory module slots. With the 8-channel-per-socket memory architecture, two DDR5 DIMMs can be loaded per channel. Supported DIMMs include Registered ECC DDR5 modules with speeds of up to 4800MT/s. CPUs with up to 56 cores and a thermal design power (TDP) rating of up to 350W are supported. We have dual 4th generation Intel Xeon Scalable Silver 4410Y 12-core processors installed on this system, with a TDP of 150W and 30MB of cache.
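For a quick sense of what that memory layout adds up to, here is a small worked example. The 64GB-per-DIMM figure is just an assumption for illustration, and the bandwidth number is the theoretical peak at 4800MT/s, not a benchmark:

```python
# Worked memory math for a dual-socket R760, assuming DDR5-4800 RDIMMs.
SOCKETS = 2
CHANNELS_PER_SOCKET = 8        # 4th-gen Xeon Scalable memory architecture
DIMMS_PER_CHANNEL = 2
DIMM_SLOTS = SOCKETS * CHANNELS_PER_SOCKET * DIMMS_PER_CHANNEL
print(f"DIMM slots: {DIMM_SLOTS}")                                    # 32

DIMM_CAPACITY_GB = 64          # hypothetical 64GB RDIMMs for illustration
print(f"Capacity at 64GB/DIMM: {DIMM_SLOTS * DIMM_CAPACITY_GB} GB")   # 2048 GB

# Theoretical peak bandwidth per socket: channels x transfer rate x 8 bytes.
# Populating both DIMMs per channel typically lowers the rated speed a notch.
MT_PER_S = 4800
peak_gb_s = CHANNELS_PER_SOCKET * MT_PER_S * 8 / 1000
print(f"Peak per-socket bandwidth: {peak_gb_s:.1f} GB/s")             # 307.2 GB/s
```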
With a “Y” suffix on the end of that product number, this CPU features Intel’s Speed Select Technology (SST) and is optimized for general workloads. Surprise! Other suffixes include H for database and analytics, M for AI and media-processing workloads, N for networking, 5G, and edge computing, and P for CPUs optimized for Infrastructure as a Service (IaaS) cloud high-frequency virtual machine environments. There are a few more, including S, which is for storage and comes with the full set of accelerators enabled (DSA, QAT, DLB). Intel’s 4th gen CPUs have integrated accelerators, something AMD has not yet added to its EPYC CPUs.
Additional options include an internal USB card or micro-SD card module that can be installed in a dedicated connector on the system board, though those aren’t nearly as effective as the BOSS. If you will be installing SAS drives, or simply want the ability to create a RAID beyond what the integrated RAID controller provides, then you will need a PowerEdge RAID Controller, or PERC. There is a PERC for the PCIe slots and also a front PERC that mounts at the front of the chassis just behind the backplane. Dell 12th generation PERC cards provide RAID options for both SAS and NVMe drive types at 24Gb/s, twice as fast as the 11th generation PERC controllers.
That OCP card is an optional feature and supports a maximum port speed of up to 100GbE across a maximum of 4 ports. BTW, that max port speed is combined; in other words, a single 100GbE port or 4x ports, each with a 25GbE connection. There are a number of port options to choose from, and these are especially useful if you have other plans for the PCIe slots, like additional drives, GPUs, maybe a PERC, or some other high-speed network interface controllers.
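If it helps to see that combined-speed budget spelled out, here is a trivial sketch; the two layouts shown are just the examples mentioned above, not a full catalog of Dell’s OCP 3.0 offerings:

```python
# Illustrating the combined OCP 3.0 port-speed budget (100GbE total, max 4 ports).
MAX_COMBINED_GBE = 100
MAX_PORTS = 4

example_layouts = {
    "1x 100GbE": [100],
    "4x 25GbE": [25, 25, 25, 25],
}

for name, ports in example_layouts.items():
    total = sum(ports)
    fits = total <= MAX_COMBINED_GBE and len(ports) <= MAX_PORTS
    print(f"{name}: {total}GbE across {len(ports)} ports -> {'fits' if fits else 'exceeds budget'}")
```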
Clearly there are a lot of options for the Dell PowerEdge R760 server. If you have any questions on this or any other server, check out IT Creations!