Dell EMC PowerEdge R6525 Review
May 7, 2020
As you may have guessed from the 5 on the end of the Dell EMC PowerEdge R6525 server, this platform features AMD EPYC processors. Not only that, but two of the new 2nd generation AMD EPYCs in a 1U chassis!
In addition to the EPYC and Ryzen lines of processors, AMD is the first company to offer PCIe 4.0 compatibility on an x86 platform. PCIe 4.0 doubles the I/O bandwidth compared to PCIe 3.0. What does it all mean? It’s faster, stronger, better than it was before.
You also get 128 PCIe lanes to work with, whether you use a single EPYC processor or two. You’re thinking, why don’t I get all 256? After all, 128 plus 128 equals 256. It does sound a little strange until you consider the general architecture of these CPUs, which integrate the I/O onto the processor package itself, rather than a traditional CPU that sits back and controls a chipset scattered around the motherboard the way a conductor might manage his assorted musicians.
So, in a dual-socket configuration, 64 lanes from each CPU (128 in total) are used for CPU-to-CPU communications, and the other 128 PCIe lanes are left for expansion cards like network controllers, HD/RAID controllers, super-fast storage, and GPUs. The interesting thing is that even with just a single processor, you still get all 128 PCIe 4.0 lanes! Compare that to the 96 PCIe 3.0 lanes supported with dual processors on Intel-based platforms.
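The lane accounting above can be sketched in a few lines of Python. The figures come from the article's description of the EPYC architecture, not from probing this system:

```python
# Rough sketch of the EPYC PCIe lane accounting described above.
LANES_PER_CPU = 128   # PCIe 4.0 lanes on each EPYC package
INTER_SOCKET = 64     # lanes per CPU repurposed as the CPU-to-CPU link

def io_lanes(sockets: int) -> int:
    """PCIe lanes left for expansion cards in a 1- or 2-socket config."""
    if sockets == 1:
        return LANES_PER_CPU  # single socket: all 128 lanes go to I/O
    # dual socket: each CPU gives up 64 lanes to inter-socket communication
    return sockets * (LANES_PER_CPU - INTER_SOCKET)

print(io_lanes(1), io_lanes(2))  # both configurations leave 128 lanes for I/O
```

Either way you slice it, the platform exposes 128 lanes for expansion, which is why a single-socket R6525 gives up nothing on connectivity.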
The bezel on the front of the system features the familiar honeycomb pattern and an optional control panel. The control panel on the left features a status indicator with tell-tale lights for drive status, temperature, electrical, memory, and PCIe faults. You can also get this system with the optional QuickSync 2.0 button next to the information button for at-chassis management of the system using a smartphone or tablet.
The right server ear has a power button plus a USB 3.0 port, a mini USB-C port for direct access to the integrated iDRAC module, and a VGA port for at-chassis management of the system.
Underneath that bezel, you’ll find 4x 3.5-inch storage bays, at least on our system. Other storage configurations include 8x 2.5-inch, 10x 2.5-inch drives, and even a 12-bay configuration with 10x up front and two in back.
On the back of the system you’ll see a hot-swap 800W Platinum PSU on either side of the chassis, with options for either 1400W Platinum or 1100W Titanium PSUs. And right next to that first PSU is a dedicated bay for a special hot-swap M.2 carrier for use as a dual redundant boot drive.
From left to right, we have two RJ-45 network ports (part number M3Y03, Dell 1GbE LOM card), an optional OCP NIC port (WW2NX OCP Dell mezzanine card), a system identification button, a dedicated RJ-45 iDRAC port, plus a USB 3.0 port on the bottom, a USB 2.0 port on top, and a VGA port (11F1N). These ports are all supported on small mezzanine cards that plug directly into the motherboard.
On top of those are the expansion card riser slots, which will vary depending on the specific riser options you install. We’ll get to that in a moment. Once we remove the cover panel, you can see this is a very compact system.
What makes this system unique? Dell didn’t just reuse the motherboard they had already designed for the first-generation AMD EPYC processors. Oh no. That would have been too easy. To take full advantage of the capabilities offered by second-generation AMD EPYC processors and PCIe 4.0, and to handle the associated heat, they designed an entirely new board.
This new one is “T” shaped, with PSUs on either side of the chassis. The new design mitigates heat buildup by providing better air circulation. It’s got eight high-performance fans sucking fresh air past those hard drives up front, pushing it over the memory modules and CPU heatsinks, and then out the back. The PCIe slots have also been moved closer to the CPUs for better performance and lower latency.
This system came with a pair of EPYC 7502 processors, which may not feature the 64 cores and 128 threads supported by, say, the EPYC 7H12 or 7742 CPUs, but it does have 32 cores and 64 threads to work with. By my last count, that’s still 4 more than the Platinum Intel Xeon Scalable 8180 CPUs at 28 cores.
The R6525 will support 64-core processors in each socket, but only up to 225W TDP. This one has a Thermal Design Power (TDP) of 180W, which is just sipping power compared to the 7H12 EPYC processor with its TDP of 280W. That rivals the AMD Ryzen Threadripper CPU we had in our Fractal workstation from the last review, and it had a gigantic fan.
You get four pairs of memory module slots on either side of each CPU for a total of 32 active memory module slots! It’s capable of supporting up to 4TB of memory using 128GB load-reduced memory modules in each slot. Registered DIMMs will provide a maximum of 2TB of memory. Data Centric Persistent Memory Modules (DCPMM), otherwise known by their brand name Intel Optane memory modules, are not supported. Not really surprising.
Maximum memory does take a hit compared to the top levels available on Intel Xeon platforms, because there are no “L”-suffix SKUs supporting 4.5TB per CPU like on a select few Intel Xeon Scalable processors. Instead, these all support 2TB each, but at speeds of up to 3200MHz. That said, the LRDIMMs only run at 2666MHz with all slots loaded and dual processors. For maximum throughput at the full 3200MHz, you can only install half the memory, with one module in each memory channel.
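As a quick sanity check, the capacity and speed numbers quoted above work out like this. It's simple arithmetic, assuming 8 memory channels per CPU as on EPYC and 64GB registered DIMMs for the 2TB figure:

```python
SLOTS = 32                   # 16 DIMM slots per CPU, two CPUs

# Capacity with the two module types mentioned above
print(SLOTS * 128)           # GB with 128GB LRDIMMs
print(SLOTS * 64)            # GB with 64GB RDIMMs

# Full 3200MHz operation needs one module per channel
CHANNELS_PER_CPU = 8
print(CHANNELS_PER_CPU * 2)  # modules installed, i.e. half the slots populated
```

That gives 4096GB (4TB) with LRDIMMs, 2048GB (2TB) with RDIMMs, and a 16-module population for the full 3200MHz.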
The great thing about these AMD systems is that they work seamlessly with the rest of your Intel-based hardware. This R6525 uses the integrated Dell Remote Access Controller, or iDRAC9, with Lifecycle Controller for at-chassis and remote management of the system, just like the other platforms. The cyber-resilient architecture covers every moment of the product lifecycle and allows for automated management with scripting using the iDRAC RESTful API with Redfish compatibility. Dell EMC OpenManage provides another layer of efficiency in the data center, streamlining operations with automation for up to 72% less IT effort. There are also several security features, including cryptographic isolation between the hypervisor and VMs, plus a bunch of other cool-sounding features.
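For a taste of what that Redfish scripting looks like, here's a minimal sketch. The hostname is a placeholder, and the resource path follows the standard Redfish layout that iDRAC9 exposes:

```python
import json

def system_url(host: str) -> str:
    """Redfish URL for the server's ComputerSystem resource on an iDRAC9."""
    return f"https://{host}/redfish/v1/Systems/System.Embedded.1"

def reset_payload(reset_type: str = "GracefulShutdown") -> bytes:
    """JSON body for the Redfish ComputerSystem.Reset action."""
    return json.dumps({"ResetType": reset_type}).encode()

# Not executed here: an authenticated HTTPS GET of system_url(...) returns
# the system inventory as JSON, and a POST of reset_payload() to
# system_url(...) + "/Actions/ComputerSystem.Reset" shuts the host down.
print(system_url("idrac.example.com"))
```

Because every iDRAC speaks the same schema, the same script manages your Intel-based PowerEdge boxes too, which is the point of the seamless-management claim above.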
You can install SATA, SAS, or NVMe drives up front depending on your need for speed, capacity, or a little of both. An optional Boot Optimized Storage Solution (BOSS) will support 2x M.2 SSDs in a hardware RAID for redundancy. For hypervisor support, an optional internal Dual SD Card Module can be installed. Dedicated slots are provided for both of those modules. SATA drives are supported natively using the integrated S150 storage controller with the SAS/SATA backplane, like on this system. SAS at 12Gb/s and NVMe drives will require a PCIe controller and the NVMe backplane.
It looks like there are a few new PERCs to choose from, including a few for internal storage with familiar names but with the number “5” on the end: the H345, the H745, and the HBA345. The H840 is still an option for support of external drives. These are different controllers than even those used in the fairly new R6415, which also used mini PERCs. Ours is outfitted with the new H345 mini PERC controller, which features a very compact size and a dedicated connection port to help preserve your limited PCIe slots.
A variant of the 10-bay chassis will also accept two more 2.5-inch drives in a rear drive cage, but that will take up some of your PCIe slots. There are four different riser types to choose from depending on your workload; however, only riser type 1 will work with this system in a single-processor configuration. Along with the two integrated RJ-45 ports, there is an option for more ports using an optional OCP card. It supports a number of connections, including up to 100GbE, without taking up a PCIe slot.
With all the graphics-intensive applications these days, you might want to install some GPUs to support those VDI applications. This system will support a maximum of one full-height, single-width GPU like the NVIDIA T4 Tensor Core GPU, which is ideal for distributed computing environments. Or you can install up to three low-profile, single-width cards at 75W apiece.
There’s a lot going on with this 1U server. These systems are great for high-performance computing, dense virtualization, and VDI deployments, as well as demanding workloads and applications such as data warehouses, ecommerce, and databases. The Dell EMC PowerEdge R6525 has a great capacity for flexibility, and with AMD processors, you can reap the rewards of some serious core counts.
If you liked this video and would like to see more, then subscribe to our YouTube channel. Give it the thumbs up too while you’re at it.
If you have any questions on this or any other server, just post them in the comments section below. We have this server and many, many others that we would love to build for you.