Dell PowerEdge C6620 Server Node and C6600 Chassis Review

August 15, 2024 | By Lorena Mejia

We have the successor to the Dell PowerEdge C6520 server node: the Dell PowerEdge C6620 server node (SHOP HERE), supported in the Dell PowerEdge C6600 chassis. The C6620 is a half-width, nearly 1U server sled with support for dual 4th or 5th Generation Intel Xeon Scalable processors and up to 4TB of memory, while the C6600 chassis supplies a number of storage options.

Offering high compute performance with dual sockets for scale-out workloads, the Dell PowerEdge C6620 is a shoo-in for deep learning and data analytics. The PowerEdge C6600 chassis provides the storage, cooling, and power. There is another server node supported by the C6600 chassis, the PowerEdge C6615, which offers a single AMD EPYC CPU with up to 64 cores and a maximum of 576GB of memory.

With either option you can install up to 4x hot-swap server nodes per 2U chassis. Outfitted with dual 5th Generation Intel Xeon Scalable CPUs with up to 64 cores each, each Dell C6620 server node can support up to 128 physical processing cores and 256 virtual threads using Intel's Hyper-Threading technology. System-wide, that works out to up to 512 physical cores and up to 16TB of DDR5 memory in a 2U enclosure, with options for SAS, SATA, and NVMe storage, as the quick calculation below shows. This seems like a good time to mention that all server nodes in a chassis have to be the same model number; no mixing and matching is supported.
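If you want to sanity-check those totals for your own build, here is a minimal Python sketch that simply multiplies out the per-socket figures. The counts below assume a fully populated chassis with top-bin 5th Gen CPUs; swap in the values for your actual configuration.

```python
# Back-of-the-envelope totals for a fully populated C6600 chassis
# (assumes 4x C6620 nodes, each with 2x 64-core 5th Gen Xeon CPUs
# and 4TB of DDR5 -- adjust to match your configuration).

NODES_PER_CHASSIS = 4
SOCKETS_PER_NODE = 2
CORES_PER_SOCKET = 64          # top-bin 5th Gen Intel Xeon Scalable
THREADS_PER_CORE = 2           # Hyper-Threading enabled
MEMORY_PER_NODE_TB = 4

cores_per_node = SOCKETS_PER_NODE * CORES_PER_SOCKET        # 128
threads_per_node = cores_per_node * THREADS_PER_CORE        # 256
chassis_cores = cores_per_node * NODES_PER_CHASSIS          # 512
chassis_threads = threads_per_node * NODES_PER_CHASSIS      # 1024
chassis_memory_tb = MEMORY_PER_NODE_TB * NODES_PER_CHASSIS  # 16

print(f"Per node:    {cores_per_node} cores / {threads_per_node} threads")
print(f"Per chassis: {chassis_cores} cores / {chassis_threads} threads, "
      f"{chassis_memory_tb}TB DDR5")
```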

The front of the chassis is where all the storage is located, aside from the optional Boot Optimized Storage Subsystem (BOSS) used to boot the server nodes, but we'll get to that. The storage bays on the front of the chassis are divided equally between the server nodes. Storage options for the Dell C6620 include up to 16x SAS/SATA drives using the standard backplane, or up to 16x SAS, SATA, and NVMe drives using the universal backplane. The universal backplane works out to 2x SAS/SATA bays plus 2x universal bays that can host SAS, SATA, or NVMe drives, per node.

Another configuration includes up to 8x E3.S NVMe SSDs, with 2x assigned to each server node. Not surprisingly, we have the version with no backplane and, as a result, no storage up front; these server nodes are used primarily for compute, analysis, or perhaps as the host for a storage array. Additional 80mm fans occupy some of what would be the drive cage areas.

Dell PowerEdge C6620 server node E3.S NVMe SSDs

Each control panel has an ON button and a system status/ID button for 2x server nodes. The left server ear supports sleds 1 and 2, while the right server ear is for sleds 3 and 4, with an information pull-out tab right beside it. On the back of the C6600 chassis you can see the 4x server nodes, with sleds 1 and 2 on the right and 3 and 4 on the left. One of the first things you will notice are the tubes coming out of one of the PCIe slots. That's because this node supports an optional liquid cooling solution with an inlet and outlet pipe. To the left is another PCIe slot.

Dell PowerEdge C6620 server node rear liquid cooling

Under those are a USB 3.1 port and an RJ45 port that can be configured for combo mode with iDRAC or as a network interface controller. Next come a mini-DisplayPort, the power button for the sled, a micro-USB AB port also used for iDRAC access, the unit ID button, and the release lever for the sled. That other slot supports an OCP 3.0 mezzanine card.

Dell PowerEdge C6620 server node rear ports

That optional OCP mezzanine card supports a number of network controller options with various speeds and port counts. Power supplies supported on this unit include 1800W, 2400W, 2800W (like the ones we have here), and 3200W options.

Dell PowerEdge C6620 server node compatible PSUs

Managing each server with the integrated Dell Remote Access Controller (iDRAC) is done through that management port on the back of each server node. Yes, there is only one RJ45 port, but it can be set up with Shared LAN on Motherboard mode enabled. You can also access iDRAC through the micro-USB AB port. We did a short video on iDRAC 9 Enterprise, which you can see here. While iDRAC is good for one or two servers, Dell's OpenManage utility is for managing multiple servers, chassis, storage, and network switches, and provides a more comprehensive overview of the entire network.
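As a rough illustration of what that management path gives you, here is a minimal Python sketch that pulls basic inventory from a node's iDRAC over the Redfish REST API. The IP address and credentials are placeholders, and the resource path follows iDRAC9's usual naming; confirm it by browsing /redfish/v1/ on your own iDRAC before relying on it.

```python
# Minimal sketch: query basic system inventory from a C6620 node's iDRAC
# via Redfish. Address, credentials, and the exact resource path are
# assumptions; verify them against your own iDRAC first.
import requests

IDRAC = "https://192.168.0.120"   # placeholder iDRAC address
AUTH = ("root", "calvin")         # placeholder credentials

resp = requests.get(
    f"{IDRAC}/redfish/v1/Systems/System.Embedded.1",
    auth=AUTH,
    verify=False,                 # most iDRACs ship with a self-signed cert
    timeout=10,
)
resp.raise_for_status()
info = resp.json()

print("Model:  ", info.get("Model"))
print("CPUs:   ", info.get("ProcessorSummary", {}).get("Count"))
print("Memory: ", info.get("MemorySummary", {}).get("TotalSystemMemoryGiB"), "GiB")
print("Health: ", info.get("Status", {}).get("Health"))
```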

Taking the cover off the chassis, you can see the top two server nodes, the fans, and the area where the drive backplane and drive cage would be located, that is, if this configuration had drives and a backplane. To either side of the 60mm cooling fan cartridges you can see where each server node slides in and connects to the midplane, which is that small circuit board on the side of each node.

A small 40mm fan provides air circulation over the midplane board and out the back through the PSUs. Notice the heavy power cables (RED) connecting the chassis management board to the midplane connectors.

Opposite the midplane boards is an ExaMAX connector, which provides a high-speed backplane and I/O interconnect scalable from 25Gb/s up to 56Gb/s. The chassis management board sits right in the middle between the server sleds, with the power distribution board (PDB) closer to the back of the chassis to connect with those stacked PSUs.

The server sled itself can accommodate two risers in back plus an OCP card slot. Supported risers include options for x16 and x8 PCIe 5.0 slots. With direct liquid cooling, the PCIe slot above the OCP card is taken up by a liquid cooling rubber tube cover, which leaves only a single riser slot and the OCP mezzanine card slot for expansion options.

Dell PowerEdge C6620 server node PCIe slots

Using a SNAP I/O module in one of the PCIe slots can increase I/O performance in a dual-socket platform like the C6620. SNAP I/O offers balanced performance by enabling both processors to share a single network adapter without involving the Ultra Path Interconnect (UPI) for communication between the processors, thereby maximizing bandwidth and performance. It's like having two NICs installed, each feeding one of the dual CPUs, except with SNAP I/O you only need the one. With limited PCIe slots available, that's quite useful, not to mention that you may need one of those slots for a drive controller if you go with SAS drives.

Dell PowerEdge C6620 server node snap I/O module

Depending on which risers you have installed, and assuming no Direct Liquid Cooling (DLC) to muck it all up, you can install 2x single-wide, low-profile GPUs with a 75W power draw. You know, the usual candidates: the tried-and-true NVIDIA T4 with Turing architecture or the NVIDIA A2 GPU featuring Ampere architecture. Both are fine options, with the T4 drawing 75W of power while the A2 consumes just 60W and only requires a PCIe 4.0 x8 slot. The T4 calls for a PCIe 3.0 x16 slot, but you can probably get away with an open-ended PCIe 4.0 x8 slot too. There are several limitations on using GPUs as a result of heat buildup, so consult the installation and service guide for specific applications.
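To make that slot-power comparison concrete, here is a tiny sketch that checks each candidate GPU against the 75W low-profile slot budget. The figures are simply the ones quoted above; double-check them against NVIDIA's datasheets and the C6620 installation and service guide before planning a build.

```python
# Quick sanity check of the two low-profile GPU candidates against the
# 75W-per-slot budget mentioned above (specs as quoted in this article).
gpus = {
    "NVIDIA T4": {"tdp_w": 75, "slot": "PCIe 3.0 x16", "arch": "Turing"},
    "NVIDIA A2": {"tdp_w": 60, "slot": "PCIe 4.0 x8",  "arch": "Ampere"},
}

SLOT_POWER_BUDGET_W = 75   # low-profile, single-wide slot, no aux power

for name, spec in gpus.items():
    fits = spec["tdp_w"] <= SLOT_POWER_BUDGET_W
    print(f"{name:10s} {spec['arch']:7s} {spec['tdp_w']}W "
          f"({spec['slot']}) -> {'OK' if fits else 'needs aux power'}")
```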

Dell PowerEdge C6620 server node compatible GPUs

The LOM, or LAN on Motherboard, can be switched out as it too is a separate circuit board. Towards the front of the server node, where it connects to the chassis, you can see the power connector on one side, which mates up with the midplane connector and those red power cables. On the other side is the ExaMAX backplane connector that enables the server node to access those drives up front. Along the side of the chassis is a dedicated x8 PCIe 4.0 slot for a BOSS-N1 module that can support 1x or 2x M.2 SSDs in a hardware RAID specifically for the OS or a hypervisor. Installing that optional bit of hardware lets you use those up-front drives for target applications instead of OS support.

Outfitted with 5th Generation Intel Xeon Scalable CPUs, each socket can provide up to 64 cores of processing power and 128 virtual threads. With 4th Gen Intel Xeon Scalable processors installed, you get up to 56 physical cores and 112 virtual threads per socket. The 5th Gen CPUs support 8 memory channels, just like the 4th Gen, but top memory speed has increased from 4800MT/s with Gen 4 to 5600MT/s with Gen 5. With 8x memory channels per socket and 8x memory module slots (one DIMM per channel), memory can run at its full rated speed. Only Registered ECC DDR5 memory modules are supported.
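For a sense of what that speed bump means, the short sketch below works out theoretical peak memory bandwidth per socket, assuming one DIMM per channel at the top rated speed and a 64-bit data path per channel. These are theoretical maximums; real-world throughput will be lower.

```python
# Rough peak memory bandwidth per socket at the top rated DDR5 speed.
CHANNELS_PER_SOCKET = 8
BYTES_PER_TRANSFER = 8   # 64-bit DDR5 data bus per channel

for gen, speed_mts in (("4th Gen Xeon", 4800), ("5th Gen Xeon", 5600)):
    gbps = CHANNELS_PER_SOCKET * speed_mts * BYTES_PER_TRANSFER / 1000
    print(f"{gen}: {speed_mts}MT/s x {CHANNELS_PER_SOCKET} channels "
          f"~ {gbps:.1f} GB/s per socket")
```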

This platform can be used for a variety of general-purpose applications: eCommerce, database, data warehousing, and of course high-performance computing, to name a few. With dual-socket server nodes capable of supporting up to 4TB of memory, plus a choice of network connectivity, this platform can easily scale out to meet your needs going forward. If you're interested in this or any other system, contact IT Creations! Visit our website here.