Lenovo ThinkSystem SR650 V2 Server Review

January 17, 2023 | By Lorena Mejia

The Lenovo ThinkSystem SR650 V2 is a 2U high-performance server that can be adapted to just about any HPC workload (SHOP HERE). This platform features dual 3rd generation Intel Xeon Scalable processors and up to 8TB of memory (12TB if you add persistent memory), and, depending on how you configure it, there is room for up to 40x 2.5-inch drive bays, up to 8x single-width GPUs, or 3x double-width GPUs, with various combinations in between. It will even take the Nvidia A100 80GB GPU, ideal for compute acceleration, AI, and machine learning.

We almost want to swap those first two letters around and call it the RS650, as in Racing Sport, as if it were a high-performance vehicle. It's designed for machine learning, cloud workloads, virtualization, and cloud-based analytics. It can be outfitted with 3rd generation Intel Xeon Scalable CPUs with up to 40x cores each, for a total of 80 cores.

This is an expansion of the base SR650 system and features high-density storage options using Lenovo's AnyBay backplane design, which allows SAS, SATA, or NVMe U.2/U.3 drives in any bay. If computational acceleration is needed, this platform can support 8x single-width PCIe GPUs like the Nvidia A2, or up to 3x double-width 300W GPUs like the Nvidia A100 80GB and A6000. However, these various configurations are not without compromise, because you can't have it all.

Setting aside the internal drive cage and rear storage cage options for a moment, there are 3.5-inch and 2.5-inch front storage options supporting SAS, SATA, and NVMe storage devices. There's even a backplane-less version for compute-only configurations.

Lenovo ThinkSystem SR650 V2 chassis options

The front of the system can have up to 24x 2.5-inch drive bays, but there are also configurations with 8x or 16x 2.5-inch drive bays, with or without an optional LCD diagnostics panel. The 3.5-inch chassis comes with either 8x or 12x storage bays in front; taking the internal and rear storage cage options into account, it will support up to 20x 3.5-inch drive bays in total, if you decide to maximize storage. Depending on which front storage configuration you choose, there are either two small control panels, one in each server ear, or just the left-ear control panel plus a front I/O assembly on the media bay.

The control panel on the right offers an On/Off button, plus network activity, system ID, and system error indicators, each with LED status lights. Right below those are two USB ports: a USB 2.0 connector for use with the xClarity Controller mobile app, and a USB 3.2 Gen 1 port that can be used to connect a keyboard, mouse, or USB storage device. You can also set the 2.0 port to handle just USB devices, just the xClarity Controller management function, or both. You get much of the same on the optional front I/O assembly in the media bay.

Lenovo ThinkSystem SR650 V2 front panel ports

An optional Insight Display panel provides even more granular data on system status, but it's only available on configurations that support the optional media bay. It offers a status dashboard and an LCD screen with a few buttons for scrolling through the various menus. There is also an external diagnostics handset that attaches to the system with a cable: it plugs into one of the ports on the left server ear and connects to the xClarity Controller for at-chassis management of the system. Its magnetic base lets you attach it to the server rack and connect it to other Lenovo systems in the rack without fumbling around with the actual controller.

Lenovo's xClarity Controller is designed to standardize, simplify, and automate management tasks. An optional upgrade, xClarity Controller Advanced, adds remote-control keyboard, video, and mouse functions, while xClarity Controller Enterprise adds remote media files, boot capture, and power capping. xClarity Administrator provides all the tools administrators need to manage the system and ensure it stays up and running.

Going around to the back of the system, there are a few more options for adding storage: 2.5-inch drives, 3.5-inch drives, or two 7mm drives that can be used to boot the system.

Some of the rear PCIe slots can be conscripted to support an 8-bay 2.5-inch drive cage, or another with support for up to 4x 2.5-inch drives. A 2-bay 3.5-inch drive cage can also be installed above the PSUs, or you can go with a 4-bay 3.5-inch drive cage that straddles the entire back panel for even more flexibility. It was at this point that we thought perhaps Lenovo was just showing off, and we haven't even gotten to the internally mounted optional drive cages. Before we do, we'll take a quick look at the other items on the back of the system.

The dual redundant PSUs support several different power outputs depending on power needs and sit together at the far right of the chassis.

Next to those, there is a Non-Maskable Interrupt (NMI) button, a VGA connector, a few USB 3.2 Gen 1 (5Gb/s) ports, and a system error LED. The dedicated port for accessing the baseboard management controller (BMC) is next, followed by the system ID button and an OCP 3.0 Ethernet adapter.

The OCP 3.0 adapter is optional and provides 2x to 4x ports at various connection speeds depending on your needs. It's quite handy, too, if you plan on installing GPUs in all 8x of those rear-accessible PCIe slots, since there are no integrated LAN ports. With up to 8x PCIe slots available, you can add more storage, as we have already discussed, networking cards, or GPUs for compute acceleration or distributed environments.

Lenovo SR650 V2 OCP 3.0 adapter

The flexibility this system offers is truly impressive. Inside the case there is even more potential for configuration options, including, you guessed it, more storage. A drive cage supporting 8x 2.5-inch drives, or another with 4x 3.5-inch drives, can be installed, adding to an already impressive storage capacity. The 2.5-inch chassis supports the most storage, with up to 40x drive bays: 24x in front, 8x internally mounted, and 8x more in back. That 8-bay drive cage can also house the battery that powers the system for roughly a minute, until data in memory can be transferred to the main drives; the regular air baffle has a dedicated slot for it too. Of course, if you configure the system with all that storage, there won't be room for much of anything else: no GPUs, and no network interface controllers aside from the OCP 3.0 card.

Of course, there is a dedicated socket just behind the backplane for an HBA/RAID controller, so you won't have to use up one of those PCIe slots. This system will take up to 32x NVMe storage devices, but only by using switch adapters in the PCIe slots. 32x NVMe drives equates to 2:1 oversubscription, which could impact performance, but only if every drive is flooded with maximum traffic at the same time. If you install only 16x NVMe drives, then you have 1:1 connectivity, maximizing throughput and minimizing latency. These are things you should probably figure out before you configure the system.
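
To put numbers on that, here's a quick back-of-the-envelope sketch of the oversubscription math, assuming (for illustration) that each U.2/U.3 NVMe drive gets a PCIe 4.0 x4 link and that the switch adapters expose 64 host-side lanes in total:

```python
# Back-of-the-envelope NVMe oversubscription math. Illustrative assumptions:
# each U.2/U.3 NVMe drive uses a PCIe 4.0 x4 link, and the switch adapters
# expose a total of 64 host-side (CPU) lanes.
def oversubscription(drives: int, lanes_per_drive: int = 4,
                     host_lanes: int = 64) -> float:
    """Ratio of drive-side lanes to host-side lanes."""
    return (drives * lanes_per_drive) / host_lanes

print(oversubscription(32))  # 32 drives -> 128:64 lanes, i.e. 2.0 (2:1)
print(oversubscription(16))  # 16 drives -> 64:64 lanes, i.e. 1.0 (1:1)
```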

You may be thinking about cooling at this point, and about all that storage potentially blocking airflow, but that's not a problem here. The CPUs, which would reside under the internal drive cage, are connected to radiators mounted just behind the 5x fans, which link back to the heatsinks on the CPUs. It's kind of like what you might see on a high-performance vehicle with an additional oil cooler to absorb the extra heat load. Everything on this system is very well engineered to maintain a suitable environment for calculating reams of data while supporting high-performance, heat-generating hardware. A truly elegant design.

Those heatsinks provide cooling for one or two 3rd generation Intel Xeon Scalable processors with 4x to 40x cores and a Thermal Design Power (TDP) of up to 270 watts each. With PCIe 4.0, those processors deliver the bandwidth needed for the best I/O throughput and NVMe storage options, not to mention GPU support, which we will get to in a minute.

3rd generation Intel Xeon Scalable processors require a slightly larger socket than the first- and second-generation processors, which makes those earlier CPUs completely incompatible with this platform. Another upgrade from the previous generation is 8x memory channels, compared to only 6x on the earlier CPU versions. More memory channels mean more memory capacity and more bandwidth. To be fair, memory speed has no relation to the number of channels, but 3200MHz memory is now supported as well.
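
To see what those two extra channels buy you, here's a rough sketch of the theoretical peak memory bandwidth per socket, assuming DDR4-3200 on the 3rd Gen parts, DDR4-2933 on the previous generation, and 8 bytes per transfer per channel:

```python
# Theoretical peak memory bandwidth per socket: channels x MT/s x 8 bytes.
def peak_bandwidth_gbs(channels: int, megatransfers: int) -> float:
    return channels * megatransfers * 8 / 1000  # GB/s

print(peak_bandwidth_gbs(8, 3200))  # 3rd Gen: 204.8 GB/s per socket
print(peak_bandwidth_gbs(6, 2933))  # previous gen: ~140.8 GB/s per socket
```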

Each socket is in charge of 16x memory module slots, for 32x total on the system board. The SR650 V2 will support up to 12TB at maximum capacity, using a combination of 4TB of Registered DIMMs and 8TB of Persistent Memory modules used for storage. Another option provides up to 10TB, this time with Persistent Memory providing the system memory while the RDIMMs act as a layer of cache. Without Persistent Memory, you can install up to 2TB of standard Registered DIMMs at 64GB each, or up to 8TB of 3DS Registered DIMMs at 256GB each.

ThinkSystem SR650 V2 memory module slots
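
To make those capacity figures concrete, here's the arithmetic behind each option; the 12TB maximum assumes the modules split as 16x 256GB RDIMMs plus 16x 512GB PMem modules:

```python
# Arithmetic behind the SR650 V2 memory capacity options (32 slots total).
TB = 1024  # GB per TB

configs = {
    "32x 64GB RDIMM":                   32 * 64,
    "32x 256GB 3DS RDIMM":              32 * 256,
    "16x 256GB RDIMM + 16x 512GB PMem": 16 * 256 + 16 * 512,
}
for name, gigabytes in configs.items():
    print(f"{name}: {gigabytes / TB:.0f}TB")  # 2TB, 8TB, and 12TB
```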

A warning: this is the boring part. One of the benefits of PMem modules is that they have two accessibility modes: Memory Mode, where they perform the same function as DRAM modules and persistence really isn't part of the equation, and App Direct mode. (Actually, there are three, because a Mixed Mode combines the two, but let's stick to the basics.) App Direct mode is more like traditional storage, but in a memory module form factor. Depending on how the persistent memory is configured in the BIOS, it can be used for either storage or system memory. PMem is slower than DRAM in that its access time is about 350 nanoseconds, compared to only about 14 nanoseconds for standard DRAM. However, what it lacks in speed it makes up for in persistence. Without power, standard DRAM loses whatever is running through its circuits, while PMem modules can be treated almost like permanent storage drives. The design keeps more data closer to the CPU when used in App Direct mode, with PMem providing a slower memory tier while DRAM provides the faster tier.
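
For a feel of what App Direct mode looks like from software, here's a minimal sketch, assuming a PMem namespace has already been provisioned in App Direct mode and mounted as a DAX filesystem at the hypothetical path /mnt/pmem0 (the exact provisioning steps vary by OS and tooling):

```python
# Minimal App Direct-style access sketch. Assumes a PMem namespace mounted
# with DAX at the hypothetical path /mnt/pmem0; loads and stores then bypass
# the page cache and hit the persistent modules directly.
import mmap
import os

PMEM_FILE = "/mnt/pmem0/example.dat"  # hypothetical DAX-backed file
SIZE = 4096

fd = os.open(PMEM_FILE, os.O_CREAT | os.O_RDWR)
os.ftruncate(fd, SIZE)                # size the backing file

buf = mmap.mmap(fd, SIZE)             # map PMem into the address space
buf[0:13] = b"persists here"          # a plain memory store, not a block I/O
buf.flush()                           # flush so the data is durable
buf.close()
os.close(fd)
```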

PCIe 4.0 provides twice the bandwidth of PCIe 3.0, which improves both I/O and NVMe storage performance. In a nutshell: PCIe 4.0 has a 16 GT/s data transfer rate, while PCIe 3.0 has 8 GT/s. The 3rd Gen processors also provide 64 PCIe 4.0 lanes, compared to only 48 PCIe 3.0 lanes on the previous generation, so significantly more bandwidth. There are 8x x16 PCIe slots on three separate risers, in addition to an OCP 3.0 mezzanine card slot for network communications. There are even two x4 PCIe connectors on the system board to support one or two M.2 drives, which can run in mirror mode for redundancy and host the operating system.
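
For a feel of the raw numbers, each PCIe lane carries its gigatransfers through 128b/130b encoding, so the usable bandwidth per direction works out as in this quick sketch:

```python
# Usable PCIe bandwidth per direction: GT/s x encoding efficiency / 8 bits.
def lane_gbs(gigatransfers: float, encoding: float = 128 / 130) -> float:
    return gigatransfers * encoding / 8  # GB/s per lane, per direction

for gen, gts in (("PCIe 3.0", 8.0), ("PCIe 4.0", 16.0)):
    per_lane = lane_gbs(gts)
    print(f"{gen}: {per_lane:.2f} GB/s per lane, {per_lane * 16:.1f} GB/s at x16")
# PCIe 3.0: 0.98 GB/s per lane, 15.8 GB/s at x16
# PCIe 4.0: 1.97 GB/s per lane, 31.5 GB/s at x16
```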

8x PCIe slots allow you to install up to 8x single-width GPUs like the Nvidia T4, which, let's face it, is kind of old at this point, but still a great option for distributed environments and reasonable compute acceleration. In this sub-75W category there is also the option of Nvidia's A2 and A4 GPUs, both based on the Ampere architecture. You really will need that OCP 3.0 card slot if you max out with 8x of these, though. Alternatively, there are mid-range options like the A10, and then some super-performers like the A100 and A6000, also based on the Ampere architecture, but you can only install 3x of those double-width cards, and you may still be out of slots given the card width. Many of these GPU configurations will affect the amount and type of storage too, and perhaps the memory and the type of heatsink you need to install. Just a few things to keep in mind.

Lenovo ThinkSystem SR650 V2 top view

For a 2U system, this one truly does it all and can support anything from a little Podunk business to a large enterprise, depending on how it's configured. It's quite impressive and definitely built to scale as your business grows or your needs change. In many ways we feel like we've only scratched the surface of the capabilities of the Lenovo ThinkSystem SR650 V2 server, and it does support a lot, but there are limits. It is only 2U: if you max out on storage there's no room for GPUs, and if you load up on GPUs there's not as much storage potential. We're sure there is a happy medium in there too.

As you contemplate all these features, check out our website. We have this server and many others. If you have any questions on this system, or if you're having technical difficulties with a used-up, ready-for-replacement unit, give us a call!