Intel Server System M50CYP2UR208 Review

May 23, 2023 By Lorena Mejia

We have one of Intel’s workhorse servers, the Intel Server System M50CYP2UR208 (SHOP HERE). It is designed to support a range of business-critical workloads, including storage, databases, web serving, ecommerce, data analytics, and some high-performance computing workloads. This 2U server is part of Intel’s single-node server category and boasts 3rd Gen Intel Xeon Scalable processors. It can field up to 24x hybrid drive bays supporting SAS, SATA, or NVMe drive formats, and can support up to 12TB of memory using Intel’s Optane Persistent Memory 200 series modules.

The one thing that was definitely not user-friendly was finding information on this server on Intel’s site. Every manufacturer does things differently, but this information almost seemed intentionally buried and disjointed. The system is part of the M50CYP2UR family of servers, formerly code-named Coyote Pass (not, sadly, Coyote Ugly). Our chassis is the M50CYP2UR208, but you would never know that, because it doesn’t say so anywhere on the system! Yes, we did order the system under that name, but come on! How about branding and model numbers? Standard marketing practice…

Intel Server System M50CYP2UR208 front panel

Enough of that… This model supports up to 24x 2.5-inch drives up front, and there is another system in the family, the M50CYP2UR312, that supports 12x 3.5-inch drives. Two more models at 1U, the M50CYP1UR212 and M50CYP1UR204, are likewise defined by the storage they support. At least there is some kind of rationale behind the naming convention.

This platform can be configured with 8x, 16x, or 24x SAS, SATA, or NVMe drives, and of course any number in between, depending on how it’s ultimately configured.

The left server ear has 2x USB ports: one USB 2.0 and one USB 3.0. A control panel on the right ear has a power ON button with integrated LED, and next to that a System ID button, also with an integrated LED. There are also a System Cold Reset button and a non-maskable interrupt (NMI) button, both of which require a pin or paperclip to press, and below those, system status and drive activity LEDs. All drive bays have their own LEDs for activity and status.

Intel Server M50CYP2UR208 front buttons

On the back of the system there are 8x PCIe slots corresponding to the 3x risers inside. A PSU on either side allows fresh air to run right down the center of the chassis; separating the power supplies instead of stacking them on one side helps prevent heat buildup. PSU options include 1300W, 1600W, and 2100W, with the highest-wattage supply recommended when the system is configured with GPGPUs. Along the bottom are a video port, a dedicated remote management port, serial port A, an OCP 3.0 mezzanine card slot, and 3x USB 3.0 ports. The dedicated management port accesses an ASPEED AST2500 baseboard management controller (BMC) for at-chassis and remote management of the system.

The BMC offers integrated video, KVM support, media redirection, fan speed control, and voltage monitoring. Also available to manage the system is Intel Server Manager software, which provides continuous health monitoring and asset inventory. The BMC is IPMI 2.0 and Redfish compliant and works with both Windows- and Linux-based operating systems. No surprises there. Additionally, Intel’s Data Center Manager (DCM) console helps with power monitoring and thermal management in data centers. This family of servers is also certified with Nutanix Enterprise Cloud, VMware vSAN, and Microsoft Azure Stack HCI, and can be delivered as Intel Data Center Blocks, drastically reducing time to deployment and reducing risk with a number of pre-tested configurations.
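
Since the BMC speaks Redfish, basic health checks script easily. Below is a minimal sketch that walks the standard Redfish tree to report model, power state, and health; the BMC address and credentials are hypothetical placeholders for your own environment.

```python
# Minimal Redfish health check against the AST2500 BMC (a sketch).
# BMC_HOST and the credentials below are hypothetical -- substitute your own.
import requests

BMC_HOST = "https://192.168.1.120"
AUTH = ("admin", "password")

def get(path):
    # BMCs typically ship with self-signed certificates, hence verify=False
    r = requests.get(BMC_HOST + path, auth=AUTH, verify=False, timeout=10)
    r.raise_for_status()
    return r.json()

# Service root -> Systems collection -> each system's summary
for member in get("/redfish/v1/Systems")["Members"]:
    system = get(member["@odata.id"])
    print(system.get("Model"),
          system.get("PowerState"),
          system.get("Status", {}).get("Health"))
```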

Intel M50CYP2UR208 server rear ports

The cover panel is removed in two sections. Inside, you can see a logical layout with the drive cage, 6x 60mm fans, and dual CPU sockets, each flanked by 8x memory module slots on either side for a total of 32x active memory module slots with both processors installed. You can go with just a single processor, but you will only have access to half the goods on the system, including half the memory slots and half of those PCIe slots in the back.

With 3rd Gen Intel Xeon Scalable processors installed, supported memory includes standard Registered (RDIMM) and Load Reduced (LRDIMM) modules, the 3DS varieties of each, which stack DRAM dies for higher density, and Optane Persistent Memory (PMem) 200 series modules. 3DS RDIMMs or LRDIMMs will provide up to 8TB, while the PMem 200 series at full capacity, paired with RDIMMs, will provide up to 12TB of memory. Previous-generation PMem 100 series modules are not supported. PMem modules provide a performance boost for in-memory analytics and database applications, plus content delivery networks (think video streaming) and high-performance computing applications.
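
The headline capacities check out if you assume the largest modules Intel offered for this generation, 256GB DRAM DIMMs and 512GB PMem 200 modules; those sizes are our assumption, not something spelled out here:

```python
# Sanity-checking the 8TB and 12TB memory figures (a sketch; the 256GB DRAM
# and 512GB PMem module sizes are assumed maximums for this generation).
# 32 slots total: 16 per CPU with both processors installed.
all_dram_tb = (32 * 256) / 1024             # 32x 256GB 3DS RDIMM/LRDIMM
mixed_tb = (16 * 512 + 16 * 256) / 1024     # 16x 512GB PMem + 16x 256GB RDIMM

print(all_dram_tb, "TB all-DRAM")           # 8.0 TB
print(mixed_tb, "TB with PMem 200")         # 12.0 TB
```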

With PMem you have two options: App Direct Mode and Memory Mode. App Direct Mode treats the memory modules like SSD storage devices that sit closer to the CPU for improved data caching. Memory Mode offers increased memory capacity, but all of that memory is volatile, meaning there is no persistence. You can only utilize the persistence feature in App Direct Mode.
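
To make App Direct Mode concrete: on Linux the modules typically surface as a DAX-capable filesystem, and an application simply memory-maps a file to get byte-addressable, persistent access. Here is a minimal sketch; the /mnt/pmem mount point is a hypothetical example.

```python
# App Direct Mode usage sketch: writes go straight to persistent memory via
# mmap. Assumes a PMem namespace mounted DAX at /mnt/pmem (hypothetical).
import mmap
import os

PATH = "/mnt/pmem/example.dat"
SIZE = 4096

fd = os.open(PATH, os.O_CREAT | os.O_RDWR)
os.ftruncate(fd, SIZE)                # size the file before mapping
with mmap.mmap(fd, SIZE) as pm:
    pm[:5] = b"hello"                 # a store directly into PMem
    pm.flush()                        # make sure the write is durable
os.close(fd)
```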

Intel M50CYP2UR208 cpu tdp

3rd generation Intel Xeon Scalable processors provide 8x memory channels per CPU. They also bring PCIe 4.0, with 64 PCIe lanes per CPU for 128 PCIe 4.0 lanes total when both processors are installed. That is a significant increase over the 96 lanes available in a dual-processor configuration using previous-generation Scalable CPUs. The maximum Thermal Design Power (TDP) supported by this platform is 270W, which covers the entire range of 3rd Gen Intel Xeon Scalable processors with a few exceptions. Another feature is the 3x Ultra Path Interconnect (UPI) links delivering enhanced I/O between the processors.
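
The lane arithmetic is simple but worth spelling out; the 48-lane figure for the previous generation is implied by the 96-lane dual-socket total quoted above.

```python
# PCIe lane totals, dual-socket (a sketch; per-CPU counts from the text).
lanes_3rd_gen = 2 * 64        # 128 PCIe 4.0 lanes with both CPUs installed
lanes_2nd_gen = 2 * 48        # 96 lanes on previous-generation Scalable CPUs
print(lanes_3rd_gen, lanes_2nd_gen, lanes_3rd_gen - lanes_2nd_gen)
```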

The Platinum, Gold 6300, Gold 5300, and Silver categories each provide different maximum core counts. With the 3rd generation CPUs, Bronze-series processors have been discontinued, at least so far… Platinum processors provide the highest core count, at up to 40 physical cores and up to 80 virtual threads using Intel’s Hyper-Threading Technology. Gold processors provide up to 32 cores, while Silver delivers up to 22 cores. Platinum and Gold 6300-series processors also support the fastest memory modules, with speeds of up to 3200MHz. A standard heat sink is included with the system, but if accelerators or GPGPUs are installed, a standard 1U heat sink is required along with a GPU air duct and bracket, plus a few other hardware bits.

Intel System M50CYP2UR208 compatible drives

Storage up front is allocated across up to 3x separate backplanes, each supporting 8x 2.5-inch SAS, SATA, or NVMe drive bays. The backplanes support a 64Gb/s interface for U.2 NVMe drives, 12Gb/s for SAS, and 6Gb/s for SATA, all hot-swappable. Supported drive configurations for the SAS/SATA/NVMe combo backplane include SAS or SATA only, NVMe only, or a combination of SAS and NVMe drives. Each backplane features 4x PCIe 4.0 x8 SlimSAS connectors on the bottom, each of which connects to two NVMe U.2 drives on the other side. 2x multi-port Mini-SAS HD cable connectors on top provide signals for up to 4x SAS/SATA devices each; in this case, all of them route to a SAS expander board mounted between the backplane and the fans.
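
That 64Gb/s NVMe figure lines up with a PCIe 4.0 x4 link per U.2 drive, which is exactly what one x8 SlimSAS connector split across two drives provides. A quick back-of-the-envelope check, assuming standard PCIe 4.0 signaling:

```python
# Where the 64Gb/s per-NVMe-drive figure comes from (a sketch, assuming a
# standard PCIe 4.0 x4 link per U.2 drive).
lanes = 4
raw_gt_per_lane = 16.0                     # PCIe 4.0 runs at 16 GT/s per lane
usable = raw_gt_per_lane * (128 / 130)     # 128b/130b encoding overhead
print(round(lanes * usable, 1), "Gb/s")    # ~63.0 Gb/s, marketed as 64Gb/s
```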

Optionally, the PCIe connectors can be routed to a tri-mode controller or to an NVMe riser card with SlimSAS connectors. The system can also be outfitted with an internal fixed-mount bracket holding two SSD drive bays, which installs just above the PSU on the right-hand side.

System M50CYP2UR208 m.2

2x PCIe x4 connectors on the system board can be outfitted with M.2 NVMe drives for additional super-fast storage or to host the OS. They can also be mirrored together (RAID 1) for redundancy using either a Premium or Standard Intel VROC key.
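
On Linux, VROC volumes are managed through mdadm with Intel’s IMSM metadata, so once the mirror is built it shows up as an ordinary md device. A minimal sketch for checking its health, assuming the OS mirror is the only md array present:

```python
# Quick VROC mirror health check (a sketch): VROC arrays appear as standard
# Linux md devices, so /proc/mdstat reports their state with no vendor tools.
with open("/proc/mdstat") as f:
    for line in f:
        line = line.rstrip()
        # md device lines and the "[UU]" status line are the interesting bits
        if line.startswith("md") or "[" in line:
            print(line)
```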

An OCP network adapter mezzanine slot at the back of the chassis conforms to the Open Compute Project OCP 3.0 specification. The OCP card installs at the back of the chassis just by removing a small cover plate, without removing the server’s cover panel, and the OCP 3.0 format gives you a range of connection and bandwidth options, too.

For GPU installation, the riser cards support x16 mechanical and electrical slots, with each slot providing up to 75 watts of power. 2x supplemental 12V power connectors on the system board each provide up to 225W of additional power to cards that exceed the 75W maximum from the slot. Assuming we did the math right, this system should support 2x 300W GPUs. The documentation doesn’t specifically mention which GPUs have been vetted on this system, or how many, anywhere; that includes the Integration and Service Guide, the Technical Guide, the Product Family System Integration and Service Guide, and the M50CYP Family Configuration Guide. Until further notice, let’s just go with that number… two. We suspect it will support the GPUs of choice from NVIDIA, like the A100 in 40GB and 80GB versions, plus the newer one with the Ada Lovelace architecture. This system will support a variety of I/O devices, and we would suspect PCIe-mounted SSD drives too, depending on your business needs, how it’s configured, and thermal constraints.
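
For what it’s worth, the math holds up; the only assumption is that each supplemental connector feeds one card:

```python
# GPU power budget check (a sketch; assumes one supplemental 12V connector
# per card, which is how these connectors are normally used).
SLOT_W = 75                  # power from the PCIe x16 slot itself
AUX_W = 225                  # additional power from one 12V connector
AUX_CONNECTORS = 2

per_gpu = SLOT_W + AUX_W     # 300W available per card
print(f"{AUX_CONNECTORS}x GPUs at up to {per_gpu}W each")
```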

Aside from the hard-to-find information on the 2U Intel Server System M50CYP2UR208, there is no denying it delivers some powerful performance. As one of Intel’s workhorse servers, are we supposed to expect anything less?

If you’re looking for a server, workstation, processors, storage, memory, GPUs, or some critical component to keep your network from imploding, then check out IT Creations. Chances are we have what you’re looking for, and if we don’t, we can get it. Since we’re located on the West Coast, think of us as your last bastion of hope.