Tyan Transport HX FT65TB8030 B8030F65TV8E2H-2T-N Tower Server Review
May 18, 2022

Today we are looking at what Tyan calls a pedestal workstation in some places and a pedestal server in others: the Tyan Transport HX FT65TB8030 B8030F65TV8E2H-2T-N. Looking at the specs, this system is clearly better suited for use as a fairly high-performance tower server, and we'll present our case for that. It can be outfitted with a single AMD EPYC 7002- or 7003-series CPU, plus multiple Nvidia or AMD GPUs, and the good ones at that!
The supported operating systems pretty much seal the deal, with Windows Server 2019 and Linux the only ones mentioned in the compatibility list. There are a bunch of these pedestal servers in Tyan's arsenal, each with a long, serial-number-style identifier, and all of them look essentially the same. We really wish Tyan and a few other manufacturers would use something a little less complex in their naming conventions. This particular family has three versions, including the one we are looking at today. On another note, the Thunder HX line supports Intel Xeons, while the Transport HX line runs AMD EPYCs.
The differences between these three are slight. Our unit, the B8030F65TV8E2H-2T-N, which we'll just call the -2T-N for simplicity, has 2x 10GbE ports, 2x 1GbE ports, and two fans barnacled onto the back of the unit. The -N version has only the 2x 1GbE ports plus the same fan bracket on the back. Lastly, the -G version also has 2x 1GbE ports but does without the additional two-fan bracket in back. And that is where the differences end. All of them support a single 2nd-generation Rome or 3rd-generation Milan AMD EPYC processor. Apparently, the extra fans are only there to support passively cooled Nvidia Tesla GPU cards, so presumably no Nvidia Teslas on the -G version. With that said, we suppose you could take off the fan bracket and use the system as a workstation, since you would then have access to the PCIe cards at the rear of the system for monitor support. But there is that pesky OS issue, and we're sure the associated drivers might be affected as well.
Above the LAN ports just mentioned, there is a legacy COM port and a VGA port, plus 2x USB 3.1 ports. Next is a dedicated RJ45 port for the Intelligent Platform Management Interface, or IPMI. The last one on top is the ID button: if you were to rack mount this system, which you can, it lets you easily pick it out from the other servers in the rack.
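As a quick illustration, the same ID LED can also be triggered remotely over that dedicated IPMI port with a standard tool like ipmitool. The sketch below is minimal and generic; the BMC address and credentials are placeholders we made up, not values from Tyan's documentation:

```python
import subprocess

# Placeholder BMC address and credentials -- substitute your own.
BMC_HOST = "192.168.1.50"
BMC_USER = "admin"
BMC_PASS = "password"

def ipmi(*args):
    """Run an ipmitool command against the BMC over the network (IPMI 2.0 / lanplus)."""
    cmd = ["ipmitool", "-I", "lanplus", "-H", BMC_HOST,
           "-U", BMC_USER, "-P", BMC_PASS, *args]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

# Blink the chassis ID LED for 60 seconds so you can spot the box in the rack.
print(ipmi("chassis", "identify", "60"))

# Basic power and identify state for the chassis.
print(ipmi("chassis", "status"))
```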
You may have noticed, above all the ports in back, that strange plug and the non-removable, non-redundant single PSU. Only with a 200-240V outlet will you get the full 2000W from the PSU, which is pretty standard for server rooms. You'll get 1500W from a 115-200V outlet, or only 1200W from a 100-115V outlet.
Along with the PCIe slots behind the extra fan bracket, there is another set of PCIe slots to the right of all the ports. We'll look at those after we pop the cover off, which is done by removing the two thumbscrews on the right.
On the front of this system, starting at the bottom, there are 8x SAS/SATA 3.5-inch hot-swap drive bays. If you plan on using SAS drives, you will need a SAS HBA/RAID controller such as the LSI 9400-8i (SAS3408). SATA is supported at 6Gb/s, while SAS provides data transfer rates of up to 12Gb/s.
On top of those are 2x small-form-factor 2.5-inch drive bays with native support for NVMe U.2 drives over a PCIe connection. That said, they can also be used for SATA drives via 2x SATA connectors. Both the SFF and LFF bays have tool-less drive trays for easy access. If you want to go with 2.5-inch drives in the 3.5-inch bays, you will need 4x screws for each of the eight drive trays, also included with the purchase.
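Once drives are loaded into those bays, a quick way to confirm which ones landed on SATA, SAS, or NVMe paths under Linux is to ask the kernel for each block device's transport type. This is a generic sketch using lsblk, nothing Tyan-specific:

```python
import json
import subprocess

# Ask lsblk for each physical disk's transport (sata, sas, nvme, ...) in JSON form.
out = subprocess.run(
    ["lsblk", "-J", "-d", "-o", "NAME,TRAN,SIZE,MODEL"],
    capture_output=True, text=True, check=True,
).stdout

for disk in json.loads(out)["blockdevices"]:
    tran = disk.get("tran") or "?"
    model = disk.get("model") or ""
    print(f'{disk["name"]:<10} {tran:<6} {disk["size"]:<8} {model}')
```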
Next is a removable panel that looks as though it could support additional drive bays, a tape backup drive, or some other storage media. However, there is no mention of possible uses in the manual or anywhere else, for that matter. Consider it a mystery, like the naming convention. Then there is a space for an optional optical drive. Above that, the control panel has the power button, tell-tale lights for LANs 1-3, an IPMI LED, an HDD LED, and an ID LED. Next come a reset button, a non-maskable interrupt button, and an ID button, then a few USB 3.1 ports.
Opening the case, there is a lot of space to work with. Right in the middle, 3x large fans are standard equipment on all three chassis configurations. As mentioned, the CPU socket supports either an AMD EPYC 7002 Rome or 7003 Milan CPU. Both ranges span 8 to 64 physical cores and 16 to 128 threads, and you also get 128 PCIe 4.0 lanes. The 7003-series Milan processors offer roughly a 19% performance increase over Rome and have better integrated security features, plus higher clock speeds and wattages in general. This system supports CPUs with TDPs of up to 280W, which pretty much leaves it open to anything you want to add. Keeping it cool is a heatsink with an attached fan for active cooling. EPYCs with a "P" suffix are specifically designed for single-processor implementations, which can also save you a few bucks. You can install any of the other CPUs in the family too, but if you are looking to save a little something, go with the "P" options. It's not like you can add a second CPU anyway…
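If you want to confirm how a given EPYC SKU actually presents itself to the OS, the physical-core versus thread split is easy to read back from sysfs. A minimal, Linux-only sketch with nothing vendor-specific in it:

```python
import os
from pathlib import Path

def core_and_thread_counts():
    """Count unique (package, core) pairs vs. logical CPUs exposed to the OS."""
    threads = os.cpu_count()
    cores = set()
    for cpu in Path("/sys/devices/system/cpu").glob("cpu[0-9]*"):
        topo = cpu / "topology"
        if topo.is_dir():
            pkg = (topo / "physical_package_id").read_text().strip()
            core = (topo / "core_id").read_text().strip()
            cores.add((pkg, core))
    return len(cores), threads

physical, logical = core_and_thread_counts()
print(f"{physical} physical cores, {logical} threads (SMT {'on' if logical > physical else 'off'})")
```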
Supported memory capacity for Gen 3 EPYC Milan CPUs is a little higher at 4TB per socket, compared to 2TB per socket on the 7002 series. Both deliver eight memory channels. On this system, we have 8x memory module slots in total and are limited to 2TB with either Rome or Milan, which works out to 256GB modules in every slot. Either Registered or Load-Reduced DDR4 memory modules can be installed, operating at speeds of up to 3200MT/s.
There are 5x PCIe 4.0 x16 slots, each with a full x16 link, on the system board, plus another PCIe 4.0 x16 slot with an x8 link on the pre-installed Tyan riser. All of this is made possible by the 128 PCIe 4.0 lanes supplied by the AMD EPYC CPUs. The system is designed to support up to 4x double-width, high-performance Nvidia GPUs and comes with all the power and signal cables plus the GPU brackets to mount them. Just a note: if you do install 4x double-wide GPUs, one of the 5x PCIe slots on the system board will be blocked.
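With that many x16 slots hanging off a single socket, it's worth verifying after installation that each card actually trained at the expected link width and speed. Here is a generic Linux sketch that reads the kernel's PCIe link attributes; it assumes nothing about Tyan's own tooling:

```python
from pathlib import Path

# Walk every PCI device and report its negotiated vs. maximum link settings.
for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    try:
        cur_speed = (dev / "current_link_speed").read_text().strip()
        cur_width = (dev / "current_link_width").read_text().strip()
        max_speed = (dev / "max_link_speed").read_text().strip()
        max_width = (dev / "max_link_width").read_text().strip()
    except OSError:
        continue  # not every PCI function exposes link attributes
    print(f"{dev.name}: x{cur_width} @ {cur_speed} (max x{max_width} @ {max_speed})")
```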
As we mentioned earlier, that fan bracket in back is required if you plan on installing the passively cooled Nvidia Tesla options, which include the Nvidia Tesla T4 at the low end, great for distributed environments, and the Tesla V100S and V100 32GB GPUs at the upper end. We don't think you'll need the extra fans for the single-width T4 GPUs.
The V100S, however, is billed as the "World's most advanced data center GPU" for high-performance computing, graphics, and AI applications; it's a faster version of the V100 32GB and also comes with 32GB of memory. You will need the fans for those. Mind you, these GPUs are only rated for PCIe 3.0, which is really not an issue given the newness of PCIe 4.0 and the actual amount of data being transferred.
We will add one more to that PCIe 3.0 list: the Nvidia Quadro RTX 6000. The Quadro RTX 6000 offers superb performance for advanced computer graphics and hardware-accelerated ray tracing, is also great for deep learning, and provides realistic shading for fast content creation. It is very good for a server-based workstation appliance supporting a few high-performance virtual machines. You can even strap an NVLink bridge across a pair of these for multi-GPU configurations, with data transfer speeds of up to 100GB/s and a combined 48GB of GDDR6 memory for highly complex renderings. Also supported are the Nvidia RTX A4000 and A5000 GPUs, which bring the Ampere architecture and PCIe 4.0 support; these are aimed at designers, engineers, and artists, and the A5000 also supports NVLink for large data sets and rapid visualization. We're missing a few of the GPUs that are compatible with this system, but most likely it will support several other Nvidia GPUs as well. They just haven't tested them!
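If you do bridge a pair of cards, nvidia-smi can confirm whether the driver actually sees the NVLink connection rather than falling back to PCIe peer-to-peer. A small sketch, assuming the standard Nvidia driver stack and nvidia-smi are installed:

```python
import subprocess

def nvsmi(*args):
    """Thin helper around nvidia-smi; assumes the Nvidia driver is installed."""
    return subprocess.run(["nvidia-smi", *args],
                          capture_output=True, text=True, check=True).stdout

# Topology matrix: NVLink connections between GPUs show up as NV1/NV2/...
# instead of the usual PCIe path codes (PIX, PHB, SYS).
print(nvsmi("topo", "-m"))

# Per-link NVLink status and speed for every GPU.
print(nvsmi("nvlink", "--status"))
```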
That pre-installed Tyan riser straddles the CPU and memory module slots, running from the fan supports to the rear of the chassis. Its single PCIe 4.0 slot can be used for a SAS HBA or RAID controller card. It can also take a high-performance I/O card, especially if you install that 4th GPU and block off the board-accessible PCIe slot. If you're installing Tesla GPUs it wouldn't matter anyway, because the fan bracket on the back is required in that case. Not to mention, some of the other supported GPUs require the three internal fans, and potentially the rear fans, running at full bore.
2x PCIe 4.0 x4 slots on the system board can be outfitted with NVMe M.2 drives, which extend between the lower 3x PCIe slots. Those can be used in mirror mode to host the OS. Right in front of those M.2 drives is the Aspeed AST2500 module for remote and at-chassis management of the system.
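One common way to build that mirror on Linux is plain software RAID 1 with mdadm. The sketch below is purely illustrative: the /dev/nvme0n1 and /dev/nvme1n1 device names and the array name are assumptions, and in practice you would usually set up the mirror during OS installation rather than after the fact:

```python
import subprocess

# Assumed device names for the two M.2 drives -- check lsblk before running anything.
DRIVES = ["/dev/nvme0n1", "/dev/nvme1n1"]
ARRAY = "/dev/md0"

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Create a two-disk RAID 1 mirror across the M.2 drives (destroys existing data).
run(["mdadm", "--create", ARRAY, "--level=1", "--raid-devices=2", *DRIVES])

# Show the initial resync progress.
print(open("/proc/mdstat").read())
```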
Management is handled by a modified MegaRAC SP-X solution, which enables compatibility with a number of 3rd-party hardware and software vendors. The web interface is a standard affair without any colorful graphics, providing the basics for system health and administration, plus remote iKVM at no additional cost.
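Because it speaks standard IPMI, the same BMC can also be polled from the command line for sensor data and event logs without touching the web UI. Another minimal sketch, again with a placeholder address and credentials of our own invention:

```python
import subprocess

# Placeholder BMC address and credentials -- substitute your own.
BASE = ["ipmitool", "-I", "lanplus", "-H", "192.168.1.50", "-U", "admin", "-P", "password"]

def ipmi(*args):
    return subprocess.run([*BASE, *args], capture_output=True, text=True, check=True).stdout

print(ipmi("sdr", "list"))                 # temperatures, fan speeds, and voltages
print(ipmi("sel", "list", "last", "10"))   # last ten system event log entries
print(ipmi("chassis", "power", "status"))  # current power state
```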
As a tower server, or for edge applications, with support for up to 4x high-performance GPUs, this is a great little/big system. It's not small, and it may need its own space if you plan on configuring the platform with all four GPUs and all fans running rampant; it does get a little noisy. But still, for certain applications, this platform excels. Not to mention, it provides a lower-cost alternative to a dual-processor platform, plus significant expansion capabilities and no-cost management software. All things to consider for a small to medium-sized business.
If you have any questions about this platform, or any other system, contact us today!