The Cisco HyperFlex HX240c M5 server, in its 24-bay small form factor (SFF) configuration, is designed for high-performance, storage-intensive workloads. Powered by Intel® Xeon® Scalable processors, it supports up to two CPUs and 3 TB of DDR4 memory across 24 DIMM slots, delivering exceptional computational and memory performance.
This 2U chassis accommodates up to 24 hot-swappable SFF drives, offering flexible storage configurations, including all-NVMe, all-flash, or hybrid SSD/HDD setups. With six PCIe expansion slots, the server supports additional GPUs or network cards, enhancing its adaptability for virtualization, AI, and data-intensive applications.
Need help with the configuration? Contact us today!
Configuration | Drive Bays | Cisco HyperFlex Software Minimum Level |
---|---|---|
Hybrid (LFF) | Front bays 1 - 12: 6 to 12 HDDs. Rear bay 13: Caching NVMe SSD. Rear bay 14: Housekeeping SSD for SDS logs only | 3.0.1 or later |
Hybrid (SFF) | Front bay 1: Housekeeping SSD for SDS logs only. Front bays 2 - 24: 6 to 23 HDDs. Rear bay 25: Caching NVMe SSD | 2.6.1a or later |
All-NVMe (SFF) | Front bay 1: Housekeeping NVMe for SDS logs only. Front bays 2 - 24: 6 to 23 NVMe drives. Rear bay 25: Caching NVMe only | 2.6.1a or later |
Mixing HX240c Hybrid SED HyperFlex nodes with HX240c All-Flash SED HyperFlex nodes within the same HyperFlex cluster is not supported.
NVMe Drive Rules
NVMe SSDs are supported only in the Caching SSD position: drive bay 13 for LFF versions or bay 25 for SFF versions. NVMe SSDs are not supported for persistent storage or as the Housekeeping drive.
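As a quick illustration of the drive-placement rules above, the Python sketch below encodes the bay layout from the configuration table and flags drives placed in reserved or out-of-range bays. The CONFIG_RULES dictionary, the check_layout function, and the configuration keys are our own naming for this example and are not part of any Cisco tool.

```python
# Illustrative sketch only: encodes the drive-placement rules described above.
# CONFIG_RULES, check_layout, and the configuration keys are our own names,
# not part of any Cisco tooling.

CONFIG_RULES = {
    # configuration: housekeeping bay, capacity bays, caching bay
    "hybrid-lff":   {"housekeeping": 14, "capacity": range(1, 13), "caching": 13},
    "hybrid-sff":   {"housekeeping": 1,  "capacity": range(2, 25), "caching": 25},
    "all-nvme-sff": {"housekeeping": 1,  "capacity": range(2, 25), "caching": 25},
}

def check_layout(configuration: str, capacity_bays: list[int]) -> list[str]:
    """Return a list of rule violations for a proposed capacity-drive layout."""
    rules = CONFIG_RULES[configuration]
    problems = []
    count = len(capacity_bays)
    if not 6 <= count <= len(rules["capacity"]):
        problems.append(f"{count} capacity drives; expected 6 to {len(rules['capacity'])}")
    for bay in capacity_bays:
        if bay in (rules["caching"], rules["housekeeping"]):
            problems.append(f"bay {bay} is reserved for the caching/housekeeping drive")
        elif bay not in rules["capacity"]:
            problems.append(f"bay {bay} is not a capacity bay in the {configuration} layout")
    return problems

# A hybrid SFF node with 8 capacity HDDs in front bays 2-9 passes;
# placing a capacity drive in rear bay 25 (reserved for the caching NVMe SSD) is flagged.
print(check_layout("hybrid-sff", [2, 3, 4, 5, 6, 7, 8, 9]))    # []
print(check_layout("hybrid-sff", [2, 3, 4, 5, 6, 7, 8, 25]))   # one violation
```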
The deployment mode in the Cisco HyperFlex HX240c M5 SX Node is critical as it defines how the server integrates into the network and manages workloads. The deployment mode determines whether the server operates with or without Cisco Fabric Interconnect (FI), impacting scalability, compatibility, and feature availability.
This deployment option connects the server to Cisco Fabric Interconnect. Installation for this type of deployment can be done using the standalone installer or from Cisco Intersight. This deployment mode has been supported since the launch of HyperFlex.
This deployment option allows server nodes to be connected directly to existing switches. Installation for this type of deployment can be done from Cisco Intersight only.
The mLOM on the Cisco HX240c 2U node has its own dedicated slot and is the ideal way to connect to the Cisco FI on the network. If you selected HX-DC-no-FI, these options are not supported: HX-PCIE-C40Q-03 (40G VIC), HX-PCIE-C25Q-04, and HX-PCIE-OFFLOAD-1.
The mLOM VIC 1387 has a 40Gb QSFP port but can be converted to a 10Gb SFP+ connection with the CVR-QSFP-SFP10G adapter. This is a perfect solution when you need to provide a 10Gb connection to the Cisco FI 6200 series.
An internal slot is reserved for the Cisco 12G SAS HBA (HX-SAS-M5HD). This HBA is managed by the Cisco Integrated Management Controller (CIMC).
Supports JBOD mode only (no RAID functionality). Ideal for SDS (Software-Defined Storage) applications. It is also ideal for environments demanding the highest IOPS (for external SSD attach), where a RAID controller can be an I/O bottleneck.
Supports up to 26 internal SAS HDDs and SAS/SATA SSDs
Two CPUs must be installed for GPU support and for Riser 2 to be supported; slot 2 requires CPU2.
With a single CPU, only the T4 is supported: a maximum of 3 cards with Riser 1B (HX-RIS-1B-240M5), which provides three PCIe slots (x8, x8, x8), all from CPU1.
PCIe cable connectors are provided for the rear NVMe SSDs.
GPUs cannot be mixed within a node.
All GPU cards require two CPUs and a minimum of two power supplies in the server; 1600 W power supplies are recommended.
For the T4, HX-GPU-T4-16 requires special riser cards (HX-RIS-1-240M5 and HX-RIS-2B-240M5) for a full configuration of 5 or 6 cards.
NVIDIA M10 GPUs can support less than 1 TB of total memory in the server, so do not install more than fourteen 64-GB DIMMs when using an NVIDIA GPU card in this server.
Product ID (PID) | PID Description | Card Height | Maximum Cards Per Node |
---|---|---|---|
HX-GPU-M10 | NVIDIA M10 | Double Wide (consumes 2 slots) | 2 |
HX-GPU-T4-16 | NVIDIA T4 PCIE 75W 16GB | Low Profile Single-Width | 6 |
HX-GPU-RTX6000 | NVIDIA QUADRO RTX 6000, PASSIVE, 250W TGP, 24GB | Double Wide (consumes 2 slots) | 2 |
HX-GPU-RTX8000 | NVIDIA QUADRO RTX 8000, PASSIVE, 250W TGP, 48GB | Double Wide (consumes 2 slots) | 2 |
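To make the GPU population rules above concrete, here is a hedged Python sketch that checks a proposed GPU order against the per-node maximums in the table, the no-mixing rule, the two-CPU/two-PSU requirement, and the M10 memory cap (fourteen 64-GB DIMMs is 896 GB, which stays under 1 TB). The names MAX_CARDS and check_gpu_config are our own for this example and are not Cisco tooling.

```python
# Illustrative sketch only: encodes the GPU rules stated above.
# MAX_CARDS and check_gpu_config are our own names, not Cisco tooling.

MAX_CARDS = {                 # per-node maximums from the GPU table above
    "HX-GPU-M10": 2,
    "HX-GPU-T4-16": 6,
    "HX-GPU-RTX6000": 2,
    "HX-GPU-RTX8000": 2,
}

def check_gpu_config(gpus: list[str], cpus: int, psus: int, dimm_gb: list[int]) -> list[str]:
    """Return rule violations for a proposed GPU configuration."""
    problems = []
    if not gpus:
        return problems
    if len(set(gpus)) > 1:
        problems.append("GPUs cannot be mixed within a node")
    model = gpus[0]
    limit = MAX_CARDS.get(model, 0)
    if len(gpus) > limit:
        problems.append(f"{len(gpus)} x {model} exceeds the per-node maximum of {limit}")
    if cpus < 2 and (model != "HX-GPU-T4-16" or len(gpus) > 3):
        # per the notes above, a single CPU supports only the T4, max 3 cards via Riser 1B
        problems.append("two CPUs are required for this GPU configuration")
    if psus < 2:
        problems.append("GPU configurations need at least two power supplies (1600 W recommended)")
    if model == "HX-GPU-M10" and sum(dimm_gb) >= 1024:
        # the M10 supports less than 1 TB of total memory (fourteen 64-GB DIMMs = 896 GB)
        problems.append("total memory must stay below 1 TB with the M10")
    return problems

# Two M10 cards, two CPUs, two PSUs, twelve 64-GB DIMMs (768 GB): no violations.
print(check_gpu_config(["HX-GPU-M10"] * 2, cpus=2, psus=2, dimm_gb=[64] * 12))
```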
Each power supply is certified for high-efficiency operation and offers multiple power output options. This allows users to "right-size" the power supply based on server configuration, which improves power efficiency, lowers overall energy costs, and avoids stranded capacity in the data center.
All GPU cards require two CPUs and a minimum of two power supplies in the server. 1600 W power supplies are recommended.
Product ID (PID) | PID Description |
---|---|
HX-PSU1-1050W | 1050W AC power supply for C-Series servers |
HX-PSUV2-1050DC | 1050W DC Power Supply for C-Series servers |
HX-PSU1-1600W | 1600W AC power supply for C-Series servers |
HX-PSU1-1050EL | Cisco UCS 1050W AC Power Supply for Rack Server Low Line |
For all orders exceeding a value of 100 USD, shipping is offered for free. Otherwise, standard shipping charges apply. Check out our delivery Terms & Conditions for more details.
Returns will be accepted for up to 10 days from the Customer's receipt or tracking number on unused items. You, as a Customer, are obliged to inform us via email before you return the item.