HyperFlex Edge clusters can be configured in 2-, 3-, or 4-node configurations. Single-node clusters and clusters larger than 4 nodes are not supported with HyperFlex Edge.
The Cisco HyperFlex HX240c Edge Node (HX-E-240c-M6SX) is purpose-built for remote offices, branch locations, and edge environments, delivering high-capacity storage and compute capabilities in a compact 2U form factor. Designed to simplify operations in distributed locations, the HX240c Edge Node supports hybrid storage configurations with up to 24 front-facing SAS/SATA drives and optional rear-facing drives, enabling flexibility for varying workloads. It operates seamlessly without requiring Cisco UCS Fabric Interconnects, relying on top-of-rack Ethernet switches for streamlined deployment and management. Managed through Cisco Intersight, this node provides centralized, cloud-based oversight, making it an ideal choice for businesses seeking reliable and efficient edge computing solutions.
Up to 24 front SFF hard disk drives (HDDs) and solid-state drives (SSDs). The 24 drives are used as follows:
Up to 4 SFF SAS/SATA rear drives (Optional)
I/O Centric Configuration: up to 8 PCIe 4.0 slots
Supports a boot-optimized RAID controller carrier that holds two SATA M.2 SSDs.
Up to 24 front SFF solid-state drives (SSDs). The 24 drives are used as follows:
Up to 4 SFF SAS/SATA rear drives (Optional)
I/O Centric Configuration: up to 8 PCIe 4.0 slots
All Flash means only SSD drives are supported.
NVMe drives are not supported.
This HBA supports up to 16 SAS or SATA drives (the HX-E-240-M6SX and HXAF-E-240-M6SX servers have 24 front drives plus 2 or 4 rear drives) operating at 3 Gb/s, 6 Gb/s, and 12 Gb/s. It supports JBOD or pass-through mode (not RAID) and plugs directly into the drive backplane. Two of these controllers are required to control 24 front drives plus 2 or 4 rear drives.
Order two identical M.2 SATA SSDs for the boot-optimized RAID controller. You cannot mix M.2 SATA SSD capacities. It is recommended that M.2 SATA SSDs be used as boot-only devices.
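As a quick illustration of the two ordering rules above, here is a minimal Python sketch; the function names and structure are ours, not part of any Cisco tooling, and only encode the constraints stated in this section:

```python
import math

DRIVES_PER_HBA = 16  # each SAS HBA handles up to 16 SAS/SATA drives

def hbas_required(front_drives: int, rear_drives: int) -> int:
    """Return how many SAS HBAs a drive count requires (JBOD/pass-through)."""
    return math.ceil((front_drives + rear_drives) / DRIVES_PER_HBA)

def validate_m2_boot_pair(capacities_gb: list[int]) -> None:
    """The boot-optimized RAID controller takes two identical M.2 SATA SSDs."""
    if len(capacities_gb) != 2:
        raise ValueError("Order exactly two M.2 SATA SSDs")
    if capacities_gb[0] != capacities_gb[1]:
        raise ValueError("M.2 SATA SSD capacities cannot be mixed")

# A fully populated node (24 front + 4 rear drives) needs two HBAs,
# matching the controller note above.
assert hbas_required(front_drives=24, rear_drives=4) == 2
validate_m2_boot_pair([240, 240])  # OK: an identical pair
```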
The Cisco HyperFlex Edge Network Topologies define the architecture for connecting the nodes within the HyperFlex HX240c M6 Edge cluster to the network, emphasizing scalability, flexibility, and reliability for edge deployments. These topologies support configurations using existing top-of-rack switches, with options for single-switch or dual-switch setups. Single-switch topologies are cost-effective and ideal for smaller environments, while dual-switch configurations enhance redundancy and fault tolerance, ensuring continuous operations in the event of a network failure. The flexibility to use 1GE, 10GE, or 25GE switching enables organizations to optimize bandwidth and performance based on their needs. These topologies are integral to the cluster's ability to provide seamless, high-performance hyperconverged infrastructure in edge environments, such as remote offices or branch locations, where infrastructure simplicity and reliability are paramount.
Selecting HX-E-TOPO2 will include the Intel i350 quad port PCIe NIC for 1GE topologies. Two ports on the NIC are used for HyperFlex functions. The remaining two ports may be used by applications after the HyperFlex deployment is completed.
TOPO3, also referred to as the 1 Gigabit Ethernet (1GE) Single Switch Topology, is a configuration designed for smaller-scale Cisco HyperFlex Edge deployments where simplicity and cost efficiency are priorities.
Cisco strongly recommends HX-E-TOPO4 for all new deployments.
Selecting HX-E-TOPO4 will include the Cisco UCS VIC 1467 quad-port 10/25G SFP28 mLOM card (HX-M-V25-04) for 10/25GE topologies. Two ports on the card are used for HyperFlex functions. The remaining two ports may be used by applications after the HyperFlex deployment is completed.
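For clarity, the sketch below summarizes the port accounting described for the TOPO2 and TOPO4 options; the EdgeTopology data model is our own illustration, not a Cisco API:

```python
from dataclasses import dataclass

@dataclass
class EdgeTopology:
    pid: str
    nic: str
    total_ports: int
    hyperflex_ports: int  # ports reserved for HyperFlex functions

    @property
    def application_ports(self) -> int:
        # Ports left for applications after the HyperFlex deployment completes.
        return self.total_ports - self.hyperflex_ports

TOPO2 = EdgeTopology("HX-E-TOPO2", "Intel i350 quad-port 1GE PCIe NIC", 4, 2)
TOPO4 = EdgeTopology("HX-E-TOPO4",
                     "Cisco UCS VIC 1467 quad-port 10/25G SFP28 mLOM (HX-M-V25-04)", 4, 2)

for topo in (TOPO2, TOPO4):
    print(f"{topo.pid}: {topo.application_ports} ports free for applications")
```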
TOPO5 | HX-E-TOPO5
HyperFlex NIC Connectivity Mode
Starting with HyperFlex 5.0(2a), the TOPO5 option is supported. A minimum of 4 NIC ports is required. If NIC connectivity mode is selected, the Riser1 HH x16 slot and Riser2 HH x8 slot options cannot be selected; a sketch of these rules follows the table below.
Product ID (PID) | Description |
---|---|
HyperFlex NIC Connectivity Mode | |
R2 Slot 4 x8 PCIe NIC | |
HX-PCIE-ID10GF | Intel X710 dual-port 10G SFP+ NIC |
HX-PCIE-IQ10GF | Intel X710 quad-port 10G SFP+ NIC |
HX-P-I8D25GF | Cisco-Intel E810XXVDA2 2x25/10 GbE SFP28 PCIe NIC |
HX-P-I8Q25GF | Cisco-Intel E810XXVDA4L 4x25/10 GbE SFP28 PCIe NIC |
R2 Slot 6 x8 PCIe NIC | |
HX-PCIE-ID10GF | Intel X710 dual-port 10G SFP+ NIC |
HX-PCIE-IQ10GF | Intel X710 quad-port 10G SFP+ NIC |
HX-P-I8D25GF | Cisco-Intel E810XXVDA2 2x25/10 GbE SFP28 PCIe NIC |
HX-P-I8Q25GF | Cisco-Intel E810XXVDA4L 4x25/10 GbE SFP28 PCIe NIC |
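As referenced above, here is a minimal sketch of the TOPO5 ordering rules; the port counts come from the table, while the checking logic and function name are purely illustrative:

```python
# Port counts per NIC PID, taken from the table above.
NIC_PORTS = {
    "HX-PCIE-ID10GF": 2,  # Intel X710 dual-port 10G SFP+
    "HX-PCIE-IQ10GF": 4,  # Intel X710 quad-port 10G SFP+
    "HX-P-I8D25GF": 2,    # Cisco-Intel E810XXVDA2 2x25/10 GbE SFP28
    "HX-P-I8Q25GF": 4,    # Cisco-Intel E810XXVDA4L 4x25/10 GbE SFP28
}

def validate_topo5(selected_nics: list[str],
                   riser1_hh_x16: bool, riser2_hh_x8: bool) -> None:
    """Check a TOPO5 (NIC connectivity mode) selection against the rules above."""
    ports = sum(NIC_PORTS[pid] for pid in selected_nics)
    if ports < 4:
        raise ValueError(f"TOPO5 requires at least 4 NIC ports, got {ports}")
    if riser1_hh_x16 or riser2_hh_x8:
        raise ValueError("NIC connectivity mode excludes the Riser1 HH x16 "
                         "and Riser2 HH x8 slot options")

# One quad-port card in R2 slot 4 already satisfies the 4-port minimum.
validate_topo5(["HX-P-I8Q25GF"], riser1_hh_x16=False, riser2_hh_x8=False)
```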
The I/O centric version shows all PCIe slots.
The storage centric version shows a combination of PCIe risers and storage bays.
GPUs are supported in x16 slots 2, 5, and 7.
Two CPUs must be installed to use Risers 2 and 3.
Risers 1A and 1B are controlled by CPU1; Risers 2, 3A, 3B, and 3C are controlled by CPU2.
GPUs cannot be mixed.
Riser 1B does not accept GPUs.
Riser 3B does not accept GPUs.
When a GPU is ordered, the server ships with the low-profile heatsink PID (HX-HSLP-M6=); for double-wide GPUs, the special air duct PID (HX-ADGPU-245M6=) must also be selected.
GPU Product ID (PID) | PID Description | Card Size | Max GPU per Node | Riser 1A (Gen 4) | Riser 1B (Gen 4) | Riser 2 (Gen 4) | Riser 3A (Gen 4) | Riser 3B (Gen 4) | Riser 3C |
---|---|---|---|---|---|---|---|---|---|
HX-GPU-A10 | TESLA A10, PASSIVE, 150W, 24GB | Single-wide | 5 | slot 2 & 3 | N/A | slot 5 & 6 | N/A | N/A | slot 7 |
HX-GPU-A30 | TESLA A30, PASSIVE, 180W, 24GB | Double-wide | 3 | slot 2 | N/A | slot 5 | N/A | N/A | slot 7 |
HX-GPU-A40 | TESLA A40 RTX, PASSIVE, 300W, 48GB | Double-wide | 3 | slot 2 | N/A | slot 5 | N/A | N/A | slot 7 |
HX-GPU-A100-80 | TESLA A100, PASSIVE, 300W, 80GB | Double-wide | 3 | slot 2 | N/A | slot 5 | N/A | N/A | slot 7 |
HX-GPU-A16 | NVIDIA A16 PCIE 250W 4X16GB | Double-wide | 3 | slot 2 | N/A | slot 5 | N/A | N/A | slot 7 |
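Tying the GPU rules together, the sketch below is a hypothetical order check, not a Cisco configurator: the per-model limits mirror the table, and it returns the heatsink and air-duct PIDs called out above:

```python
# Card size and per-node maximum per GPU PID, mirroring the table above.
GPU_RULES = {
    "HX-GPU-A10":     {"size": "single-wide", "max_per_node": 5},
    "HX-GPU-A30":     {"size": "double-wide", "max_per_node": 3},
    "HX-GPU-A40":     {"size": "double-wide", "max_per_node": 3},
    "HX-GPU-A100-80": {"size": "double-wide", "max_per_node": 3},
    "HX-GPU-A16":     {"size": "double-wide", "max_per_node": 3},
}

def validate_gpu_order(gpus: list[str]) -> list[str]:
    """Enforce the GPU rules above and return the companion PIDs to order."""
    if not gpus:
        raise ValueError("No GPUs selected")
    models = set(gpus)
    if len(models) > 1:
        raise ValueError("GPUs cannot be mixed within a node")
    (model,) = models
    rule = GPU_RULES[model]
    if len(gpus) > rule["max_per_node"]:
        raise ValueError(f"{model}: maximum {rule['max_per_node']} per node")
    extras = ["HX-HSLP-M6="]  # low-profile heatsinks ship with any GPU order
    if rule["size"] == "double-wide":
        extras.append("HX-ADGPU-245M6=")  # special air duct for double-wide GPUs
    return extras

print(validate_gpu_order(["HX-GPU-A30", "HX-GPU-A30"]))
# ['HX-HSLP-M6=', 'HX-ADGPU-245M6=']
```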
Power supplies share a common electrical and physical design that allows for hot-plug, tool-less installation into M6 HX-Series servers. Each power supply is certified for high-efficiency operation and offers multiple power output options.
The 2300 W power supply uses a different power connector than the rest of the power supplies, so you must use different power cables to connect it.
Product ID (PID) | PID Description |
---|---|
PSU (Input High Line 210VAC) | |
HX-PSU1-1050W | 1050W AC PSU Platinum (Not EU/UK Lot 9 Compliant) |
HX-PSUV2-1050DC | 1050W -48V DC Power Supply for Rack Server |
HX-PSU1-1600W | 1600W AC PSU Platinum (Not EU/UK Lot 9 Compliant) |
HX-PSU1-2300W | 2300W AC Power Supply for Rack Servers Titanium |
PSU (Input Low Line 110VAC) | |
HX-PSU1-1050W | 1050W AC PSU Platinum (Not EU/UK Lot 9 Compliant) |
HX-PSUV2-1050DC | 1050W -48V DC Power Supply for Rack Server |
HX-PSU1-2300W | 2300W AC Power Supply for Rack Servers Titanium |
HX-PSU1-1050ELV | 1050W AC PSU Enhanced Low Line (Not EU/UK Lot 9 Compliant) |
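As a worked example, the lookup below returns the valid PSU PIDs from the table for a given facility input line; the dictionary mirrors the table, while the helper itself is our illustration:

```python
# PSU options by input line, taken directly from the table above.
PSU_OPTIONS = {
    "high_line_210vac": ["HX-PSU1-1050W", "HX-PSUV2-1050DC",
                         "HX-PSU1-1600W", "HX-PSU1-2300W"],
    "low_line_110vac":  ["HX-PSU1-1050W", "HX-PSUV2-1050DC",
                         "HX-PSU1-2300W", "HX-PSU1-1050ELV"],
}

def psu_choices(input_line: str) -> list[str]:
    """Return the PSU PIDs valid for the given input line."""
    try:
        return PSU_OPTIONS[input_line]
    except KeyError:
        raise ValueError(f"Unknown input line: {input_line!r}") from None

# The 1600W Platinum PSU is offered only for high-line (210VAC) input.
assert "HX-PSU1-1600W" not in psu_choices("low_line_110vac")
assert "HX-PSU1-1600W" in psu_choices("high_line_210vac")
```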