Vendor:NVIDIA
NVIDIA MMS4X00-NS-FLT | OSFP | 2xNDR 400G (800G agg) | InfiniBand NDR | 1310nm | 100m SMF | 2x MPO-12 APC | flat-top
MMS4X00-NS-FLT
Twin-port OSFP optical module delivering an aggregate 800G via 2x 400G NDR lanes for InfiniBand environments. 1310nm parallel SMF optics with MPO-12 APC connectors provide low-latency, short-reach data center interconnects up to 100m. The flat-top OSFP design suits hosts that require a low-profile thermal interface.
Vendor:NVIDIA
NVIDIA MMS4X00-NS400 | OSFP | NDR 400G | InfiniBand NDR | 1310nm | 100m SMF | MPO-12 APC | flat-top
MMS4X00-NS400
Single-port OSFP transceiver delivering 400G NDR for InfiniBand fabrics. Utilizes 1310nm parallel SMF with MPO-12 APC for up to 100m reach, ideal for dense leaf–spine connectivity. Flat-top OSFP design fits hosts requiring a low-profile thermal solution.
Vendor:NVIDIA
NVIDIA MMS4X50-NM | OSFP | 2xFR4 400G (800G agg) | InfiniBand NDR | 1310nm CWDM4 | 2km SMF | 2x LC duplex | finned
Twin-port OSFP optical module delivering an aggregate 800G via 2x 400G FR4 channels, each using CWDM4 wavelengths on duplex LC. Suited for medium-reach InfiniBand NDR fabric links up to 2km on SMF. Finned OSFP for enhanced thermal dissipation in high-power hosts.
Vendor:NVIDIA
NVIDIA MQM8510-H | 40x 200G QSFP56 HDR | InfiniBand | PoE: none | Uplinks: via any QSFP56 | License: base
Quantum HDR InfiniBand leaf-class data center switch with 40 QSFP56 HDR 200G ports for high-radix, low-latency fabrics. Ideal for HPC and AI clusters requiring deterministic performance and advanced congestion control. Offers telemetry and fabric management integrations for at-scale deployments.
Vendor:NVIDIA
NVIDIA MQM8520-H | 40x QSFP56 HDR (up to 200Gb/s) spine blade, license: base
MQM8520-H
High-density Quantum HDR InfiniBand spine blade delivering 40 QSFP56 ports for modular fabric directors. Ideal for HPC, AI/ML, and low-latency scale-out clusters where uniform 200G HDR connectivity is required. Managed via the host chassis for centralized control and telemetry.
Vendor:NVIDIA
NVIDIA MQM8700-HS2F | 40x QSFP56 HDR (up to 200Gb/s), 2x AC PSUs, P2C airflow, std depth 1U, rails, license: base
Quantum HDR InfiniBand top-of-rack/leaf-spine switch with 40 QSFP56 ports for high-bandwidth, low-latency fabrics. Standard-depth 1U design with power-side-to-connector-side (P2C) airflow, dual hot-swap AC PSUs, and rail kit included for rapid deployment in data centers and HPC clusters.
Vendor:NVIDIA
NVIDIA MQM8700-HS2R | 40x QSFP56 HDR (up to 200Gb/s), 2x AC PSUs, C2P airflow, std depth 1U, rails, license: base
Quantum HDR InfiniBand switch delivering 40 QSFP56 ports for 200G-class, low-latency fabrics. This standard-depth 1U model features connector-side-to-power-side (C2P) airflow, dual hot-swap AC PSUs, and a rail kit for seamless data center installation. Includes an x86 dual-core control plane for robust management and orchestration.
Vendor:NVIDIA
NVIDIA MQM8790-HS2F | 40x QSFP56 HDR (up to 200Gb/s), 2x AC PSUs, P2C airflow, std depth 1U, rails, unmanaged fabric
Quantum HDR InfiniBand switch with 40 QSFP56 ports for uniform 200G-class connectivity in large-scale fabrics. This standard-depth 1U variant uses power-side-to-connector-side (P2C) airflow and includes dual hot-swap AC PSUs and a rail kit. Unmanaged model for streamlined, fabric-centric deployments.
Vendor:NVIDIA
NVIDIA MQM8790-HS2R | 40x HDR 200G QSFP56, Unmanaged, 2x AC PSU, C2P airflow, 1U
MQM8790-HS2R
High-density HDR 200Gb/s InfiniBand switch delivering ultra-low latency and non-blocking throughput for AI/HPC fabrics. 40 QSFP56 HDR ports in a compact 1U chassis with hot-swap PSUs and fans, with connector-side-to-power-side (C2P) airflow for racks where the ports face the cold aisle. Unmanaged model requires an external Subnet Manager (e.g., UFM) for fabric control.
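Because the unmanaged model relies on an external Subnet Manager, a minimal sketch of bringing one up with opensm on a fabric-attached host (assumes the opensm and infiniband-diags packages from the distribution or MLNX_OFED are installed; the systemd unit name can vary by distro):

```shell
# Start the subnet manager on a host whose HCA is cabled into the fabric
# (assumption: opensm ships as a systemd service on this distribution).
sudo systemctl enable --now opensm

# Sanity checks from infiniband-diags:
ibstat     # local port should report State: Active once the SM sweeps
sminfo     # shows the GUID and priority of the running Subnet Manager
```

UFM provides the same subnet management function with added telemetry and lifecycle tooling for larger fabrics.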
Vendor:NVIDIA
NVIDIA MQM9700-NS2F | 64x NDR 400G via 32x OSFP, Managed, 2x AC PSU, P2C airflow
MQM9700-NS2F
Quantum-2 NDR InfiniBand switch delivering extreme scale for AI and HPC clusters. 64 NDR 400Gb/s ports presented through 32 OSFP cages, supporting 400G optics and 2x200G breakouts. Managed model includes an embedded Subnet Manager with advanced telemetry, adaptive routing, and congestion control. Power-side-to-connector-side (P2C) airflow exhausts at the port side, suiting racks where the ports face the hot aisle.
Vendor:NVIDIA
NVIDIA MQM9700-NS2R | 64x NDR 400G via 32x OSFP, Managed, 2x AC PSU, C2P airflow
MQM9700-NS2R
Quantum-2 NDR InfiniBand switch engineered for next-generation AI/HPC fabrics. Provides 64 ports of 400Gb/s NDR through 32 OSFP cages with support for 2x200G breakouts. Managed system includes an embedded Subnet Manager, advanced telemetry, and congestion control. Connector-side-to-power-side (C2P) airflow draws cold-aisle air in at the ports and exhausts at the power side.
Vendor:NVIDIA
NVIDIA MQM9790-NS2F | 64x NDR 400G via 32x OSFP, Unmanaged, 2x AC PSU, P2C airflow
MQM9790-NS2F
Quantum-2 NDR InfiniBand switch delivering massive 400Gb/s port density for AI/HPC fabrics where an external controller manages the fabric. 64 NDR ports are exposed via 32 OSFP cages with support for 2x200G breakouts. Unmanaged variant requires an external Subnet Manager (e.g., UFM). Power-side-to-connector-side (P2C) airflow exhausts at the port side.
Vendor:NVIDIA
NVIDIA MQM9790-NS2R | 64x 400G NDR InfiniBand, 32x OSFP; unmanaged; dual AC PSUs; C2P airflow
MQM9790-NS2R
High-density Quantum-2 NDR InfiniBand switch for AI/HPC fabrics. Delivers ultra-low-latency, non-blocking 400G NDR per port via 32 OSFP cages presenting 64 NDR ports. Standard-depth chassis with connector-side-to-power-side (C2P) airflow, dual hot-swap AC power supplies, and rail kit for rapid rack integration. Unmanaged switching for straightforward fabric deployment alongside external fabric management/orchestration.
Vendor:NVIDIA
NVIDIA MSN2010-CB2FC | 18x 10/25G SFP28 | non-PoE | 4x 100G QSFP28 uplinks | license: N/A
MSN2010-CB2FC
Ultra-low-latency, fixed 1U switch for top-of-rack and compact aggregation, with 18x 10/25G SFP28 and 4x 100G QSFP28 uplinks. Based on Spectrum silicon for line-rate L2/L3, VXLAN/EVPN (with compatible NOS), and open networking via ONIE, supporting Cumulus Linux or MLNX-OS/Onyx. Ideal for modern leaf/ToR deployments and spine connectivity.
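To illustrate the open-networking angle, a minimal NVUE sketch for bringing up access and uplink ports under Cumulus Linux 5.x (the port numbering and address below are assumptions for this example, not taken from the listing):

```shell
# Hypothetical mapping: swp1-18 = SFP28 access, swp19-22 = QSFP28 uplinks.
nv set interface swp1 link speed 25G
nv set interface swp19 link speed 100G
nv set interface swp1 ip address 10.0.1.1/31   # example L3 leaf addressing
nv config apply -y                             # commit the staged changes
```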
Vendor:NVIDIA
NVIDIA MSN2010-CB2RC | 18x10/25G SFP28 access, 4x40/100G QSFP28 uplinks, non-PoE, NOS: Cumulus
MSN2010-CB2RC
Compact half-width 1U data center switch designed for top-of-rack aggregation with mixed 10/25G access and 40/100G uplinks. Line-rate L2/L3 switching, low latency, and flexible breakout support enable dense server connectivity and high-speed uplinks in leaf deployments. Connector-side-to-power-side (C2P) airflow draws air in at the port side.
Vendor:NVIDIA
NVIDIA MSN2100-CB2FC | 16x100G QSFP28, non-PoE, uplinks: —, NOS: Cumulus
MSN2100-CB2FC
Ultra-compact half-width 1U 100GbE switch ideal for leaf/spine or high-density aggregation. Provides 16x QSFP28 ports with support for 40/100G and flexible breakouts to 4x25G or 4x10G per port. Power-side-to-connector-side (P2C) airflow exhausts at the port side, suiting designs where the ports face the hot aisle.
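On classic (pre-NVUE) Cumulus Linux releases, per-port breakout is typically declared in /etc/cumulus/ports.conf; a hedged sketch of splitting port 1 into 4x25G (port number illustrative; NVUE-based releases configure this with `nv set interface ... link breakout` instead):

```shell
# Change port 1 from a single 100G port to a 4x25G breakout
# (assumes the stock "1=100G" line is present in ports.conf).
sudo sed -i 's/^1=100G/1=4x25G/' /etc/cumulus/ports.conf
sudo systemctl restart switchd.service   # ports re-enumerate as swp1s0..swp1s3
```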
Why Choose XS Network Tech?
We are a trusted partner for businesses of all sizes, including government departments, data centres, and service providers. Our expert team in sales, logistics, and customer service ensures that your needs are met with speed and precision, from pre-sales consultation to order fulfilment. With a commitment to fast delivery, sustainability, and a wide inventory, XS Network Tech is your go-to source for IT hardware. Trust us for quality, fast responses, and a hassle-free experience every time.