Vendor:NVIDIA
NVIDIA 920-9B110-00RH-0M0 | 40x QSFP56 200Gb HDR IB | 2x PSU | C2P airflow | Rail kit | Managed (x86 CPU)
920-9B110-00RH-0M0
High‑performance 1U HDR InfiniBand switch with embedded x86 CPU for on‑box fabric management. Ideal for AI/HPC clusters requiring non‑blocking 200Gb/s per port, ultra‑low latency, adaptive routing, and SHARP‑class in‑network acceleration. This configuration provides C2P airflow, dual hot‑swap PSUs, and includes rack rails.
Vendor:NVIDIA
NVIDIA 920-9B110-00RH-0D0 | 40x QSFP56 200Gb HDR IB | 2x PSU | C2P airflow | Rail kit | Unmanaged
920-9B110-00RH-0D0
Ultra‑low‑latency 1U HDR InfiniBand switch for dense AI/HPC fabrics. The QM8790 unmanaged variant delivers non‑blocking 200Gb/s per port and advanced IB congestion control while relying on an external Subnet Manager (UFM/OpenSM). This configuration uses C2P airflow, includes 2x hot‑swap PSUs, and ships with rails.
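Because the QM8790 has no onboard management CPU, a Subnet Manager must run elsewhere on the fabric before ports move past the Initializing state. The sketch below, which assumes a Linux host with an InfiniBand HCA plus the standard opensm and ibstat utilities installed, shows one minimal way to check link state and launch OpenSM from Python; in production OpenSM would normally run as a system service or be replaced by UFM.

import subprocess

def port_is_linked() -> bool:
    # ibstat (infiniband-diags) reports "Physical state: LinkUp" once the
    # cable and switch port are up, even before any Subnet Manager runs.
    out = subprocess.run(["ibstat"], capture_output=True, text=True, check=True)
    return "Physical state: LinkUp" in out.stdout

def start_opensm(log_path: str = "/var/log/opensm.log") -> subprocess.Popen:
    # Launch OpenSM in the foreground; with no arguments it binds to the
    # first available IB port. Illustration only: a real deployment would
    # use the opensm service unit or UFM instead.
    logfile = open(log_path, "a")
    return subprocess.Popen(["opensm"], stdout=logfile, stderr=subprocess.STDOUT)

if __name__ == "__main__":
    if port_is_linked():
        sm = start_opensm()
        print(f"OpenSM started, PID {sm.pid}")
    else:
        print("No IB port with an active physical link was found.")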
Vendor:NVIDIA
NVIDIA 920-9B110-00FH-0MD | 40x QSFP56 200Gb HDR IB | 2x PSU | P2C airflow | Rail kit | Managed (x86 CPU)
920-9B110-00FH-0MD
Enterprise‑class 1U 200Gb/s HDR InfiniBand switch for AI/HPC fabrics with embedded x86 CPU for on‑switch management. Provides non‑blocking throughput, ultra‑low latency, adaptive routing, and SHARP‑class in‑network acceleration to reduce job completion times. This model features P2C airflow, dual hot‑swap PSUs, and an included rail kit.
Vendor:NVIDIA
NVIDIA 920-9B110-00FH-0D0 | 40x QSFP56 200Gb HDR IB | 2x PSU | P2C airflow | Rail kit | Unmanaged
920-9B110-00FH-0D0
High‑density 1U 200Gb/s HDR InfiniBand switch for AI/HPC leaf or spine deployments. Delivers line‑rate throughput with ultra‑low latency and advanced congestion control to maximize application performance. This configuration uses port‑side exhaust (P2C) airflow, includes dual hot‑swap PSUs and rack rails, and is the unmanaged variant requiring an external Subnet Manager (e.g., UFM or OpenSM).
Vendor:NVIDIA
NVIDIA 920-9B020-00RA-0D0 | UFM 3.0 Appliance | 1U | 2x ConnectX‑7 (NDR/400GbE capable)
920-9B020-00RA-0D0
Compact 1U appliance for UFM Telemetry or UFM Enterprise deployments. Equipped with dual ConnectX‑7 adapters capable of NDR InfiniBand or 400GbE connectivity to the fabric, it streamlines monitoring, analytics, and automation for high‑performance clusters.
Vendor:NVIDIA
NVIDIA 920-9B020-00FH-0D0 | UFM 4.0 Appliance | 2U | 2x HDR 200Gb/s IB (ConnectX‑6)
920-9B020-00FH-0D0
Turnkey fabric-management and cyber‑analytics platform for InfiniBand environments. Ships as a 2U appliance with dual HDR 200Gb/s adapters, enabling rapid rollout of UFM Enterprise and Cyber‑AI for real‑time telemetry, anomaly detection, and proactive remediation across large fabrics.
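For teams automating against the appliance, UFM Enterprise exposes a REST API; the short Python sketch below polls it for the systems currently discovered on the fabric. The host name and credentials are placeholders, and the /ufmRest/resources/systems path reflects common UFM REST conventions but should be verified against the API reference for the installed UFM release.

import requests

UFM_HOST = "ufm.example.local"   # placeholder appliance address
AUTH = ("admin", "change-me")    # placeholder credentials

def list_fabric_systems():
    # Assumed endpoint: GET /ufmRest/resources/systems returns the switches
    # and hosts UFM currently manages; confirm the path for your UFM version.
    url = f"https://{UFM_HOST}/ufmRest/resources/systems"
    # verify=False only because appliances often ship with self-signed certs.
    resp = requests.get(url, auth=AUTH, verify=False, timeout=10)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    for system in list_fabric_systems():
        print(system)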
Vendor:NVIDIA
NVIDIA 920-9B020-00FA-0D2 | 8x IB + 8x Ethernet ports | 2U IB↔Ethernet gateway | line‑rate bridging
920-9B020-00FA-0D2
Purpose-built gateway appliance to bridge InfiniBand fabrics with Ethernet networks for storage, services, and external connectivity. Delivers line‑rate translation with consistent latency and simple operations, enabling mixed‑fabric data centers to scale without re‑architecting core networks.
Vendor:NVIDIA
NVIDIA 900-9X81E-00EX-ST0 | HHHL PCIe SuperNIC | 800Gb/s XDR IB (default) or 2x400GbE | single-cage
900-9X81E-00EX-ST0
High-performance SuperNIC for AI/HPC fabrics with flexible InfiniBand/Ethernet operation. Deploy as 800Gb/s XDR InfiniBand (default mode) or reconfigure to dual 400GbE for converged data center networking. Compact HHHL form factor fits dense GPU and compute nodes while delivering ultra‑low latency and RDMA acceleration for scale-out workloads.
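The IB/Ethernet mode change referenced above is a firmware setting rather than a cabling change; below is a minimal sketch of driving it with NVIDIA's mlxconfig tool (part of the MFT package) from Python. The PCI address is a placeholder, and the exact parameters exposed (LINK_TYPE_P1, plus LINK_TYPE_P2 where a second port is present) should be confirmed with a query against the installed firmware.

import subprocess

DEVICE = "0000:3b:00.0"                # placeholder PCI address of the adapter
LINK_TYPE = {"ib": "1", "eth": "2"}    # mlxconfig encoding: 1 = InfiniBand, 2 = Ethernet

def query_config() -> str:
    # Show the firmware configuration parameters the device actually exposes.
    out = subprocess.run(["mlxconfig", "-d", DEVICE, "query"],
                         capture_output=True, text=True, check=True)
    return out.stdout

def set_port_protocol(mode: str) -> None:
    # Persist the requested protocol; it takes effect after a firmware reset
    # or host reboot. "-y" answers the confirmation prompt non-interactively.
    subprocess.run(["mlxconfig", "-y", "-d", DEVICE,
                    "set", f"LINK_TYPE_P1={LINK_TYPE[mode]}"], check=True)

if __name__ == "__main__":
    print(query_config())
    set_port_protocol("eth")   # e.g. move from the XDR IB default to Ethernet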
Vendor:NVIDIA
NVIDIA 900-9X7AX-004NMC0 | ConnectX-7 | 400GbE QSFP112 | PCIe Gen5 x16 | RoCEv2 | SR-IOV | GPUDirect | PTP 1588v2
900-9X7AX-004NMC0
Next-generation Ethernet adapter offering up to 400GbE over QSFP112 for dense compute, storage, and AI fabrics. With PCIe Gen5 x16 and extensive offloads (NVMe-oF/TCP, OVS, TLS/IPsec, RoCEv2), it delivers extreme throughput while reducing CPU overhead. Supports IEEE 1588v2 PTP, advanced telemetry, and breakout modes such as 4x100G where supported. Compatible with QSFP family optics and DAC/AOC matched to the selected speed.
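Where the IEEE 1588v2 capability is used, clock sync is typically handled by the linuxptp tools with the NIC's hardware clock (PHC) as the intermediary. The sketch below simply launches ptp4l as a client and phc2sys to steer the system clock; the interface name is a placeholder and the options should be adapted to the site's PTP profile.

import subprocess

IFACE = "enp59s0f0np0"   # placeholder: the ConnectX-7 network interface name

def start_ptp_client() -> list:
    # ptp4l syncs the adapter's hardware clock (PHC) to the grandmaster
    # ("-s" = client-only, "-m" = log to stdout); phc2sys then steers the
    # system clock from that PHC ("-w" waits for ptp4l to lock first).
    ptp4l = subprocess.Popen(["ptp4l", "-i", IFACE, "-s", "-m"])
    phc2sys = subprocess.Popen(["phc2sys", "-s", IFACE, "-w", "-m"])
    return [ptp4l, phc2sys]

if __name__ == "__main__":
    for proc in start_ptp_client():
        proc.wait()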
Vendor:NVIDIA
NVIDIA 900-9X7AX-003NMC0 | ConnectX-7 | 400GbE QSFP112 | PCIe Gen5 x16 | RoCEv2 | SR-IOV | GPUDirect | PTP 1588v2
900-9X7AX-003NMC0
Ultra-high-performance Ethernet adapter delivering up to 400GbE over QSFP112 for AI clusters, storage, and cloud-scale networking. PCIe Gen5 x16 host interface and extensive hardware offloads (NVMe-oF/TCP, OVS, TLS/IPsec, RoCEv2) maximize throughput while minimizing CPU usage. Supports IEEE 1588v2 PTP timing, advanced congestion control, and breakout options such as 4x100G where supported. Interoperable with QSFP family optics/cables rated for the required speed.
Vendor:NVIDIA
NVIDIA 900-9X7AO-00C3-STZ | ConnectX-7 | 50GbE SFP56 | RoCEv2 | SR-IOV | OVS Offload | PTP 1588v2 | NVMe-oF/TCP
900-9X7AO-00C3-STZ
High-performance Ethernet adapter enabling up to 50GbE connectivity over SFP56 for scale-out virtualized and storage-rich deployments. Delivers RDMA (RoCEv2), OVS, and NVMe-oF/TCP offloads to minimize CPU load and latency. Provides IEEE 1588v2 PTP timing and advanced telemetry for precise control and visibility. Works with SFP56 DAC/AOC and optical modules; can negotiate 25GbE where supported.
Vendor:NVIDIA
NVIDIA 900-9X7AO-0003-ST0 | ConnectX-7 | 50GbE SFP56 | RoCEv2 | SR-IOV | PTP 1588v2 | NVMe-oF/TCP | GPUDirect
900-9X7AO-0003-ST0
Enterprise-class Ethernet adapter delivering up to 50GbE over SFP56 for low-latency fabrics, storage, and virtualization. Hardware offloads accelerate NVMe-oF/TCP, OVS, TLS/IPsec, and RDMA (RoCEv2) to free host CPU cycles. Supports IEEE 1588v2 PTP for precise timing and rich telemetry/congestion control for scale-out data centers. Compatible with SFP56 DAC/AOC and optical modules; interoperates at 25GbE where supported.
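As a concrete example of the NVMe-oF/TCP use case, the sketch below attaches a remote NVMe/TCP namespace through the adapter with nvme-cli, driven from Python. The target address, port, and subsystem NQN are placeholders for an existing NVMe/TCP target on the storage network.

import subprocess

TARGET_ADDR = "192.0.2.10"     # placeholder target IP on the storage network
TARGET_PORT = "4420"           # conventional NVMe/TCP I/O port
TARGET_NQN = "nqn.2014-08.com.example:nvme:pool1"   # placeholder subsystem NQN

def discover_targets() -> str:
    # Ask the target's discovery controller which subsystems it advertises.
    out = subprocess.run(
        ["nvme", "discover", "-t", "tcp", "-a", TARGET_ADDR, "-s", TARGET_PORT],
        capture_output=True, text=True, check=True)
    return out.stdout

def connect_target() -> None:
    # Connect to the subsystem; its namespaces then appear as /dev/nvmeXnY.
    subprocess.run(
        ["nvme", "connect", "-t", "tcp", "-a", TARGET_ADDR,
         "-s", TARGET_PORT, "-n", TARGET_NQN], check=True)

if __name__ == "__main__":
    print(discover_targets())
    connect_target()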
Vendor:NVIDIA
NVIDIA 900-9X7AH-0079-DTZ | 200GbE 1x QSFP56, PCIe 5.0 x16, HHHL
Data‑center‑class 200GbE PCIe 5.0 adapter built for intensive east‑west traffic, storage acceleration, and GPU‑centric workloads. Delivers RDMA, inline crypto, and overlay offloads to maximize application performance and efficiency.
Vendor:NVIDIA
NVIDIA 900-9X7AH-0078-DTZ | 200GbE 1x QSFP56, PCIe 5.0 x16, HHHL
Enterprise‑grade 200GbE adapter delivering exceptional throughput and latency performance for modern AI/ML, virtualization, and NVMe‑oF storage deployments. Rich hardware offloads reduce CPU overhead and optimize east‑west traffic at scale.
Vendor:NVIDIA
NVIDIA 900-9X7AH-0039-STZ | 400GbE 1x QSFP112, PCIe 5.0 x16, HHHL (Full-Height Bracket)
900-9X7AH-0039-STZ
Next‑generation 400GbE server adapter engineered for maximum throughput and ultra‑low latency in AI clusters, storage fabrics, and cloud-scale data centers. Delivers advanced RDMA, inline crypto, and rich overlay offloads over a single QSFP112 port.
Vendor:NVIDIA
NVIDIA 900-9X766-003N-ST0 | 200GbE 1x QSFP56, PCIe 5.0 x16, HHHL
High-performance 200GbE server adapter delivering ultra‑low latency, advanced RDMA (RoCE v2), and rich offloads for AI/HPC, storage, and cloud networking. Ideal for leaf/spine and GPU-accelerated platforms with PCIe 5.0 bandwidth headroom and backward compatibility.
Why Choose XS Network Tech?
We are a trusted partner for businesses of all sizes, including government departments, data centres, and service providers. Our expert team in sales, logistics, and customer service ensures that your needs are met with speed and precision, from pre-sales consultation to order fulfilment. With a commitment to fast delivery, sustainability, and a wide inventory, XS Network Tech is your go-to source for IT hardware. Trust us for quality, fast responses, and a hassle-free experience every time.