Cisco

Cisco UCSC-P-M5D100GF | 2x 100GbE QSFP28 | PCIe 3.0 x16 | ConnectX-5 EN | RoCEv2, SR-IOV, GPUDirect RDMA

SKU: UCSC-P-M5D100GF

Regular price: POA

Description

Dual-port 100GbE QSFP28 Ethernet adapter built on the Mellanox ConnectX-5 EN for high throughput and low latency. Ideal for AI/ML clusters, storage fabrics, and cloud environments, combining advanced offloads, GPUDirect RDMA, and scalable SR-IOV virtualization.

Features

- Dual 100GbE for high-throughput fabrics and AI/ML clusters
- Low-latency RDMA with RoCEv2 and congestion control
- GPU acceleration workflows with GPUDirect RDMA
- Comprehensive virtualization and overlay offloads
- Flexible speed options with QSFP28 ecosystem

Warranty

All products sold by XS Network Tech include a 12-month warranty on both new and used items. Our in-house technical team thoroughly tests used hardware prior to sale to ensure enterprise-grade reliability.

All technical data should be verified against the manufacturer's data sheets.



FAQs

Q: Is 40GbE supported on QSFP28 ports?
A: QSFP28 cages can interoperate with QSFP+ modules; 40GbE support depends on firmware and transceiver compatibility.

Q: What PCIe slot is required?
A: A PCIe 3.0 x16 slot is required to provide maximum host bandwidth for dual 100GbE operation.
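
On Linux hosts you can confirm the slot is negotiating the expected Gen3 x16 link through sysfs. A minimal sketch follows; the interface name "ens1f0" is a hypothetical placeholder, so substitute the name your system assigns to the adapter.

```python
# Read the negotiated PCIe link speed and width for the NIC's PCI device
# via the standard Linux sysfs tree. "ens1f0" is a placeholder name.
from pathlib import Path

IFACE = "ens1f0"  # hypothetical interface name; adjust for your host

def pcie_link(iface: str) -> dict:
    dev = Path(f"/sys/class/net/{iface}/device")
    return {
        "speed": (dev / "current_link_speed").read_text().strip(),
        "width": (dev / "current_link_width").read_text().strip(),
    }

if __name__ == "__main__":
    link = pcie_link(IFACE)
    # A Gen3 x16 slot typically reports "8.0 GT/s" (wording varies by kernel) and width "16".
    print(f"PCIe link: {link['speed']} x{link['width']}")
```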

Q: Does it support GPUDirect RDMA?
A: Yes, ConnectX-5 supports GPUDirect RDMA for low-latency GPU networking.
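
On Linux, GPUDirect RDMA between NVIDIA GPUs and ConnectX adapters also depends on the NVIDIA peer-memory kernel module being loaded. The sketch below simply checks /proc/modules for it; treat it as an illustrative verification step, not a full GPUDirect setup guide.

```python
# Check whether the NVIDIA peer-memory module (nvidia_peermem, or the
# legacy nv_peer_mem) is loaded, which GPUDirect RDMA typically requires.
from pathlib import Path

def gpudirect_module_loaded() -> bool:
    modules = Path("/proc/modules").read_text()
    return any(name in modules for name in ("nvidia_peermem", "nv_peer_mem"))

if __name__ == "__main__":
    print("GPUDirect peer-memory module loaded:", gpudirect_module_loaded())
```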

Q: Which cables and optics can I use?
A: Use compatible QSFP28 DACs/AOCs or optical transceivers rated for 100/50/40/25GbE as required by your deployment.

Q: Is SR-IOV available for virtualization?
A: Yes, SR-IOV with multiple virtual functions is supported for high-density virtualization.
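
Virtual functions are typically enabled through the standard Linux sysfs interface, as in the sketch below. It assumes root privileges, SR-IOV enabled in BIOS/UEFI and the NIC firmware, and again uses the hypothetical interface name "ens1f0".

```python
# Enable a number of SR-IOV virtual functions via the generic Linux
# sysfs interface (sriov_numvfs / sriov_totalvfs). Requires root.
from pathlib import Path

IFACE = "ens1f0"   # hypothetical interface name; adjust for your host
NUM_VFS = 4        # must not exceed sriov_totalvfs reported by the device

def enable_vfs(iface: str, num_vfs: int) -> None:
    dev = Path(f"/sys/class/net/{iface}/device")
    total = int((dev / "sriov_totalvfs").read_text())
    if num_vfs > total:
        raise ValueError(f"device supports at most {total} VFs")
    # Reset to 0 first; the VF count cannot be changed while VFs are active.
    (dev / "sriov_numvfs").write_text("0")
    (dev / "sriov_numvfs").write_text(str(num_vfs))

if __name__ == "__main__":
    enable_vfs(IFACE, NUM_VFS)
    print(f"Enabled {NUM_VFS} virtual functions on {IFACE}")
```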

Technical Specifications

- Ports: 2x QSFP28 (100GbE; supports 50/25GbE and 40GbE where enabled by firmware/transceivers)
- ASIC: Mellanox ConnectX-5 EN
- Host interface: PCI Express 3.0 x16
- RDMA: RoCE v1/v2; GPUDirect RDMA for GPU-to-NIC transfers
- Virtualization: SR-IOV, OVS offloads, multi-queue RSS
- Overlay offloads: VXLAN, NVGRE, Geneve
- Storage: NVMe-oF over RDMA and iSER, where supported by the host OS and driver stack
- Telemetry: Hardware counters, congestion control (DCQCN)
- Cabling: QSFP28 optics, AOCs, and DACs supported per speed
- OS support: major Linux distributions, Windows Server, VMware ESXi


  • Request for a Quote

    Looking for competitive pricing? Submit a request, and our team will provide a tailored quote that fits your needs.

  • Contact Us Directly

    Have a question or need immediate assistance? Call us for expert advice and real-time support.

    Call us Now  