
Cisco

Cisco CAI-GPU-H100-NVL | H100 NVL | 94GB HBM3 ECC | 400W TDP | 2-slot FHFL | Passive | PCIe x16

SKU:CAI-GPU-H100-NVL

Price: POA (Price on Application). Taxes included; shipping calculated at checkout.

Description

Data center PCIe accelerator for large-scale AI inference and training, featuring 94GB HBM3 with ECC in a 2-slot FHFL, passively cooled form factor designed for OEM servers. NVL variant enables high-throughput interconnect for multi-GPU deployments. Ensure adequate chassis airflow and power headroom.

Features

- 94GB HBM3 with ECC for large models and datasets
- NVL-oriented design for high-throughput multi-GPU scaling
- Passive, server-optimized thermal design
- Hopper-class tensor acceleration for LLMs and generative AI
- Enterprise reliability features including ECC memory

Warranty

All products sold by XS Network Tech include a 12-month warranty on both new and used items. Our in-house technical team thoroughly tests used hardware prior to sale to ensure enterprise-grade reliability.

All technical data should be verified on the manufacturer data sheets.



FAQs

Q: Will this GPU work in my server?
A: It requires an FHFL PCIe x16 slot, sufficient power budget, and strong front-to-back airflow. Check your server’s GPU QVL and power/thermals before purchase.
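The power-budget part of that check can be sketched as a quick calculation. The PSU rating, base system draw, and headroom figure below are illustrative assumptions, not vendor data; only the 400W TDP comes from this card's specifications:

```python
# Rough power-budget sanity check before adding GPUs to a server.
# All figures in the examples are illustrative, not vendor data.
def fits_power_budget(psu_watts, base_system_watts, gpu_tdp_watts, gpu_count, headroom=0.20):
    """Return True if the GPUs fit within PSU capacity, keeping the given headroom fraction."""
    total_draw = base_system_watts + gpu_tdp_watts * gpu_count
    return total_draw <= psu_watts * (1 - headroom)

# Example: assumed 2400W PSU and ~600W base system, with 400W-TDP cards
print(fits_power_budget(2400, 600, 400, 2))  # 1400W vs 1920W usable -> True
print(fits_power_budget(2400, 600, 400, 4))  # 2200W vs 1920W usable -> False
```

This only covers steady-state power; transient spikes, airflow, and the server vendor's GPU qualification list still need to be checked separately.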

Q: Is an NVLink/NVL bridge included?
A: NVLink/NVL bridging hardware is typically sold separately or integrated by the OEM. It is not included unless explicitly stated in the bundle contents.

Q: Does this card have video outputs?
A: No. Data center accelerators generally provide no external display outputs; they are intended for compute workloads.

Q: Which drivers are supported?
A: Use the vendor’s data center GPU drivers for supported Linux/Windows distributions as documented for the H100 NVL.

Q: Can I mix it with other GPU models?
A: Mixed-GPU configurations may be limited by framework and driver support. For best stability and performance, deploy matched GPUs.

Technical Specifications

- GPU architecture: Hopper (H100 NVL)
- Memory: 94GB HBM3 with ECC
- Thermal design power: 400W
- Form factor: 2-slot, Full-Height Full-Length (FHFL)
- Cooling: Passive (requires server airflow)
- Interface: PCIe x16
- Target workloads: AI inference/training, HPC
- Multi-GPU: NVL/NVLink-oriented design


  • Request for a Quote

    Looking for competitive pricing? Submit a request, and our team will provide a tailored quote that fits your needs.

  • Contact Us Directly

    Have a question or need immediate assistance? Call us for expert advice and real-time support.

    Call us Now  