
Cisco

Cisco HCI-GPU-H100-80 | H100 PCIe GPU | 80GB HBM3 ECC | PCIe 5.0 x16 | Passive, FHFL dual-slot | 350W TDP | MIG-enabled AI/HPC

SKU: HCI-GPU-H100-80

Price: POA (price on application)
Taxes included. Shipping calculated at checkout.

Description

Next-generation data center accelerator for AI training, inference at scale, and HPC. 80GB HBM3 ECC, PCIe 5.0 x16, passive FHFL dual-slot, 350W TDP, with MIG for partitioning and QoS.

Features

- 80GB HBM3 ECC for extreme bandwidth and large models
- PCIe 5.0 x16 for next-gen host connectivity
- Passive FHFL dual-slot design for dense server deployments
- MIG-enabled partitioning for multi-tenant and QoS
- Optimized for state-of-the-art AI training and inference
- Enterprise-ready for HPC and large analytics pipelines

Warranty

All products sold by XS Network Tech include a 12-month warranty on both new and used items. Our in-house technical team thoroughly tests used hardware prior to sale to ensure enterprise-grade reliability.

All technical data should be verified on the manufacturer data sheets.


FAQs

Q: Is this the PCIe version of H100?
A: Yes. It is the PCIe variant with 80GB HBM3 and a 350W TDP.

Q: Does it support MIG?
A: Yes. Multi-Instance GPU (MIG) is supported to partition the GPU into isolated instances.
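For reference, MIG partitioning is normally administered with NVIDIA's `nvidia-smi` tool once the card is installed and the driver is loaded. A minimal sketch is below; the profile name is illustrative only, so use a profile actually reported by your driver, and note that enabling MIG requires admin rights and a GPU reset.

```shell
# Enable MIG mode on GPU 0 (takes effect after a GPU reset)
sudo nvidia-smi -i 0 -mig 1

# List the GPU instance profiles this card and driver support
sudo nvidia-smi mig -lgip

# Create a GPU instance and its compute instance from a profile
# (profile name below is illustrative; pick one reported by -lgip)
sudo nvidia-smi mig -cgi 1g.10gb -C

# Confirm the resulting MIG devices are visible to applications
nvidia-smi -L
```

Each MIG instance then appears as a separate device to CUDA applications and container runtimes, which is the basis for the multi-tenant isolation and QoS mentioned above.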

Q: What platform requirements apply?
A: A server with an available PCIe 5.0 x16 slot (the card is backward compatible with earlier PCIe generations), sufficient power delivery for the 350W TDP, and strong chassis airflow for the passive heatsink.

Q: Is it passively cooled?
A: Yes. It relies on server chassis airflow; ensure proper thermal design of the host system.

Q: What workloads benefit most?
A: Large-scale AI training, high-throughput inference, HPC simulations, and data analytics.

Technical Specifications

- GPU model: NVIDIA H100 PCIe (passive)
- Memory: 80GB HBM3 with ECC
- Bus interface: PCIe 5.0 x16
- Thermal design power (TDP): 350W
- Cooling: Passive heatsink; requires robust server airflow
- Form factor: Full-height, full-length (FHFL), dual-slot
- MIG: Multi-Instance GPU supported for workload isolation
- Target workloads: AI training/inference, HPC, large-scale analytics


  • Request for a Quote

    Looking for competitive pricing? Submit a request, and our team will provide a tailored quote that fits your needs.

  • Contact Us Directly

    Have a question or need immediate assistance? Call us for expert advice and real-time support.

    Call us Now  