Cisco

Cisco UCSX-GPU-A100-80 | Tensor Core GPU, 80GB HBM2e, PCIe Gen4 x16, passive 300W, incl. power cable

SKU:UCSX-GPU-A100-80

Price: POA (price on application). Taxes included; shipping calculated at checkout.

Description

NVIDIA A100 Tensor Core GPU module for Cisco UCS servers, with 80GB of HBM2e memory for AI training, inference, data analytics, and HPC acceleration. Passive-cooled PCIe Gen4 x16 card, supplied with an auxiliary power cable, supporting CUDA and mixed-precision compute with enterprise vGPU and Multi-Instance GPU (MIG) options for partitioning and multi-tenant workflows.

Features

- 80GB HBM2e with ECC for large AI and HPC workloads
- PCIe Gen4 x16 interface for high-throughput connectivity
- Passive cooling optimized for data center server airflow
- Tensor Core acceleration for mixed-precision compute
- MIG support for predictable multi-tenant performance
- Broad software ecosystem compatibility (CUDA, cuDNN, TensorRT)

Warranty

All products sold by XS Network Tech include a 12-month warranty on both new and used items. Our in-house technical team thoroughly tests used hardware prior to sale to ensure enterprise-grade reliability.

All technical data should be verified on the manufacturer data sheets.


FAQs

Q: Does this GPU require external power?
A: Yes. It is a 300W passive card and includes an auxiliary power cable for server integration.
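When planning server integration, the 300W TGP has to fit within the chassis power budget alongside CPUs and other components. The sketch below illustrates that sizing arithmetic; all wattage and headroom figures are example assumptions, not Cisco or NVIDIA specifications — always validate against the server's hardware compatibility documentation.

```python
# Illustrative power-budget check (all numbers are example assumptions,
# not vendor specifications): how many 300 W passive GPUs fit within a
# chassis power budget after host load and PSU derating headroom.
def gpus_supported(psu_watts, host_watts, gpu_tgp=300, headroom=0.9):
    """Return how many GPUs fit in the remaining power budget."""
    usable = psu_watts * headroom - host_watts
    return max(int(usable // gpu_tgp), 0)

# Example: a hypothetical 2400 W supply with 600 W of host load.
print(gpus_supported(psu_watts=2400, host_watts=600))  # 5
```

The same check applies to airflow: a passive card relies entirely on chassis fans, so thermal budget should be validated the same way power is.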

Q: Is this suitable for both training and inference?
A: Yes. Ampere Tensor Cores accelerate training and inference across mixed precisions.

Q: Does it support GPU partitioning?
A: Yes. Multi-Instance GPU (MIG) allows secure partitioning of the GPU into multiple isolated instances.
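The instance count depends on which MIG profile is chosen, since each profile consumes a fixed share of the card's seven compute slices and 80GB of memory. The sketch below models that sizing; the profile names mirror NVIDIA's published A100-80GB MIG profiles, but verify the exact profiles and limits for your driver version with `nvidia-smi mig -lgip` and the NVIDIA MIG documentation.

```python
# Hedged sketch of MIG instance sizing on an 80 GB card. Profile names
# and resource shares follow NVIDIA's published A100-80GB MIG profiles
# (assumed here; confirm with nvidia-smi on the target system).
MIG_PROFILES = {          # profile -> (compute slices, memory GB)
    "1g.10gb": (1, 10),
    "2g.20gb": (2, 20),
    "3g.40gb": (3, 40),
    "7g.80gb": (7, 80),
}

def max_instances(profile, total_slices=7, total_mem_gb=80):
    """How many isolated instances of one profile fit on the card."""
    slices, mem_gb = MIG_PROFILES[profile]
    return min(total_slices // slices, total_mem_gb // mem_gb)

print(max_instances("1g.10gb"))  # 7 isolated 10 GB instances
```

Each instance gets its own memory, cache, and compute slices, which is what makes the multi-tenant isolation predictable.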

Q: What servers is it intended for?
A: Designed for compatible UCS servers that provide sufficient PCIe Gen4 slots, airflow, and power.

Q: What software frameworks are supported?
A: CUDA-based frameworks and libraries including cuDNN and TensorRT are supported.

Technical Specifications

- GPU architecture: NVIDIA Ampere Tensor Cores
- Memory: 80GB HBM2e with ECC
- Interface: PCIe Gen4 x16
- Cooling: Passive (server airflow required)
- Power: Up to 300W TGP, auxiliary power via included cable
- Compute: Mixed precision acceleration (FP64/FP32/TF32/FP16/INT8/INT4)
- Partitioning: Multi-Instance GPU (MIG) support
- Software: CUDA, cuDNN, TensorRT, NVIDIA AI/SDK ecosystem
- Use cases: AI training/inference, data science, HPC, analytics
- Form factor: Full-height, data center GPU module

  • Request for a Quote

    Looking for competitive pricing? Submit a request, and our team will provide a tailored quote that fits your needs.

  • Contact Us Directly

    Have a question or need immediate assistance? Call us for expert advice and real-time support.

    Call us Now  