
Cisco

Cisco HX-GPU-P100-16G | Tesla P100 16GB HBM2, PCIe 3.0 x16, Passive Cooling, 250W TDP

SKU: HX-GPU-P100-16G

Price: POA (price on application)
Taxes included. Shipping calculated at checkout.

Description

A datacenter-grade accelerator with 16GB of HBM2 memory (up from 12GB on the base P100 PCIe variant) for larger datasets and deeper models. Built for HPC and AI workloads where memory bandwidth, ECC, and sustained compute are critical. The passive, fanless design relies on server chassis airflow, enabling dense rack deployments with no card-level fan noise. PCIe 3.0 x16 connectivity ensures broad compatibility across enterprise servers.

Features

- 16GB HBM2 with ECC for larger models and datasets
- Reliable passive-cooling design for rack servers
- High sustained compute for HPC and AI
- PCIe 3.0 x16 for broad server compatibility
- Optimized for CUDA-accelerated frameworks
- Enterprise stability for 24/7 operation

Warranty

All products sold by XS Network Tech include a 12-month warranty on both new and used items. Our in-house technical team thoroughly tests used hardware prior to sale to ensure enterprise-grade reliability.

All technical data should be verified against the manufacturer's data sheets.



FAQs

Q: Is this suitable for AI training?
A: Yes. It offers high memory bandwidth and ECC-protected HBM2 ideal for training and HPC.

Q: What power and cooling are required?
A: Up to 250W TDP and a server with adequate front-to-back airflow for passive cooling.

Q: Will it work in a desktop PC?
A: It is intended for datacenter servers; typical desktop cases may not provide the required airflow.

Q: Does it have any monitor outputs?
A: No. This is a compute accelerator with no external display outputs.

Technical Specifications

- GPU model: Tesla P100 (Pascal architecture)
- Memory: 16GB HBM2 with ECC support
- Interface: PCIe 3.0 x16
- Cooling: Passive, server airflow required
- Power: Up to 250W TDP
- Form factor: Datacenter PCIe add-in card
- Display outputs: None (compute-focused)
- Use cases: HPC, AI training, scientific computing
- Software ecosystem: CUDA, cuDNN, NCCL (platform-dependent)
- NVLink: Not supported on PCIe variant
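As a rough guide to what fits in the 16GB of HBM2, the sketch below estimates FP32 training memory. This is a simplified back-of-envelope assumption (4 bytes for weights, 4 for gradients, and 8 for Adam optimizer state per parameter, ignoring activations and framework overhead), not a vendor sizing formula:

```python
def training_memory_gb(num_params: float, bytes_per_param: int = 16) -> float:
    """Rough FP32 training footprint: weights (4 B) + gradients (4 B)
    + Adam optimizer state (8 B) per parameter; activations excluded."""
    return num_params * bytes_per_param / 1e9

# A hypothetical 500M-parameter model:
print(f"{training_memory_gb(500e6):.1f} GB")  # 8.0 GB before activations
```

Under these assumptions, roughly one billion FP32 parameters would saturate the 16GB card before activations are counted, which is why mixed precision or gradient checkpointing is common on this class of hardware.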


  • Request for a Quote

    Looking for competitive pricing? Submit a request, and our team will provide a tailored quote that fits your needs.

  • Contact Us Directly

    Have a question or need immediate assistance? Call us for expert advice and real-time support.

    Call us Now  