Cisco

Cisco UCSC-GPUV100SXM32 | GPU accelerator | 32GB HBM2, SXM2 300W | NVLink 2.0 | Tensor/FP16/FP32 compute

SKU:UCSC-GPUV100SXM32

Regular price: POA
Sold out
Taxes included. Shipping calculated at checkout.

Description

High-density SXM2 GPU accelerator with 32GB ECC HBM2 for AI/ML, HPC, and data analytics. NVLink 2.0 interconnect enables low-latency, high-bandwidth multi-GPU scaling in compatible servers. SXM2 form factor integrates with server baseboards for optimal power and cooling; not a standalone PCIe card. Supports modern accelerated computing frameworks and mixed-precision workloads.

Features

- 32GB ECC HBM2 for extreme memory bandwidth
- SXM2 form factor for dense, power-efficient deployments
- NVLink 2.0 for high-bandwidth, low-latency multi-GPU scaling
- Optimized for AI/ML training, inference, and HPC
- Mixed-precision acceleration with Tensor Cores

Warranty

All products sold by XS Network Tech include a 12-month warranty on both new and used items. Our in-house technical team thoroughly tests used hardware prior to sale to ensure enterprise-grade reliability.

All technical data should be verified on the manufacturer data sheets.



FAQs

Q: Is this a standard PCIe add-in card?
A: No. It is an SXM2 module that mounts to a compatible server GPU baseboard and uses system-provided power and cooling.

Q: Can I mix this module with other GPUs?
A: Yes, when the host system supports NVLink/SXM2 configurations. Mixing across different GPU models may be limited by the server design and software stack.

Q: What software stacks are supported?
A: Typical stacks include CUDA-based frameworks and common AI/HPC libraries. Ensure driver and framework versions align with the server platform.

Technical Specifications

- Memory: 32GB HBM2 with ECC
- Form factor: SXM2 module (server-integrated, not PCIe AIC)
- TDP/Power: 300W
- Interconnect: NVLink 2.0 for multi-GPU scaling
- Cooling: Passive module; uses host server thermal solution
- Target workloads: AI/ML training & inference, HPC, data analytics
- Precision support: FP16 mixed precision via Tensor Cores, FP32, FP64
- Deployment: Requires compatible SXM2 baseboard/backplane in supported servers
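As a rough sizing aid against the 32GB HBM2 capacity listed above, a minimal sketch of a fit check (all sizes and the overhead figure are hypothetical; verify actual usage with your framework's own memory reporting):

```python
def fits_in_hbm2(params_millions, bytes_per_param=2, overhead_gb=4.0, hbm_gb=32.0):
    """Rough check whether a model's weights fit in 32GB HBM2.

    Assumes FP16 storage (2 bytes per parameter) and a fixed,
    hypothetical activation/workspace overhead in gigabytes.
    """
    weights_gb = params_millions * 1e6 * bytes_per_param / 1e9
    return weights_gb + overhead_gb <= hbm_gb

# e.g. a 7,000M-parameter model in FP16: 14GB weights + 4GB overhead = 18GB
print(fits_in_hbm2(7000))   # True: fits within 32GB
print(fits_in_hbm2(20000))  # False: 40GB of weights alone exceeds 32GB
```

Real memory consumption also depends on optimizer state, batch size, and framework allocator behavior, so treat this as a first-pass estimate only.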

  • Request a Quote

    Looking for competitive pricing? Submit a request, and our team will provide a tailored quote that fits your needs.

  • Contact Us Directly

    Have a question or need immediate assistance? Call us for expert advice and real-time support.

    Call us Now  