
Cisco

Cisco UCSC-885A-M8-H22 | Rack server | 8x H200 GPUs | IO: 8x B3140H + 1x B3220 | Memory: 23TB

SKU: UCSC-885A-M8-H22

Price: POA (price on application). Taxes included; shipping calculated at checkout.

Description

High-density AI/ML rack server configured for large-scale training and inference. This build pairs 8x NVIDIA H200 GPUs with BlueField-class IO (8x B3140H plus 1x B3220) and 23TB of installed system memory to accelerate demanding workloads in modern data centers.

Features

- Eight H200 GPUs for extreme parallel compute density
- BlueField-class DPUs to accelerate networking, storage, and security services
- High-memory footprint to feed multi-GPU training jobs
- Optimized for AI frameworks and CUDA-accelerated libraries
- Scalable design for evolving data center workloads
- Integration-ready with modern orchestration and MLOps toolchains

Warranty

All products sold by XS Network Tech include a 12-month warranty on both new and used items. Our in-house technical team thoroughly tests used hardware prior to sale to ensure enterprise-grade reliability.

All technical data should be verified on the manufacturer data sheets.



FAQs

Q: What types of workloads is this configuration best suited for?
A: Large-scale AI training, GPU-accelerated inference, HPC simulations, and data analytics pipelines that benefit from multi-GPU acceleration and DPU-assisted networking.

Q: Can I expand memory or storage later?
A: Yes, the platform supports additional memory and storage options depending on available DIMM slots and drive bays in the chassis configuration.

Q: Does this support containerized AI stacks?
A: Yes. With appropriate NVIDIA drivers and runtimes, it supports common container platforms (e.g., Docker, Kubernetes) for GPU-accelerated workloads.
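As a sketch of what scheduling a containerized training job on a host like this can look like (assuming the NVIDIA device plugin is deployed in the Kubernetes cluster; the pod name, image tag, and entrypoint below are illustrative, not part of this listing):

```yaml
# Illustrative pod spec requesting GPUs via the NVIDIA device plugin.
# Adjust the image and GPU count to match your actual stack.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-training-job              # hypothetical name
spec:
  restartPolicy: Never
  containers:
    - name: trainer
      image: nvcr.io/nvidia/pytorch:24.05-py3   # example NGC container image
      command: ["python", "train.py"]           # placeholder entrypoint
      resources:
        limits:
          nvidia.com/gpu: 8           # request all eight H200s on the node
```

For single-node Docker use, the rough equivalent is `docker run --gpus all <image>` with the NVIDIA Container Toolkit installed on the host.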

Q: Are BlueField modules used for network offload and security?
A: BlueField DPUs can offload networking, storage, and security functions to free CPU resources and improve isolation, depending on software configuration.

Q: Which operating systems are commonly used?
A: Enterprise Linux distributions such as Ubuntu and RHEL are commonly used for AI/ML stacks; ensure a compatible kernel, NVIDIA driver, and CUDA/cuDNN versions.

Technical Specifications

- Form factor: Rack-mount server platform
- Series/platform: UCS C885A M8
- GPU accelerators: 8x NVIDIA H200
- IO/accelerator modules: 8x B3140H + 1x B3220
- System memory installed: 23 TB
- Ideal workloads: AI training, inference, HPC, data science
- Networking: BlueField/ConnectX-class high-speed fabrics (configuration as listed)
- Management: UCS/Intersight platform support (model-dependent)
- OS support: Popular enterprise Linux distributions and CUDA-enabled stacks (driver-dependent)
- Rack integration: Standard 19-inch data center deployment


  • Request for a Quote

    Looking for competitive pricing? Submit a request, and our team will provide a tailored quote that fits your needs.

  • Contact Us Directly

    Have a question or need immediate assistance? Call us for expert advice and real-time support.

    Call us Now  