Cisco

Cisco UCSC-885A-M8-H23 | Rack server | 8x H200 GPUs | Networking: ConnectX-7 | Memory: 2.3TB

SKU: UCSC-885A-M8-H23

Price: POA (price on application)
Taxes included. Shipping calculated at checkout.

Description

AI-optimized rack server with 8x NVIDIA H200 GPUs, ConnectX-7 networking, and 2.3TB of installed system memory. Built for multi-GPU training, inference at scale, and accelerated data pipelines in enterprise data centers.

Features

- Eight H200 GPUs for large-scale parallelism
- ConnectX-7 networking for low-latency, high-throughput fabrics
- Balanced memory capacity for multi-GPU training jobs
- Ready for CUDA-accelerated AI/ML frameworks
- Engineered for dense data center deployments

Warranty

All products sold by XS Network Tech include a 12-month warranty on both new and used items. Our in-house technical team thoroughly tests used hardware prior to sale to ensure enterprise-grade reliability.

All technical data should be verified against the manufacturer's data sheets.

FAQs

Q: Is this suitable for large language model training and fine-tuning?
A: Yes, the multi-GPU configuration is designed for LLM training, fine-tuning, and accelerated inference with appropriate frameworks and storage throughput.
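
As a rough illustration of the kind of multi-GPU fine-tuning loop this configuration targets, the sketch below uses PyTorch DistributedDataParallel over the NCCL backend. The model, batch size, and script name are placeholders, not a configuration supplied with the server.

```python
# train.py -- illustrative DDP skeleton; launch across all eight GPUs with:
#   torchrun --nproc_per_node=8 train.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group("nccl")              # NCCL handles GPU-to-GPU collectives
    local_rank = int(os.environ["LOCAL_RANK"])   # set by torchrun
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(4096, 4096).to(f"cuda:{local_rank}")  # stand-in for a real model
    model = DDP(model, device_ids=[local_rank])
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for _ in range(10):                          # stand-in training loop
        x = torch.randn(32, 4096, device=f"cuda:{local_rank}")
        loss = model(x).pow(2).mean()
        loss.backward()
        opt.step()
        opt.zero_grad()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```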

Q: Does ConnectX-7 support high-bandwidth fabrics?
A: ConnectX-7 adapters enable next-generation, high-bandwidth networking suitable for GPU clusters and distributed training fabrics.

Q: Can I run Kubernetes with GPU scheduling?
A: Yes. With NVIDIA device plugins and compatible runtimes, Kubernetes can schedule GPU workloads across the cluster.
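
As a minimal sketch, the example below uses the Kubernetes Python client to request a single GPU through the nvidia.com/gpu resource exposed by the NVIDIA device plugin. The namespace, pod name, and image tag are illustrative assumptions, not values tied to this product.

```python
# Assumes the NVIDIA device plugin is running in the cluster and cluster access
# is configured in ~/.kube/config; names and image tag below are illustrative.
from kubernetes import client, config

config.load_kube_config()

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "gpu-smoke-test"},
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "cuda-check",
            "image": "nvidia/cuda:12.4.1-base-ubuntu22.04",
            "command": ["nvidia-smi"],
            "resources": {"limits": {"nvidia.com/gpu": 1}},  # request one of the eight GPUs
        }],
    },
}

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```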

Q: What software is needed to leverage the GPUs?
A: Install NVIDIA drivers, CUDA, and relevant libraries (e.g., cuDNN, NCCL) along with your chosen AI frameworks.
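
Once the driver and CUDA stack are installed, a quick sanity check from Python (assuming PyTorch with CUDA support is present) might look like this:

```python
# Verifies that the driver, CUDA runtime, and NCCL backend are visible to PyTorch.
import torch

print("CUDA available:", torch.cuda.is_available())
print("GPUs visible:", torch.cuda.device_count())   # expect 8 on this configuration
for i in range(torch.cuda.device_count()):
    print(f"  GPU {i}: {torch.cuda.get_device_name(i)}")
print("NCCL available:", torch.distributed.is_nccl_available())
```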

Q: How scalable is the storage?
A: Local and networked storage can be expanded; ensure sufficient throughput for multi-GPU training, especially when scaling datasets.
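
A crude way to spot-check local read throughput before a training run is to time a large sequential read. The file path below is purely illustrative, and the operating system page cache can inflate the result on repeat runs.

```python
# Rough sequential-read throughput check; /data/sample.bin is an illustrative path.
import time

CHUNK = 64 * 1024 * 1024  # 64 MiB reads
total = 0
start = time.perf_counter()
with open("/data/sample.bin", "rb") as f:
    while chunk := f.read(CHUNK):
        total += len(chunk)
elapsed = time.perf_counter() - start
print(f"Read {total / 1e9:.1f} GB in {elapsed:.1f} s ({total / 1e9 / elapsed:.2f} GB/s)")
```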

Technical Specifications

- Form factor: Rack-mount server platform
- Series/platform: UCS C885A M8
- GPU accelerators: 8x NVIDIA H200
- Networking: ConnectX-7 adapters (as configured)
- System memory installed: 2.3 TB
- Workloads: AI training, inference, HPC, data analytics
- Fabric: high-speed, ready for modern data center interconnects
- Management: UCS/Intersight platform support (model-dependent)
- OS/stack: CUDA-enabled AI frameworks on enterprise Linux (driver-dependent)
- Expansion: Supports additional memory and storage options (configuration-dependent)

  • Request for a Quote

    Looking for competitive pricing? Submit a request, and our team will provide a tailored quote that fits your needs.

  • Contact Us Directly

    Have a question or need immediate assistance? Call us for expert advice and real-time support.

    Call us Now  