Inspur KR6288X2-A0 AI Server | 8x NVIDIA HGX H200 | Dual Intel Xeon 8558P | 2TB DDR5
  • Product category: Server
  • Part number: Inspur KR6288X2-A0
  • Stock status: In Stock
  • Condition: New
  • Product highlight: Ready to ship immediately
  • Minimum order quantity: 1 unit
  • Original price: $467,869.00
  • Current price: $411,765.00 (you save $56,104.00)

Shop with confidence. Returns are accepted.

Shipping: International shipments may be subject to customs processing and additional charges.

Delivery time: International shipments may take additional time if delayed by customs processing.

Returns: Returns are accepted within 14 days. The seller pays return shipping.

Free shipping. NET 30 Days purchase orders are accepted. Get approved in seconds with no impact on your credit.

For bulk orders of the Inspur KR6288X2-A0, contact us via WhatsApp: (+86) 151-0113-5020, or request a quote in live chat and a sales representative will follow up shortly.


Keywords

Inspur KR6288X2-A0, NVIDIA HGX H200, Intel Xeon 8558P, 2TB DDR5 RAM, AI Training Server, Generative AI, HPC Server, Buy Inspur Server

Description

Step into the future of hyperscale artificial intelligence with the Inspur KR6288X2-A0. This flagship AI server is engineered to train the world's most complex Large Language Models (LLMs), featuring the brand new NVIDIA HGX H200 8-GPU architecture. With a massive combined 1128GB of HBM3e memory across the HGX baseboard, this system shatters previous memory bottlenecks, allowing data scientists to run massive parameter models efficiently without requiring as many interconnected nodes.

At the heart of this compute giant are dual Intel Xeon 8558P processors. Each CPU provides 48 cores, 260MB of cache, and a 2.7GHz base clock at a 350W TDP, delivering 96 physical cores of premium x86 orchestration power to prepare data and manage the immense GPU workload. To keep the processing pipeline saturated, the system is populated with 32x 64GB DDR5-5600MHz ECC-RDIMMs, totaling 2TB of ultra-fast system memory.
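As a quick sanity check on the memory figures above, the capacity and theoretical peak bandwidth of the DDR5-5600 configuration work out as follows (a back-of-the-envelope sketch: the 8-byte transfer width and the one-DIMM-per-channel assumption are generic DDR5 figures, not Inspur specifications):

```python
# Back-of-the-envelope check of the memory configuration quoted above.
DIMM_COUNT = 32
DIMM_SIZE_GB = 64
TRANSFER_RATE_MTS = 5600   # DDR5-5600 runs at 5600 mega-transfers/second
BYTES_PER_TRANSFER = 8     # 64-bit data path per DIMM (ECC bits excluded)

total_capacity_gb = DIMM_COUNT * DIMM_SIZE_GB                  # 2048 GB = 2 TB
per_dimm_gbs = TRANSFER_RATE_MTS * BYTES_PER_TRANSFER / 1000   # 44.8 GB/s theoretical peak
aggregate_gbs = per_dimm_gbs * DIMM_COUNT                      # assumes one DIMM per channel

print(total_capacity_gb, per_dimm_gbs, round(aggregate_gbs, 1))
```

These are theoretical ceilings; sustained bandwidth in practice depends on the memory controller and access pattern.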

Storage is tiered for both reliability and extreme speed. The host operating system resides on two 480GB SATA 6Gbps 2.5-inch Read Intensive SSDs, while training data and checkpoints are handled by two 3.84TB U.2 16GT/s 2.5-inch NVMe solid-state drives, ensuring rapid data ingestion to the GPUs. Powering this hardware is a redundant power array of Titanium-rated high-efficiency PSUs (supporting 220VAC or 240VDC). With a comprehensive 3-year warranty, this server is a secure investment for enterprise data centers pushing the boundaries of AI training.

Key Features

  • Next-Gen AI Acceleration: 1x NVIDIA HGX H200 8-GPU baseboard delivering an unprecedented 1128GB of HBM3e memory.
  • Elite Processing: 2x Intel Xeon 8558P processors (48 cores, 2.7GHz, 260MB cache, 350W).
  • Massive Memory Bandwidth: 2TB total system RAM via 32x 64GB DDR5-5600MHz ECC-RDIMMs.
  • High-Speed Data Tier: 2x 3.84TB U.2 NVMe SSDs (16GT/s) for rapid checkpointing and data ingestion.
  • Reliable OS Boot: 2x 480GB SATA 6Gbps 2.5" SSDs.
  • Titanium Efficiency: Equipped with ultra-efficient Titanium power supplies (3200W/2700W 220VAC/240VDC configuration).
  • Enterprise Guarantee: Backed by a 3-Year Warranty.

Configuration

| Component | Specification | Quantity |
| --- | --- | --- |
| Brand / Model | Inspur KR6288X2-A0 (H200 complete machine) | 1 |
| Processor (CPU) | Intel Xeon 8558P, 2.7GHz, 48 cores, 260MB cache, 350W | 2 |
| Memory (RAM) | 64GB DDR5-5600MHz ECC-RDIMM | 32 |
| System Disk | 480GB SATA 6Gbps 2.5in Read Intensive SSD | 2 |
| Data Disk | 3.84TB U.2 16GT/s 2.5in R-Standard | 2 |
| GPU Baseboard | NVIDIA HGX H200 8-GPU, 1128GB | 1 |
| Power Supply | 3200W / 2700W Titanium, 220VAC or 240VDC | |
| Warranty | 3 Years | 1 |

Compatibility

The Inspur KR6288X2-A0 is a premier platform designed for the NVIDIA AI Enterprise software stack. It natively supports the latest deep learning frameworks such as PyTorch, TensorFlow, and JAX. Operating system compatibility includes enterprise standards such as Ubuntu Server 22.04 LTS and Red Hat Enterprise Linux (RHEL) 9. The HGX H200 architecture utilizes NVLink interconnects internally and is designed to interface with high-speed NDR InfiniBand networking cards for massive cluster scaling.

Usage Scenarios

This server is specifically architected for Foundation Model Training. The 1128GB of total VRAM across the 8-GPU baseboard allows data scientists to load incredibly large LLMs directly into memory, enabling massive batch sizes and significantly cutting down training times for generative AI models.
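To make the "load large models directly into memory" claim concrete, here is a rough fit check against the 1128GB pool (a sketch using common rules of thumb, not vendor figures: roughly 2 bytes per parameter for FP16 weights at inference, and roughly 16 bytes per parameter for weights, gradients, and Adam optimizer state in mixed-precision training; real footprints vary with framework and parallelism strategy):

```python
# Rough VRAM fit check against the 1128 GB HBM3e pool (8x 141 GB).
TOTAL_HBM_GB = 1128

def weights_gb(params_billion, bytes_per_param):
    # 1e9 parameters x bytes/param -> gigabytes
    return params_billion * bytes_per_param

def fits_for_inference(params_billion):
    # ~2 bytes/param: FP16 weights only
    return weights_gb(params_billion, 2) <= TOTAL_HBM_GB

def fits_for_training(params_billion):
    # ~16 bytes/param: weights + gradients + Adam state (mixed precision)
    return weights_gb(params_billion, 16) <= TOTAL_HBM_GB

print(fits_for_inference(175))  # 350 GB of FP16 weights -> True, fits on one node
print(fits_for_training(175))   # ~2800 GB of training state -> False, needs sharding
```

This is why a single H200 node can serve very large models for inference while full training of the same models still calls for multi-node sharding.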

It also serves as a dominant High-Throughput Inference Node. For customer-facing Generative AI applications requiring real-time text, image, or video generation, the sheer memory bandwidth of the H200 GPUs ensures multiple concurrent user requests are served with minimal latency.

Frequently Asked Questions

Q: What is the primary difference between an HGX H100 and this HGX H200 system?
A: The primary upgrade is memory capacity and bandwidth. While a standard 8-GPU H100 system features 640GB of GPU memory, the NVIDIA HGX H200 8-GPU baseboard featured here includes 1128GB of faster HBM3e memory (approx. 141GB per GPU). This allows significantly larger models to run on a single node without encountering memory bottlenecks.
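The capacity gap in the answer above is simple arithmetic over the published per-GPU figures (141GB per H200, 80GB per H100):

```python
# Total GPU memory: 8x H200 vs a standard 8x H100 system.
GPUS = 8
h200_total_gb = 141 * GPUS   # 1128 GB, matching the spec sheet
h100_total_gb = 80 * GPUS    # 640 GB for a standard HGX H100 system
print(h200_total_gb, h100_total_gb, round(h200_total_gb / h100_total_gb, 2))
```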

Q: Are the NVMe drives configured for redundancy?
A: The system includes two 3.84TB U.2 NVMe drives. In an AI training environment these are typically configured as a RAID 0 stripe for maximum read/write performance to feed the GPUs as fast as possible, though they can be configured as RAID 1 if data redundancy is prioritized over speed.
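The capacity trade-off described in the answer above, for the two 3.84TB drives, works out as follows (idealized figures; real throughput and usable capacity depend on the drives, filesystem, and controller):

```python
# Usable capacity for two 3.84 TB drives under each RAID level (idealized).
DRIVE_TB = 3.84
DRIVES = 2

raid0_usable_tb = DRIVES * DRIVE_TB  # striping: capacities add -> 7.68 TB, no redundancy
raid1_usable_tb = DRIVE_TB           # mirroring: one drive's capacity -> 3.84 TB,
                                     # but survives a single-drive failure
print(raid0_usable_tb, raid1_usable_tb)
```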

Related Products
Inspur NF5280M6 AI-ready dual Xeon server with Tesla L2 GPU for enterprise workloads
Inspur NF5466M6 dual Intel Xeon 4314 enterprise storage and compute server
Dell PowerEdge R760xs - dual Xeon Silver 4410Y enterprise configuration
Dell PowerEdge R760xs - Xeon Gold 6507P high-performance server
Dell PowerEdge R660 1U rack server - dual Xeon Gold 6430, 1TB RAM, 25GbE and Fibre Channel HBA
HPE ProLiant DL380 Gen11 2U rack server | dual Intel Xeon Gold 6542Y 48-core | 1TB DDR5 RAM | 2x 300GB SAS HDD
Inspur NF8480M5 4U enterprise storage server - 24x LFF SAS, quad Xeon Gold 6248R high-density platform
Lenovo ThinkSystem SR850 V3 high-performance 4-CPU server with Xeon Gold 6448H and enterprise networking