NVIDIA DGX H200

SKU
DGX H200
Call for Price
EAN: 19394
Condition: new
Stock: available on order
Lead time: 25-35 days

The NVIDIA DGX H200 is a state-of-the-art AI system engineered for large-scale generative AI and high-performance computing (HPC) workloads. It integrates eight NVIDIA H200 Tensor Core GPUs, each equipped with 141 GB of HBM3e memory, totaling 1,128 GB of GPU memory. This configuration delivers up to 32 petaFLOPS of FP8 performance. The system is powered by dual Intel Xeon Platinum 8480C processors, offering 112 cores in total, and supports 2 TB of DDR5 system memory.

NVIDIA equipment comprises high-performance data processing platforms built to meet modern requirements for computing systems in artificial intelligence, research, industrial automation, and enterprise analytics. The architectures used — Hopper, Grace Hopper, and Blackwell — deliver high compute density, energy efficiency, and scalability.

NVIDIA solutions provide comprehensive support for neural network training and inference workloads, including large language models, generative artificial intelligence, computer vision, natural language processing, and process modeling and simulation.

Key platform features:

  • Support for the Hopper, Grace, Blackwell, and Orin architectures

  • GPUs with NVLink/NVSwitch interconnects for accelerated GPU-to-GPU communication

  • High-bandwidth HBM3 or HBM3e memory

  • Ability to build scalable clusters combined into a single computing node

  • Compatibility with NVIDIA AI Enterprise, CUDA, Triton Inference Server, TensorRT, RAPIDS and other libraries

Application areas for NVIDIA equipment:

  • Training and inference of machine learning and artificial intelligence models

  • Modeling and simulation of processes in scientific and engineering tasks

  • Automation of production processes and robotic systems

  • Creation of digital twins of physical objects and process chains

  • Processing of large datasets and high-load analytical workloads

  • Graphic rendering, 3D modeling, physical process simulation and visualization

A brief overview of the main product lines:

  • HGX — Server platforms for data centers, designed for building scalable artificial intelligence infrastructure. Used in rack-scale solutions and data centers.

  • DGX — Turnkey NVIDIA systems for AI clusters: pre-configured solutions delivering maximum performance for training large models. Used in research centers, laboratories, and the enterprise sector.

  • IGX Orin — Industrial-grade platform for embedded artificial intelligence systems. Used in medicine, automation, transport, and safety systems.

  • GH200 / GB200 / GB300 — New-generation superchips that combine CPU and GPU in a single compute module with coherent memory. Used in AI clusters, cloud solutions, LLM workloads, and digital twins.

  • RTX Workstation — Professional GPUs for workstations, designed for design and graphics professionals working in 3D, CAD, and visualization. They accelerate professional applications and AI tasks.

  • GeForce RTX for laptops — Mobile GPUs designed for resource-intensive tasks: gaming, rendering, modeling, and consumer-level AI applications.

Advantages of NVIDIA equipment:

  • Maximum performance for artificial intelligence and machine learning tasks

  • Infrastructure optimized for next-generation models

  • High scalability — from a single node to an AI cluster

  • Compatibility with leading frameworks and software

  • Support for enterprise solutions and IT infrastructure

  • A solid foundation for building modern data centers and edge computing

NVIDIA DGX H200 specifications:

Component Specification
GPU 8x NVIDIA H200 Tensor Core GPUs
GPU Memory 1,128 GB total (141 GB per GPU)
Performance Up to 32 petaFLOPS (FP8)
CPU 2x Intel Xeon Platinum 8480C, 112 cores total, 2.00 GHz (Base), 3.80 GHz (Max Boost)
System Memory 2 TB DDR5
Interconnect 4x NVIDIA NVSwitch
Networking 4x OSFP ports serving 8x single-port NVIDIA ConnectX-7 VPI (up to 400 Gb/s InfiniBand/Ethernet)
Management Network 10 Gb/s onboard NIC with RJ45, 100 Gb/s Ethernet optional NIC, BMC with RJ45
Storage (OS) 2x 1.9 TB NVMe M.2
Internal Storage 8x 3.84 TB NVMe U.2
Software NVIDIA AI Enterprise, NVIDIA Base Command, NVIDIA DGX OS / Ubuntu / Red Hat Enterprise Linux / Rocky
Power Consumption Approximately 10.2 kW max
Dimensions (H x W x D) 14.0 in (356 mm) x 19.0 in (482.2 mm) x 35.3 in (897.1 mm)
System Weight 287.6 lb (130.45 kg)
Operating Temperature 5–30°C (41–86°F)
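The headline numbers in the table above are internally consistent; a quick sanity check in plain Python (all constants are taken directly from the table, and the per-GPU figures are simple derivations, not separately published values):

```python
# Sanity-check the DGX H200 headline figures from the spec table.
NUM_GPUS = 8                 # 8x NVIDIA H200 Tensor Core GPUs
HBM3E_PER_GPU_GB = 141       # 141 GB HBM3e per GPU
TOTAL_FP8_PFLOPS = 32        # up to 32 petaFLOPS FP8 (system total)

total_gpu_memory_gb = NUM_GPUS * HBM3E_PER_GPU_GB
fp8_per_gpu_pflops = TOTAL_FP8_PFLOPS / NUM_GPUS

print(total_gpu_memory_gb)   # 1128 — matches the "1,128 GB total" row
print(fp8_per_gpu_pflops)    # 4.0 — derived FP8 petaFLOPS per GPU
```

So the "1,128 GB total" row follows directly from 8 × 141 GB, and each H200 contributes roughly 4 petaFLOPS of the system's FP8 throughput.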
