-
Video card NVIDIA H200 NVL Tensor Core GPU
EAN: 19340
The NVIDIA H200 NVL Tensor Core GPU is a high-performance graphics processor designed for data centers, cloud computing, and high-performance computing (HPC) workloads. It is built on the NVIDIA Hopper architecture and equipped with 141 GB of HBM3e memory with a memory bandwidth of 4.8 TB/s. With a maximum thermal design power (TDP) of 600 watts, the H200 NVL delivers exceptional performance for computation-intensive tasks such as deep learning training and inference, large-scale data processing, and scientific computing.
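For bandwidth-bound workloads such as LLM token generation, the 4.8 TB/s figure translates directly into a latency floor: each generated token must stream the model weights through the memory system at least once. A back-of-the-envelope sketch in Python, using a hypothetical 70B-parameter FP16 model purely for illustration:

```python
# Rough lower bound on per-token decode latency for a memory-bandwidth-bound LLM.
# The model size and precision here are hypothetical, for illustration only.
model_params = 70e9            # hypothetical 70B-parameter model
bytes_per_param = 2            # FP16/BF16 weights
bandwidth = 4.8e12             # H200 NVL memory bandwidth, bytes/s

weight_bytes = model_params * bytes_per_param
latency_s = weight_bytes / bandwidth   # time to stream all weights once
print(f"~{latency_s * 1e3:.1f} ms per token (bandwidth-bound lower bound)")
# ~29.2 ms per token, ignoring compute, KV-cache traffic, and kernel overheads
```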
-
Video card NVIDIA H100 SXM Tensor Core GPU
EAN: 19341
The NVIDIA H100 SXM Tensor Core GPU is built on the advanced NVIDIA Hopper architecture and fabricated on TSMC's 4N process. Designed for high-performance computing and AI workloads, it provides 16,896 CUDA cores and 528 fourth-generation Tensor Cores, enabling groundbreaking performance in deep learning, machine learning, and scientific simulations. With 80 GB of HBM3 memory and a memory bandwidth of 3.35 TB/s, the H100 ensures efficient data processing and smooth execution of the most demanding tasks. Its 900 GB/s of NVIDIA NVLink interconnect bandwidth allows for seamless multi-GPU scaling.
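The NVLink fabric is what frameworks lean on when gradients are synchronized across GPUs. As a minimal, hedged sketch (not an official NVIDIA sample), here is data-parallel training in PyTorch with the NCCL backend, which routes intra-node all-reduce traffic over NVLink when it is available:

```python
# Hedged sketch: multi-GPU data-parallel training; NCCL uses NVLink for
# intra-node GPU-to-GPU traffic when the hardware provides it.
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP

def worker(rank, world_size):
    os.environ["MASTER_ADDR"] = "127.0.0.1"
    os.environ["MASTER_PORT"] = "29500"
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)

    model = DDP(torch.nn.Linear(4096, 4096).cuda(rank), device_ids=[rank])
    opt = torch.optim.SGD(model.parameters(), lr=1e-3)

    for _ in range(10):
        x = torch.randn(64, 4096, device=rank)
        loss = model(x).square().mean()
        opt.zero_grad()
        loss.backward()      # gradients all-reduced across GPUs over NVLink
        opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    world_size = torch.cuda.device_count()
    mp.spawn(worker, args=(world_size,), nprocs=world_size)
```

Each backward pass triggers an all-reduce whose cost depends directly on that interconnect bandwidth, which is why the 900 GB/s figure matters for scaling efficiency.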
-
Video card NVIDIA H100 NVL Tensor Core GPU
EAN: 19342
The NVIDIA H100 NVL Tensor Core GPU is engineered for high-performance AI inference, particularly excelling in large language model (LLM) applications. Built on the NVIDIA Hopper architecture and fabricated on TSMC's 4N process, it is deployed as an NVLink-bridged pair of GH100-based cards, each with 94 GB of HBM3 memory, for 188 GB in total. This configuration delivers a combined memory bandwidth of 7.8 TB/s, facilitating rapid data processing for complex AI workloads. Each GPU supports up to 3,341 TFLOPS of FP8 Tensor Core performance and 835 TFLOPS of TF32 performance with sparsity, ensuring efficient computation for diverse AI models.
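Those FP8 numbers are reached through the Hopper Tensor Cores, which NVIDIA exposes via its Transformer Engine library. A minimal sketch, assuming transformer_engine is installed and a Hopper-class GPU is present (layer sizes are arbitrary):

```python
# Hedged sketch: running a linear layer in FP8 with NVIDIA Transformer Engine.
import torch
import transformer_engine.pytorch as te
from transformer_engine.common import recipe

# Delayed-scaling recipe: E4M3 for activations/weights, E5M2 for gradients.
fp8_recipe = recipe.DelayedScaling(margin=0, fp8_format=recipe.Format.HYBRID)

layer = te.Linear(4096, 4096, bias=True).cuda()
x = torch.randn(16, 4096, device="cuda", dtype=torch.bfloat16)

with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
    y = layer(x)   # the matmul executes on FP8 Tensor Cores
print(y.shape)
```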
-
Video card NVIDIA L4 Tensor Core GPU
EAN: 19343
The NVIDIA L4 Tensor Core GPU is a versatile, energy-efficient accelerator designed for AI inference, video processing, and graphics workloads in data centers and edge environments. Built on the Ada Lovelace architecture and fabricated on TSMC's 4N process, the L4 features 7,424 CUDA cores, 232 fourth-generation Tensor Cores, and 58 third-generation RT Cores. It comes equipped with 24 GB of GDDR6 memory on a 192-bit interface, delivering a memory bandwidth of 300 GB/s. Operating at a base clock of 795 MHz and boosting up to 2,040 MHz, the L4 achieves peak FP32 performance of 30.3 TFLOPS and FP8 Tensor performance of up to 485 TFLOPS with sparsity.
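Since the L4 targets inference rather than training, a typical way to tap its FP16 Tensor Cores from PyTorch is automatic mixed precision. A small hedged sketch, using a torchvision ResNet-50 purely as a stand-in workload:

```python
# Hedged sketch: mixed-precision inference on an L4-class GPU with PyTorch autocast.
import torch
import torchvision

model = torchvision.models.resnet50(weights=None).eval().cuda()
x = torch.randn(8, 3, 224, 224, device="cuda")   # arbitrary batch for illustration

with torch.inference_mode(), torch.autocast("cuda", dtype=torch.float16):
    out = model(x)   # convolutions and matmuls run through FP16 Tensor Cores
print(out.shape)     # torch.Size([8, 1000])
```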
-
Video card NVIDIA L40S
EAN: 19344
The NVIDIA L40S Tensor Core GPU is a high-performance accelerator designed for AI, graphics, and high-performance computing workloads in data centers. Built on the NVIDIA Ada Lovelace architecture, it features 18,176 CUDA cores, 568 fourth-generation Tensor Cores, and 142 third-generation RT Cores. Equipped with 48 GB of GDDR6 memory and a memory bandwidth of 864 GB/s, the L40S delivers exceptional performance for a wide range of applications. It supports advanced precision formats, including FP8, FP16, BF16, and TF32, achieving up to 1,466 TFLOPS of FP8 Tensor performance with sparsity.
-
Video card NVIDIA A100 80GB PCIe
EAN: 19345
The NVIDIA A100 80GB PCIe Tensor Core GPU is engineered to accelerate AI, data analytics, and high-performance computing (HPC) workloads. Built on the NVIDIA Ampere architecture, it features 6,912 CUDA cores and 432 third-generation Tensor Cores, delivering up to 19.5 TFLOPS of FP64 Tensor Core performance and 312 TFLOPS of TF32 Tensor Core performance with sparsity. The GPU is equipped with 80 GB of HBM2e memory, providing a memory bandwidth of up to 2 TB/s, facilitating rapid data processing for complex AI models and large datasets. Its support for Multi-Instance GPU (MIG) technology allows partitioning into up to seven isolated GPU instances, enabling optimal resource utilization across diverse workloads.
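MIG partitioning itself is configured by an administrator (for example with nvidia-smi); from application code, the resulting instances can then be discovered through NVML. A hedged sketch using the nvidia-ml-py (pynvml) bindings, assuming MIG mode has already been enabled on device 0:

```python
# Hedged sketch: listing MIG instances on an A100 via NVML (pip install nvidia-ml-py).
import pynvml

pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)

current, _pending = pynvml.nvmlDeviceGetMigMode(gpu)
if current == pynvml.NVML_DEVICE_MIG_ENABLE:
    for i in range(pynvml.nvmlDeviceGetMaxMigDeviceCount(gpu)):
        try:
            mig = pynvml.nvmlDeviceGetMigDeviceHandleByIndex(gpu, i)
        except pynvml.NVMLError:
            continue                     # this MIG slot is not populated
        mem = pynvml.nvmlDeviceGetMemoryInfo(mig)
        print(f"MIG instance {i}: {mem.total / 2**30:.1f} GiB")
else:
    print("MIG mode is disabled on this GPU")

pynvml.nvmlShutdown()
```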
-
Video card NVIDIA A100 80GB SXM
EAN: 19346
The NVIDIA A100 80GB SXM Tensor Core GPU is engineered to accelerate AI, data analytics, and high-performance computing (HPC) workloads. Built on the NVIDIA Ampere architecture, it features 6,912 CUDA cores and 432 third-generation Tensor Cores, delivering up to 19.5 TFLOPS of FP64 Tensor Core performance and 312 TFLOPS of TF32 Tensor Core performance with sparsity. The GPU is equipped with 80 GB of HBM2e memory, providing a memory bandwidth of up to 2 TB/s, facilitating rapid data processing for complex AI models and large datasets.
-
NVIDIA HGX B300 Blackwell Platform
EAN: 19377
The NVIDIA HGX B300 Blackwell Platform is NVIDIA’s most powerful AI computing platform, purpose-built for trillion-parameter AI model training and real-time inference at scale. Featuring 16 Blackwell Ultra GPUs interconnected via fifth-generation NVLink delivering 1.8 TB/s of GPU-to-GPU bandwidth, it supports up to 2.3 TB of high-speed HBM3e memory.
-
NVIDIA HGX B200 Blackwell Platform
EAN: 19378
The NVIDIA HGX B200 Blackwell Platform is a high-performance, rack-scale AI infrastructure solution designed for large-scale machine learning and data center applications. It features eight NVIDIA Blackwell B200 GPUs, up to 1.44 TB of GPU memory, and advanced networking with 400 Gb/s connectivity. In typical server configurations it is paired with dual Intel Xeon Platinum processors and delivers up to 72 petaFLOPS of FP8 training performance.
-
NVIDIA HGX H200 Hopper Platform
EAN: 19379
The NVIDIA HGX H200 is a high-performance computing platform designed for AI training and inference, as well as high-performance computing (HPC) workloads. It features the NVIDIA H200 Tensor Core GPU, built on the Hopper architecture. This platform offers substantial improvements in computational power, memory bandwidth, and interconnect capabilities compared to its predecessors.
NVIDIA RTX™, A-series, and Tesla™ graphics cards incorporate advanced architectures such as Ampere, Ada Lovelace, and Hopper, offering thousands of CUDA® and Tensor Cores to accelerate AI, graphics, and intensive computing workloads.
NVIDIA DGX™ supercomputers, on the other hand, are complete platforms integrating multiple high-performance GPUs (such as the H100, A100, and L40S) connected by NVLink® and InfiniBand interconnects, ensuring massive bandwidth and minimal latency.
Key Benefits:
Massive parallel computing for AI model training and inference;
Performance optimization through the NVIDIA AI Enterprise software stack and CUDA Toolkit;
Proven reliability and compatibility for data center, research lab, and private cloud environments;
Comprehensive support for AI frameworks such as TensorFlow, PyTorch, and JAX (see the sanity-check sketch after this list).
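As a quick, hedged sanity check that any of these frameworks actually sees the GPU, a few lines of PyTorch are enough:

```python
# Minimal check that PyTorch sees an NVIDIA GPU and can run work on it.
import torch

assert torch.cuda.is_available(), "no CUDA-capable GPU visible"
print(torch.cuda.get_device_name(0))

x = torch.randn(4096, 4096, device="cuda")
y = x @ x                     # executes on the GPU
torch.cuda.synchronize()      # wait for the kernel to finish
print("GPU matmul OK:", y.shape)
```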
NVIDIA solutions are available in a wide range:
GPU Cards: RTX 4000 Ada, RTX 6000 Ada, A100, H100, L40S;
Workstations: NVIDIA RTX™, OVX, DGX Station;
Supercomputers: DGX A100, DGX H100, DGX Cloud systems.
With NVIDIA technologies, you get unparalleled acceleration for your data science, simulation, graphics rendering, and predictive analytics applications.