GPUs

[Diagram: NVIDIA DGX Cloud's unified AI platform, showing layers for AI development, deployment, and scalable GPU infrastructure across multi-cloud environments.]

NVIDIA DGX Cloud

  • World’s first fully managed AI supercomputer in the cloud, delivering multi-node GPU clusters via browser or API

  • Provides instant access to 8× A100 or H100 GPUs per node (640 GB GPU memory), integrated with NVIDIA-optimized software and expert support

NVIDIA L40

  • GPU memory size: 48 GB GDDR6 with ECC
  • Thermal Solution: Passive
  • Form Factor: 4.4″ (H) × 10.5″ (L), dual slot
NVIDIA L40S

  • GPU memory size: 48 GB GDDR6 with ECC
  • Thermal Solution: Passive
  • Form Factor: 4.4″ (H) × 10.5″ (L), dual slot
NVIDIA DGX SuperPOD

  • NVIDIA DGX-certified SuperPOD reference architecture (RA)
  • 32+ NVIDIA DGX systems (one SuperPOD scalable unit, SU)
  • NVIDIA Networking: per RA
  • Data storage/management: per partner
  • AI Software: NVIDIA AI Enterprise, Base Command
NVIDIA DGX BasePOD

  • NVIDIA DGX-certified BasePOD reference architecture (RA) per partner
  • 2+ NVIDIA DGX systems
  • NVIDIA Networking: per RA
  • Data storage/management: per partner
  • AI Software: NVIDIA AI Enterprise, Base Command
NVIDIA A100 Liquid Cooled

  • GPU memory size: 80 GB HBM2e, ECC on by default
  • Thermal Solution: Liquid cooled
  • Form Factor: PCIe, single slot, liquid cooled
NVIDIA H100

  • GPU memory size: 80 GB HBM2e
  • Thermal Solution: Passive
  • Form Factor: Full-height, full-length (FHFL) dual-slot
NVIDIA T4

  • GPU memory size: 16 GB GDDR6 with ECC
  • Thermal Solution: Passive
  • Form Factor: Low-profile PCIe, single slot