NVIDIA DGX SuperPOD

  • Turnkey AI supercomputing system for enterprise needs

  • Fully integrated with NVIDIA GPUs, software, storage & networking

  • Scalable from a few racks to massive multi-node deployments

  • Pre-installed AI software stack with Kubernetes & NVIDIA AI Enterprise

  • Proven in real-world use cases: LLMs, simulations, digital twins

  • Expert support & managed deployment options available

  • Compatible with top-tier storage providers for seamless data flow

  • DGX Cloud access for flexible training & development

  • Includes Mission Control for AI operations automation

  • Trusted by leaders like SoftBank, MITRE, University of Florida


NVIDIA DGX SuperPOD: Scalable Infrastructure for the AI Era

Transform Your Enterprise into an AI Powerhouse

NVIDIA DGX SuperPOD redefines the standard for enterprise AI infrastructure. It’s a turnkey solution designed to address the immense computational needs of modern AI, including large-scale model training and real-time inference. By bringing together GPU-accelerated computing, high-speed networking, scalable storage, and advanced software tools, the DGX SuperPOD allows enterprises to build and operate their own AI factories with maximum performance and minimal friction.
With DGX SuperPOD, organizations can confidently pursue cutting-edge innovation—from building foundational language models to running mission-critical analytics and simulations. Its modular design ensures rapid deployment and scalability, while built-in AI automation tools reduce operational complexity. Whether you’re a financial institution, research lab, or cloud provider, DGX SuperPOD empowers your teams to move faster, scale smarter, and deliver transformative results.

Key Advantages

  • Integrated AI Supercomputing
    Unlike traditional systems that require separate vendors and integration work, DGX SuperPOD comes as a fully integrated system. It fuses cutting-edge NVIDIA GPUs, optimized storage subsystems, networking fabric, and AI-tuned software into a cohesive and efficient whole.
  • Exceptional Scalability for Next-Gen AI
    The DGX SuperPOD is engineered to expand seamlessly from a few racks to tens of thousands of GPUs. This makes it the perfect infrastructure for training cutting-edge, trillion-parameter models used in generative AI, computer vision, and scientific computing.
  • Developer-Centric Stack
    Included is a comprehensive software stack designed for AI developers: it features NVIDIA AI Enterprise, container orchestration with Kubernetes, workload management tools, AI frameworks, and libraries, all optimized for multi-GPU environments (a minimal orchestration sketch follows this list).
  • Field-Tested at Enterprise Scale
    DGX SuperPOD is not a prototype or concept—it’s a proven, production-grade platform deployed by leading enterprises and research institutions. NVIDIA has extensively validated it under real-world AI workloads, including LLMs, recommender systems, and digital twins.
  • Expert Guidance and End-to-End Services
    Customers benefit from NVIDIA’s deep bench of AI experts. Professional services include design consultation, deployment, optimization, training, and ongoing support to ensure organizations get the most out of their investment.
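For illustration, the sketch below submits a single-GPU batch job through the Kubernetes Python client, the kind of container orchestration the stack above includes. The container image, job name, and namespace are hypothetical placeholders; a production SuperPOD would typically schedule work through NVIDIA’s workload-management tooling rather than raw Job objects.

```python
# Minimal sketch: submit a one-GPU batch job via the Kubernetes Python
# client. Image tag, job name, and namespace are hypothetical examples.
from kubernetes import client, config

def submit_gpu_job():
    config.load_kube_config()  # read cluster credentials from the local kubeconfig

    container = client.V1Container(
        name="trainer",
        image="nvcr.io/nvidia/pytorch:24.01-py3",  # assumed NGC image tag
        command=["python", "train.py"],
        resources=client.V1ResourceRequirements(
            limits={"nvidia.com/gpu": "1"}  # ask the GPU device plugin for one GPU
        ),
    )
    template = client.V1PodTemplateSpec(
        spec=client.V1PodSpec(restart_policy="Never", containers=[container])
    )
    job = client.V1Job(
        api_version="batch/v1",
        kind="Job",
        metadata=client.V1ObjectMeta(name="dgx-demo-train"),
        spec=client.V1JobSpec(template=template, backoff_limit=0),
    )
    client.BatchV1Api().create_namespaced_job(namespace="default", body=job)

if __name__ == "__main__":
    submit_gpu_job()
```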

Complete AI Data Center in a Box

DGX SuperPOD simplifies infrastructure deployment by delivering a complete package that’s ready to use out of the box. From hardware provisioning to system orchestration and software installation, it’s pre-engineered for rapid AI development. This reduces setup time, operational costs, and the need for specialized in-house expertise.
Beyond its plug-and-play setup, the SuperPOD delivers balanced performance across compute, storage, and networking—optimized specifically for the most demanding AI use cases. Enterprises can deploy their AI workloads immediately, scale them easily, and transition from development to production faster than ever. With reduced integration complexity and a unified system design, it empowers IT teams to focus on innovation instead of infrastructure management.

Powered by NVIDIA DGX Systems

DGX SuperPOD leverages multiple powerful systems powered by the Blackwell architecture:

  • DGX GB300
    A powerhouse built with Grace Blackwell Ultra Superchips and advanced liquid cooling, delivering ultra-efficient compute performance for training, optimization, and generative model inference.
  • DGX B300
    Tailored for large-scale transformer models, DGX B300 delivers top-tier performance for next-gen generative AI tasks, running on Blackwell Ultra GPUs.
  • DGX GB200
    Combines Grace CPU and Blackwell GPU technologies to offer balanced performance and energy efficiency for foundational model training and inference at scale.
  • DGX B200
    Built for end-to-end AI workflows, the B200 excels in handling training, fine-tuning, and deployment tasks in a unified system.
  • DGX H200
    Designed for the most demanding generative AI workloads, the H200 excels at processing massive datasets for LLMs and cutting-edge neural networks.

Deployment Options by System

  • DGX B200 Configurations: Ideal for enterprises that need scalable AI capabilities with a focus on logistics optimization, customer analytics, and multimodal data interpretation.
  • DGX H200 Configurations: Tailored for organizations developing or deploying LLMs, especially when using NVIDIA NeMo or deep learning recommendation models requiring massive compute throughput.

NVIDIA Mission Control: AI Factory Automation

Mission Control is the intelligent operations layer for the DGX SuperPOD. It provides end-to-end monitoring, diagnostics, alerting, and AI-driven automation to ensure optimal performance, reduce downtime, and simplify management. It also supports iterative experimentation by accelerating ML and AI research pipelines.
With predictive analytics and system-aware automation, Mission Control reduces administrative overhead, helps maintain uptime, and enables faster troubleshooting. Its seamless integration with the broader DGX software stack enhances visibility into workloads and resource usage across all nodes in the SuperPOD.
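As a concrete illustration of the node-level signals an operations layer like this aggregates, the sketch below polls per-GPU temperature, utilization, and memory with the standard nvidia-smi query interface. It is not the Mission Control API, and the 85 C alert threshold is an arbitrary example.

```python
# Sketch of a per-node GPU health probe using nvidia-smi's CSV query
# output; the alert threshold is an arbitrary illustrative value.
import subprocess

def gpu_telemetry(temp_limit_c=85):
    out = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=index,temperature.gpu,utilization.gpu,memory.used",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.strip().splitlines():
        idx, temp, util, mem = [field.strip() for field in line.split(",")]
        status = "ALERT: check cooling" if int(temp) > temp_limit_c else "ok"
        print(f"GPU {idx}: {temp} C, {util}% util, {mem} MiB used [{status}]")

if __name__ == "__main__":
    gpu_telemetry()
```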

Global Enterprise Adoption

DGX SuperPOD has powered national AI strategies, academic breakthroughs, and enterprise-level digital transformation:

  • SoftBank (Japan): Developing a large Japanese LLM with 390B parameters, enabled by DGX SuperPOD and NVIDIA AI Enterprise.
  • KT Corporation (Korea): Accelerating time-to-train for Korean-language LLMs using NeMo-powered DGX infrastructure.
  • C-DAC (India): Deployed Param Siddhi AI, delivering 210 petaflops for government-led scientific computing.
  • University of Florida (USA): Reduced molecular dynamics simulations from centuries to days.
  • NAVER CLOVA (Korea): Using DGX SuperPOD for advanced AI tailored to Korean and Japanese languages.

These deployments highlight DGX SuperPOD’s adaptability across diverse sectors, reinforcing its value in academic, commercial, and national projects.

Customer Success Stories

  • BNY Mellon: Hosts over 40 AI apps used by 17,000 staff, including Eliza, an internal AI assistant—all built on DGX SuperPOD and NVIDIA AI Enterprise.
  • MITRE: Created an AI sandbox for government R&D, improving public sector capabilities in weather forecasting, health, and cybersecurity.
  • University of Florida: Their HiPerGator AI cluster handles over 60% of academic research workloads and supports millions of inference queries annually.
  • SoftBank Corp: Anchors Japan’s sovereign AI efforts with a national-scale DGX SuperPOD deployment.

These use cases underscore how enterprises are transforming operations, accelerating R&D, and unlocking competitive advantages through DGX SuperPOD.

Flexible Deployment: Private AI Colocation

Through partnerships like Equinix Private AI, DGX SuperPOD can be hosted in secure, high-performance colocation facilities. This model allows enterprises to leverage full-stack NVIDIA infrastructure without maintaining on-prem data centers. It includes hosting, management, and networking integration.
By offloading infrastructure maintenance, organizations can maintain strict security and compliance standards while benefiting from lower latency, better throughput, and dedicated AI performance. It’s an ideal approach for businesses that require high-performance computing but want to minimize the overhead of physical infrastructure.

Cutting-Edge Research with NVIDIA Eos

NVIDIA’s internal DGX SuperPOD system, Eos, ranks among the ten most powerful supercomputers on the TOP500 list. Eos fuels NVIDIA’s AI research in areas like climate modeling, drug discovery, and foundational AI models. It’s a real-world demonstration of DGX SuperPOD’s potential at hyperscale.
Eos exemplifies how DGX infrastructure can be applied at the frontier of AI science, delivering breakthroughs with unmatched speed and scale. It also serves as a blueprint for other institutions seeking to build elite AI supercomputing systems.

The DGX Platform Ecosystem

The DGX ecosystem includes modular components and services that enhance deployment flexibility and reduce complexity:

  • DGX BasePOD: A reference architecture that brings DGX-grade performance to customizable, mid-scale installations.
  • Enterprise Services: NVIDIA offers comprehensive support, training, and access to its Deep Learning Institute, ensuring AI teams are equipped to succeed.
Together, these tools empower organizations to tailor their AI infrastructure strategy, whether deploying on-prem, in the cloud, or in hybrid setups.

AI Training in the Cloud

DGX Cloud extends the power of DGX SuperPOD to the cloud. Offered through leading service providers, it allows organizations to access DGX infrastructure via subscription. No need for procurement or setup—just immediate, cloud-scale access to AI supercomputing.
This pay-as-you-go model is perfect for experimentation, rapid scaling, or augmenting on-prem capabilities. Users can spin up training jobs in minutes while benefiting from NVIDIA’s end-to-end software stack and enterprise-grade security.
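To make that portability concrete, here is a minimal PyTorch DistributedDataParallel training sketch; launched with torchrun, the same script runs on an on-prem DGX node or a cloud instance. The model, data, and hyperparameters are toy stand-ins, and nothing in it is specific to DGX Cloud’s interface.

```python
# Toy multi-GPU training loop with PyTorch DDP; launch with:
#   torchrun --nproc_per_node=8 train.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group("nccl")             # NCCL backend for GPU collectives
    local_rank = int(os.environ["LOCAL_RANK"])  # set by torchrun per process
    torch.cuda.set_device(local_rank)

    model = DDP(torch.nn.Linear(1024, 1024).cuda(local_rank),
                device_ids=[local_rank])
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for _ in range(10):                         # stand-in training loop
        x = torch.randn(32, 1024, device=local_rank)
        loss = model(x).square().mean()         # dummy loss on random data
        opt.zero_grad()
        loss.backward()                         # gradients all-reduced across GPUs
        opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```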

Certified Storage Solutions

DGX SuperPOD is compatible with leading storage platforms optimized for AI workloads. These solutions ensure consistent high throughput, reliability, and ease of management:

  • DDN A3I
  • IBM Storage Scale
  • VAST Data
  • NetApp ONTAP
  • Dell PowerScale
  • WEKA Data Platform
  • Pure Storage FlashBlade//S

All storage solutions are pre-validated and recommended for seamless deployment with DGX SuperPOD.
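As a rough illustration of the kind of sanity check teams run against a newly mounted file system, the sketch below measures sequential read bandwidth from a single client. The mount path is a hypothetical placeholder; real throughput validation would use a parallel benchmark tool such as fio across many workers.

```python
# Crude single-stream read-throughput probe; the path is a hypothetical mount.
import time

def read_throughput_gbps(path, block_size=16 * 1024 * 1024):
    total_bytes = 0
    start = time.perf_counter()
    with open(path, "rb") as f:
        while chunk := f.read(block_size):    # stream the file in 16 MiB blocks
            total_bytes += len(chunk)
    elapsed = time.perf_counter() - start
    return total_bytes / elapsed / 1e9        # decimal GB per second

if __name__ == "__main__":
    print(f"{read_throughput_gbps('/mnt/ai-data/sample.bin'):.2f} GB/s")
```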

DGX SuperPOD at a Glance

  • Turnkey AI Supercomputing Infrastructure

    Pre-integrated, fully validated AI/HPC solution combining DGX systems, high-speed networking, storage, and management software — deployable in weeks.

  • Massive Compute Scale with DGX Nodes

    Configurations range from 96 DGX-2H nodes up to 128 DGX H100 nodes, totaling well over a thousand GPUs and delivering unprecedented AI performance.

  • Ultra-High Interconnect & Storage Fabric

    Built on InfiniBand or Ethernet in a leaf-spine fabric, paired with certified partner storage (IBM Storage Scale, NetApp, Pure Storage, VAST Data) for seamless data flow and throughput.

  • Enterprise-Grade Software & Management

    Includes NVIDIA Base Command and Mission Control platforms, AI Enterprise software, cluster orchestration (Slurm/K8s), and pre-integrated AI libraries.

  • Scalable & Flexible Architecture

    Designed to scale from tens to thousands of GPUs (e.g., up to 8,192 H100 GPUs), DGX SuperPOD adapts to growth with modular compute, network, and storage units.

  • Proven Performance & Support

    Delivers from roughly 200 PFLOPS of FP16 compute (DGX-2H generation) up to exascale-class AI performance (H100-based Eos and newer GB300 systems); deployments are backed by NVIDIA expert support and validation.


Reference Architecture Summary

  • NVIDIA DGX-certified SuperPOD reference architecture (RA)
  • 32+ NVIDIA DGX systems (1 SuperPOD SU)
  • NVIDIA Networking: per RA
  • Data storage/management: per partner
  • AI Software: NVIDIA AI Enterprise, Base Command
