Supports 100GbE / 400GbE for ultra-fast data transfer
Up to 40 km of long-haul InfiniBand connectivity
Equipped with 4x QSFP112 + 2x QSFP112 ports
Full RDMA and In-Network Computing across distance
DWDM-compatible for efficient multi-channel fiber transmission
Ideal for remote data centers, edge computing, and disaster recovery
Built for high-availability, low-latency, and scalable environments
Seamlessly integrates with NVIDIA Quantum InfiniBand networks
The NVIDIA MetroX-3 XC system is a high-performance long-haul connectivity solution purpose-built to extend the NVIDIA Quantum InfiniBand networking architecture to remote and geographically distributed environments.
Designed with the modern data center in mind, MetroX-3 XC bridges the gap between core data processing sites and satellite or edge locations.
This system plays a pivotal role in overcoming the distance limitations of traditional InfiniBand networks by enabling ultra-low-latency and high-bandwidth connectivity over fiber links up to 40 kilometers long.
Ideal for enterprise-scale and cloud-driven infrastructures, the MetroX-3 XC ensures seamless integration and consistency of performance across all locations.
Whether it’s used for interconnecting remote data centers, facilitating edge computing, or enabling robust disaster recovery setups, it delivers secure, high-speed data transmission with minimal signal degradation.
The system also supports DWDM technology, offering dense and scalable optical networking for long-haul deployments.
Backed by NVIDIA’s end-to-end networking expertise, MetroX-3 XC provides a reliable and future-proof solution for organizations looking to scale their high-performance computing (HPC), artificial intelligence (AI), and storage environments across distance-sensitive infrastructures.
The MetroX-3 XC system delivers top-tier networking performance through advanced hardware specifications optimized for modern data transmission demands.
It supports 100GbE and 400GbE data rates, ensuring compatibility with the latest generation of high-throughput computing workloads.
The inclusion of 4x QSFP112 primary ports along with 2 additional QSFP112 expansion ports provides flexible deployment options and high port density.
These ports support cutting-edge optical transceivers designed for both standard and long-haul connections.
The use of QSFP112 (Quad Small Form-factor Pluggable, with four lanes of 112 Gb/s signaling) interfaces ensures compatibility with a wide range of networking hardware, including switches, servers, and optical line systems.
These interfaces are known for their compact design and high-speed signaling capabilities, allowing for dense and energy-efficient configurations.
As a result, MetroX-3 XC is well-suited for data centers and enterprises requiring high-capacity, reliable interconnects over metropolitan and regional distances.
By combining bandwidth scalability with port flexibility, it supports a wide variety of applications from HPC clusters and AI training networks to hybrid cloud storage solutions.
The NVIDIA MetroX-3 XC system significantly extends the reach of InfiniBand networks, making it possible to maintain seamless high-performance connectivity across large geographic distances.
By supporting transmission over fiber links up to 40 kilometers, the system removes traditional distance barriers in networking infrastructure.
This feature is essential for organizations that operate distributed data centers, need to connect campus environments, or deploy critical infrastructure in remote locations.
Unlike conventional solutions that suffer from high latency and signal degradation over long distances, MetroX-3 XC maintains the native performance advantages of InfiniBand, including low latency, high throughput, and consistent packet delivery.
This makes it ideal for latency-sensitive applications such as real-time analytics, AI training, financial transactions, and scientific simulations.
It ensures that InfiniBand capabilities are preserved even across wide area networks (WANs), enabling high-speed interconnectivity without sacrificing performance or reliability.
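To put "low latency over distance" in perspective, here is a back-of-the-envelope sketch (an illustration, not a MetroX-3 XC measurement): light in standard single-mode fiber propagates at roughly c divided by the fiber's group index (about 1.468 at 1550 nm), so the physics of the link itself contributes about 5 microseconds per kilometer, or roughly 200 microseconds one way at the full 40 km reach.

```python
# Back-of-the-envelope fiber propagation delay.
# Assumes a typical single-mode fiber group index of ~1.468 at 1550 nm;
# this is an illustration of link physics, not a device specification.
C_VACUUM_M_PER_S = 299_792_458
FIBER_GROUP_INDEX = 1.468

def one_way_delay_us(distance_km: float) -> float:
    """One-way propagation delay in microseconds over distance_km of fiber."""
    distance_m = distance_km * 1_000
    return distance_m * FIBER_GROUP_INDEX / C_VACUUM_M_PER_S * 1e6

if __name__ == "__main__":
    for km in (1, 10, 40):
        print(f"{km:>3} km: {one_way_delay_us(km):6.1f} µs one way")
```

At 40 km the propagation floor alone is just under 200 µs each way, which is why preserving InfiniBand's low per-hop processing latency matters so much for long-haul links: the fiber budget is already fixed by distance.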
One of the most powerful features of the MetroX-3 XC is its support for DWDM (Dense Wavelength Division Multiplexing) technology, which allows multiple data streams to be transmitted simultaneously over a single fiber link by assigning each stream a unique wavelength.
This significantly enhances bandwidth efficiency and allows for more scalable network designs across long distances.
By leveraging DWDM, MetroX-3 XC minimizes the need for additional cabling and infrastructure while maximizing data throughput.
It ensures secure and high-integrity communication between geographically dispersed sites, with minimal signal degradation even across tens of kilometers.
This makes it particularly valuable for use cases like multi-region data center connectivity, metro-scale data replication, and synchronous mirroring of mission-critical workloads.
The system’s DWDM compatibility not only enhances performance but also simplifies network operations, lowers total cost of ownership (TCO), and improves resilience across extended infrastructures.
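The cabling savings from DWDM follow directly from the multiplexing arithmetic. As a minimal sketch (the channel count and per-channel rate below are hypothetical examples, not MetroX-3 XC specifications):

```python
# Illustrative DWDM capacity math. The channel plan here (eight 400G
# streams) is a hypothetical example, not a MetroX-3 XC specification.
def aggregate_capacity_gbps(num_wavelengths: int, rate_per_channel_gbps: int) -> int:
    """Total capacity of one fiber carrying num_wavelengths DWDM channels."""
    return num_wavelengths * rate_per_channel_gbps

def fibers_needed_without_dwdm(num_streams: int) -> int:
    # Without wavelength multiplexing, each stream needs its own fiber.
    return num_streams

if __name__ == "__main__":
    streams, rate = 8, 400
    total = aggregate_capacity_gbps(streams, rate)
    print(f"{streams} x {rate}G over DWDM: {total} Gb/s on a single fiber")
    print(f"Without DWDM: {fibers_needed_without_dwdm(streams)} separate fibers")
```

Each stream rides its own wavelength, so the single-fiber capacity scales linearly with the channel count, which is exactly why DWDM reduces the number of long-haul fibers an operator must lease or trench.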
MetroX-3 XC fully supports In-Network Computing by preserving native RDMA (Remote Direct Memory Access) performance across long distances.
This ensures that data can be transferred directly between systems’ memory without involving the CPU, resulting in ultra-low latency and higher efficiency.
With RDMA intact, MetroX-3 XC enables applications to benefit from accelerated compute performance even in multi-site environments.
This is particularly critical for workloads in high-performance computing (HPC), deep learning, and real-time processing, where latency and bandwidth are key performance indicators.
MetroX-3 XC ensures that data centers connected over long distances function as a single unified high-speed network.
The result is accelerated communication, reduced overhead, and improved application responsiveness across distributed systems, empowering developers and IT administrators to scale high-performance workloads without compromising on compute speed or accuracy.
MetroX-3 XC is a vital component in building highly resilient IT infrastructures.
Its ability to connect primary and secondary data center sites with reliable, high-speed links makes it ideal for disaster recovery (DR) and business continuity (BC) planning.
By maintaining synchronous and asynchronous data replication across distant sites, it ensures that mission-critical data remains safe and recoverable in the event of system failures, power outages, or natural disasters.
Additionally, the system supports redundant paths and fault-tolerant configurations to minimize downtime and ensure continuous service availability.
Whether it’s ensuring 24/7 access to enterprise applications or supporting rapid failover in cloud deployments, MetroX-3 XC is designed to keep business operations running without interruption.
Its ability to deliver low-latency, high-throughput connectivity between backup and production environments makes it indispensable in any organization’s high-availability strategy.
The NVIDIA MetroX-3 XC system is purpose-built for a wide range of enterprise and cloud networking scenarios that require high-speed, long-distance connectivity.
With its unmatched performance, reliability, and distance capability, MetroX-3 XC addresses the growing need for scalable, high-performance interconnects in today’s hybrid and distributed computing environments.
Data Rates: 100GbE / 400GbE
Interface: QSFP112
Ports: 4x QSFP112 primary + 2x QSFP112 expansion
Distance Support: Up to 40 km
Protocol Support: Native InfiniBand, RDMA
Optical Tech: DWDM-enabled for multi-wavelength transmission
Use Cases: Long-distance data center interconnect (DCI), edge connectivity, disaster recovery, in-network computing
Latency: Ultra-low with full In-Network Computing support
Compatibility: NVIDIA Quantum InfiniBand platform
High-Speed Connectivity
Supports blazing-fast 100GbE and 400GbE data rates for next-gen performance.
40km Long-Haul Reach
Extend your InfiniBand network up to 40 kilometers — ideal for connecting remote data centers and edge locations.
Ultra-Low Latency RDMA
Native RDMA and In-Network Computing over distance ensure seamless data transfer with minimal delay.
DWDM-Enabled
Compatible with Dense Wavelength Division Multiplexing for optimized multi-channel optical transmission.
Built for In-Network Computing
Supports distributed AI, HPC, and data-intensive workflows — without compromising speed or scalability.
Business Continuity Ready
Provides fault-tolerant, high-availability links — perfect for disaster recovery and backup infrastructure.
Seamless Integration
Fully compatible with NVIDIA Quantum InfiniBand platforms, making it a flexible choice for growing enterprise needs.
Edge & Metro Deployments
Designed for hybrid infrastructures, from centralized data centers to far-edge computing environments.
Discover the countless ways that Q9 technology can solve your network challenges and transform your business – with a free 30-minute discovery call.
At Q9, we have the skills, the experience, and the passion to help you achieve your business goals and transform your organization.
© Q9 technologies. All rights reserved.