- Dual 3rd Gen Intel Xeon Scalable CPUs – Maximize compute throughput and efficiency
- Up to 8TB DDR4 & Intel Optane Memory Support – Perfect for large datasets and memory-intensive applications
- 10x PCIe Gen 4.0 Slots – Future-proof expandability for networking and accelerators
- 6x Hot-Swap NVMe Bays + 2x M.2 Slots – High-speed local storage and OS caching
- Modular Networking (AIOM) – Flexible 10/25/100GbE or InfiniBand connectivity
- Remote Management via IPMI/BMC (AST2600) – Easy and secure out-of-band administration
- Redundant 3000W Titanium PSUs – Maximum uptime with enterprise-grade power efficiency
- Preloaded with Exxact Machine Learning Image (EMLI) – Ready-to-run AI stack with TensorFlow, PyTorch & more
- 4U Rackmount Chassis with Hot-Swap Design – Ideal for scalable data center deployments
The Exxact TensorEX TS4‑168747704‑DPN is a high-performance 4U rackmount server designed for advanced AI, deep learning, and HPC workloads. As part of Exxact’s TensorEX series, this system integrates the NVIDIA HGX A100 platform, enabling enterprises and research institutions to handle the most demanding data science and computational workloads.
The 4U form factor houses a dense configuration of up to 8 A100 Tensor Core GPUs, supported by robust compute, memory, and I/O subsystems. Whether you’re deploying it for model training, inference, or massive parallel simulations, the TS4 system delivers unparalleled performance and scalability.
Its combination of powerful CPUs, massive memory capacity, and cutting-edge GPU interconnects allows for seamless acceleration of workloads that were previously impractical on conventional servers. Built for data centers, this server is engineered for reliability, remote management, and long-term deployment in mission-critical environments.
This system supports two 3rd-Generation Intel Xeon Scalable processors, offering significant improvements in AI and HPC performance over previous generations. Built around socket LGA-4189 (Socket P+) and Intel's C621A chipset, the server provides a solid platform for multi-threaded, high-throughput operation.
The CPUs benefit from large core counts, advanced AVX-512 instructions, and DL Boost for enhanced inferencing. These processors are ideal for intensive tasks like large model training, complex simulations, or real-time data analytics.
The C621A chipset further enables high memory bandwidth and I/O connectivity, optimizing the entire compute pipeline. Moreover, this CPU platform is compatible with Intel Optane Persistent Memory, expanding possibilities for large datasets and in-memory workloads. Whether used in AI model development or scientific research, the CPU architecture in this server ensures maximum responsiveness, efficiency, and multi-user performance.
The TS4 system supports up to 32 DDR4 memory slots, allowing for a total memory capacity of up to 8TB. With support for DDR4‑3200 modules and Intel Optane Persistent Memory (DCPMM), this server is capable of hosting extremely large datasets directly in memory, reducing latency caused by traditional storage.
This memory configuration is particularly valuable for deep learning training that requires large batch sizes, natural language processing models like GPT or BERT, and in-memory databases. Each memory channel is designed to deliver high bandwidth, essential for feeding data to the CPUs and GPUs without delay.
Additionally, memory reliability is ensured through ECC (Error Correction Code), safeguarding against data corruption. This robust memory architecture allows researchers and engineers to scale their applications with confidence, while maintaining high performance under continuous workload stress.
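As a rough illustration of what this memory ceiling means in practice, the sketch below estimates total DRAM from the 32 slots described above and checks whether a dataset fits in memory with headroom left for the OS and frameworks. The slot count comes from this spec sheet; the module sizes and 20% headroom figure are illustrative assumptions, not vendor guidance.

```python
# Rough capacity check for the TS4 memory subsystem (illustrative only).
DIMM_SLOTS = 32
MODULE_SIZES_GB = [32, 64, 128, 256]  # common DDR4 RDIMM/LRDIMM capacities

def max_capacity_gb(module_gb: int, slots: int = DIMM_SLOTS) -> int:
    """Total DRAM when every slot holds an identical module."""
    if module_gb not in MODULE_SIZES_GB:
        raise ValueError(f"unexpected module size: {module_gb} GB")
    return module_gb * slots

def fits_in_memory(dataset_gb: float, module_gb: int, headroom: float = 0.8) -> bool:
    """True if the dataset fits while leaving ~20% headroom for OS/frameworks."""
    return dataset_gb <= max_capacity_gb(module_gb) * headroom

# 32 x 256 GB modules reach the advertised 8 TB (8192 GB) ceiling.
assert max_capacity_gb(256) == 8192
```

With 256 GB modules, a 6 TB in-memory dataset still leaves comfortable headroom, which is the scenario the in-memory database and large-batch training use cases above depend on.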
At the heart of the Exxact TS4 system are eight NVIDIA A100 Tensor Core GPUs in SXM4 form factor. These GPUs offer unmatched processing power, with each unit delivering 80 GB of HBM2e memory and a blazing 600 GB/s NVLink interconnect speed.
The A100 architecture supports all major numerical precisions—from FP64 to TF32 and mixed-precision Tensor operations—making it ideal for both scientific computing and AI model training. The NVLink interconnect allows all GPUs to operate as a unified high-speed GPU fabric, significantly reducing bottlenecks and increasing training efficiency in multi-GPU setups.
With 432 Tensor Cores per GPU, users can accelerate matrix computations, convolutional layers, and transformer-based architectures like never before. Whether it’s training large-scale AI models or running complex simulations, this GPU setup transforms the server into a computing powerhouse.
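To make the aggregate GPU memory concrete, here is a back-of-envelope sizing sketch using the figures above (8 GPUs, 80 GB HBM2e each). It only counts model weights at half precision; optimizer state, gradients, and activations add several times more in real training, so treat this as a first-pass estimate rather than a capacity guarantee.

```python
# Back-of-envelope check: do a model's weights fit in the combined HBM2e?
NUM_GPUS = 8
HBM_PER_GPU_GB = 80  # A100 SXM4 80 GB, per the spec sheet

def total_hbm_gb() -> int:
    """Aggregate GPU memory across the NVLink-connected GPUs."""
    return NUM_GPUS * HBM_PER_GPU_GB

def weights_fit(num_params_billion: float, bytes_per_param: int = 2) -> bool:
    """FP16/BF16 weights only -- optimizer state and activations are extra."""
    needed_gb = num_params_billion * bytes_per_param  # 1e9 params * bytes / 1e9
    return needed_gb <= total_hbm_gb()

assert total_hbm_gb() == 640  # 8 x 80 GB
```

By this measure, a 175-billion-parameter model's half-precision weights (~350 GB) fit across the eight GPUs, while a 400-billion-parameter model's would not; this is why the unified NVLink fabric matters for sharding large models.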
The TS4 server includes six 2.5″ hot-swappable NVMe drive bays, offering high-speed local storage for massive datasets, training checkpoints, and model repositories. These bays support enterprise-grade NVMe SSDs, enabling extremely fast read/write operations crucial for data-intensive workloads.
Additionally, two M.2 PCIe slots are included for OS installations or high-speed cache drives. On the expansion side, the system features 10 PCIe Gen 4.0 x16 slots, eight of which are low-profile riser-mounted and two directly connected to the CPUs. This setup ensures optimal I/O throughput for networking cards, accelerators, or additional GPUs.
The flexibility in expansion slots enables future-proofing the server for upcoming technologies like NVMe RAID controllers or high-speed fabric interconnects. This well-balanced storage and I/O architecture is ideal for organizations that need both performance and modularity.
Networking in the TS4 server is handled via a modular AIOM (Advanced I/O Module) card, allowing users to customize Ethernet or InfiniBand connectivity based on deployment needs. This modularity makes the system versatile, whether used in local datacenters or integrated into large-scale HPC clusters.
Additionally, the server features a dedicated RJ-45 port for IPMI/BMC management, providing secure out-of-band control for remote system monitoring, BIOS-level access, and power cycling.
A built-in ASPEED AST2600 baseboard management controller also offers VGA video output, essential for initial setup and debugging. These enterprise-class management tools allow system administrators to monitor thermal data, manage users, and update firmware all without requiring direct access to the server. This robust network and management suite ensures high uptime, security, and efficiency for long-term deployments.
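In practice, out-of-band tasks like the power cycling and sensor monitoring described above are typically driven with the standard `ipmitool` CLI against the BMC's dedicated RJ-45 port. The sketch below assembles such commands; the host address and credentials are placeholders, and the flags shown (`-I lanplus`, `-H`, `-U`, `-P`) are ordinary `ipmitool` options rather than anything vendor-specific.

```python
# Sketch: building ipmitool commands for remote (lanplus) BMC sessions.
def ipmi_cmd(host: str, user: str, password: str, *action: str) -> list:
    """Return an ipmitool argument list for an out-of-band action."""
    return ["ipmitool", "-I", "lanplus",
            "-H", host, "-U", user, "-P", password, *action]

# Placeholder host/credentials -- substitute your BMC's values.
power_cycle = ipmi_cmd("10.0.0.42", "admin", "secret",
                       "chassis", "power", "cycle")
sensor_list = ipmi_cmd("10.0.0.42", "admin", "secret", "sdr", "list")

# On a management host, execute with e.g.
# subprocess.run(power_cycle, check=True)
```

Building the argument list separately from execution keeps credentials and actions auditable before anything touches the BMC.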
To power such a dense, high-performance configuration, the TS4 comes with four redundant power supplies that collectively deliver up to 3000 watts. These PSUs are 80 Plus Titanium certified, the highest energy-efficiency rating available, ensuring minimal power waste and heat generation even under full load.
Redundancy provides fault tolerance; if one or two power supplies fail, the server continues operating seamlessly. This is critical in environments where uptime is non-negotiable. Intelligent fan control and efficient voltage regulation modules further optimize power consumption, reducing operational costs over time.
Whether you’re deploying the server in a commercial datacenter or academic HPC cluster, this power system ensures reliability, efficiency, and long-term sustainability.
Physically, the TS4 server adopts a 4U rackmount chassis, measuring 17.6 inches in width. Its depth is optimized to support airflow, component accessibility, and thermal efficiency. The black exterior and industrial-grade design signify a rugged build intended for 24/7 operations in climate-controlled server rooms.
The internal layout is optimized for serviceability, with hot-swappable bays, tool-less GPU brackets, and airflow-optimized ducts. Every component placement is engineered to balance thermal management with performance.
Rack rails and mounting kits are compatible with standard server racks, making integration into existing infrastructure straightforward. The system’s structural integrity supports the weight of eight high-power GPUs, robust CPUs, and multiple drives without vibration or instability, ensuring smooth operation even during high-load computational bursts.
The TS4 ships with Exxact’s Machine Learning Image (EMLI), a pre-configured software stack based on Ubuntu 18.04 or 20.04. This turnkey OS includes optimized drivers, NVIDIA CUDA toolkit, cuDNN, TensorFlow, PyTorch, and other ML libraries, ensuring out-of-the-box productivity.
EMLI enables developers and researchers to skip the complex software installation process and get started with training or inference immediately. The system is also Docker-compatible, allowing containerized deployment of applications and easy scalability.
Exxact provides update tools and support services to keep the software ecosystem current and secure. Whether you’re a researcher prototyping AI models or an enterprise deploying in production, the pre-installed software ensures performance tuning and compatibility with modern AI frameworks.
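Since the system is Docker-compatible, a containerized training job can be launched with a command along these lines. This is a hedged sketch: `--gpus all` assumes the NVIDIA Container Toolkit is present (which EMLI's Docker support implies but you should verify), and the NGC image tag is just an example.

```python
# Sketch: assembling a `docker run` command for a GPU training container.
def docker_run_cmd(image: str, workdir: str = "/workspace",
                   gpus: str = "all") -> list:
    """Return a docker argument list for an interactive GPU job."""
    return [
        "docker", "run", "--rm",
        "--gpus", gpus,          # GPU passthrough via NVIDIA Container Toolkit
        "--ipc=host",            # shared memory for PyTorch DataLoader workers
        "-v", f"{workdir}:{workdir}",
        "-w", workdir,
        image,
    ]

# Example NGC PyTorch image tag -- substitute whichever image you use.
cmd = docker_run_cmd("nvcr.io/nvidia/pytorch:23.10-py3")
```

From there, the same command list scales out naturally to orchestration tools, which is what makes containerized deployment attractive on a shared 8-GPU system.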
Performance-wise, the TS4 system stands at the pinnacle of modern AI infrastructure. With Intel's 3rd-gen Xeon CPUs and NVIDIA's A100 GPUs, users can see performance gains of 1.5× or more on typical AI training tasks compared with the previous generation. The 8-GPU NVLink topology allows users to train large models like GPT, BERT, and Stable Diffusion significantly faster than with previous-generation servers.
Scientific simulations that once took 10 hours on the V100 platform can now be completed in under 4 hours using A100’s double-precision compute capabilities. Additionally, the Tensor Cores accelerate matrix operations crucial for deep learning, boosting training throughput while maintaining high accuracy. With superior memory bandwidth, I/O performance, and system-level optimization, the TS4 ensures time-to-solution is minimized—making it the ultimate tool for next-gen computing challenges.
| Component | Specification |
| --- | --- |
| Form Factor | 4U rack-mountable HGX A100 server |
| CPUs | 2× 3rd‑Gen Intel Xeon Scalable (LGA‑4189, C621A chipset) |
| Memory Slots | 32× DDR4‑3200 (up to 8 TB), supports Optane DCPMM |
| GPUs | 8× NVIDIA A100 SXM4 (80 GB HBM2e, 600 GB/s NVLink) |
| Storage Bays | 6× hot‑swap 2.5″ bays, 2× M.2 slots |
| Expansion Slots | 10× PCIe 4.0 x16 (8 via riser + 2 CPU) |
| Networking | AIOM NIC; 1× IPMI port; VGA via AST2600 |
| Power Supply | 4× redundant PSUs, total 3000 W, 80 Plus Titanium |
| OS & Software | Ubuntu 18.04/20.04, EMLI, TensorFlow, AI/deep learning tools |
| Price | From US $136,628.80 |
| Lead Time | ~14 days |