Exxact TensorEX TS4 168747704 DPN

  • Powered by 8x NVIDIA A100 SXM4 GPUs – Ideal for AI training, inference, and scientific simulations
  • Dual 3rd Gen Intel Xeon Scalable CPUs – Maximize compute throughput and efficiency
  • Up to 8TB DDR4 & Intel Optane Memory Support – Perfect for large datasets and memory-intensive applications
  • 10x PCIe Gen 4.0 Slots – Future-proof expandability for networking and accelerators
  • 6x Hot-Swap NVMe Bays + 2x M.2 Slots – High-speed local storage and OS caching
  • Modular Networking (AIOM) – Flexible 10/25/100GbE or InfiniBand connectivity
  • Remote Management via IPMI/BMC (AST2600) – Easy and secure out-of-band administration
  • Redundant 3000W Titanium PSUs – Maximum uptime with enterprise-grade power efficiency
  • Preloaded with Exxact Machine Learning Image (EMLI) – Ready-to-run AI stack with TensorFlow, PyTorch & more
  • 4U Rackmount Chassis with Hot-Swap Design – Ideal for scalable data center deployments

Product Overview

  • Model: Exxact TensorEX TS4‑168747704‑DPN
  • Form Factor: 4U rack-mountable HGX A100 server

The Exxact TensorEX TS4‑168747704‑DPN is a high-performance 4U rackmount server designed for advanced AI, deep learning, and HPC workloads. As part of Exxact’s TensorEX series, this system integrates the NVIDIA HGX A100 platform, enabling enterprises and research institutions to handle the most demanding data science and computational workloads.

The 4U form factor houses a dense configuration of up to 8 A100 Tensor Core GPUs, supported by robust compute, memory, and I/O subsystems. Whether you’re deploying it for model training, inference, or massive parallel simulations, the TS4 system delivers unparalleled performance and scalability.

Its combination of powerful CPUs, massive memory capacity, and cutting-edge GPU interconnects allows for seamless acceleration of workloads that were previously impractical on conventional servers. Built for data centers, this server is engineered for reliability, remote management, and long-term deployment in mission-critical environments.

CPU & Chipset

  • Supports 2× 3rd‑Generation Intel Xeon Scalable processors (e.g., Silver 43XX, Gold 53XX/63XX, Platinum 83XX)
  • CPU socket: LGA‑4189 (Socket P+)
  • Chipset: Intel C621A

This system supports two 3rd-Generation Intel Xeon Scalable processors, offering significant improvements in AI and HPC performance over previous generations. With the LGA-4189 (Socket P+) socket and Intel's C621A chipset, the server is built on a solid platform for multi-threaded, high-throughput operation.

The CPUs benefit from large core counts, advanced AVX-512 instructions, and DL Boost for enhanced inferencing. These processors are ideal for intensive tasks like large model training, complex simulations, or real-time data analytics.

The C621A chipset further enables high memory bandwidth and I/O connectivity, optimizing the entire compute pipeline. Moreover, this CPU platform is compatible with Intel Optane Persistent Memory, expanding possibilities for large datasets and in-memory workloads. Whether used in AI model development or scientific research, the CPU architecture in this server ensures maximum responsiveness, efficiency, and multi-user performance.

Memory

  • 32 DDR4 memory slots, supporting DDR4‑3200 / PC4‑25600
  • Supports DDR4 NVDIMM (Intel Optane DCPMM)
  • Max memory: up to 8 TB

The TS4 system supports up to 32 DDR4 memory slots, allowing for a total memory capacity of up to 8TB. With support for DDR4‑3200 modules and Intel Optane Persistent Memory (DCPMM), this server is capable of hosting extremely large datasets directly in memory, reducing latency caused by traditional storage.

This memory configuration is particularly valuable for deep learning training that requires large batch sizes, natural language processing models like GPT or BERT, and in-memory databases. Each memory channel is designed to deliver high bandwidth, essential for feeding data to the CPUs and GPUs without delay.

Additionally, memory reliability is ensured through ECC (Error Correction Code), safeguarding against data corruption. This robust memory architecture allows researchers and engineers to scale their applications with confidence, while maintaining high performance under continuous workload stress.
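
As a rough illustration of why channel bandwidth matters here, the numbers can be sanity-checked with simple arithmetic. The channel count below is an assumption (8 memory channels per socket, standard for 3rd-Gen Intel Xeon Scalable) rather than a figure from this spec sheet:

```python
# Back-of-the-envelope DDR4-3200 bandwidth estimate for this platform.
# Assumption (not from the spec sheet): 8 memory channels per socket,
# standard for 3rd-Gen Intel Xeon Scalable CPUs.

MT_PER_S = 3200          # DDR4-3200 transfer rate (megatransfers/s)
BUS_BYTES = 8            # 64-bit channel = 8 bytes per transfer
CHANNELS_PER_SOCKET = 8
SOCKETS = 2

per_channel_gbs = MT_PER_S * BUS_BYTES / 1000   # 25.6 GB/s (matches PC4-25600)
total_gbs = per_channel_gbs * CHANNELS_PER_SOCKET * SOCKETS

print(f"Per channel: {per_channel_gbs:.1f} GB/s")
print(f"Aggregate:   {total_gbs:.1f} GB/s")
```

The "PC4-25600" module designation in the spec corresponds directly to the 25.6 GB/s per-channel figure this arithmetic produces.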

GPU Configuration

  • Equipped with 8× NVIDIA A100 Tensor Core GPUs (SXM4)
  • Each GPU features 80 GB HBM2e, 600 GB/s NVLink interconnect
  • GPU spec per card:
    • 6912 CUDA cores, 432 Tensor cores
    • FP32: 19.5 TFLOPS
    • FP64: 9.7 TFLOPS
    • 400 W power

At the heart of the Exxact TS4 system are eight NVIDIA A100 Tensor Core GPUs in SXM4 form factor. These GPUs offer unmatched processing power, with each unit delivering 80 GB of HBM2e memory and a blazing 600 GB/s NVLink interconnect speed.

The A100 architecture supports all major numerical precisions—from FP64 to TF32 and mixed-precision Tensor operations—making it ideal for both scientific computing and AI model training. The NVLink interconnect allows all GPUs to operate as a unified high-speed GPU fabric, significantly reducing bottlenecks and increasing training efficiency in multi-GPU setups.

With 432 Tensor Cores per GPU, users can accelerate matrix computations, convolutional layers, and transformer-based architectures like never before. Whether it’s training large-scale AI models or running complex simulations, this GPU setup transforms the server into a computing powerhouse.
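
Scaling the per-card specifications above across all eight GPUs gives the system-level ceiling. A minimal sketch using only the numbers quoted on this page:

```python
# Aggregate throughput implied by the per-GPU figures on this spec sheet.
GPUS = 8
FP64_TFLOPS = 9.7      # per A100, from the spec above
FP32_TFLOPS = 19.5
HBM2E_GB = 80

print(f"FP64 total:  {GPUS * FP64_TFLOPS:.1f} TFLOPS")
print(f"FP32 total:  {GPUS * FP32_TFLOPS:.1f} TFLOPS")
print(f"GPU memory:  {GPUS * HBM2E_GB} GB HBM2e")
```

That is 77.6 TFLOPS of double-precision compute and 640 GB of pooled GPU memory before Tensor Core mixed-precision modes are even considered.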

Storage & Expansion

  • Drive Bays:
    • 6× hot-swap 2.5″ bays (supports NVMe drives)
  • PCI‑Express Slots:
    • 8× PCIe 4.0 x16 (low-profile, via PCIe riser)
    • 2× PCIe 4.0 x16 (low-profile, via CPU)
  • M.2 Slots:
    • 2× PCIe M.2

The TS4 server includes six 2.5″ hot-swappable NVMe drive bays, offering high-speed local storage for massive datasets, training checkpoints, and model repositories. These bays support enterprise-grade NVMe SSDs, enabling extremely fast read/write operations crucial for data-intensive workloads.

Additionally, two M.2 PCIe slots are included for OS installations or high-speed cache drives. On the expansion side, the system features 10 PCIe Gen 4.0 x16 slots, eight of which are low-profile riser-mounted and two directly connected to the CPUs. This setup ensures optimal I/O throughput for networking cards, accelerators, or additional GPUs.

The flexibility in expansion slots enables future-proofing the server for upcoming technologies like NVMe RAID controllers or high-speed fabric interconnects. This well-balanced storage and I/O architecture is ideal for organizations that need both performance and modularity.
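
For context, the theoretical one-direction bandwidth of a PCIe 4.0 link can be estimated from the standard's 16 GT/s per-lane rate and 128b/130b encoding; real drive throughput will come in lower due to protocol and controller overhead:

```python
# Theoretical PCIe 4.0 link bandwidth (one direction), before protocol overhead.
GT_PER_S = 16            # PCIe 4.0 raw signaling rate per lane
ENCODING = 128 / 130     # 128b/130b line encoding

gbs_per_lane = GT_PER_S * ENCODING / 8   # ~1.97 GB/s per lane

x4_drive = 4 * gbs_per_lane    # typical NVMe SSD link width
x16_slot = 16 * gbs_per_lane   # each expansion slot in this system

print(f"x4 NVMe drive: {x4_drive:.2f} GB/s")
print(f"x16 slot:      {x16_slot:.2f} GB/s")
```

In other words, each of the six NVMe bays can sustain on the order of 7–8 GB/s at the link level, and each x16 slot tops out around 31 GB/s per direction.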

Networking & Management

  • Ethernet: via AIOM network card (unspecified ports)
  • 1× dedicated RJ-45 IPMI/out-of-band management port
  • Graphics: ASPEED AST2600 BMC with 1× VGA port

Networking in the TS4 server is handled via a modular AIOM (Advanced I/O Module) card, allowing users to customize Ethernet or InfiniBand connectivity based on deployment needs. This modularity makes the system versatile, whether used in local datacenters or integrated into large-scale HPC clusters.

Additionally, the server features a dedicated RJ-45 port for IPMI/BMC management, providing secure out-of-band control for remote system monitoring, BIOS-level access, and power cycling.

A built-in ASPEED AST2600 baseboard management controller also offers VGA video output, essential for initial setup and debugging. These enterprise-class management tools allow system administrators to monitor thermal data, manage users, and update firmware all without requiring direct access to the server. This robust network and management suite ensures high uptime, security, and efficiency for long-term deployments.
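
Out-of-band tasks like those described above are commonly scripted with the standard ipmitool utility. The host address and credentials below are placeholders for illustration, and the sensors a given BMC exposes will vary:

```python
import subprocess  # only needed if you uncomment the run() call below

# Placeholder BMC address and credentials -- substitute your own.
BMC = {"host": "10.0.0.42", "user": "admin", "password": "secret"}

def ipmi_cmd(*args):
    """Compose an ipmitool invocation against the server's BMC over LAN."""
    return ["ipmitool", "-I", "lanplus",
            "-H", BMC["host"], "-U", BMC["user"], "-P", BMC["password"],
            *args]

# Common out-of-band operations:
power_status = ipmi_cmd("chassis", "power", "status")
sensor_list = ipmi_cmd("sdr", "list")

# To actually execute one (requires ipmitool installed and BMC reachability):
# subprocess.run(power_status, check=True)
```

Because the BMC runs independently of the host OS, these commands work even when the server is powered off or unresponsive, which is the point of out-of-band management.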

Power & Efficiency

  • Power Supply: 4× redundant PSUs
  • Max cumulative PSU wattage: 3000 W
  • PSU certification: 80 Plus Titanium

To power such a dense and high-performance configuration, the TS4 comes with four redundant power supplies that together deliver up to 3000 watts. These PSUs are 80 Plus Titanium certified, the highest energy-efficiency rating available, ensuring minimal power waste and heat generation even under full load.

Redundancy provides fault tolerance; if one or two power supplies fail, the server continues operating seamlessly. This is critical in environments where uptime is non-negotiable. Intelligent fan control and efficient voltage regulation modules further optimize power consumption, reducing operational costs over time.

Whether you’re deploying the server in a commercial datacenter or academic HPC cluster, this power system ensures reliability, efficiency, and long-term sustainability.

Physical Specs & Build

  • Rack Height: 4U (≈ 7.0″ tall)
  • Width: 17.6″, Depth: not specified
  • Color: Black

Physically, the TS4 server adopts a 4U rackmount chassis, measuring 17.6 inches in width. Its depth is optimized to support airflow, component accessibility, and thermal efficiency. The black exterior and industrial-grade design signify a rugged build intended for 24/7 operations in climate-controlled server rooms.

The internal layout is optimized for serviceability, with hot-swappable bays, tool-less GPU brackets, and airflow-optimized ducts. Every component placement is engineered to balance thermal management with performance.

Rack rails and mounting kits are compatible with standard server racks, making integration into existing infrastructure straightforward. The system’s structural integrity supports the weight of eight high-power GPUs, robust CPUs, and multiple drives without vibration or instability, ensuring smooth operation even during high-load computational bursts.

Software & Compatibility

  • Ships as a turnkey GPU training system:
    • Includes Exxact Machine Learning Image (EMLI)
    • Pre-installed Ubuntu 18.04/20.04
    • TensorFlow, with automatic update tools
  • Designed for deep learning, AI training, HPC workloads, and turnkey deployment

The TS4 ships with Exxact’s Machine Learning Image (EMLI), a pre-configured software stack based on Ubuntu 18.04 or 20.04. This turnkey OS includes optimized drivers, NVIDIA CUDA toolkit, cuDNN, TensorFlow, PyTorch, and other ML libraries, ensuring out-of-the-box productivity.

EMLI enables developers and researchers to skip the complex software installation process and get started with training or inference immediately. The system is also Docker-compatible, allowing containerized deployment of applications and easy scalability.

Exxact provides update tools and support services to keep the software ecosystem current and secure. Whether you’re a researcher prototyping AI models or an enterprise deploying in production, the pre-installed software ensures performance tuning and compatibility with modern AI frameworks.
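
A quick way to confirm that an EMLI-style stack exposes the expected frameworks is to probe for them from Python. The package list below is an assumption about what a typical image includes, not a guaranteed manifest:

```python
import importlib.util

# Assumed framework set for an EMLI-style image; adjust to your build.
FRAMEWORKS = ["tensorflow", "torch", "numpy"]

def check_stack(packages):
    """Return {package: bool} indicating which packages are importable."""
    return {pkg: importlib.util.find_spec(pkg) is not None for pkg in packages}

status = check_stack(FRAMEWORKS)
for pkg, present in status.items():
    print(f"{pkg:12s} {'found' if present else 'MISSING'}")

# On a GPU-ready install you would additionally verify device visibility,
# e.g. torch.cuda.device_count() reporting 8 (requires PyTorch with CUDA).
```

This kind of check is useful as a post-deployment smoke test before handing the system over to users.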

Performance Highlights

  • Intel 3rd‑Gen Xeon CPUs deliver:
    • ~1.5× performance boost over previous generations across ML/DL workloads
    • Up to 62% improvement in network/5G workloads
    • Up to 74% better BERT inferencing performance
  • NVIDIA A100 GPUs dramatically speed up double-precision HPC, reducing V100 runtime from ~10 h to ~4 h

Performance-wise, the TS4 system stands at the pinnacle of modern AI infrastructure. With Intel's 3rd-gen Xeon CPUs and NVIDIA's A100 GPUs, users see roughly 1.5x performance gains across typical AI training tasks. The 8-GPU NVLink topology allows users to train large models like GPT, BERT, and Stable Diffusion significantly faster than with previous generation servers.

Scientific simulations that once took 10 hours on the V100 platform can now be completed in under 4 hours using A100’s double-precision compute capabilities. Additionally, the Tensor Cores accelerate matrix operations crucial for deep learning, boosting training throughput while maintaining high accuracy. With superior memory bandwidth, I/O performance, and system-level optimization, the TS4 ensures time-to-solution is minimized—making it the ultimate tool for next-gen computing challenges.
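
The quoted runtimes translate into a simple speedup factor; a minimal sketch using only the figures above:

```python
# Speedup arithmetic behind the headline numbers on this page.
v100_hours = 10.0   # quoted V100 runtime for the HPC workload
a100_hours = 4.0    # quoted A100 runtime for the same workload
speedup = v100_hours / a100_hours

# CPU-side claims from the spec sheet, expressed as multipliers:
bert_gain = 1.74      # "up to 74% better BERT inferencing"
network_gain = 1.62   # "up to 62% improvement in network/5G workloads"

print(f"A100 vs V100 double-precision: {speedup:.1f}x faster")
```

A 10-hour-to-4-hour reduction is a 2.5x speedup, consistent with the A100's roughly doubled FP64 throughput plus memory-bandwidth and interconnect gains over the V100.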

Summary Table

Component | Specification
Form Factor | 4U rack-mountable HGX A100 server
CPUs | 2× 3rd‑Gen Intel Xeon Scalable (LGA‑4189, C621A chipset)
Memory Slots | 32× DDR4‑3200 (up to 8 TB), supports Optane DCPMM
GPUs | 8× NVIDIA A100 SXM4 (80 GB HBM2e, 600 GB/s NVLink)
Storage Bays | 6× hot‑swap 2.5″ bays, 2× M.2 slots
Expansion Slots | 10× PCIe 4.0 x16 (8 via riser + 2 via CPU)
Networking | AIOM NIC; 1× IPMI port; VGA via AST2600
Power Supply | 4× redundant PSUs, total 3000 W, 80 Plus Titanium
OS & Software | Ubuntu 18.04/20.04, EMLI, TensorFlow, AI/deep learning tools
Price | From US $136,628.80
Lead Time | ~14 days

 
