The system consists of 300 compute nodes, each equipped with two Intel Xeon E5-2680 v4 (Broadwell) processors with 14 cores per processor, a clock frequency of 2.4 GHz, and 35 MB of cache per chip. In total, the system delivers a nominal peak performance of almost 543 TFlop/s.
- 236 compute nodes have 128 GB DDR4 RAM and 256 GB local SSD
- 60 GPU nodes additionally have 2 NVIDIA Tesla K80 cards, each with 2 Kepler GK210 chips and 2 × 12 GB of GDDR5 memory
- 4 SMP compute nodes have 1 TB DDR4 RAM and 256 GB local SSD
- 3 separate login servers additionally have 1 TB of local storage (RAID 1)
- 4 dedicated visualization nodes have 128 GB of memory and NVIDIA Quadro M4000 graphics cards with 8 GB of GDDR5 memory
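
The quoted peak-performance figure can be roughly reproduced from the node list above. The following is only a back-of-the-envelope sketch, assuming 16 double-precision floating-point operations per cycle and core on Broadwell (AVX2 with FMA) and a double-precision peak of about 1.87 TFlop/s per Tesla K80 at its base clock:

```latex
\begin{align*}
  % CPU part: 300 nodes x 2 CPUs x 14 cores x 2.4 GHz x 16 FLOPs/cycle
  R_{\mathrm{CPU}}  &= 300 \times 2 \times 14 \times 2.4\,\mathrm{GHz} \times 16\,\mathrm{FLOPs/cycle}
                     \approx 322.6\ \mathrm{TFlop/s} \\
  % GPU part: 60 nodes x 2 K80 cards x ~1.87 TFlop/s (double precision, base clock)
  R_{\mathrm{GPU}}  &= 60 \times 2 \times 1.87\,\mathrm{TFlop/s}
                     \approx 224.4\ \mathrm{TFlop/s} \\
  R_{\mathrm{peak}} &= R_{\mathrm{CPU}} + R_{\mathrm{GPU}} \approx 547\ \mathrm{TFlop/s}
\end{align*}
```

This lands slightly above the stated 543 TFlop/s; the official figure presumably assumes a slightly different per-GPU peak, so the breakdown should be read as an estimate only.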
The compute nodes are connected by a two-stage FDR InfiniBand fabric of spine and edge switches with a blocking factor of 1:8. The maximum size for non-blocking jobs is 896 cores.
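
The 896-core limit follows directly from the node size: it corresponds to 32 dual-socket nodes with 28 cores each, i.e. the set of nodes attached to a single edge switch. If one further assumes 36-port FDR edge switches with 32 node downlinks and 4 uplinks to the spine level (an assumption, not stated above), the same numbers also yield the 1:8 blocking factor:

```latex
\begin{align*}
  % 896 cores divided by 28 cores per node
  \frac{896\ \mathrm{cores}}{2 \times 14\ \mathrm{cores/node}} &= 32\ \mathrm{nodes\ per\ edge\ switch} \\
  % oversubscription under the assumed 32-down / 4-up port split
  \frac{32\ \mathrm{node\ downlinks}}{4\ \mathrm{spine\ uplinks}} &= 8:1
\end{align*}
```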
The global workspace file system is the scalable, parallel file system BeeGFS, which provides a total of 720 TB (gross) of temporary scratch space for running calculations.
The operating system is Red Hat Enterprise Linux 7. Application software and compilers are provided via the module system.