4 compute nodes. The login node is shared with the HCA cluster.
Each node is equipped with:
2x Cavium ThunderX sockets
48x ARMv8-A cores per socket @ 1.8 GHz (i.e. 96 cores with shared memory per node)
128GB memory
2x 10GbE (data network)
1x 40GbE (not connected)
1x 128GB SSD
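Since each node has two ThunderX sockets sharing the node memory, it exposes two NUMA domains (one per socket). The commands below are a minimal sketch for inspecting that layout before tuning process placement; the NUMA node ID and the binary name are illustrative placeholders, not taken from this page.

  # Show the NUMA nodes, their CPUs, and the memory attached to each
  numactl --hardware

  # Summary of sockets, cores, and NUMA nodes as seen by the kernel
  lscpu

  # Run a program with its CPUs and memory restricted to NUMA node 0
  # (./my_app and the node ID are placeholders)
  numactl --cpunodebind=0 --membind=0 ./my_app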
System software
SLURM
OpenLDAP
GNU Compiler Suite
MPICH
OpenMPI
Extrae
OmpSs
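As a rough sketch of how this stack is typically used (not a documented site workflow): compile with the GNU compilers through an MPI wrapper, then submit through SLURM. The file names below are placeholders.

  # Compile an MPI program with the GNU toolchain via the MPICH/OpenMPI wrapper
  mpicc -O2 -o hello_mpi hello_mpi.c

  # Submit a batch script (see the core-binding example under "Issues" below)
  sbatch job.sh

  # Monitor the job
  squeue -u $USER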
Status
Available
The login node is shared with the HCA cluster.
Issues
Applications that use memory allocated on a non-local NUMA node can suffer a non-negligible performance penalty. Please check the example below on how to bind SLURM tasks to specific cores before submitting any job.
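A minimal sketch of such a job script, assuming one MPI rank per core; the job name, time limit, and application name are placeholders, and the exact spelling of the binding option depends on the SLURM version (older releases use --cpu_bind).

  #!/bin/bash
  #SBATCH --job-name=bind-example
  #SBATCH --nodes=1
  #SBATCH --ntasks=96              # one task per core: 2 sockets x 48 cores
  #SBATCH --time=00:10:00

  # Pin each task to its own core so that its memory is allocated
  # on the local NUMA node
  srun --cpu-bind=cores ./my_app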