Mont-Blanc prototype (Testing chassis)

The Mont-Blanc prototype is located at the BSC facilities.

(Photo: the Mont-Blanc prototype testing chassis, mb-proto_2.jpg)

Cluster information

Setup

Use the sinfo command to check node availability.
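
For example, from the cluster you can run the following (the exact output depends on the current state of the machine):

sinfo         # summary view: partitions and node states
sinfo -N -l   # long, per-node listing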

  • Login node: x86 machine, provides access to the cluster.
    • Ubuntu 14.04, Kernel 3.11.0
  • Blade: The cluster has 9 blades and each of them contains 15 SDB nodes.
    • The blades are interconnected using a tree topology with four nodes connected to the outside.
  • Nodes: you can find more information about the SDB here.
    • Kernel 3.11.0-3.11.0-bsc_opencl+

Storage

Each node has a 15 GB root partition; the /home and /apps folders are mounted over the network (NFS) from the login node and are shared among all the nodes.
We are building the final Lustre file system, which will provide larger storage space.
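
For example, to confirm how these folders are mounted, you can run the following on any node:

df -h /home /apps   # show size, usage and the NFS source of each folder
mount -t nfs        # list all NFS mounts on the node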

Power monitoring

No power monitoring available at the moment. We are working on it.

Login information

Account

If you want to get access to the cluster, please send an e-mail to hca.sysadmin@bsc.es including:

  • Full name
  • Institution
  • Intended use

Please do not share your account with other people. If several users from your institution require access to the ARM clusters, please ask for one account per user.

In exchange for access to the ARM clusters at BSC, we may ask you to share your performance results to help us improve the cluster configuration.

Login

You can access the prototype cluster only through the HCA login node at BSC:

  • First, log into the HCA login node with your user:
    • ssh user@ssh.hca.bsc.es
  • Then, log into the Mont-Blanc prototype cluster with your user:
    • ssh user@mb-cluster

At this point you are on one of the login nodes (an ARM-based machine). From here you can compile your applications (software is available under /apps) or run them through the job scheduler (SLURM).
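
If you log in often, the two hops can be collapsed into a single command with a local SSH configuration such as the sketch below (it assumes OpenSSH 5.4 or later on your workstation; replace user with your own account name):

# ~/.ssh/config
Host hca
    HostName ssh.hca.bsc.es
    User user
 
Host mb-cluster
    User user
    # Tunnel through the HCA login node
    ProxyCommand ssh -W %h:%p hca

With this in place, ssh mb-cluster from your workstation reaches the prototype through ssh.hca.bsc.es automatically.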

Change Password

To change your password on the Mont-Blanc prototype you must use the following link.

Software available

Location

  • All software is installed under the /apps folder.

Software

  • GNU Bison v3.0
  • CLooG v0.18.1
  • FLEX 2.5.39
  • GMP Library v6.0.0
  • ISL v0.12.2
  • libunwind v1.1
  • GNU MPC v1.0.2
  • GNU MPFR v3.1.2
  • Munge v0.5.11
  • PAPI v5.4.0
  • SLURM 2.65
  • GNU Compiler Suite v4.9.1
  • Environment modules v3.2.10
  • Boost v1.56.0
  • Runtime
    • MPICH v3.1.3
    • OpenMPI v1.8.3
  • OpenCL Full Profile v1.1.0
  • OmpSs (stable and development)
  • Scientific libraries
    • FFTW v3.3.4
    • HDF5 v1.8.13
    • ATLAS v3.11.27
  • Development Tools
    • Extrae v3.0.1
      • Support for MPICH v3.1.3 and OpenMPI v1.8.3
    • Allinea DDT
  • Network
    • Open-MX v1.5.4
  • SLURM script example:
#!/bin/sh
# Template: replace the $... placeholders with values for your job before submitting
#SBATCH --ntasks=$NTASKS
#SBATCH --cpus-per-task=$NCPU_TASK
#SBATCH --partition=$PARTITION
#SBATCH --job-name=$JOB_NAME
#SBATCH --error=err/$JOB_NAME-%j.err
#SBATCH --output=out/$JOB_NAME-%j.out
#SBATCH --workdir=/path/to/binaries
 
mpirun ./$PROG

NOTE: to run an OpenCL application you must add #SBATCH --gres=gpu to your jobscript.

This script must be launched using the command sbatch jobscript.sh from the mb-cluster node. You can look at some job script examples here.
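
As an illustration, a filled-in job script could look like the sketch below; the partition name, task count, paths and binary are hypothetical and only show where the placeholders go:

#!/bin/sh
#SBATCH --ntasks=32
#SBATCH --cpus-per-task=1
#SBATCH --partition=mb               # hypothetical partition name, check sinfo for the real ones
#SBATCH --job-name=myapp
#SBATCH --error=err/myapp-%j.err
#SBATCH --output=out/myapp-%j.out
#SBATCH --workdir=/home/user/myapp   # hypothetical working directory
 
mpirun ./myapp

Submit it with sbatch jobscript.sh and check its state with squeue. Remember to add #SBATCH --gres=gpu if the application uses OpenCL, as noted above.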

Environment Modules

The Environment Modules package provides for the dynamic modification of a user's environment via modulefiles.

Each modulefile contains the information needed to configure the shell for an application. Once the Modules package is initialized, the environment can be modified on a per-module basis using the module command which interprets modulefiles. Typically modulefiles instruct the module command to alter or set shell environment variables such as PATH, MANPATH, etc. modulefiles may be shared by many users on a system and users may have their own collection to supplement or replace the shared modulefiles.

Modules can be loaded and unloaded dynamically and atomically, in a clean fashion. Modules are useful in managing different versions of applications. Modules can also be bundled into metamodules that load an entire suite of different applications.
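
If you want to see what a modulefile will change before loading it, the module command can display its contents, for example:

druiz@mb1:~$ module show gcc     # prints the PATH, MANPATH, LD_LIBRARY_PATH changes it would apply
druiz@mb1:~$ module whatis gcc   # one-line description of the module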

A complete list of available modules can be obtained using the command module avail on one of the compute nodes, producing output like the following.

druiz@mb1:~$ module avail
 
---------- /apps/modules/3.2.10/Modules/default/modulefiles/compilers ----------
gcc/4.7.3             mpich2/1.5(default)   ompss/stable(default)
gcc/4.9.0(default)    ompss/development
 
------------ /apps/modules/3.2.10/Modules/default/modulefiles/tools ------------
extrae/2.5.1(default) papi/5.3.0(default)
 
---------- /apps/modules/3.2.10/Modules/default/modulefiles/libraries ----------
atlas/3.11.27(default) hdf5/1.8.13_parallel(default)
fftw/3.3.4(default)    opencl/1.1.0(default)

Note that several versions are provided for some of the modules, such as gcc. The following examples show how to load, unload, or obtain information about the available modules. Several modules can be specified in a single command; an example is shown after the session below.

druiz@mb1:~$ module help gcc/4.9.1
 
----------- Module Specific Help for 'gcc/4.9.1' ------------------
 
GNU Compiler Suite 4.9.1 (c++, cpp, g++, gcc, gcc-ar, gcc-nm, gcc-ranlib gcov, gfortran)
 
druiz@mb1:~$ module load gcc/4.9.1
load gcc/4.9.1 (PATH, MANPATH, LD_LIBRARY_PATH)
 
druiz@mb1:~$ module unload gcc/4.9.1
remove gcc/4.9.1 (PATH, MANPATH, LD_LIBRARY_PATH)
 
druiz@mb1:~$ module list
Currently Loaded Modulefiles:
  1) gcc/4.9.1      2) mpich2/1.5     3) extrae/2.5.1
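
As mentioned above, several modules can be handled in a single command, for example (reusing the versions from the session above):

druiz@mb1:~$ module load gcc/4.9.1 mpich2/1.5 extrae/2.5.1
druiz@mb1:~$ module unload extrae mpich2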

Loading modules at login

If you want to load some modules automatically when logging in to mb-cluster, you can add the following to your .bashrc script.

# Load the default gcc and openmpi modules only on Mont-Blanc nodes (hostnames starting with "mb")
HOSTNAME=$(hostname)
if [ "${HOSTNAME:0:2}" == "mb" ]
then
    module load gcc openmpi
fi