Slurm examples
- Some examples are available at https://gitlab-research.centralesupelec.fr/mesocentre-public/ruche_examples
- The SLURM directive --partition is mandatory in each job (see available partitions, as shown below)
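- The partitions available on the cluster and their limits can be listed directly on the login node with standard Slurm commands (the exact output depends on the site configuration):
# List the partitions, their state and their time limit
$ sinfo --summarize
# Show the limits of a given partition, e.g. cpu_short
$ scontrol show partition cpu_short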
SLURM sequential job
seq.sh is a SLURM sequential job:
$ cat seq.sh
#!/bin/bash
#SBATCH --job-name=seq
#SBATCH --output=%x.o%j
#SBATCH --ntasks=1
#SBATCH --time=01:00:00
#SBATCH --partition=cpu_short # (see available partitions)
# Clean the environment and load the modules used at the compile and link phases
module purge
module load ...
# echo of commands
set -x
# To compute in the submission directory
cd ${SLURM_SUBMIT_DIR}
# execution
./a.out
- To submit seq.sh with the sbatch command:
# Submit the script in batch mode
$ sbatch seq.sh
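- After submission, the job can be monitored from the login node; with --output=%x.o%j the standard output goes to a file named after the job name and the job ID (here seq.o<jobid>, where <jobid> is the ID printed by sbatch):
# Check the state of your jobs
$ squeue -u $USER
# Read the output once the job has started
$ cat seq.o<jobid>
# Cancel the job if needed
$ scancel <jobid>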
SLURM OpenMP parallel job
openmp.sh is a SLURM OpenMP job with 20 OpenMP threads:
$ cat openmp.sh
#!/bin/bash
#SBATCH --job-name=openmp
#SBATCH --output=%x.o%j
#SBATCH --time=01:00:00
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=20
#SBATCH --mem=80G
#SBATCH --partition=cpu_short # (see available partitions)
# Clean the environment and load the same modules as at the compilation phase
module purge
module load ...
# echo of commands
set -x
# To compute in the submission directory
cd ${SLURM_SUBMIT_DIR}
# number of OpenMP threads
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
# Bind OpenMP threads to cores
export OMP_PLACES=cores
# execution with 'OMP_NUM_THREADS' OpenMP threads
./a.out
- To submit openmp.sh with the sbatch command:
$ sbatch openmp.sh
- Remarks:
- To adjust the memory per node, add --mem
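- The script assumes an executable a.out compiled with OpenMP support, using the same modules as those loaded in the job. A minimal compilation sketch, assuming a GCC toolchain (the module name below is hypothetical; check module avail for the versions installed on the cluster):
# Load a compiler module (hypothetical version, see 'module avail')
$ module load gcc/9.2.0
# Compile with OpenMP enabled
$ gcc -fopenmp -O2 my_code.c -o a.out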
SLURM MPI parallel job
mpi.sh is a SLURM MPI job with 80 MPI processes:
$ cat mpi.sh
#!/bin/bash
#SBATCH --job-name=mpi
#SBATCH --output=%x.o%j
#SBATCH --time=01:00:00
#SBATCH --ntasks=80
#SBATCH --partition=cpu_short # (see available partitions)
# Clean the environment and load the same modules as at the compilation phase
module purge
module load ...
# echo of commands
set -x
# To compute in the submission directory
cd ${SLURM_SUBMIT_DIR}
# execution with 'ntasks' MPI processes
srun ./a.out
- To submit mpi.sh with the sbatch command:
$ sbatch mpi.sh
- Remarks:
- Parallel compute nodes have 40 cores. Use a number of MPI processes that is a multiple of 40 so that all the cores of the allocated nodes are used.
- If you do not use all the cores of a node, other jobs can share the same node and performance can decrease.
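- As above, a.out is assumed to be an MPI executable built with the same modules as those loaded in the job. A minimal compilation sketch, assuming an MPI toolchain module (the module name below is hypothetical):
# Load an MPI module (hypothetical version, see 'module avail')
$ module load openmpi/3.1.5
# Compile with the MPI compiler wrapper
$ mpicc -O2 my_code.c -o a.out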
SLURM hybrid MPI/OpenMP parallel job
mpi_openmp.sh is a SLURM hybrid MPI/OpenMP job allocating 40 cores:
* 2 MPI processes (--ntasks=2),
* each MPI process will spawn 20 OpenMP threads (--cpus-per-task=20)
$ cat mpi_openmp.sh
#!/bin/bash
#SBATCH --job-name=mpi_openmp
#SBATCH --output=%x.o%j
#SBATCH --time=01:00:00
#SBATCH --ntasks=2
#SBATCH --cpus-per-task=20
#SBATCH --partition=cpu_short # (see available partitions)
# Clean the environment and load the same modules as at the compilation phase
module purge
module load ...
# echo of commands
set -x
# To compute in the submission directory
cd ${SLURM_SUBMIT_DIR}
# number of OpenMP threads
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
# Bind the OpenMP threads of each MPI process to cores
export OMP_PLACES=cores
# execution
# with 'ntasks' MPI processes
# with 'cpus-per-task' OpenMP threads per MPI process
srun ./a.out
- To submit mpi_openmp.sh with the sbatch command:
$ sbatch mpi_openmp.sh
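- Note: depending on the Slurm version installed, srun may not inherit --cpus-per-task from the batch allocation, which can leave the OpenMP threads packed on too few cores. If that happens, forwarding the value explicitly in the script is a safe workaround (a sketch of the modified execution line):
# Explicitly forward the per-task CPU count to srun
srun --cpus-per-task=${SLURM_CPUS_PER_TASK} ./a.out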
- Remarks:
- Parallel compute nodes have 40 cores. Use a value of [MPI processes * OpenMP threads] that is a multiple of 40 so that all the cores of the allocated nodes are used.
- If you do not use all the cores of a node, other jobs can share the same node and performance can decrease.
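- Once a job has finished, the resources it actually used can be checked with sacct; a minimal sketch, where <jobid> is a placeholder for the job ID:
# Elapsed time, number of CPUs and peak memory of a finished job
$ sacct -j <jobid> --format=JobID,JobName,Elapsed,NCPUS,MaxRSS,State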