Job Scheduler (SLURM):
Slurm (Simple Linux Utility for Resource Management) is an open-source job scheduler that allocates compute resources on clusters to queued, user-defined jobs. Slurm is deployed at many national and international computing centers and on approximately 60% of the TOP500 supercomputers in the world.
You can learn more about SLURM and its commands from the official Slurm website.
Queuing System:
When a job is submitted, it is placed in a queue. Different queues are available for different purposes, and users must select the queue from those listed below that best matches their computational needs.
Slurm partitions are essentially different queues that point to collections of nodes. On Mario there are three partitions:
- devel: this partition has one compute node set aside for testing jobs before they are submitted to the main partitions, i.e. short or long (essentially to make sure the submission scripts work). This partition has a maximum time limit of 30 minutes.
- short: this partition has 4 compute nodes set aside for running smaller jobs. This queue/partition has a maximum time limit of 36 hours.
- long: this partition has 27 compute nodes set aside for running longer jobs. This partition has no time limit.
Queue name | No. of Nodes | Node list | Max. walltime (day-hrs:min) | Total No. of Physical CPUs | Total No. of CPUs with Hyper-Threading
---|---|---|---|---|---
devel | 1 | node[1] | 00:30 | 16 | 32
short | 4 | node[2-5] | 1-12:00 | 64 | 128
long | 27 | node[6-32] | no limit | 432 | 864
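For illustration, a minimal submission script for the devel partition might look like the sketch below. The job name, output file name, and executable are placeholders rather than part of Mario's configuration; adjust the partition, task count, and time limit against the table above before submitting real work.

```bash
#!/bin/bash
#SBATCH --job-name=my_test_job      # placeholder job name
#SBATCH --partition=devel           # test scripts here, then switch to short or long
#SBATCH --nodes=1                   # devel has a single node (node[1])
#SBATCH --ntasks=16                 # one task per physical CPU on the node
#SBATCH --time=00:25:00             # must stay under devel's 30-minute limit
#SBATCH --output=slurm-%j.out       # %j expands to the job ID

# Load site-specific environment modules here if needed, e.g.:
# module load openmpi

srun ./my_program                   # placeholder executable
```

Once the script runs cleanly on devel, it can be resubmitted to short or long by changing only the --partition and --time lines.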
Useful Commands
Slurm Command | Description | Syntax
---|---|---
sbatch | Submit a batch (serial or parallel) job using a Slurm submission script | sbatch slurm_submit_script.sub
srun | Run a script or application interactively | srun --pty -p devel -t 10 --mem 1000 /bin/bash [script or app]
scancel | Kill a job by its job ID | scancel 999999
squeue | View the status of your jobs | squeue -u <username>
sinfo | View cluster node, partition, and node-status information | sinfo OR sinfo -lNe
sacct | Check accounting information for a job by its job ID | sacct -j 999999
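Putting these commands together, a typical job-management workflow might look like the following sketch; the job ID 999999 and <username> are placeholders.

```bash
# Submit the batch script; Slurm prints back the assigned job ID
sbatch slurm_submit_script.sub      # -> Submitted batch job 999999

# Check the queue status of your own jobs
squeue -u <username>

# Inspect accounting records for the job while it runs or after it finishes
sacct -j 999999

# Cancel the job if something looks wrong
scancel 999999
```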
Usage Guidelines
- Users must submit jobs only through the scheduler.
- Users must not run any job on the master node.
- Users are not allowed to run a job by logging in directly to any compute node.