
Queuing Systems & Scheduler

Job Scheduler (SLURM):

Slurm (Simple Linux Utility for Resource Management) is an open-source job scheduler that allocates compute resources on clusters to queued, researcher-defined jobs. Slurm has been deployed at various national and international computing centers, and is used by approximately 60% of the TOP500 supercomputers in the world.

You can learn more about SLURM and its commands from the official Slurm website.

Queuing System:

When a job is submitted, it is placed in a queue. Different queues are available for different purposes, and users must select the queue from the list below that is appropriate for their computational needs.

Slurm partitions are essentially different queues that point to collections of nodes. On Mario there are four partitions (a sample submit script is sketched after the table below):

  • devel: this partition has one compute node that has been set aside for testing jobs before they are submitted to one of the main partitions, i.e., short, medium, or long (essentially to make sure the submission scripts work). This partition has a maximum time limit of 30 minutes.
  • short: this partition has 21 compute nodes that have been set aside for running smaller jobs. This queue/partition has a maximum time limit of 36 hours.
  • medium: this partition has 26 compute nodes that have been set aside for running medium jobs. This queue/partition has a maximum time limit of 7 days.
  • long: this partition has 48 compute nodes that have been set aside for running longer jobs. This partition has no time limit.
Queue name | No. of Nodes | Node list | Default walltime (day-hrs:min) | Total No. of Actual CPUs | Total No. of CPUs with Hyper-Threading
devel      | 1            | cn[1]     | 00:30                          | 16                       | 32
short      | 21           | cn[2-22]  | 1-12:00                        | 336                      | 672
medium     | 26           | cn[23-48] | 7-00:00                        | 416                      | 832
long       | 48           | cn[49-96] | no limit (inf)                 | 768                      | 1536
NOTE: devel is the default partition
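
For reference, a minimal submit script for the devel partition might look like the sketch below. The job name, output file name, and the echo command are placeholders, not a prescribed template; substitute your own application and resource requests.

    #!/bin/bash
    #SBATCH --job-name=my_test_job          # placeholder job name
    #SBATCH --partition=devel               # test here first (30-minute limit)
    #SBATCH --nodes=1
    #SBATCH --ntasks=1
    #SBATCH --time=00:10:00                 # stay under the devel time limit
    #SBATCH --output=my_test_job_%j.out     # %j expands to the job ID

    # Placeholder workload; replace with your actual script or application
    echo "Hello from $(hostname)"

Once the script runs cleanly on devel, point --partition at short, medium, or long, raise --time accordingly, and resubmit it with sbatch (see the commands below).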

Useful commands

Slurm Command | Description                                                        | Syntax
sbatch        | Submit a batch serial or parallel job using a Slurm submit script | sbatch slurm_submit_script.sub
srun          | Run a script or application interactively                         | srun --pty -p devel -t 10 --mem 1000 /bin/bash [script or app]
scancel       | Kill a job by job ID number                                        | scancel 999999
squeue        | View the status of your jobs                                       | squeue -u <username> OR squeue -l
sinfo         | View cluster node, partition, and node status information         | sinfo OR sinfo -lNe
sacct         | Check a current job by ID number                                   | sacct -j 999999
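
Put together, a typical session might look like the following sketch; the job ID 999999 and the username are placeholders.

    $ sbatch slurm_submit_script.sub    # submit the batch job
    Submitted batch job 999999
    $ squeue -u <username>              # watch the job while it is queued/running
    $ sacct -j 999999                   # check accounting details for the job
    $ scancel 999999                    # cancel the job if something is wrong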

Usage Guidelines

  • Users must submit jobs only through the scheduler.
  • Users must not run any job on the master node.
  • Users are not allowed to run a job by logging in directly to any compute node; interactive sessions should instead be requested through the scheduler, as shown below.
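
If you need an interactive shell on a compute node (for example, to debug a failing job), request it through the scheduler instead of logging in directly. A sketch, assuming the devel partition and its 30-minute limit:

    $ srun --pty -p devel -t 30 /bin/bash    # interactive shell on a devel node
    $ exit                                   # release the node when finished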