Sample script
Serial job sample script – sample.sub
```bash
#!/bin/bash
# Job name
#SBATCH --job-name=test-serial
#
# Set partition [change]
#SBATCH --partition=long
#
# STDOUT file; "N" is node number and "j" is job id number
#SBATCH --output=test-serial_%N_%j.out
# STDERR file; "N" is node number and "j" is job id number
#SBATCH --error=test-serial_%N_%j.err
#
# Number of tasks per node
#SBATCH --ntasks=1
# Memory requirement per CPU
#SBATCH --mem-per-cpu=300
#
# Total wall-time
#SBATCH --time=00:05:00
#
# To get email alert [Optional]
# NOTE: Remove one "#" and write your email ID (ex: #SBATCH --mail-user=hemanta.kumar@icts.res.in)
##SBATCH --mail-user=<email id>
##SBATCH --mail-type=ALL
#
date
cd /home/hemanta.kumar/slurm_test
./long_run
date
```
Submit the job:

```bash
sbatch sample.sub
```
The job's status in the queue can be monitored with squeue (add -u username to show only a particular user's jobs).
The job can be deleted with scancel <job_id>.
When the job finishes (successfully or with an error) there will normally be one file created in the submission directory with a name of the form slurm-NNNN.out (where NNNN is the job id).
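Putting the steps above together, a typical session might look like the following (the job id 12345 is illustrative; sbatch prints the actual id on submission):

```shell
sbatch sample.sub       # submit; prints "Submitted batch job 12345"
squeue -u $USER         # monitor only your own jobs in the queue
scancel 12345           # delete the job if needed
cat slurm-12345.out     # inspect the output file after the job finishes
```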
Submit script flags
Resource | Flag Syntax | Description | Notes |
---|---|---|---|
job name | -J, --job-name=hello_test | Name of job | default is the JobID |
partition | -p, --partition=devel | Partition is a queue for jobs | the default partition is marked with *; devel is the default partition on Mario |
time | -t, --time=01:00:00 | Time limit for the job. Acceptable time formats include minutes, minutes:seconds, hours:minutes:seconds, days-hours, days-hours:minutes and days-hours:minutes:seconds | here it is given as 1 hour |
nodes | -N, --nodes=2 | Number of compute nodes for the job | default is 1 compute node |
number tasks | -n, --ntasks=1 | Maximum number of tasks to launch; SLURM allocates sufficient resources for them. | default is 1 task per node |
ntasks on each node | --ntasks-per-node=8 | Request that ntasks be invoked on each node. If used with the --ntasks option, the --ntasks option will take precedence and the --ntasks-per-node will be treated as a maximum count of tasks per node | default is 1 task per node |
memory | --mem=32000 | Memory limit per compute node for the job. Do not use with the mem-per-cpu flag | memory is in MB by default |
memory per CPU | --mem-per-cpu=1000 | Per-core memory limit. Do not use with the mem flag | memory is in MB by default |
output file | -o, --output=test.out | Name of file for stdout | default is slurm-<JobID>.out |
error file | -e, --error=test.err | Name of file for stderr | default is stderr merged into the stdout file |
email address | --mail-user=username@buffalo.edu | User's email address | sends email on submission and completion of the job; omit for no email |
email notification | --mail-type=ALL or --mail-type=END | Specifies when email is sent to the user (e.g. BEGIN, END, FAIL, ALL). | omit for no email |
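As a sketch of how several of the flags above combine in one submission script (the script name, paths, and the executable my_program are illustrative, not part of the cluster setup):

```bash
#!/bin/bash
#SBATCH --job-name=hello_test       # job name (-J)
#SBATCH --partition=devel           # partition/queue (-p)
#SBATCH --nodes=2                   # number of compute nodes (-N)
#SBATCH --ntasks-per-node=8         # tasks launched on each node
#SBATCH --mem-per-cpu=1000          # memory per core, in MB
#SBATCH --time=01:00:00             # 1 hour wall-time limit (-t)
#SBATCH --output=test_%j.out        # stdout file; %j = job id
#SBATCH --error=test_%j.err         # stderr file
#SBATCH --mail-type=END             # email when the job ends
#SBATCH --mail-user=username@buffalo.edu

cd "$SLURM_SUBMIT_DIR"              # run from the submission directory
srun ./my_program                   # illustrative executable name
```

Note that --ntasks-per-node is used here instead of --ntasks, so 8 tasks are launched on each of the 2 nodes (16 tasks total).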