Accounting commands

The following commands are wrappers around the underlying SLURM commands sacct and sreport, which are much more powerful.

Note that project names in SLURM are not case sensitive.

What resources do I have available to me?

This is the first question to settle before submitting jobs to CSD3. Use the command

mybalance

to show your projects, your current usages and the remaining balances in compute unit hours.

On CSD3 we are using natural compute units for each component of the facility:

  • on Peta4-Skylake we are allocating and reporting in CPU core hours
  • on Peta4-KNL we are allocating and reporting in KNL node hours
  • on Wilkes2-GPU we are allocating and reporting in GPU hours.

We have adopted the convention that projects containing Peta4-Skylake CPU hours will end in -CPU, while those holding GPU hours for Wilkes2-GPU end in -GPU, and projects containing Peta4-KNL node hours end in -KNL.
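
For example, a user belonging to one CPU project and one GPU project might see output from mybalance along the following lines (the project names, figures and exact column layout here are illustrative only):

mybalance
Project           Usage    Balance (hours)
-----------   ---------    ---------------
SUPPORT-CPU      12,345            187,655
SUPPORT-GPU          98              9,902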

The projects listed by mybalance are the projects you may specify in SLURM submissions either through

#SBATCH -A project

in the job submission script or equivalently on the command line with

sbatch -A project ...

Note that -CPU projects should be used for Peta4-Skylake jobs, -KNL projects for Peta4-KNL jobs and -GPU projects for Wilkes2-GPU jobs. See the Submitting jobs section for details on submitting to each cluster.
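
As a sketch, a minimal Peta4-Skylake submission script charging a -CPU project might look like the following (the project name, partition, resource requests and application are illustrative only - see the Submitting jobs section for the recommended templates):

#!/bin/bash
#SBATCH -A SUPPORT-CPU        # the -CPU project to charge
#SBATCH -p skylake            # Peta4-Skylake partition
#SBATCH -N 1                  # number of nodes
#SBATCH -t 01:00:00           # walltime limit

./my_program                  # replace with your own application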

How many core hours does some other project or user have? 

gbalance -p T2BENCH-SL2-CPU
User           Usage | Account Usage         | Account Limit Available (hours)
---------- --------- + -------------- ------ + ------------- ---------
xyz10              0 | T2BENCH-SL2-CPU     0 | 200,000          200,000

This outputs the total usage in core hours accumulated to date by the project, the total awarded, and the total remaining available to all members. It also breaks the total usage down by individual member.

I would like a listing of all jobs I have submitted through a certain project and between certain times

gstatement -p SUPPORT-CPU -u xyz10 -s "2017-10-01-00:00:00" -e "2017-11-22-23:59:59"
       JobID      User    Account    JobName  Partition                 End ExitCode      State  CompHrs
------------ --------- ---------- ---------- ---------- ------------------- -------- ---------- --------
      204815     xyz10 support-c+ _interact+    skylake 2017-10-20T16:20:07      0:0  COMPLETED      0.9
      261251     xyz10 support-c+ _interact+    skylake 2017-11-09T17:39:43      0:0    TIMEOUT      1.0
      262050     xyz10 support-c+ _interact+    skylake 2017-11-11T14:00:03      0:0 CANCELLED+      1.5
      262051     xyz10 support-c+ _interact+ skylake-h+ 2017-11-11T14:00:03      0:0 CANCELLED+      0.7
...

This lists the charge for each job in the CompHrs column. Since this example queries usage of a -CPU project, these are CPU core hours. Similarly, for a -GPU project they would be GPU hours, and for a -KNL project they would be node hours.
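
If you only want the total charge over the period, one option is to sum the CompHrs column yourself, for example (this sketch assumes CompHrs remains the final column and that the first two lines of output are headers):

gstatement -p SUPPORT-CPU -u xyz10 -s "2017-10-01-00:00:00" -e "2017-11-22-23:59:59" \
    | awk 'NR>2 {sum += $NF} END {print sum, "hours"}'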

I would like to add core hours to a particular member of my group

gdeposit -z 10000 -p halos-sl2-spqr1-gpu

A coordinator of the HALOS-SL2-GPU project might use this command to add 10,000 GPU hours to the HALOS-SL2-SPQR1-GPU subproject assigned to the user spqr1. Note that if a compute hour limit applies to the parent of the project in the project hierarchy - i.e. if the parent project HALOS-SL2-GPU has an overall compute hour limit (which it almost certainly does) - then that global limit will still apply across all per-user subprojects.

Compute hours may be added to a project by a designated project coordinator user. Reducing the compute hours available to a project is also possible by adding a negative number of hours via the --time= syntax, e.g. the following command undoes the above:

gdeposit --time=-10000 -p halos-sl2-spqr1-gpu
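
After a deposit or withdrawal, the coordinator can confirm the subproject's new balance with gbalance, for example:

gbalance -p halos-sl2-spqr1-gpu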