
Hine Group Computing Resources

Local Resources

We have 3 SLURM partitions on the local SCRTP machines:

nanosim: 3 nodes, 24 cores/node (Intel i7, old), GPUs: none
    stan1.csc.warwick.ac.uk
    stan2.csc.warwick.ac.uk
    stan3.csc.warwick.ac.uk

nanosimd: 5 nodes, 8 cores/node (Intel i7), GPUs: nVidia RTX4000 or RTX4500
    alpher.theory.warwick.ac.uk
    bethe.theory.warwick.ac.uk
    gamow.theory.warwick.ac.uk
    eckhart.theory.warwick.ac.uk
    zernike.theory.warwick.ac.uk

nanosimo: 1 node, 128 cores/node (AMD EPYC), GPUs: nVidia A40
    ohm.theory.warwick.ac.uk

The command:

squeue -p nanosim,nanosimd,nanosimo

should show you everything that is currently running on these three partitions.

You can write and submit a SLURM job script (see the SCRTP documentation) to use these nodes, e.g.:

sbatch -p nanosimd my_script
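
For example, a minimal job script for the nanosimd partition might look something like this (the job name, task count, time limit and executable path are placeholders to adapt to your own job):

#!/bin/bash
#SBATCH --job-name=my_job
#SBATCH --partition=nanosimd
#SBATCH --nodes=1
#SBATCH --ntasks=8
#SBATCH --time=02:00:00

srun ~/bin/my_executable

Options set in the script (such as the partition) can also be given or overridden on the sbatch command line, as in the example above.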

It is also fine to use srun directly, e.g.:

srun -p nanosimo -n 32 -c 4 ~/bin/my_executable

as a way to run interactively on one of these partitions (this can be done from any machine).
Please do not run long jobs via mpirun while logged in to a node directly: SLURM is then unaware of the job and you may end up trampling on another user's job.

For the desktops (nanosimd), please refrain from running long-duration jobs on other people's desktops during office hours (9-5 on weekdays) without checking with them first: the fans can be quite loud and the heat output is non-negligible in summer.

If using hybrid OpenMP/MPI, please pay attention to core binding, as it can make a big difference to efficiency!
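
For example, a hybrid run on nanosimo with 32 MPI tasks of 4 OpenMP threads each, binding each task to its own set of cores, might look like the following (the executable path is a placeholder):

export OMP_NUM_THREADS=4
export OMP_PLACES=cores
export OMP_PROC_BIND=close
srun -p nanosimo -n 32 -c 4 --cpu-bind=cores ~/bin/my_executable

If performance looks poor, adding verbose to the binding option (e.g. --cpu-bind=verbose,cores) will report where each task has been bound.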

Storage

We have dedicated storage on the SCRTP filesystem at /storage/nanosim.

My suggestion is that each group member creates a directory /storage/nanosim/phxxxx, where phxxxx is your SCRTP username. This folder should be owned by the nanosim group (chgrp nanosim /storage/nanosim/phxxxx) and have group read/write permissions (chmod -R g+rw /storage/nanosim/phxxxx), so that I am not left with un-removable files after you have left!
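
Putting those commands together, the one-time setup is (with phxxxx replaced by your own username):

mkdir /storage/nanosim/phxxxx
chgrp nanosim /storage/nanosim/phxxxx
chmod -R g+rw /storage/nanosim/phxxxx

Optionally, setting the setgid bit (chmod g+s /storage/nanosim/phxxxx) makes files created there inherit the nanosim group automatically.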

HPC Resources

We have a generous allocation of time on Avon, for which the signup is here. It has 48-core nodes with Intel processors, and some nodes (the gpu queue) have RTX6000 GPUs.

We also typically receive time allocations each quarter on Sulis, the HPC Midlands+ regional supercomputer.

We are also members of the UKCP High-End Consortium for ARCHER2 usage. Apply via SAFE for membership of the e89-warp project.