
Clusters

Duncan Lockerby, James Kermode, James Sprittles and Peter Brommer's groups have purchased a number of dedicated CoW (Cluster of Workstations) nodes.

Accessing the nodes

You must have an SCRTP desktop account to get access to these nodes. No further access control is enforced, so usage is self-governed and self-monitored. Once you have an account, you can ssh directly to one of the nodes and run jobs interactively. See the Scientific Computing RTP's desktop support pages for information on remote access.

Individual users have priority on nodes as set out in the table below. Spare capacity can be used by submitting jobs through the queue, as described in the next section. In addition to these dedicated nodes, the centrally provided taskfarm nodes are available.

Node name                        Cores (RAM)  Priority user
js119.csc.warwick.ac.uk          32 (64 GB)   Jack
js120.csc.warwick.ac.uk          32 (64 GB)   Jesse / James S
mnf144.csc.warwick.ac.uk         28 (64 GB)   James K
mnf145.csc.warwick.ac.uk         28 (64 GB)
mnf146.csc.warwick.ac.uk         28 (64 GB)   Lakshmi
mnf147.csc.warwick.ac.uk         32 (64 GB)   Wang
mnf148.csc.warwick.ac.uk         32 (256 GB)
mnf149.csc.warwick.ac.uk         32 (128 GB)  Tom R
dedicated-f02.csc.warwick.ac.uk  40 (384 GB)  Shiwani
dedicated-g02.csc.warwick.ac.uk  40 (384 GB)  Shiwani
dedicated-h02.csc.warwick.ac.uk  40 (384 GB)  Peter L-J
dedicated107.csc.warwick.ac.uk   28 (64 GB)
dedicated108.csc.warwick.ac.uk   28 (64 GB)   Mykyta
dedicated109.csc.warwick.ac.uk   28 (64 GB)   James B
dedicated120.csc.warwick.ac.uk   28 (64 GB)   Adam F
dedicated121.csc.warwick.ac.uk   28 (64 GB)   Geraldine A
dedicated123.csc.warwick.ac.uk   28 (64 GB)
dedicated217.csc.warwick.ac.uk   16 (32 GB)   Jos
dedicated218.csc.warwick.ac.uk   16 (32 GB)

Running jobs

There is a dedicated queue named “mnf” (short for micro nano fluids). To submit jobs to this queue just run the following command from any CoW machine:

sbatch -p mnf <jobscript>

You can also specify that a job should run on a particular node by providing a node list in your jobscript, e.g. to restrict a job to the dedicated109 node you could use:

#SBATCH -p mnf
#SBATCH --nodelist dedicated109.csc.warwick.ac.uk
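Putting these directives together, a complete minimal jobscript might look like the following. The job name, core count, walltime and executable are illustrative placeholders, not part of the official setup; adjust them for your own job:

```shell
#!/bin/bash
# Minimal example jobscript for the mnf queue (illustrative values).
#SBATCH -p mnf
#SBATCH --job-name=example            # hypothetical job name
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=28          # e.g. all cores on a 28-core node
#SBATCH --time=08:00:00               # walltime limit (hh:mm:ss)

srun ./my_program                     # replace with your own executable
```

Save this as, say, job.sh and submit it with `sbatch job.sh` from any CoW machine.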

See the SCRTP wiki (login with your SCRTP account) for more on running jobs on the CoW.

Useful scripts

The cownodes function defined below is useful for checking the current status of the nodes so you can pick a free one to run on.

export COW_NODES="$USER@js119.csc.warwick.ac.uk \
$USER@js120.csc.warwick.ac.uk \
$USER@mnf144.csc.warwick.ac.uk \
$USER@mnf145.csc.warwick.ac.uk \
$USER@mnf146.csc.warwick.ac.uk \
$USER@mnf147.csc.warwick.ac.uk \
$USER@dedicated-f02.csc.warwick.ac.uk \
$USER@dedicated-g02.csc.warwick.ac.uk \
$USER@dedicated-h02.csc.warwick.ac.uk \
$USER@dedicated107.csc.warwick.ac.uk \
$USER@dedicated108.csc.warwick.ac.uk \
$USER@dedicated109.csc.warwick.ac.uk \
$USER@dedicated120.csc.warwick.ac.uk \
$USER@dedicated121.csc.warwick.ac.uk \
$USER@dedicated123.csc.warwick.ac.uk \
$USER@dedicated217.csc.warwick.ac.uk \
$USER@dedicated218.csc.warwick.ac.uk"

function cowexec() {
    # Note: $COW_NODES is deliberately unquoted so it splits into one word per node.
    for node in $COW_NODES; do
        echo -n "$node "
        ssh "$node" "$@"
    done
}

function cownodes() {
    cowexec uptime
}
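If you want to rank nodes by how busy they are rather than eyeballing the uptime output, one option is to parse the load average out of each line. As a sketch (the load1 helper below is hypothetical, not part of the existing setup):

```shell
# Hypothetical helper: print the 1-minute load average from uptime output on stdin.
load1() {
    awk -F'load average: ' '{ split($2, a, ", "); print a[1] }'
}

# Example with a captured uptime line:
echo " 10:15:01 up 3 days, 2:04, 5 users, load average: 0.42, 0.35, 0.30" | load1
# prints 0.42
```

You could then pipe each node's `uptime` output through load1 and sort numerically to find the least-loaded node.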