STORAGE AND LOGIN

The CMCS (Condensed Matter and Complex Systems) cluster is the main computational resource used by the group. To get access and learn how to use it, follow the instructions below.

The storage is split into "personal" and "group" space, both within the so-called "DataStore" space. Additionally, each node has a local "scratch" disk on which you can write (not backed up). CAREFUL! Data on scratch is automatically deleted once no one has "touched" it for 180 days.
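To see which of your files are approaching the 180-day purge, you can check access times with find. This is a hedged sketch: /scratch/myfolder is a placeholder path, and the demo below runs on a temporary directory since /scratch only exists on the cluster nodes.

```shell
# List files not accessed for more than 150 days, so you can rescue
# them before the 180-day scratch purge. Demonstrated on a temporary
# directory; a freshly touched file should not appear in the list.
tmp=$(mktemp -d)
touch "$tmp/fresh.dat"
old_files=$(find "$tmp" -type f -atime +150)
echo "files at risk: $old_files"
# On a cluster node you would run, for example:
#   find /scratch/myfolder -type f -atime +150
```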

PERSONAL: cd /Disk/ds-sopa-personal/uun brings you into your personal space. You should have 500GB of quota here, and it is backed up.

GROUP: cd /Disk/ds-sopa-group/cmcs/uun brings you into the cmcs group space. This space is very large (~100TB), but as a rule of thumb do not use more than a few TB. ATTENTION: this space has known problems with permissions. Do not co-work in the same directory, and do not use chown/chmod, or you will break the permissions.

LAB: We have 5TB of lab storage at /Disk/ds-sopa-group/cmcs-dmichiel. This is more suitable for shared projects and is backed up. A "quota.txt" file there is updated every day; keep an eye on it to avoid overfilling the space.
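Besides watching quota.txt, you can check how much space your own folder takes with du. A minimal sketch, demonstrated on a temporary directory since the lab path only exists on the cluster; "yourfolder" is a placeholder name.

```shell
# Measure a directory's disk usage with du. Here we create ~100KB of
# data in a temp dir and read back the usage in KB.
tmp=$(mktemp -d)
dd if=/dev/zero of="$tmp/data.bin" bs=1024 count=100 2>/dev/null
usage_kb=$(du -sk "$tmp" | cut -f1)
echo "using ${usage_kb} KB"
# On the cluster: du -sh /Disk/ds-sopa-group/cmcs-dmichiel/yourfolder
```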

If you don't have access to these spaces, email sopa-helpdesk@ed.ac.uk to request access.

  1. Log in to the condensed matter gateway

    ssh -X uun@ph-cm.ph.ed.ac.uk

    This machine is only a gateway: do not run anything on it.

  2. Log in to a node (ssh -X nodename)

     a. Walk-in nodes: ball, feltz -> you can run simulations directly on these. To do so:

    • cd /scratch/
    • create a folder
    • run your job (the maximum is 48 cores, but be mindful of other users and always check the CPU usage with "top" before launching new jobs)

     b. Scheduled nodes: accessed from any node via qsub (see below).

     c. Local/personal node (you are assigned a desktop machine):

    • ssh -X nodename
    • cd into either DataStore (see above) or the local "scratch" space
    • run your job
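Before launching on a walk-in node, a quick non-interactive way to check whether the machine is already busy is to compare the load average against the core count. A sketch assuming a standard Linux node with coreutils/procps (uptime, nproc):

```shell
# Snapshot the machine's load before launching a job. If the 1-minute
# load is already close to the number of cores, wait or pick another node.
cores=$(nproc)
load=$(uptime | awk -F'load average: ' '{print $2}' | cut -d, -f1)
echo "cores: $cores, 1-min load: $load"
```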

NOTE: For jobs that require a lot of I/O, please use the scratch. Jobs doing I/O on DataStore create traffic problems on the network.

HOW TO RUN JOBS

From any node you can use the following commands:

"qstat -u uun" to see your active jobs

"qstat -f" to see activity of all scheduled nodes

"qdel job_ID" to stop/cancel job

"qsub script" to submit a script to the scheduler

Example script:

#!/bin/sh

#$ -N jobname
#$ -cwd
#$ -q cdt.7.day        #choose between cdt.7.day/softcm.7.day/sopa.1.day
#$ -pe mpi 64          #number of cores requested (64 is the max)
#$ -l h_rt=168:00:00   #total wallclock time requested (168h is the max on 7.day queues)

#write on the local scratch of the machine
export MYHOME=/scratch/myfolder
mkdir -p $MYHOME
cd $MYHOME

#path to where your simulation scripts are
export mainfolder=/Disk/ds-sopa-group/XXX
cp -r $mainfolder/run.lam .     #lammps commands file
cp -r $mainfolder/Start.data .  #start configuration (can be a folder with many configs)

#print the date and the name of the machine on which the job is running,
#so that you can retrieve the output afterwards
date
hostname

#activate mpi
module load mpi

#keep a lammps exe in your home folder to access it from any scratch
mpirun -np $NSLOTS ~/lmp_mpi -in run.lam