RCDC
Table of Contents
- General Information
- Allocations
- Storage
- Creating an Account
- Interactive Jobs
- Job Submission Files
- SSH Keys
- CLAIRE
General Information
For more information on how to access advanced computing resources at RCDC, visit the RCDC web page: https://uh.edu/rcdc/
To learn how the available SUs will be affected by executing software on a particular hardware, see the Allocations section.
Allocations on RCDC Clusters
- Opuntia
  - Awarded allocation: 50,000 SUs
  - Start date: September 1, 2020
  - Renewal date: July 2021
- Sabine
  - Awarded allocation: 25,000 SUs
  - Start date: September 1, 2020
  - Renewal date: July 2021
Storage
Do not store data and results in your $HOME directory. Instead, create a folder with your username (or name) in /project/mang and store all files there. To make sure that others on our team can access these files, add umask 002 to your ~/.bashrc.
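A minimal sketch of these two steps (USERNAME is a placeholder; use your own username):
# create a personal folder in the project space
mkdir -p /project/mang/USERNAME
# make new files group-accessible by default in future shells
echo "umask 002" >> ~/.bashrc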
Creating an Account
To create an account for one of the RCDC clusters go here: https://uh.edu/rcdc/getting-started/request-account.php
Information to be entered:
- Principal Investigator: Andreas Mang
- PI Email Address: andreas@math.uh.edu
- Grant Information: UH allocation of amang (PI)
- If you want to develop/execute GPU code, select Sabine as the resource (cluster)
- For CPU code, select Opuntia as the resource (cluster)
- For your login shell, select bash (if you don't know what you are doing)
Interactive Jobs
To run an interactive job (log into a compute node, here requesting one node and 20 cores), type
srun -A mang -n 20 -t 2:00:00 -p medium --pty /bin/bash -l
in your command window. To request a GPU, do
srun -A mang -t 3:00:00 -n 1 -p volta --gres=gpu:1 -N 1 --pty /bin/bash -l
You can define an alias in your ~/.bashrc to make your life easier. Here are two examples (give them distinct names; if both were called irun, the second definition would override the first):
alias irungpu='srun -A mang -t 3:00:00 -n 1 -p volta --gres=gpu:1 -N 1 --pty /bin/bash -l'
alias iruncpu='srun -A mang -n 20 -t 2:00:00 -p medium --pty /bin/bash -l'
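After opening a new shell (or running source ~/.bashrc), the alias starts the interactive session. A quick usage sketch with standard Slurm commands:
iruncpu            # request an interactive CPU session (alias defined above)
squeue -u $USER    # list your pending and running jobs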
Job Submission Files
An example job submission file (one node with 20 CPU cores):
#!/bin/bash
### sbatch parameters
#SBATCH -J #ADD YOUR JOB NAME HERE
#SBATCH -N 1
#SBATCH -n 20
#SBATCH -o hostname.out
#SBATCH -e hostname.err
#SBATCH -t 0-04:00:00
#SBATCH --mail-user= #ADD YOUR EMAIL HERE
#SBATCH --mail-type=begin
#SBATCH --mail-type=end
#SBATCH --mail-type=fail
#SBATCH -A mang
# Intel MPI: size pinning domains by OMP_NUM_THREADS (here, one thread per MPI task)
export I_MPI_PIN_DOMAIN=omp
export OMP_NUM_THREADS=1
module load # ADD YOUR MODULES HERE
### directory of your code
CDIR= #ADD YOUR CODE DIRECTORY HERE (NO EMPTY SPACE AFTER =)
DDIR= #ADD YOUR DATA DIRECTORY HERE (NO EMPTY SPACE AFTER =)
#### define paths
# ADD DEFINITION FOR PATHS HERE IF YOU HAVE ANY
#### submit job
# ADD COMMAND YOU WOULD LIKE TO EXECUTE HERE
If this file is stored as jobsub.sh, you can submit it using
sbatch jobsub.sh
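Once submitted, sbatch prints a job ID. A short sketch of standard Slurm commands to monitor or cancel the job (the ID 12345 is a placeholder):
squeue -u $USER           # list your pending and running jobs
scontrol show job 12345   # show detailed information for a job
scancel 12345             # cancel a job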
An example command line for Matlab is
matlab -nodesktop -nodisplay -nosplash -r "script; quit;"
Here, we execute the Matlab script script.m (note that -r expects Matlab commands, so the .m extension is omitted). The script needs to be in $CDIR (or in your current directory). You need to load the Matlab module to execute Matlab on the cluster.
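In the submission file, this might look as follows; the module name here is an assumption, so check module avail matlab for the exact name on the cluster:
module load Matlab   # module name is an assumption; verify with module avail
cd $CDIR             # directory containing script.m
matlab -nodesktop -nodisplay -nosplash -r "script; quit;"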
SSH Keys
To prevent having to enter your password whenever you check out code on a compute node, add your SSH key to GitHub. General instructions can be found in GitHub's documentation on ssh-agent. To generate a key:
ssh-keygen -t ed25519 -C "YOUREMAIL_GIT@uh.edu"
To enable this functionality on Sabine, you will also need to modify ~/.ssh/config. Add the following:
Host *
AddKeysToAgent yes
IdentityFile ~/.ssh/id_ed25519
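If the agent is not running automatically on the cluster, a minimal sketch for starting it, adding your key, and testing the connection to GitHub:
eval "$(ssh-agent -s)"       # start the agent in the current shell
ssh-add ~/.ssh/id_ed25519    # add the key generated above
ssh -T git@github.com        # test authentication against GitHub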
CLAIRE
Compilation of CLAIRE on Sabine
- Set of modules loaded:
module load python
module load CMake
module load OpenMPI/intel/4.0.1
module load CUDA/10.0.130
- Compilation of dependencies:
cd deps
make -j
cd ..
- Compilation of CLAIRE:
source deps/env_source.sh
make -j
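Taken together, a minimal sketch of the full build, assuming you start in the top-level directory of a CLAIRE checkout:
# load the build environment (see the module list above)
module load python CMake OpenMPI/intel/4.0.1 CUDA/10.0.130
# build the dependencies, then CLAIRE itself
cd deps && make -j && cd ..
source deps/env_source.sh
make -j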