Metadata
Title
Callan HPC Cluster
Category
general
UUID
eab0a620589843e7aec69c0f21d6767d
Source URL
https://www.tchpc.tcd.ie/docs/callan/
Parent URL
https://www.tchpc.tcd.ie/resources
Crawl Time
2026-03-23T14:10:58+00:00
Rendered Raw Markdown


TCD, Research IT docs

Callan HPC Cluster

Callan is the general-access HPC cluster for Trinity researchers. Its hardware characteristics are:

Access

To access it you must have a Research IT account; please apply for one if you don't have one.

To request access to the cluster please email rit-support@tcd.ie.

Please note that sensitive data coming under the GDPR cannot be stored or analysed on the Callan HPC Cluster.

Login

To log in, connect to callan.tchpc.tcd.ie using the usual SSH instructions. The cluster is accessible from the College network, including the VPN. To connect from the internet, first log in to the College VPN or relay through rsync.tchpc.tcd.ie as per our instructions.

Details of the Callan file system

Software

Software is installed with our usual modules system. You can view the available software with module available and load software with module load, e.g. module load gcc/13.1.0-gcc-8.5.0-k3cddbg. The modgrep utility will search the available module files from the head node, e.g. modgrep conda will display any modules with conda in their name.
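A typical session on the head node might look like the following sketch; the gcc module version is taken from the example above, and the modgrep output will depend on what is currently installed:

```shell
# List all software the modules system currently provides
module available

# Search the available module files by name from the head node
modgrep conda

# Load a specific compiler toolchain (version string from the example above)
module load gcc/13.1.0-gcc-8.5.0-k3cddbg

# Confirm which modules are loaded in the current shell
module list
```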

Intel OneAPI

Suggested Intel modules to load:

module load tbb/latest compiler-rt/latest oclfpga/latest compiler/latest mpi/latest

Manual activation: source /home/support/intel/oneapi/2024.1.0/setvars.sh
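Putting the two steps above together, a batch script using the Intel toolchain might look like this sketch; the task count, memory request, and the binary name my_app.x are placeholders, not values from this documentation:

```shell
#!/bin/bash
#SBATCH -n 24          # placeholder task count
#SBATCH --mem=96GB     # placeholder memory request

# Load the suggested Intel OneAPI modules
module load tbb/latest compiler-rt/latest oclfpga/latest compiler/latest mpi/latest

# Alternatively, activate the toolchain manually instead of loading modules:
# source /home/support/intel/oneapi/2024.1.0/setvars.sh

mpirun ./my_app.x      # my_app.x stands in for your own MPI binary
```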

Running jobs

Running jobs must be done via the Slurm scheduler.

Intel and AMD partitions

There are two different CPU architectures in Callan: some nodes have Intel CPUs, others have AMD CPUs. See the top of this page for more information on that. To avoid inadvertently running jobs on the wrong architecture, the two node types are in different partitions.

The Intel nodes are in the compute partition, which is the default: if you do not request a partition explicitly, your jobs will be assigned Intel nodes in the compute partition.

If you wish to manually specify Intel nodes in the compute partition use the #SBATCH -p compute directive in your batch job scripts or for interactive jobs use salloc -p compute ....

To request AMD nodes you must specify the amd partition with the #SBATCH -p amd directive in your batch job scripts or for interactive jobs use salloc -p amd ....
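As a sketch, a batch script targeting the AMD nodes could look like this; the task count, memory request, and executable name are illustrative placeholders:

```shell
#!/bin/bash
#SBATCH -p amd         # request AMD nodes explicitly (compute/Intel is the default)
#SBATCH -n 32          # placeholder task count
#SBATCH --mem=64GB     # placeholder memory request
module load openmpi
./exe.x                # placeholder for your own binary
```

For an interactive job on the same nodes, the equivalent request would be `salloc -p amd -n 32 --mem=64GB`.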

Chemistry partition

The 11 AMD nodes, callan-n[13-23], are in the amd partition, and 7 of them (callan-n[13-19]) are also in the chemistry partition. The ccem partition has a higher priority; jobs submitted to it will run sooner. The chemistry partition is only accessible to users specified by the Head of the School of Chemistry or their delegate, because these nodes were purchased with funding through the School of Chemistry.

To access the chemistry partition for interactive jobs use:

salloc -n 1 -p chemistry -A CCEM if you are a member of the CCEM group, or

salloc -n 1 -p chemistry -A callan_watsong if you are a member of the Chemistry department with permission to access the partition.

Or for batch:

#SBATCH -p ccem
#SBATCH -A CCEM

Or

#SBATCH -p ccem
#SBATCH -A callan_watsong

Batch job examples

Node sharing is enabled.

Create a batch submission script, e.g. run.sh, and submit it to the queue with the sbatch command:

> sbatch run.sh
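Once submitted, the job can be tracked with standard Slurm commands. The sketch below assumes sbatch reported job ID 12345; substitute your own:

```shell
sbatch run.sh            # prints e.g. "Submitted batch job 12345"
squeue -u $USER          # list your queued and running jobs
scontrol show job 12345  # detailed state of a specific job
scancel 12345            # cancel the job if needed
```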

Here are some example submission scripts:

12 tasks and 48 GB of memory:

#!/bin/bash
#SBATCH -n 12
#SBATCH --mem=48GB
module load openmpi
echo "Starting"
./exe.x

64 tasks and 256000 MB of memory:

#!/bin/bash
#SBATCH -n 64
#SBATCH --mem=256000
module load openmpi
echo "Starting"
./exe.x

128 tasks and 256000 MB of memory:

#!/bin/bash
#SBATCH -n 128
#SBATCH --mem=256000
module load openmpi
echo "Starting"
./exe.x

Interactive allocation

salloc -n 12 --mem=48GB - this will automatically log you into the node once it has been assigned.
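A full interactive session built on that command might look like the following sketch; the module and mpirun invocation are illustrative, assuming an MPI binary named exe.x as in the batch examples:

```shell
# Request 12 tasks and 48 GB; you are logged into the
# allocated node automatically once it is assigned.
salloc -n 12 --mem=48GB

# Inside the allocation, work as usual, e.g.:
module load openmpi
mpirun -n 12 ./exe.x   # exe.x is a placeholder binary

# Release the allocation when finished
exit
```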

Further instructions

See the HPC clusters usage documentation for further instructions.

See our Transferring files documentation for more notes on transferring data.
