Projects will receive a default 500 GiB storage allocation on Centre Storage at NSC. If you need more storage, please apply for a separate Storage project and decline the default storage allocation in this compute proposal.
Tetralith, tetralith.nsc.liu.se, runs a CentOS 7 version of the NSC Cluster Software Environment.
Use the Slurm workload manager (e.g., sbatch, interactive) to submit your jobs. ThinLinc is available on the login nodes. Applications are selected using the "module" command.
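A minimal job script sketch (the project ID and module name below are placeholders; substitute your own allocation and application):

    #!/bin/bash
    #SBATCH --job-name=myjob           # name shown in the queue
    #SBATCH --account=naiss20XX-Y-ZZ   # placeholder project ID; use your own allocation
    #SBATCH --nodes=1                  # one full 32-core node
    #SBATCH --time=00:30:00            # walltime limit (HH:MM:SS)

    module load someapp/1.0            # placeholder; pick a real module from "module avail"
    srun ./my_app                      # launch the application on the allocated cores

Submit the script with "sbatch myjob.sh"; for interactive work, the "interactive" command accepts the same resource options.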
All Tetralith compute nodes have 32 CPU cores. There are 1832 "thin" nodes with 96 GiB of primary memory (RAM) and 60 "fat" nodes with 384 GiB. Each compute node has a local SSD disk where applications can store temporary files (approximately 200 GiB per thin node and 900 GiB per fat node).
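To use the node-local SSD, stage data there, compute, and copy results back before the job ends. A sketch, assuming the job-specific scratch directory is exposed through the $SNIC_TMP environment variable (the common convention at SNIC/NAISS centres; check NSC's documentation for the exact name):

    # Inside a batch job: stage input to the fast local disk
    cp input.dat "$SNIC_TMP"/
    cd "$SNIC_TMP"

    ./my_app input.dat > output.dat    # I/O now hits the local SSD

    # Copy results back; node-local scratch is typically cleaned when the job ends
    cp output.dat "$SLURM_SUBMIT_DIR"/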
All Tetralith nodes are interconnected with a 100 Gbps Intel Omni-Path network which is also used to connect to the existing storage.
There are 170 nodes in Tetralith equipped with one NVIDIA Tesla T4 GPU each, as well as an updated, high-performance 2 TB NVMe SSD scratch disk. These are regular Tetralith thin nodes that have been retrofitted with the GPUs and disks, and they are accessible to all of Tetralith's users.
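A sketch of requesting one of these GPU nodes, assuming the T4s are scheduled through Slurm's GRES mechanism (the exact flag may differ; see NSC's documentation):

    #!/bin/bash
    #SBATCH --gpus=1              # request one T4 GPU (assumes GRES-based scheduling)
    #SBATCH --time=01:00:00       # walltime limit

    nvidia-smi                    # confirm the GPU is visible to the job
    ./my_gpu_app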
GPU nodes on Dardel are expected to become generally available on 2023-01-01, but there is a risk of delays due to the server maintenance needed to accommodate the GPUs.
Note that these GPUs are AMD GPUs, not NVIDIA GPUs, so if your software runs using CUDA, the code must be ported (for example to HIP).
You can read more about converting CUDA applications to HIP at https://www.lumi-supercomputer.eu/preparing-codes-for-lumi-converting-cuda-applications-to-hip/
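As a rough sketch of that workflow, ROCm's hipify-perl tool can translate most CUDA API calls mechanically (tool availability and setup on Dardel are assumptions here; consult PDC's documentation):

    # Translate CUDA source to HIP; most cuda* calls become hip* equivalents
    hipify-perl saxpy.cu > saxpy_hip.cpp

    # Compile the translated source with the HIP compiler
    hipcc saxpy_hip.cpp -o saxpy

Straightforward kernels often convert cleanly; code using CUDA-specific libraries or inline PTX usually needs manual attention.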
Rackham provides 9720 cores in the form of 486 nodes, each with two 10-core Intel Xeon V4 CPUs. Four fat nodes have 1 TB of memory, 32 fat nodes have 256 GB, and the rest have 128 GB.
The interconnect is InfiniBand.