SUPR
NAISS Large Compute Spring 2023

This Round is Open for Proposals

The deadline for submitting proposals is 2023-04-12 15:00.

More information about this round is available at https://snic.se/allocations/compute/large-allocations/.

Resources

Resource  Centre  Upper Limit Available  Unit  Note
Alvis C3SE 175 000 GPU-h/month This resource is only intended for AI/ML research.

This resource is only intended for research on AI/ML or research using AI/ML methods.

Alvis is a GPU-focused cluster dedicated to AI/ML research.

Phase 1 consists of:
  • 1 login node with 4 x Tesla T4 GPU with 16GB RAM, 2 x 16 core Intel Xeon Gold 6226R CPU @ 2.90GHz, 768GB RAM
  • 12 nodes with 2 x Tesla V100 SXM2 GPU with 32GB RAM, 2 x 8 core Intel Xeon Gold 6244 CPU @ 3.60GHz, 768GB RAM
  • 5 nodes with 4 x Tesla V100 SXM2 GPU with 32GB RAM, 2 x 16 core Intel Xeon Gold 6226R CPU @ 2.90GHz, 768GB RAM
  • 20 nodes with 8 x Tesla T4 GPU with 16GB RAM, 2 x 16 core Intel Xeon Gold 6226R CPU @ 2.90GHz, 576GB RAM (1 node with 1536GB)
Phase 2 consists of:
  • 1 data transfer node with 2 x 32 core Intel Xeon Gold 6338 CPU @ 2GHz, 256GB RAM
  • 85 nodes with 4 x Tesla A40 GPU with 48GB RAM, 2 x 32 core Intel Xeon Gold 6338 CPU @ 2GHz, 256GB RAM
  • 56 nodes with 4 x Tesla A100 HGX GPU with 40GB RAM, 2 x 32 core Intel Xeon Gold 6338 CPU @ 2GHz, 256GB RAM
  • 20 nodes with 4 x Tesla A100 HGX GPU with 40GB RAM, 2 x 32 core Intel Xeon Gold 6338 CPU @ 2GHz, 512GB RAM
  • 8 nodes with 4 x Tesla A100 HGX GPU with 80GB RAM, 2 x 32 core Intel Xeon Gold 6338 CPU @ 2GHz, 1024GB RAM
Tetralith NSC 14 500 x 1000 core-h/month

Tetralith, tetralith.nsc.liu.se, runs a CentOS 7 version of the NSC Cluster Software Environment. Use the workload manager Slurm (e.g. sbatch, interactive, ...) to submit your jobs. ThinLinc is available on the login nodes. Applications are selected using "module".

All Tetralith compute nodes have 32 CPU cores. There are 1832 "thin" nodes with 96 GiB of primary memory (RAM) and 60 "fat" nodes with 384 GiB. Each compute node has a local SSD disk where applications can store temporary files (approximately 200 GiB per thin node, 900 GiB per fat node). All Tetralith nodes are interconnected with a 100 Gbps Intel Omni-Path network, which is also used to connect to the existing storage.

There are 170 nodes in Tetralith equipped with one NVIDIA Tesla T4 GPU each, as well as an updated, high-performance 2 TB NVMe SSD scratch disk. These are regular Tetralith thin nodes that have been retrofitted with the GPUs and disks, and they are accessible to all of Tetralith's users.
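For orientation, a Tetralith job is normally described in a short batch script and handed to Slurm with sbatch. The sketch below is only an illustration: the project ID, module name and program are placeholders invented for the example, not values taken from this page, so consult NSC's documentation for the real ones.

    #!/bin/bash
    #SBATCH --account=naiss2023-1-23   # placeholder NAISS project ID, not a real allocation
    #SBATCH --nodes=1                  # one Tetralith node = 32 cores
    #SBATCH --ntasks-per-node=32       # one task per core
    #SBATCH --time=02:00:00            # requested wall time

    # Applications are selected with the module system (placeholder module name)
    module load someapp/1.0

    # Launch the program across the allocated cores (placeholder program name)
    srun ./my_program

Such a script would be submitted with "sbatch job.sh" and the queue inspected with "squeue -u $USER".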

Dardel PDC 28 000 x 1000 core-h/month
Dardel-GPU PDC 105 000 GPU-h/month
