Compute, storage and cloud resources currently available via SUPR:

Resource | Centre | Short Description

Alvis | C3SE | Accelerator-based resource dedicated to research using AI techniques
The Alvis cluster is a national NAISS resource dedicated to Artificial Intelligence and Machine Learning research. Note: significant generation of training data is expected to be done elsewhere. The system is built around graphics processing unit (GPU) accelerator cards. The first phase of the resource has 160 NVIDIA T4, 44 V100, and 4 A100 GPUs. The second phase is based on 340 NVIDIA A40 and 336 A100 GPUs.

LUMI-C | LUMI Sweden | Swedish CPU share of the EuroHPC JU resource LUMI
LUMI is a general computational resource hosted by CSC in Finland.
LUMI, Large Unified Modern Infrastructure, is an HPE Cray EX supercomputer consisting of several partitions targeted at different use cases. The largest partition of the system is LUMI-G, which consists of GPU-accelerated nodes using AMD Instinct GPUs. In addition, there is a smaller CPU-only partition, LUMI-C, featuring AMD EPYC CPUs, and an auxiliary partition for data analytics with large-memory nodes and some GPUs for data visualization.
The LUMI consortium countries are Finland, Belgium, the Czech Republic, Denmark, Estonia, Iceland, Norway, Poland, Sweden, and Switzerland. The acquisition and operation of the EuroHPC Supercomputer are jointly funded by the EuroHPC Joint Undertaking and the LUMI consortium. The Swedish Research Council has contributed approx. 3.5% of the funding. A corresponding share of the system is reserved for Swedish research, but researchers are encouraged to apply for resources from the JU part of LUMI and other EuroHPC resources.

LUMI-G | LUMI Sweden | Swedish GPU share of the EuroHPC JU resource LUMI
LUMI is a general computational resource hosted by CSC in Finland.
LUMI, Large Unified Modern Infrastructure, is an HPE Cray EX supercomputer consisting of several partitions targeted at different use cases. The largest partition of the system is LUMI-G, which consists of GPU-accelerated nodes using AMD Instinct GPUs. In addition, there is a smaller CPU-only partition, LUMI-C, featuring AMD EPYC CPUs, and an auxiliary partition for data analytics with large-memory nodes and some GPUs for data visualization.
The LUMI consortium countries are Finland, Belgium, the Czech Republic, Denmark, Estonia, Iceland, Norway, Poland, Sweden, and Switzerland. The acquisition and operation of the EuroHPC Supercomputer are jointly funded by the EuroHPC Joint Undertaking and the LUMI consortium. The Swedish Research Council has contributed approx. 3.5% of the funding. A corresponding share of the system is reserved for Swedish research, but researchers are encouraged to apply for resources from the JU part of LUMI and other EuroHPC resources.

Tetralith | NSC | General purpose, mainly CPU-based resource
Tetralith is a general computational resource hosted by NSC at Linköping University.
Tetralith servers have two Intel Xeon Gold 6130 processors, providing 32 cores per server. 1844 of the servers are equipped with 96 GiB of primary memory and 64 servers with 384 GiB. All servers are interconnected with a 100 Gbit/s Intel Omni-Path network, which is also used to connect the existing storage. Each server has a local SSD disk for ephemeral storage (approx. 200 GiB per thin node, 900 GiB per fat node). The centre storage consists of an IBM Spectrum Scale system. 170 of the Tetralith nodes are equipped with one NVIDIA Tesla T4 GPU each, as well as a high-performance 2 TB NVMe SSD scratch disk.

Dardel | PDC | General purpose computational resource
Dardel is a Cray EX system from Hewlett Packard Enterprise, based on AMD EPYC processors with an accompanying Lustre storage system.
The nodes are interconnected using Slingshot HPC Ethernet.

Dardel-GPU | PDC | General purpose accelerator-based resource
Dardel-GPU is the accelerated partition of the HPE Cray EX system, based on AMD Instinct MI250X GPUs, with an accompanying Lustre storage system.
The nodes are interconnected using Slingshot HPC Ethernet.

Bianca | UPPMAX | Cluster system for sensitive data
Bianca is a research system dedicated to analysing sensitive personal data, or other types of sensitive data.
Bianca provides 4480 cores in the form of 204 dual-CPU (Intel Xeon E5-2630 v3) Huawei XH620 V3 nodes with 128 GB of memory, 75 fat nodes with 256 GB of memory, 15 nodes with 512 GB of memory, and ten nodes with two NVIDIA A100 40 GB GPUs each.

Rackham | UPPMAX | Cluster for life science and general use, CPU only
Rackham provides 9720 cores in the form of 486 nodes with two 10-core Intel Xeon V4 CPUs each. 4 fat nodes have 1 TB of memory, 32 fat nodes have 256 GB, and the rest have 128 GB.
The interconnect is InfiniBand.

Resource | Centre | Short Description

Cephyr NOBACKUP | C3SE | Project storage primarily for Vera, also attached to Alvis
Project storage based on Ceph with a total usable capacity of 5 PiB.
- 14 storage servers, each with 3 NVMe drives (for database and journal).
- 7 JBODs, each with 42 x 14 TiB HDDs.

Mimer | C3SE | Project storage attached to Alvis and Vera, dedicated for AI/ML
Mimer is an all-flash storage system based on a solution from WEKA IO. It consists of a 0.6 PB all-flash tier and a 7 PB Ceph-based bulk storage tier (with spinning disks).

Storage | LUMI Sweden | Swedish share of project storage attached to LUMI-C and LUMI-G
Project storage for NAISS allocations on LUMI.
Storage is applied for in TB-hours. Flash storage, LUMI-F, is accounted at ten times the TB-hour rate, i.e. using 1 TB of flash storage for one hour costs 10 TB-hours. Lustre storage, LUMI-P, is accounted at the TB-hour rate. CEPH object storage, LUMI-O, is accounted at half the TB-hour rate, i.e. using 1 TB of CEPH storage for one hour costs 0.5 TB-hours. The total size of the Swedish part of the LUMI storage system is 35 412 000 TB-hours.
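
To illustrate the accounting, here is a minimal Python sketch based on the tier rates described above; the helper name and the example figures are illustrative only and not part of the LUMI documentation.

# Hypothetical helper illustrating LUMI storage accounting in TB-hours.
# Tier rates as described above: LUMI-F (flash) = 10x, LUMI-P (Lustre) = 1x,
# LUMI-O (CEPH object storage) = 0.5x.
TIER_RATES = {"LUMI-F": 10.0, "LUMI-P": 1.0, "LUMI-O": 0.5}

def tb_hours(tier: str, terabytes: float, hours: float) -> float:
    """Return the TB-hours charged for keeping `terabytes` on `tier` for `hours`."""
    return TIER_RATES[tier] * terabytes * hours

# Example: 10 TB on Lustre plus 1 TB on flash, both for 30 days (720 hours).
total = tb_hours("LUMI-P", 10, 720) + tb_hours("LUMI-F", 1, 720)
print(f"{total:.0f} TB-hours")  # 7200 + 7200 = 14400 TB-hours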

Centre Storage | NSC | Project storage attached to Tetralith and Sigma
Project storage for NAISS as well as LiU Local projects with compute allocations on resources hosted by NSC.
Centre Storage @ NSC is designed for fast access from compute resources at NSC. It consists of one IBM ESS GL6S building block and one IBM ESS 5000 SC4 building block.
In total there are 946 spinning hard disks and a small number of NVRAM devices and SSDs which act as a cache to speed up small writes. The total disk space that is usable for storing files is approximately 6.9 PiB.

Klemming | PDC | Project storage attached to Dardel and Dardel-GPU
Project storage for NAISS as well as PDC projects with compute allocations on resources hosted by PDC.
Klemming is designed for fast access from compute resources at PDC. It uses the Lustre parallel file system, which is optimized for handling data from many clients at the same time. The total size of Klemming is 12 PB.

dCache | Swestore | National storage infrastructure for large-scale research data
Swestore is a Research Data Storage Infrastructure intended for active research data, operated by the National Academic Infrastructure for Supercomputing in Sweden (NAISS).
The storage resources provided by Swestore are made available free of charge for academic research funded by VR and Formas through open calls, so that the best Swedish research is supported and new research is facilitated.
The purpose of Swestore allocations, granted by the National Allocations Committee (NAC), is to provide large-scale data storage for "live" or "working" research data, also known as active research data.
See the documentation at: https://docs.swestore.se

Crex 1 | UPPMAX | Project storage attached to Rackham and Snowy
Active data storage for Rackham projects. Primarily for life science projects.

Resources funded outside of NAISS but which are nationally available, in some cases under special conditions. See conditions for access under each resource.

Resources financed by individual universities or in regional collaborations between universities. Access is often limited to employees of the universities where the resources are located. See conditions for access under each resource.

Resource | Centre | Short Description

Vera | C3SE | Local resource for Chalmers researchers
The Vera cluster is built on Intel Xeon Gold 6130 (code-named "Skylake") CPUs with 32 cores per node and Intel Xeon Platinum 8358 (code-named "Ice Lake") CPUs with 64 cores per node.
For details see Vera hardware.

Kebnekaise | HPC2N | Local resource for researchers in the HPC2N consortium
This resource is for access to the CPU nodes in Kebnekaise.
For GPU nodes see resource 'Kebnekaise GPU'.
For large memory nodes see resource 'Kebnekaise Large Memory'.
Kebnekaise is a heterogeneous computing resource currently consisting of:
- Compute nodes:
- GPU nodes (separate resource):
- 10 Intel® Xeon Gold 6132 Processor (Skylake), 2x14 cores, 192 GB/node
- 2 AMD® EPYC 7413 (Zen3), 2x24 cores, 512 GB/node
- 1 AMD® EPYC 7413 (Zen3), 2x24 cores, 512 GB/node
- 1 AMD® EPYC 9254 (Zen4), 2x24 cores, 384 GB/node
- 10 AMD® EPYC 9254 (Zen4), 2x24 cores, 384 GB/node
- 2 AMD® EPYC 9454 (Zen4), 2x48 cores, 768 GB/node
- 1 AMD® EPYC 9334 (Zen4), 2x32 cores, 768 GB/node
- 2 AMD® EPYC 9334 (Zen4), 2x32 cores, 768 GB/node
- Large Memory nodes (separate resource):
Notes:
- Access to the GPU nodes is handled through the 'Kebnekaise GPU' resource.
- Access to the Large Memory nodes is handled through the 'Kebnekaise Large Memory' resource.
- New nodes will be procured on a semi-regular basis.

Kebnekaise GPU | HPC2N | Local resource for researchers in the HPC2N consortium
This resource is for access to the GPU nodes in Kebnekaise.
For CPU nodes see resource 'Kebnekaise'.
For large memory nodes see resource 'Kebnekaise Large Memory'.
Kebnekaise is a heterogeneous computing resource currently consisting of:
- Compute nodes (separate resource):
- GPU nodes:
- 10 Intel® Xeon Gold 6132 Processor (Skylake), 2x14 cores, 192 GB/node
- 2 AMD® EPYC 7413 (Zen3), 2x24 cores, 512 GB/node
- 1 AMD® EPYC 7413 (Zen3), 2x24 cores, 512 GB/node
- 1 AMD® EPYC 9254 (Zen4), 2x24 cores, 384 GB/node
- 10 AMD® EPYC 9254 (Zen4), 2x24 cores, 384 GB/node
- 2 AMD® EPYC 9454 (Zen4), 2x48 cores, 768 GB/node
- 1 AMD® EPYC 9334 (Zen4), 2x32 cores, 768 GB/node
- 2 AMD® EPYC 9334 (Zen4), 2x32 cores, 768 GB/node
- Large Memory nodes (separate resource):
Notes:
- GPU nodes are charged differently from ordinary compute nodes.
- Access to the CPU nodes is handled through the 'Kebnekaise' resource.
- Access to the Large Memory nodes is handled through the 'Kebnekaise Large Memory' resource.
- New nodes will be procured on a semi-regular basis.

Kebnekaise Large Memory | HPC2N | Local resource for researchers in the HPC2N consortium
This resource is for access to the Large Memory nodes in Kebnekaise.
For CPU nodes see resource 'Kebnekaise'.
For GPU nodes see resource 'Kebnekaise GPU'.
Kebnekaise is a heterogeneous computing resource currently consisting of:
- Compute nodes (separate resource):
- GPU nodes (separate resource):
- 10 Intel® Xeon Gold 6132 Processor (Skylake), 2x14 cores, 192 GB/node
- 2 AMD® EPYC 7413 (Zen3), 2x24 cores, 512 GB/node
- 1 AMD® EPYC 7413 (Zen3), 2x24 cores, 512 GB/node
- 1 AMD® EPYC 9254 (Zen4), 2x24 cores, 384 GB/node
- 10 AMD® EPYC 9254 (Zen4), 2x24 cores, 384 GB/node
- 2 AMD® EPYC 9454 (Zen4), 2x48 cores, 768 GB/node
- 1 AMD® EPYC 9334 (Zen4), 2x32 cores, 768 GB/node
- 2 AMD® EPYC 9334 (Zen4), 2x32 cores, 768 GB/node
- Large Memory nodes:
Notes:
- Access to the CPU nodes is handled through the 'Kebnekaise' resource.
- Access to the GPU nodes is handled through the 'Kebnekaise GPU' resource.
- New nodes will be procured on a semi-regular basis.

COSMOS | LUNARC | Local resource for Lund University researchers
COSMOS represents a significant increase in computational capacity and offers access to modern hardware, including GPUs. Through the LUNARC Desktop, new and existing users can draw on the benefits of high-performance computing (HPC) without being burdened by the intricacies of HPC usage. At the same time, users proficient in HPC can still make full use of the computational power of the interconnected nodes of COSMOS.
COSMOS consists of 182 compute nodes funded by Lund University. Each node has two AMD EPYC 7413 processors (Milan), offering 48 compute cores per node. The nodes have 256 GB of RAM installed. In addition to the CPU nodes, there are also 6 nodes with NVIDIA A100 GPUs and 6 nodes with NVIDIA A40 GPUs. For more specs, see below.
COSMOS also features Intel partitions with Intel processors (Cascade Lake), offering 32 compute cores each. Within the Intel partitions there are 22 CPU nodes, 5 nodes with NVIDIA A40 GPUs, and 4 nodes with NVIDIA A100 GPUs.

COSMOS-SENS-COMPUTE | LUNARC | Local resource for Lund University researchers
COSMOS-SENS is a Lund University compute resource and is operated by LUNARC.

Sigma | NSC | Local resource for Linköping University researchers
Sigma, sigma.nsc.liu.se, runs a Rocky Linux 9 version of the NSC Cluster Software Environment, which means that most things are very familiar to Gamma users.
You still use Slurm (e.g. sbatch, interactive, ...) to submit your jobs, ThinLinc is available on the login node, and applications are still selected using "module" (a minimal submission sketch follows below).
All Sigma compute nodes have 32 CPU cores. There are 104 "thin" nodes with 96 GiB of primary memory (RAM) and 4 "fat" nodes with 384 GiB. Each compute node has a local SSD disk where applications can store temporary files (approximately 200 GB per node).
All Sigma nodes are interconnected with a 100 Gbps Intel Omni-Path network, which is also used to connect the existing storage. The Omni-Path network works in a similar way to the FDR InfiniBand network in Gamma (e.g. still a fat-tree topology).
Sigma has a capacity that exceeds the current computing capacity of Gamma. Sigma was made available to users on August 23, 2018.
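
Since Sigma uses Slurm and the module system like other NSC clusters, the following Python sketch shows one way to compose and submit a batch job; the account name, module name and executable are placeholders rather than actual Sigma settings, and it assumes the standard Slurm sbatch command is available on the login node.

# Sketch: build a Slurm batch script and submit it with sbatch.
import subprocess
import tempfile

job_script = """#!/bin/bash
#SBATCH --account=naiss-placeholder    # hypothetical project account
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=32           # one full 32-core Sigma node
#SBATCH --time=01:00:00

module load buildenv-placeholder       # placeholder module name
srun ./my_application                  # placeholder executable
"""

with tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False) as f:
    f.write(job_script)
    script_path = f.name

# Submit the job; sbatch prints e.g. "Submitted batch job 123456".
subprocess.run(["sbatch", script_path], check=True)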

Dardel | PDC | General purpose computational resource
Dardel is a Cray EX system from Hewlett Packard Enterprise, based on AMD EPYC processors with an accompanying Lustre storage system.
The nodes are interconnected using Slingshot HPC Ethernet.

Pelle | UPPMAX | Uppsala University compute cluster
This UU-funded resource was installed in late 2024.
It features standard compute nodes with 48 cores and 768 GB of memory per node, many nodes with NVIDIA T4 or L40S GPUs, and some nodes with NVIDIA H100 GPUs.

Resource | Centre | Short Description

Cephyr NOBACKUP | C3SE | Project storage primarily for Vera, also attached to Alvis
Project storage based on Ceph with a total usable capacity of 5 PiB.
- 14 storage servers, each with 3 NVMe drives (for database and journal).
- 7 JBODs, each with 42 x 14 TiB HDDs.

Mimer | C3SE | Project storage attached to Alvis and Vera, dedicated for AI/ML
Mimer is an all-flash storage system based on a solution from WEKA IO. It consists of a 0.6 PB all-flash tier and a 7 PB Ceph-based bulk storage tier (with spinning disks).

Nobackup | HPC2N | HPC2N local storage connected to Kebnekaise
Active project storage without backup for local HPC2N projects.

COSMOS-SENS-STORAGE | LUNARC | Local resource for Lund University researchers
COSMOS-SENS-STORAGE is a Lund University storage resource and is operated by LUNARC.

Centrestorage nobackup | LUNARC | Local resource for Lund University researchers

Centre Storage | NSC | Project storage attached to Tetralith and Sigma
Project storage for NAISS as well as LiU Local projects with compute allocations on resources hosted by NSC.
Centre Storage @ NSC is designed for fast access from compute resources at NSC. It consists of one IBM ESS GL6S building block and one IBM ESS 5000 SC4 building block.
In total there are 946 spinning hard disks and a small number of NVRAM devices and SSDs which act as a cache to speed up small writes. The total disk space that is usable for storing files is approximately 6.9 PiB.

Klemming | PDC | Project storage attached to Dardel and Dardel-GPU
Project storage for NAISS as well as PDC projects with compute allocations on resources hosted by PDC.
Klemming is designed for fast access from compute resources at PDC. It uses the Lustre parallel file system, which is optimized for handling data from many clients at the same time. The total size of Klemming is 12 PB.

Gorilla | UPPMAX | Local resource for UU research and education
Gorilla is a large and fast Ceph-based file system mounted on Pelle and Maja.