Resources

Compute, storage and cloud resources currently available via SUPR:

NAISS Resources

Nationally available resources funded and operated by NAISS.

Compute

Resource Centre Short Description
Alvis C3SE Accelerator-based resource dedicated to research using AI techniques

The Alvis cluster is a national NAISS resource dedicated to Artificial Intelligence and Machine Learning research.

Note: Significant generation of training data is expected to be done elsewhere.

The system is built around Graphics Processing Unit (GPU) accelerator cards. The first phase of the resource has 160 NVIDIA T4, 44 V100, and 4 A100 GPUs. The second phase is based on 340 NVIDIA A40 and 336 A100 GPUs.

LUMI-C LUMI Sweden Swedish CPU share of the EuroHPC JU resource LUMI

LUMI is a general computational resource hosted by CSC in Finland.

LUMI, Large Unified Modern Infrastructure, is an HPE Cray EX supercomputer consisting of several partitions targeted for different use cases. The largest partition of the system is the LUMI-G partition consisting of GPU-accelerated nodes using AMD Instinct GPUs. In addition to this, there is a smaller CPU-only partition LUMI-C that features AMD Epyc CPUs and an auxiliary partition for data analytics with large memory nodes and some GPUs for data visualization.

The LUMI consortium countries are Finland, Belgium, the Czech Republic, Denmark, Estonia, Iceland, Norway, Poland, Sweden, and Switzerland. The acquisition and operation of the EuroHPC Supercomputer are jointly funded by the EuroHPC Joint Undertaking and the LUMI consortium. The Swedish Research Council has contributed approx. 3.5% of the funding. A corresponding share of the system is reserved for Swedish research, but researchers are encouraged to apply for resources from the JU part of LUMI and other EuroHPC resources.

LUMI-G LUMI Sweden Swedish GPU share of the EuroHPC JU resource LUMI

LUMI is a general computational resource hosted by CSC in Finland.

LUMI, Large Unified Modern Infrastructure, is an HPE Cray EX supercomputer consisting of several partitions targeted for different use cases. The largest partition of the system is the LUMI-G partition consisting of GPU-accelerated nodes using AMD Instinct GPUs. In addition to this, there is a smaller CPU-only partition LUMI-C that features AMD Epyc CPUs and an auxiliary partition for data analytics with large memory nodes and some GPUs for data visualization.

The LUMI consortium countries are Finland, Belgium, the Czech Republic, Denmark, Estonia, Iceland, Norway, Poland, Sweden, and Switzerland. The acquisition and operation of the EuroHPC Supercomputer are jointly funded by the EuroHPC Joint Undertaking and the LUMI consortium. The Swedish Research Council has contributed approx. 3.5% of the funding. A corresponding share of the system is reserved for Swedish research, but researchers are encouraged to apply for resources from the JU part of LUMI and other EuroHPC resources.

Tetralith NSC General purpose mainly CPU based resource

Tetralith is a general computational resource hosted by NSC at Linköping University.

Tetralith servers have two Intel Xeon Gold 6130 processors, providing 32 cores per server. 1844 of the servers are equipped with 96 GiB of primary memory and 64 servers with 384 GiB. All servers are interconnected with a 100 Gbit/s Intel Omni-Path network, which is also used to connect the existing storage. Each server has a local SSD disk for ephemeral storage (approx. 200 GiB per thin node, 900 GiB per fat node). The centre storage is an IBM Spectrum Scale system. 170 of the Tetralith nodes are equipped with one NVIDIA Tesla T4 GPU each as well as a high-performance 2 TB NVMe SSD scratch disk.

Dardel PDC General purpose computational resource
Dardel is a Cray EX system from Hewlett Packard Enterprise, based on AMD EPYC processors with an accompanying Lustre storage system. The nodes are interconnected using Slingshot HPC Ethernet.
Dardel-GPU PDC General purpose accelerator based resource
Dardel-GPU is the accelerated partition based on AMD’s Instinct MI250X GPU of the Cray EX system from Hewlett Packard Enterprise. It has an accompanying Lustre storage system. The nodes are interconnected using Slingshot HPC Ethernet.
Bianca UPPMAX Cluster system for sensitive data.

Bianca is a research system dedicated to analysing sensitive personal data, or other types of sensitive data.

Bianca provides 4480 cores in the form of 204 dual-CPU (Intel Xeon E5-2630 v3) Huawei XH620 V3 nodes with 128 GB of memory, 75 fat nodes with 256 GB of memory, 15 nodes with 512 GB of memory, and ten nodes with two NVIDIA A100 40 GB GPUs each.

Rackham UPPMAX Cluster for life science and general use. CPU only
Rackham provides 9720 cores in the form of 486 nodes with two 10-core Intel Xeon V4 CPUs each. 4 fat nodes have 1 TB of memory, 32 fat nodes have 256 GB, and the rest have 128 GB. The interconnect is InfiniBand.

Storage

Resource Centre Short Description
Cephyr NOBACKUP C3SE Project storage primarily for Vera, also attached to Alvis
Project storage based on Ceph with a total usable area of 5 PiB.
  • 14 storage servers, each with 3 NVMe drives (for database and journal).
  • 7 JBODs, each with 42 × 14 TiB HDDs.
Mimer C3SE Project storage attached to Alvis and Vera, dedicated for AI/ML

Mimer is an all-flash storage system based on a solution from WEKA IO. It consists of a 0.6 PB all-flash tier and a 7 PB Ceph-based bulk storage tier (with spinning disks).

Storage LUMI Sweden Swedish share of project storage attached to LUMI-C and LUMI-G

Project storage for NAISS allocations on LUMI.

Storage is applied for in TB-hours. Flash storage, LUMI-F, is accounted at ten times the TB-hour rate, i.e. use of 1 TB of flash storage for one hour costs 10 TB-hours. Lustre storage, LUMI-P, is accounted at the TB-hour rate. Ceph object storage, LUMI-O, is accounted at half the TB-hour rate, i.e. use of 1 TB of Ceph storage for one hour costs 0.5 TB-hours. The total size of the Swedish part of the LUMI storage system is 35 412 000 TB-hours.
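
As a worked illustration of this accounting, the sketch below computes the TB-hours charged for a given tier, volume and duration. The multipliers are taken from the text above; the helper function itself is illustrative, not an official NAISS or LUMI tool.

  # Minimal sketch of the LUMI storage accounting described above.
  # Tier multipliers come from the text; the helper itself is hypothetical.
  TB_HOUR_FACTOR = {
      "LUMI-F": 10.0,   # flash: 10x the TB-hour rate
      "LUMI-P": 1.0,    # Lustre: 1x the TB-hour rate
      "LUMI-O": 0.5,    # Ceph object storage: 0.5x the TB-hour rate
  }

  def tb_hours(tier: str, terabytes: float, hours: float) -> float:
      """Return the TB-hours charged for keeping `terabytes` on `tier` for `hours`."""
      return TB_HOUR_FACTOR[tier] * terabytes * hours

  # Example: 10 TB of Lustre storage kept for a 30-day month
  print(tb_hours("LUMI-P", 10, 30 * 24))   # 7200.0 TB-hours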

Centre Storage NSC Project storage attached to Tetralith and Sigma

Project storage for NAISS as well as LiU Local projects with compute allocations on resources hosted by NSC.

Centre Storage @ NSC is designed for fast access from compute resources at NSC. It consists of one IBM ESS GL6S building block and one IBM ESS 5000 SC4 building block.

In total there are 946 spinning hard disks and a small number of NVRAM devices and SSDs which act as a cache to speed up small writes. The total disk space that is usable for storing files is approximately 6.9 PiB.

Klemming PDC Project storage attached to Dardel and Dardel-GPU

Project storage for NAISS as well as PDC projects with compute allocations on resources hosted by PDC.

Klemming is designed for fast access from compute resources at PDC. It uses the Lustre parallel file system, which is optimized for handling data from many clients at the same time. The total size of Klemming is 12 PB.

dCache Swestore National storage infrastructure for large-scale research data

Swestore is a Research Data Storage Infrastructure intended for active research data, operated by the National Academic Infrastructure for Supercomputing in Sweden (NAISS).

The storage resources provided by Swestore are made available free of charge for academic research funded by VR and Formas through open calls, so that the best Swedish research is supported and new research is facilitated.

The purpose of Swestore allocations, granted by the National Allocations Committee (NAC), is to provide large-scale data storage for “live” or “working” research data, also known as active research data.

See the documentation at: https://docs.swestore.se
Crex 1 UPPMAX Project storage attached to Rackham and Snowy
Active data storage for Rackham projects. Primarily for life science projects.

Cloud

Resource Centre Short Description
Cloud SSC Swedish Science Cloud provides Infrastructure as a Service (IaaS)

Swedish Science Cloud (SSC) is a large-scale, geographically distributed OpenStack cloud Infrastructure as a Service (IaaS), provided by NAISS and intended for Swedish academic research.

It is available free of charge to researchers at Swedish higher education institutions through open application procedures.

The SSC resources are not meant to be a replacement for NAISS supercomputing resources (HPC clusters). Rather, they should be seen as a complement, offering advanced functionality to users who need more flexible access to resources (for example, more control over the operating systems and software environments), want to develop software as a service, or want to explore recent technologies such as “Big Data” frameworks (e.g. Apache Hadoop/Spark) or IoT applications.
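
Because SSC exposes a standard OpenStack API, resources can also be managed with ordinary OpenStack tooling. The sketch below uses the openstacksdk Python library to list the virtual machines in a project; the cloud name "ssc" refers to an entry the user has added to their own clouds.yaml and is an assumption, not an SSC-mandated name.

  # Minimal sketch: list servers in an OpenStack project using openstacksdk.
  # Credentials are read from the user's clouds.yaml; the cloud name "ssc"
  # is a placeholder for whatever entry the user has configured.
  import openstack

  conn = openstack.connect(cloud="ssc")

  for server in conn.compute.servers():
      print(server.name, server.status)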

Other National Resources

Resources funded outside of NAISS but which are nationally available, in some cases under special conditions. See conditions for access under each resource.

Compute

Resource Centre Short Description
Berzelius Compute NSC KAW-financed resource dedicated to AI/ML research

Berzelius is an NVIDIA SuperPOD consisting of 94 DGX-A100 nodes, sporting a total of 752 NVIDIA A100 GPUs.

The SuperPOD uses the SLURM resource manager and job scheduler. The original 60 DGX-A100 nodes have 8x NVIDIA A100 GPUs (40 GB), 128 CPU cores (2x AMD Epyc 7742), 1 TB of RAM and 15 TB of NVMe SSD local disk. The 34 newer DGX-A100 nodes have 8x NVIDIA A100 GPUs (80 GB), 128 CPU cores (2x AMD Epyc 7742), 2 TB of RAM and 30 TB of NVMe SSD local disk. High-performance central storage is available using 4x AI400X and 2x AI400X2 from DDN, serving 1.5 PB of storage space to all nodes of the cluster. All DGX-A100 GPUs have dedicated Mellanox HDR InfiniBand HBAs; that is, there are 8 Mellanox HDR HBAs per DGX-A100 node, connected in a full-bisection-bandwidth fat-tree topology.
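
Since jobs on the SuperPOD are submitted through SLURM, a typical workflow is to write a batch script and hand it to sbatch. The sketch below shows this from Python; the --gpus and --time flags are generic SLURM options, and the job name and train.py script are placeholders rather than Berzelius-specific values.

  # Minimal sketch: write a generic SLURM batch script and submit it with sbatch.
  import subprocess
  from pathlib import Path

  # Request one GPU for one hour and run a (placeholder) training script.
  job_script = "\n".join([
      "#!/bin/bash",
      "#SBATCH --job-name=demo",
      "#SBATCH --gpus=1",
      "#SBATCH --time=01:00:00",
      "srun python train.py",
  ]) + "\n"

  Path("job.sh").write_text(job_script)
  # sbatch prints "Submitted batch job <id>" on success
  subprocess.run(["sbatch", "job.sh"], check=True)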

Storage

Resource Centre Short Description
Berzelius Storage NSC Project storage attached to Berzelius

High-performance central storage is available using 4x AI400X and 2x AI400X2, serving a total of 1.5 PB of storage to all nodes of the cluster via a dedicated InfiniBand interconnect. Aggregate read I/O performance from the central storage is 320 GB/s, and the dedicated data interconnect bandwidth per node is 25 GB/s.

NSC centre storage (as available on Tetralith) is not accessible on Berzelius.

Spirula UPPMAX FAIR Storage resource at SciLifeLab
SciLifeLab DDLS funded storage resource. Managed in collaboration between UPPMAX and SciLifeLab Data Centre. S3 object storage.
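
Because Spirula is S3 object storage, it can be reached with any S3-compatible client. The sketch below uses the boto3 Python library to list buckets; the endpoint URL and credentials are placeholders, as the real values are provided by the resource operators.

  # Minimal sketch: list buckets on an S3-compatible object store with boto3.
  # Endpoint and keys are placeholders, not actual Spirula values.
  import boto3

  s3 = boto3.client(
      "s3",
      endpoint_url="https://s3.example.org",
      aws_access_key_id="ACCESS_KEY",
      aws_secret_access_key="SECRET_KEY",
  )

  for bucket in s3.list_buckets()["Buckets"]:
      print(bucket["Name"])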

Local and Regional Resources

Resources financed by individual universities or in regional collaborations between universities. Access is often limited to employees of the universities where the resources are located. See conditions for access under each resource.

Compute

Resource Centre Short Description
Vera C3SE Local resource for Chalmers researchers

The Vera cluster is built on Intel Xeon Gold 6130 (code-named "Skylake") CPUs with 32 cores per node and Intel Xeon Platinum 8358 (code-named "Ice Lake") CPUs with 64 cores per node.

For details see Vera hardware.

Kebnekaise HPC2N Local resource for researchers in the HPC2N consortium

This resource is for access to the CPU nodes in Kebnekaise.

For GPU nodes see resource 'Kebnekaise GPU'.

For large memory nodes see resource 'Kebnekaise Large Memory'.

 

Kebnekaise is a heterogeneous computing resource consisting of several types of CPU, GPU and large-memory nodes. Notes:
  1. Access to the GPU nodes is handled through the 'Kebnekaise GPU' resource.
  2. Access to the Large Memory nodes is handled through the 'Kebnekaise Large Memory' resource.
  3. New nodes will be procured on a semi-regular basis.
Kebnekaise GPU HPC2N Local resource for researchers in the HPC2N consortium

This resource is for access to the GPU nodes in Kebnekaise.

For CPU nodes see resource 'Kebnekaise'.

For large memory nodes see resource 'Kebnekaise Large Memory'.

 

Kebnekaise is a heterogeneous computing resource consisting of several types of CPU, GPU and large-memory nodes. Notes:
  1. GPU nodes are charged differently than ordinary compute nodes.
  2. Access to the CPU nodes is handled through the 'Kebnekaise' resource.
  3. Access to the Large Memory nodes is handled through the 'Kebnekaise Large Memory' resource.
  4. New nodes will be procured on a semi-regular basis.
Kebnekaise Large Memory HPC2N Local resource for researchers in the HPC2N consortium

This resource is for access to the Large Memory nodes in Kebnekaise.

For CPU nodes see resource 'Kebnekaise'.

For GPU nodes see resource 'Kebnekaise GPU'.

 

Kebnekaise is a heterogeneous computing resource consisting of several types of CPU, GPU and large-memory nodes. Notes:
  1. Access to the CPU nodes is handled through the 'Kebnekaise' resource.
  2. Access to the GPU nodes is handled through the 'Kebnekaise GPU' resource.
  3. New nodes will be procured on a semi-regular basis.
COSMOS LUNARC Local resource for Lund university researchers.
COSMOS represents a significant increase in computational capacity and offers access to modern hardware, including GPUs. Through the LUNARC Desktop, new and existing users can draw upon the benefits of high performance computing (HPC) without being burdened by the intricacies of HPC utilisation. At the same time, users proficient in HPC usage can still make use of the computational power of the interconnected nodes of COSMOS. COSMOS consists of 182 compute nodes funded by Lund University. Each node has two AMD 7413 (Milan) processors, offering 48 compute cores per node, and 256 GB of RAM. In addition to the CPU nodes, there are also 6 NVIDIA A100 nodes and 6 NVIDIA A40 GPU nodes. COSMOS also features Intel partitions with Intel (Cascade Lake) processors, offering 32 compute cores each. There are 22 CPU nodes, 5 nodes with NVIDIA A40 GPUs and four nodes with A100 GPUs within the Intel partitions.
COSMOS-SENS-COMPUTE LUNARC Local resource for Lund university researchers.
COSMOS-SENS is the Lund University compute resource for sensitive data and is operated by LUNARC.
Sigma NSC Local resource for Linköping University researchers
Sigma, sigma.nsc.liu.se, runs a Rocky Linux 9 version of the NSC Cluster Software Environment. This means that most things are very familiar to Gamma users. You still use Slurm (e.g. sbatch, interactive, ...) to submit your jobs. ThinLinc is available on the login node. Applications are still selected using "module". All Sigma compute nodes have 32 CPU cores. There are 104 "thin" nodes with 96 GiB of primary memory (RAM) and 4 "fat" nodes with 384 GiB. Each compute node has a local SSD disk where applications can store temporary files (approximately 200 GB per node). All Sigma nodes are interconnected with a 100 Gbps Intel Omni-Path network which is also used to connect the existing storage. The Omni-Path network works in a similar way to the FDR InfiniBand network in Gamma (e.g. still a fat-tree topology). Sigma has a capacity that exceeds the current computing capacity of Gamma. Sigma was made available to users on August 23, 2018.
Dardel PDC General purpose computational resource
Dardel is a Cray EX system from Hewlett Packard Enterprise, based on AMD EPYC processors with an accompanying Lustre storage system. The nodes are interconnected using Slingshot HPC Ethernet.
Pelle UPPMAX Uppsala University compute cluster
This UU-funded resource was installed in late 2024. It features standard compute nodes with 48 cores and 768 GB of memory per node, many nodes with NVIDIA T4 or L40S GPUs, and some nodes with NVIDIA H100 GPUs.

Storage

Resource Centre Short Description
Cephyr NOBACKUP C3SE Project storage primarily for Vera, also attached to Alvis
Project storage based on Ceph with a total usable area of 5 PiB.
  • 14 storage servers, each with 3 NVMe drives (for database and journal).
  • 7 JBODs, each with 42 × 14 TiB HDDs.
Mimer C3SE Project storage attached to Alvis and Vera, dedicated for AI/ML

Mimer is an all-flash storage system based on a solution from WEKA IO. It consists of a 0.6 PB all-flash tier and a 7 PB Ceph-based bulk storage tier (with spinning disks).

Nobackup HPC2N HPC2N local storage connected to Kebnekaise
Active project storage without backup for local HPC2N projects.
COSMOS-SENS-STORAGE LUNARC Local resource for Lund university researchers.
COSMOS-SENS-STORAGE is a Lund University storage resource and is operated by LUNARC.
Centrestorage nobackup LUNARC Local resource for Lund university researchers.
Centre Storage NSC Project storage attached to Tetralith and Sigma

Project storage for NAISS as well as LiU Local projects with compute allocations on resources hosted by NSC.

Centre Storage @ NSC is designed for fast access from compute resources at NSC. It consists of one IBM ESS GL6S building block and one IBM ESS 5000 SC4 building block.

In total there are 946 spinning hard disks and a small number of NVRAM devices and SSDs which act as a cache to speed up small writes. The total disk space that is usable for storing files is approximately 6.9 PiB.

Klemming PDC Project storage attached to Dardel and Dardel-GPU

Project storage for NAISS as well as PDC projects with compute allocations on resources hosted by PDC.

Klemming is designed for fast access from compute resources at PDC. It uses the Lustre parallel file system, which is optimized for handling data from many clients at the same time. The total size of Klemming is 12 PB.

Gorilla UPPMAX Local resource for UU research and education
Gorilla is a large and fast Ceph-based file system mounted on Pelle and Maja.