Resources

Compute, storage, and cloud resources currently available via SUPR:

NAISS Resources

Nationally available resources funded and operated by NAISS.

Compute

Resource Centre Short Description
Alvis C3SE Accelerator-based resource dedicated to research using AI techniques

The Alvis cluster is a national NAISS resource dedicated to Artificial Intelligence and Machine Learning research.

Note: Significant generation of training data is expected to be done elsewhere.

The system is built around Graphics Processing Unit (GPU) accelerator cards. The first phase of the resource has 160 NVIDIA T4, 44 V100, and 4 A100 GPUs. The second phase is based on 340 NVIDIA A40 and 336 A100 GPUs.

LUMI-C LUMI Sweden Swedish CPU share of the EuroHPC JU resource LUMI

LUMI is a general computational resource hosted by CSC in Finland.

LUMI, Large Unified Modern Infrastructure, is an HPE Cray EX supercomputer consisting of several partitions targeted for different use cases. The largest partition of the system is the LUMI-G partition consisting of GPU-accelerated nodes using AMD Instinct GPUs. In addition to this, there is a smaller CPU-only partition LUMI-C that features AMD Epyc CPUs and an auxiliary partition for data analytics with large memory nodes and some GPUs for data visualization.

The LUMI consortium countries are Finland, Belgium, the Czech Republic, Denmark, Estonia, Iceland, Norway, Poland, Sweden, and Switzerland. The acquisition and operation of the EuroHPC Supercomputer are jointly funded by the EuroHPC Joint Undertaking and the LUMI consortium. The Swedish Research Council has contributed approx. 3.5% of the funding. A corresponding share of the system is reserved for Swedish research, but researchers are encouraged to apply for resources from the JU part of LUMI and other EuroHPC resources.

LUMI-G LUMI Sweden Swedish GPU share of the EuroHPC JU resource LUMI

LUMI-G is the GPU-accelerated partition of LUMI; see the LUMI-C entry above for the full description of the LUMI system and the consortium behind it.

Tetralith NSC General-purpose, mainly CPU-based resource

Tetralith is a general computational resource hosted by NSC at Linköping University.

Tetralith servers have two Intel Xeon Gold 6130 processors, providing 32 cores per server. 1844 of the servers are equipped with 96 GiB of primary memory and 64 servers with 384 GiB. All servers are interconnected with a 100 Gbit/s Intel Omni-Path network, which is also used to connect the existing storage. Each server has a local SSD disk for ephemeral storage (approx. 200 GiB per thin node, 900 GiB per fat node). The centre storage is an IBM Spectrum Scale system. 170 of the Tetralith nodes are equipped with one NVIDIA Tesla T4 GPU each as well as a high-performance NVMe SSD scratch disk of 2 TB.

Dardel PDC General purpose computational resource
Dardel is a Cray EX system from Hewlett Packard Enterprise, based on AMD EPYC processors with an accompanying Lustre storage system. The nodes are interconnected using Slingshot HPC Ethernet.
Dardel-GPU PDC General purpose accelerator based resource
Dardel-GPU is the accelerated partition of the Cray EX system from Hewlett Packard Enterprise, based on AMD Instinct MI250X GPUs. It has an accompanying Lustre storage system. The nodes are interconnected using Slingshot HPC Ethernet.
Bianca UPPMAX Cluster system for sensitive data.

Bianca is a research system dedicated to analysing sensitive personal data, or other types of sensitive data.

Bianca provides 4480 cores in the form of 204 dual-CPU (Intel Xeon E5-2630 v3) Huawei XH620 V3 nodes with 128 GB of memory, 75 fat nodes with 256 GB of memory, 15 nodes with 512 GB of memory, and ten nodes with two NVIDIA A100 40 GB GPUs each.

Rackham UPPMAX Cluster for life science and general use. CPU only.
Rackham provides 9720 cores in the form of 486 nodes with two 10-core Intel Xeon V4 CPUs each. Four fat nodes have 1 TB of memory, 32 fat nodes have 256 GB, and the rest have 128 GB. The interconnect is InfiniBand.

Storage

Resource Centre Short Description
Cephyr NOBACKUP C3SE Project storage primarily for Vera, also attached to Alvis
Project storage based on Ceph with a total usable capacity of 5 PiB.
  • 14 storage servers, each with 3 NVMe drives (for database and journal).
  • 7 JBODs, each with 42 × 14 TiB HDDs.
Mimer C3SE Project storage attached to Alvis and Vera, dedicated for AI/ML

Mimer is a flash-based storage system built on a solution from WEKA IO. It consists of a 0.6 PB all-flash tier and a 7 PB Ceph-based bulk storage tier (with spinning disks).

Storage LUMI Sweden Swedish share of project storage attached to LUMI-C and LUMI-G

Project storage for NAISS allocations on LUMI.

Storage is applied for using TB-hours. Flash storage, LUMI-F, is accounted at ten times the TB-hour rate, i.e. using 1 TB of flash storage for one hour costs 10 TB-hours. Lustre storage, LUMI-P, is accounted at the TB-hour rate. Ceph object storage, LUMI-O, is accounted at half the TB-hour rate, i.e. using 1 TB of Ceph storage for one hour costs 0.5 TB-hours. The total size of the Swedish part of the LUMI storage system is 35 412 000 TB-hours.
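As a rough illustration of the accounting above, the sketch below converts a planned mix of the three storage tiers into TB-hours. The volumes and duration are made-up example numbers, not recommendations for an actual application.

```python
# Illustrative TB-hour accounting for LUMI storage (example numbers only).
# Rates follow the text above: LUMI-F = 10x, LUMI-P = 1x, LUMI-O = 0.5x.

RATES = {"LUMI-F": 10.0, "LUMI-P": 1.0, "LUMI-O": 0.5}

def tb_hours(tier: str, terabytes: float, hours: float) -> float:
    """TB-hours charged for keeping `terabytes` on `tier` for `hours`."""
    return RATES[tier] * terabytes * hours

# Hypothetical one-year (8760 h) storage plan:
plan = [("LUMI-F", 5), ("LUMI-P", 50), ("LUMI-O", 100)]
total = sum(tb_hours(tier, tb, 8760) for tier, tb in plan)
print(f"Total request: {total:,.0f} TB-hours")
# 5*10 + 50*1 + 100*0.5 = 150 TB-equivalent, i.e. 1,314,000 TB-hours over the year.
```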

Centre Storage NSC Project storage attached to Tetralith and Sigma

Project storage for NAISS as well as LiU Local projects with compute allocations on resources hosted by NSC.

Centre Storage @ NSC is designed for fast access from compute resources at NSC. It consists of one IBM ESS GL6S building block and one IBM ESS 5000 SC4 building block.

In total there are 946 spinning hard disks and a small number of NVRAM devices and SSDs which act as a cache to speed up small writes. The total disk space that is usable for storing files is approximately 6.9 PiB.

Klemming PDC Project storage attached to Dardel and Dardel-GPU

Project storage for NAISS as well as PDC projects with compute allocations on resources hosted by PDC.

Klemming is designed for fast access from compute resources at PDC. It uses the Lustre parallel file system, which is optimized for handling data from many clients at the same time. The total size of Klemming is 12 PB.

dCache Swestore National storage infrastructure for large-scale research data

Swestore is a Research Data Storage Infrastructure, intended for active research data and operated by the National Academic Infrastructure for Supercomputing in Sweden, NAISS.

The storage resources provided by Swestore are made available free of charge for academic research funded by VR and Formas through open calls, so that the best Swedish research is supported and new research is facilitated.

The purpose of Swestore allocations, granted by the National Allocations Committee (NAC), is to provide large-scale data storage for “live” or “working” research data, also known as active research data.

See the documentation at: https://docs.swestore.se
Crex 1 UPPMAX Project storage attached to Rackham and Snowy
Active data storage for Rackham projects. Primarily for life science projects.
Cygnus /proj UPPMAX Backed-up storage for Bianca (NAISS SENS)
Cygnus is the new storage resource attached to Bianca, the NAISS SENS research cluster. The /proj area is backed up.
Cygnus /proj/nobackup UPPMAX Project storage for Bianca (NAISS SENS) without backup.
Cygnus is the new storage resource attached to Bianca, the NAISS SENS research cluster. The /proj/nobackup area is not backed up.

Cloud

Resource Centre Short Description
Cloud SSC Swedish Science Cloud provides Infrastructure as a Service (IaaS)

Swedish Science Cloud (SSC) is a geographically distributed OpenStack cloud Infrastructure as a Service (IaaS), intended for Swedish academic research and provided by NAISS.

It is available free of charge to researchers at Swedish higher education institutions through open application procedures.

The SSC resources are not meant to be a replacement for NAISS supercomputing resources (HPC clusters). Rather, they should be seen as a complement, offering advanced functionality to users who need more flexible access to resources (for example more control over the operating systems and software environments), want to develop software as a service, or want to explore recent technology such as “Big Data” (e.g. Apache Hadoop/Spark) or IoT applications.
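Because SSC is a standard OpenStack IaaS, projects are typically managed through the OpenStack dashboard, CLI, or SDKs. The snippet below is a minimal sketch using the openstacksdk Python library; the cloud name ("ssc-example" in clouds.yaml) and the image, flavor, and network names are hypothetical placeholders rather than actual SSC values.

```python
# Minimal openstacksdk sketch for an OpenStack IaaS cloud such as SSC.
# Assumes a clouds.yaml entry named "ssc-example" (hypothetical) holding your
# project credentials; image, flavor and network names are placeholders.
import openstack

conn = openstack.connect(cloud="ssc-example")

# List servers already running in the project.
for server in conn.compute.servers():
    print(server.name, server.status)

# Launch a small instance (names are examples, not guaranteed to exist).
server = conn.create_server(
    name="demo-vm",
    image="ubuntu-24.04",
    flavor="example.small",
    network="default",
    wait=True,
)
print("Created", server.name, server.status)
```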

Examples of applications that must run on the normal HPC clusters, since SSC is not built for these purposes:

  • AI/LLM/deep learning and other applications relying on GPU resources cannot be run in SSC due to the limited access to GPUs.
  • Applications assuming high performance or large volumes of storage, network, or cores.
  • Benchmarking will not perform well within SSC due to performance limitations.
  • SSC is not approved for sensitive data; such projects must use the NAISS cluster system for sensitive data.

Other National Resources

Resources funded outside of NAISS but which are nationally available, in some cases under special conditions. See conditions for access under each resource.

Compute

Resource Centre Short Description
Berzelius Ampere NSC KAW-financed resource dedicated to AI/ML research

Berzelius Ampere is an NVIDIA® SuperPOD consisting of 94 NVIDIA® DGX-A100 compute nodes supplied by Atos/Eviden and 8 CPU nodes also supplied by Eviden. The original 60 "thin" DGX-A100 nodes are each equipped with 8 NVIDIA® A100 Tensor Core GPUs, 2 AMD Epyc™ 7742 CPUs, 1 TB RAM and 15 TB of local NVMe SSD storage; their A100 GPUs have 40 GB of on-board HBM2 VRAM. The 34 newer "fat" DGX-A100 nodes are each equipped with 8 NVIDIA® A100 Tensor Core GPUs, 2 AMD Epyc™ 7742 CPUs, 2 TB RAM and 30 TB of local NVMe SSD storage; their A100 GPUs have 80 GB of on-board HBM2 VRAM. The CPU nodes are each equipped with 2 AMD Epyc™ 9534 CPUs, 1.1 TB RAM and 6.4 TB of local NVMe SSD storage.

Fast compute interconnect is provided via 8x NVIDIA® Mellanox® HDR per DGX connected in a non-blocking fat-tree topology. In addition, every node is equipped with NVIDIA® Mellanox® HDR dedicated storage interconnect.

All nodes have a local disk where applications can store temporary files. The size of this disk (available to jobs as `/scratch/local`) is 15 TB on "thin" nodes, 30 TB on "fat" nodes, and 6.4 TB on CPU nodes, and is shared between all jobs using the node.
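For example, a job can stage its working data to the node-local disk and copy results back to shared project storage before it ends. The sketch below assumes it runs inside a Slurm job on a node where `/scratch/local` is available, as described above; the project paths are hypothetical placeholders.

```python
# Sketch: stage data to the node-local /scratch/local disk inside a Slurm job.
# The /proj/... project paths below are hypothetical examples.
import os
import shutil
from pathlib import Path

job_id = os.environ.get("SLURM_JOB_ID", "interactive")
scratch = Path("/scratch/local") / f"myjob_{job_id}"
(scratch / "results").mkdir(parents=True, exist_ok=True)

# Copy input data from shared project storage to the fast local scratch disk.
shutil.copytree("/proj/example_project/data", scratch / "data", dirs_exist_ok=True)

# ... run training/analysis here, reading scratch/"data" and writing scratch/"results" ...

# Copy results back to shared storage and clean up, since /scratch/local
# is shared between all jobs running on the node.
shutil.copytree(scratch / "results", "/proj/example_project/results", dirs_exist_ok=True)
shutil.rmtree(scratch)
```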

Berzelius Hopper NSC KAW-financed resource dedicated to AI/ML research

The latest phase of the Berzelius service is Berzelius Hopper. Berzelius Hopper consists of 16 NVIDIA® DGX-H200 compute nodes supplied by Eviden and 8 CPU nodes also supplied by Eviden.

The DGX H200 nodes are each equipped with 8 NVIDIA® H200 141 GB GPUs, 2 Intel® 8480C CPUs, and 2.1 TB RAM. The CPU nodes are each equipped with 2 AMD Epyc™ 9534 CPUs, 1.1 TB RAM and 6.4 TB of local NVMe SSD storage. The DGX H200 nodes are connected to a fast interconnect with 8x NVIDIA® Mellanox® NDR per DGX in a non-blocking fat-tree topology. This is a separate interconnect from the one connecting the DGX A100 nodes in Berzelius Ampere.

All nodes have a local disk where applications can store temporary files. The size of this disk (available to jobs as `/scratch/local`) is 30 TB on H200 nodes, and 6.4 TB on CPU nodes, and is shared between all jobs using the node.

Berzelius Hopper is accessed through a new set of login nodes separate from those in the original Berzelius installation and also has new servers for other supporting tasks.

Berzelius Hopper is currently in a test pilot phase before being released for new projects in SUPR.

Storage

Resource Centre Short Description
SciLifeLab OMERO C3SE
Berzelius Storage NSC Project storage attached to Berzelius

Shared, central storage accessible from all Berzelius Ampere and Berzelius Hopper compute and login nodes is provided by a storage cluster from VAST Data consisting of 8 CBoxes and 3 DBoxes using an NVMe-oF architecture. The storage servers are connected end-to-end to the GPUs using a high-bandwidth interconnect separate from the East-West compute interconnect. The installed physical storage capacity is 3 PB, but due to compression and deduplication the effective capacity will be higher in practice.

NSC centre storage (as available on Tetralith) is not accessible on Berzelius.

Spirula UPPMAX FAIR Storage resource at SciLifeLab
SciLifeLab DDLS-funded storage resource, managed in collaboration between UPPMAX and the SciLifeLab Data Centre. S3 object storage.
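Since Spirula exposes an S3 interface, it can be reached with any standard S3 client. The sketch below uses boto3; the endpoint URL, bucket name, and credentials are hypothetical placeholders, since the actual values are provided with the allocation.

```python
# Minimal boto3 sketch against an S3-compatible object store such as Spirula.
# Endpoint URL, bucket name and credentials are hypothetical placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.example.uppmax.uu.se",  # placeholder endpoint
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

# Upload a file and list the bucket contents.
s3.upload_file("results.tar.gz", "my-project-bucket", "results/results.tar.gz")
for obj in s3.list_objects_v2(Bucket="my-project-bucket").get("Contents", []):
    print(obj["Key"], obj["Size"])
```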

Local and Regional Resources

Resources financed by individual universities or in regional collaborations between universities. Access is often limited to employees of the universities where the resources are located. See conditions for access under each resource.

Compute

Resource Centre Short Description
Vera C3SE Local resource for Chalmers researchers

The Vera cluster is built on Intel Xeon Platinum 8358 (code name "Ice Lake") and AMD EPYC 9354 (code name "Zen 4") CPUs with 64 cores per node.

For details see Vera hardware.

Kebnekaise HPC2N Local resource for researchers in the HPC2N consortium

This resource is for access to the CPU nodes in Kebnekaise.

For GPU nodes see resource 'Kebnekaise GPU'.

For large memory nodes see resource 'Kebnekaise Large Memory'.

 

Kebnekaise is a heterogeneous computing resource. Notes:
  1. Access to the GPU nodes is handled through the 'Kebnekaise GPU' resource.
  2. Access to the Large Memory nodes is handled through the 'Kebnekaise Large Memory' resource.
  3. New nodes will be procured on a semi-regular basis.
Kebnekaise GPU HPC2N Local resource for researchers in the HPC2N consortium

This resource is for access to the GPU nodes in Kebnekaise.

For CPU nodes see resource 'Kebnekaise'.

For large memory nodes see resource 'Kebnekaise Large Memory'.

 

Kebnekaise is a heterogeneous computing resource. Notes:
  1. GPU nodes are charged differently than ordinary compute nodes.
  2. Access to the CPU nodes is handled through the 'Kebnekaise' resource.
  3. Access to the Large Memory nodes is handled through the 'Kebnekaise Large Memory' resource.
  4. New nodes will be procured on a semi-regular basis.
Kebnekaise Large Memory HPC2N Local resource for researchers in the HPC2N consortium

This resource is for access to the Large Memory nodes in Kebnekaise.

For CPU nodes see resource 'Kebnekaise'.

For GPU nodes see resource 'Kebnekaise GPU'.

 

Kebnekaise is a heterogeneous computing resource. Notes:
  1. Access to the CPU nodes is handled through the 'Kebnekaise' resource.
  2. Access to the GPU nodes is handled through the 'Kebnekaise GPU' resource.
  3. New nodes will be procured on a semi-regular basis.
COSMOS LUNARC Local resource for Lund University researchers.
COSMOS represents a significant increase in computational capacity and offers access to modern hardware, including GPUs. Through the LUNARC Desktop, new and existing users can draw upon the benefits of high-performance computing (HPC) without being burdened by the intricacies of HPC utilisation. At the same time, users proficient in HPC usage can still make use of the computational power of the interconnected nodes of COSMOS. COSMOS consists of 182 compute nodes funded by Lund University. Each node has two AMD 7413 (Milan) processors, offering 48 compute cores per node, and 256 GB of RAM. In addition to the CPU nodes there are also 6 NVIDIA A100 nodes and 6 NVIDIA A40 GPU nodes. COSMOS also features Intel partitions with Intel (Cascade Lake) processors, offering 32 compute cores each: 22 CPU nodes, 5 nodes with NVIDIA A40 GPUs, and four nodes with A100 GPUs.
COSMOS-SENS-COMPUTE LUNARC Local resource for Lund University researchers.
COSMOS-SENS is the Lund University compute resource and is operated by LUNARC.
Sigma NSC Local resource for Linköping University researchers
Sigma, sigma.nsc.liu.se, runs a Rocky Linux 9 version of the NSC Cluster Software Environment. This means that most things are very familiar to Gamma users. You still use Slurm (e.g. sbatch, interactive, ...) to submit your jobs. ThinLinc is available on the login node. Applications are still selected using "module". All Sigma compute nodes have 32 CPU cores. There are 104 "thin" nodes with 96 GiB of primary memory (RAM) and 4 "fat" nodes with 384 GiB. Each compute node has a local SSD disk where applications can store temporary files (approximately 200 GB per node). All Sigma nodes are interconnected with a 100 Gbit/s Intel Omni-Path network, which is also used to connect the existing storage. The Omni-Path network works in a similar way to the FDR InfiniBand network in Gamma (e.g. still a fat-tree topology). Sigma has a capacity that exceeds the current computing capacity of Gamma. Sigma was made available to users on August 23, 2018.
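To illustrate the module-and-Slurm workflow mentioned in the Sigma description above, the sketch below generates and submits a minimal batch script from Python. The account name, module name, script name, and resource requests are hypothetical placeholders.

```python
# Sketch: build and submit a minimal Slurm batch script on a cluster like Sigma.
# Account, module, resource requests and script name are hypothetical placeholders.
import subprocess
from pathlib import Path

script = """#!/bin/bash
#SBATCH --job-name=demo
#SBATCH --account=my-project        # placeholder project/account
#SBATCH --ntasks=32                 # one full 32-core Sigma node
#SBATCH --time=01:00:00

module load Python/3.11             # placeholder module name
srun python my_analysis.py          # placeholder application
"""

path = Path("demo.sbatch")
path.write_text(script)
subprocess.run(["sbatch", str(path)], check=True)
```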
Dardel PDC General purpose computational resource
Dardel is a Cray EX system from Hewlett Packard Enterprise, based on AMD EPYC processors with an accompanying Lustre storage system. The nodes are interconnected using Slingshot HPC Ethernet.
Pelle UPPMAX Uppsala University compute cluster
This UU-funded resource was installed in late 2024. It features standard compute nodes with 48 cores and 768 GB of memory per node, many nodes with NVIDIA T4 or L40S GPUs, and some nodes with NVIDIA H100 GPUs.

Storage

Resource Centre Short Description
Cephyr NOBACKUP C3SE Project storage primarily for Vera, also attached to Alvis
Project storage based on Ceph with a total usable capacity of 5 PiB.
  • 14 storage servers, each with 3 NVMe drives (for database and journal).
  • 7 JBODs, each with 42 × 14 TiB HDDs.
Cephyr S3 C3SE S3 object storage running on Ceph
S3 object storage based on Ceph.
  • 14 storage servers, each with 3 NVMe drives (for database and journal).
  • 7 JBODs, each with 42 × 14 TiB HDDs.
Mimer C3SE Project storage attached to Alvis and Vera, dedicated for AI/ML

Mimer is a flash-based storage system built on a solution from WEKA IO. It consists of a 0.6 PB all-flash tier and a 7 PB Ceph-based bulk storage tier (with spinning disks).

CFL_Nobackup HPC2N Computational Forestry Lab storage connected to Kebnekaise
This storage is only available for projects from the Computational Forestry Lab. There is no backup on this resource.
Nobackup HPC2N HPC2N local storage connected to Kebnekaise
Active project storage without backup for local HPC2N projects.
COSMOS-SENS-STORAGE LUNARC Local resource for Lund University researchers.
COSMOS-SENS-STORAGE is a Lund University storage resource and is operated by LUNARC.
Centrestorage nobackup LUNARC Local resource for Lund University researchers.
Centre Storage NSC Project storage attached to Tetralith and Sigma

Project storage for NAISS as well as LiU Local projects with compute allocations on resources hosted by NSC.

Centre Storage @ NSC is designed for fast access from compute resources at NSC. It consists of one IBM ESS GL6S building block and one IBM ESS 5000 SC4 building block.

In total there are 946 spinning hard disks and a small number of NVRAM devices and SSDs which act as a cache to speed up small writes. The total disk space that is usable for storing files is approximately 6.9 PiB.

Klemming PDC Project storage attached to Dardel and Dardel-GPU

Project storage for NAISS as well as PDC projects with compute allocations on resources hosted by PDC.

Klemming is designed for fast access from compute resources at PDC. It uses the Lustre parallel file system, which is optimized for handling data from many clients at the same time. The total size of Klemming is 12 PB.

Gorilla UPPMAX Local resource for UU research and education
Gorilla is a large and fast Ceph-based file system mounted on Pelle and Maja.