To apply, you must be a scientist in Swedish academia, at least at the level of PhD student.
Deadlines and Decisions
Proposals are processed weekly.
Note that staff will be on vacation during the summer and proposals submitted in July will be processed at a reduced pace.
This round is open for proposals until 2026-01-01 00:00.
Each resource below is listed with its hosting centre, per-proposal upper limit, default storage, available capacity, and unit, followed by any notes.
Alvis (C3SE)
Upper limit: 1 000 GPU-h/month; available: 130 000 GPU-h/month.
The Alvis resource is dedicated to research in, and research using, AI/ML techniques.
For general-purpose GPU use, please use Dardel-GPU instead; for generation of training data, use Dardel or Tetralith.
Allocations on Alvis will be scaled and transferred to the new NAISS system Arrhenius. NAISS's current best estimate is that this will occur in the spring of 2026. More information will be announced as the procurement and installation of Arrhenius progress.
The Alvis cluster is a national NAISS resource dedicated to Artificial Intelligence and Machine Learning research.
Note: significant generation of training data is expected to be done elsewhere.
The system is built around Graphics Processing Unit (GPU) accelerator cards. The first phase of the resource has 160 NVIDIA T4, 44 V100, and 4 A100 GPUs. The second phase is based on 340 NVIDIA A40 and 336 A100 GPUs.
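As a back-of-the-envelope illustration, a job's GPU-hours are the number of GPUs it uses times its wall-clock hours, so the 1 000 GPU-h/month upper limit can be budgeted as in this sketch (it assumes all GPU types are charged equally, which should be verified with C3SE):

```python
# Sketch: budgeting jobs against the 1 000 GPU-h/month upper limit.
# ASSUMPTION: every GPU-hour counts equally regardless of GPU type
# (T4, A100, ...); verify the actual accounting rules with C3SE.

MONTHLY_LIMIT_GPU_H = 1_000  # per-proposal upper limit from the table above

def gpu_hours(num_gpus: int, wall_hours: float) -> float:
    """GPU-hours consumed by one job: GPUs used times wall-clock hours."""
    return num_gpus * wall_hours

# Hypothetical month of training jobs: (GPUs, wall-clock hours).
jobs = [(4, 24.0), (8, 12.0), (1, 100.0)]
used = sum(gpu_hours(n, h) for n, h in jobs)

print(f"Used {used:.0f} of {MONTHLY_LIMIT_GPU_H} GPU-h "
      f"({used / MONTHLY_LIMIT_GPU_H:.0%} of the monthly limit)")
# → Used 292 of 1000 GPU-h (29% of the monthly limit)
```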
Mimer (C3SE)
Default storage: 500 GiB; available: 100 000 GiB.
Allocations on Mimer will be scaled and transferred to storage on the new NAISS system Arrhenius. NAISS's current best estimate is that this will occur in the spring of 2026. More information will be announced as the procurement and installation of Arrhenius progress.
Project storage attached to Alvis and Vera, dedicated to AI/ML.
Mimer is an all-flash-based storage system built on a solution from WEKA IO.
It consists of a 0.6 PB all-flash tier and a 7 PB Ceph-based bulk storage tier (with spinning disks).
Tetralith (NSC)
Upper limit: 20 ×1000 core-h/month (i.e. 20 000 core-h/month); available: 1 500 ×1000 core-h/month.
Projects will receive a default 500 GiB storage allocation on Centre Storage at NSC. If you need more storage, please apply for a Storage project and decline default storage from this compute proposal.
Allocations on Tetralith will be scaled and transferred to the new NAISS system Arrhenius. NAISS's current best estimate is that this will occur in the spring of 2026. More information will be announced as the procurement and installation of Arrhenius progress.
Tetralith is a general computational resource hosted by NSC at Linköping University.
Tetralith servers have two Intel Xeon Gold 6130 processors, providing 32 cores per server. 1844 of the servers are equipped with 96 GiB of primary memory and 64 servers with 384 GiB. All servers are interconnected with a 100 Gbit/s Intel Omni-Path network, which is also used to connect the existing storage. Each server has a local SSD disk for ephemeral storage (approx. 200 GiB per thin node, 900 GiB per fat node). An IBM Spectrum Scale system provides the centre storage. 170 of the Tetralith nodes are equipped with one NVIDIA Tesla T4 GPU each, as well as a high-performance NVMe SSD scratch disk of 2 TB.
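Since each Tetralith server has 32 cores, the 20 ×1000 core-h/month upper limit corresponds to 625 full-node hours per month. A small sketch of that arithmetic (it assumes a job is charged for every core of each node it occupies, which is typical for exclusive-node scheduling but should be verified against NSC's accounting rules):

```python
# Sketch: converting node-hours to core-hours on Tetralith (32 cores/server).
# ASSUMPTION: a job is charged for all cores of each node it occupies; check
# NSC's actual accounting rules.

CORES_PER_NODE = 32
MONTHLY_LIMIT_CORE_H = 20 * 1000  # "20 x 1000 core-h/month" from the table

def core_hours(nodes: int, wall_hours: float) -> float:
    """Core-hours charged for a job occupying whole nodes."""
    return nodes * CORES_PER_NODE * wall_hours

used = core_hours(4, 48.0)                                # 4 nodes for 48 h
node_hour_budget = MONTHLY_LIMIT_CORE_H / CORES_PER_NODE  # full-node hours/month

print(f"{used:.0f} core-h used; monthly budget = {node_hour_budget:.0f} node-hours")
# → 6144 core-h used; monthly budget = 625 node-hours
```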
Centre Storage (NSC)
Default storage: 500 GiB; available: 60 000 GiB.
If you need more than default storage, please apply for a Storage project and decline default storage from this compute proposal.
Allocations on Centre Storage will be scaled and transferred to storage on the new NAISS system Arrhenius. NAISS's current best estimate is that this will occur in the spring of 2026. More information will be announced as the procurement and installation of Arrhenius progress.
Project storage for NAISS as well as LiU Local projects with compute allocations on resources hosted by NSC.
Centre Storage @ NSC is designed for fast access from compute resources at NSC. It consists of one IBM ESS GL6S building block and one IBM ESS 5000 SC4 building block.
In total there are 946 spinning hard disks and a small number of NVRAM devices and SSDs which act as a cache to speed up small writes. The total disk space that is usable for storing files is approximately 6.9 PiB.
Dardel (PDC)
Upper limit: 20 ×1000 core-h/month; available: 1 720 ×1000 core-h/month.
Dardel is a Cray EX system from Hewlett Packard Enterprise, based on AMD EPYC processors with an accompanying Lustre storage system.
The nodes are interconnected using Slingshot HPC Ethernet.
Dardel-GPU (PDC)
Upper limit: 200 GPU-h/month; available: 6 160 GPU-h/month.
These are AMD GPUs, not NVIDIA GPUs, so if your software runs using CUDA, a certain amount of code conversion is needed.
Information on converting CUDA applications to HIP is available at https://www.lumi-supercomputer.eu/preparing-codes-for-lumi-converting-cuda-applications-to-hip/
Reporting on GPU consumption on Dardel is not working yet.
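Much of the CUDA-to-HIP conversion mentioned above consists of mechanical renames of CUDA runtime calls to their HIP equivalents; the real tooling for this is ROCm's hipify scripts. The snippet below is only a toy illustration of the renaming idea, not a substitute for those tools:

```python
# Toy illustration of CUDA->HIP porting: many CUDA runtime identifiers map to
# HIP simply by replacing the "cuda" prefix with "hip". Real code should use
# ROCm's hipify-perl / hipify-clang tools rather than this naive rewrite.
CUDA_TO_HIP = {
    "cudaMalloc": "hipMalloc",
    "cudaMemcpy": "hipMemcpy",
    "cudaFree": "hipFree",
    "cudaDeviceSynchronize": "hipDeviceSynchronize",
}

def naive_hipify(source: str) -> str:
    """Apply the prefix renames; oblivious to strings, comments, and APIs
    that do not map one-to-one between CUDA and HIP."""
    for cuda_name, hip_name in CUDA_TO_HIP.items():
        source = source.replace(cuda_name, hip_name)
    return source

print(naive_hipify("cudaMalloc(&d_x, n); cudaMemcpy(d_x, x, n, cudaMemcpyHostToDevice);"))
# → hipMalloc(&d_x, n); hipMemcpy(d_x, x, n, hipMemcpyHostToDevice);
```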
Dardel-GPU is the accelerated partition of the Cray EX system from Hewlett Packard Enterprise, based on AMD's Instinct MI250X GPUs. It has an accompanying Lustre storage system.
The nodes are interconnected using Slingshot HPC Ethernet.
Klemming (PDC)
Default storage: 500 GiB; available: 300 000 GiB.
More information about project directories in
Klemming can be found at
https://www.pdc.kth.se/support/documents/data_management/lustre.html.
Project storage for NAISS as well as PDC projects with compute allocations on resources hosted by PDC.
Klemming is designed for fast access from compute resources at PDC. It uses the Lustre parallel file system, which is optimized for handling data from many clients at the same time. The total size of Klemming is 12 PB.
Cloud (SSC)
Upper limit: 20 000 Coins; available: 500 000 Coins.
Allocations on Cloud will be scaled and transferred to the new NAISS system Arrhenius. NAISS's current best estimate is that this will occur in the spring of 2026. More information will be announced as the procurement and installation of Arrhenius progress.
Swedish Science Cloud (SSC) is a geographically distributed OpenStack cloud Infrastructure as a Service (IaaS), intended for Swedish academic research provided by NAISS.
It is available free of charge to researchers at Swedish higher education institutions through open application procedures.
The SSC resources are not meant to be a replacement for NAISS supercomputing resources (HPC clusters). Rather, it should be seen as a complement, offering advanced functionality to users who need more flexible access to resources (for example more control over the operating systems and software environments), want to develop software as a service, or want to explore recent technology such as for “Big Data” (e.g. Apache Hadoop/Spark) or IoT applications.
Examples of applications that must run on the normal HPC clusters, since SSC is not built for these purposes:
- AI/LLM/deep-learning and other applications relying on GPU resources, which cannot be run in SSC due to the limited access to GPUs.
- Applications requiring high performance or large volumes of storage, network, or cores.
- Benchmarking, which will not perform well within SSC due to performance limitations.
- Work with sensitive data: SSC is not classified for sensitive data, and such projects MUST use a NAISS cluster system approved for sensitive data.