This round has been closed as all proposals have been handled.
To apply, you must be a scientist in Swedish academia, at least at the level of PhD student.
Deadlines and Decisions
Proposals are processed weekly.
Note that staff will be on vacation during the summer, and proposals submitted in July will be processed at a reduced pace.
This round was open for proposals until 2025-01-01 00:00.

| Resource | Centre | Total Requested | Upper Limit | Default Storage | Available | Unit | Note |
| Alvis | C3SE | 236 582 | 1 000 | — | 80 000 | GPU-h/month | The Alvis resource is dedicated for AI/ML research. |

The Alvis resource is dedicated to research in, and research using, AI/ML techniques. For general GPU use, please use Dardel-GPU instead; for generation of training data, use Dardel or Tetralith.
The Alvis cluster is a national NAISS resource dedicated to Artificial Intelligence and Machine Learning research. Note that significant generation of training data is expected to be done elsewhere. The system is built around Graphics Processing Unit (GPU) accelerator cards. The first phase of the resource has 160 NVIDIA T4, 44 V100, and 4 A100 GPUs. The second phase is based on 340 NVIDIA A40 and 336 A100 GPUs.

| Mimer | C3SE | 165 500 | — | 500 | 100 000 | GiB | |

Project storage attached to Alvis and Vera, dedicated for AI/ML.
Mimer is an all-flash storage system based on a solution from WEKA IO. It consists of a 0.6 PB all-flash tier and a 7 PB Ceph-based bulk storage tier (with spinning disks).

| Tetralith | NSC | 1 985 | 10 | — | 1 500 | x 1000 core-h/month | |

Projects will receive a default 500 GiB storage allocation on Centre Storage at NSC. If you need more storage, please apply for a Storage project and decline the default storage in this compute proposal.
Tetralith is a general computational resource hosted by NSC at Linköping University.
Tetralith servers have two Intel Xeon Gold 6130 processors, providing 32 cores per server. 1844 of the servers are equipped with 96 GiB of primary memory and 64 servers with 384 GiB. All servers are interconnected with a 100 Gbit/s Intel Omni-Path network, which is also used to connect the existing storage. Each server has a local SSD disk for ephemeral storage (approx. 200 GiB per thin node, 900 GiB per fat node). An IBM Spectrum Scale system comprises the centre storage. 170 of the Tetralith nodes are equipped with one NVIDIA Tesla T4 GPU each, as well as a high-performance 2 TB NVMe SSD scratch disk.

| Centre Storage | NSC | 93 500 | — | 500 | 60 000 | GiB | |

If you need more than the default storage, please apply for a Storage project and decline the default storage in this compute proposal.
Project storage for NAISS as well as LiU Local projects with compute allocations on resources hosted by NSC.
Centre Storage @ NSC is designed for fast access from compute resources at NSC. It consists of one IBM ESS GL6S building block and one IBM ESS 5000 SC4 building block.
In total there are 946 spinning hard disks and a small number of NVRAM devices and SSDs, which act as a cache to speed up small writes. The total disk space usable for storing files is approximately 6.9 PiB.

| Dardel | PDC | 4 308 | 10 | — | 1 720 | x 1000 core-h/month | |

Dardel is a Cray EX system from Hewlett Packard Enterprise, based on AMD EPYC processors, with an accompanying Lustre storage system. The nodes are interconnected using Slingshot HPC Ethernet.

| Dardel-GPU | PDC | 12 410 | 200 | — | 6 160 | GPU-h/month | |

These GPUs are AMD GPUs rather than NVIDIA GPUs, so if your software runs using CUDA, a certain amount of code conversion is needed (see the sketch below). You can read more about this at https://www.lumi-supercomputer.eu/preparing-codes-for-lumi-converting-cuda-applications-to-hip/
Reporting of GPU consumption on Dardel is not working yet.
Dardel-GPU is the accelerated partition of the Cray EX system from Hewlett Packard Enterprise, based on AMD's Instinct MI250X GPUs. It has an accompanying Lustre storage system. The nodes are interconnected using Slingshot HPC Ethernet.
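
As an illustration only (not PDC or LUMI guidance), the sketch below shows what such a conversion typically amounts to for a simple kernel: the kernel body is unchanged, while CUDA runtime calls (cudaMalloc, cudaMemcpy, cudaFree, ...) are replaced by their HIP counterparts. The SAXPY example, the file name, and the hipcc invocation are assumptions made for illustration, not part of the Dardel documentation.

    // saxpy_hip.cpp -- hypothetical sketch of a CUDA-style kernel ported to HIP.
    // Build (exact compiler/module setup on Dardel is an assumption; see PDC docs):
    //   hipcc saxpy_hip.cpp -o saxpy_hip
    #include <hip/hip_runtime.h>
    #include <cstdio>
    #include <vector>

    __global__ void saxpy(int n, float a, const float* x, float* y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;   // same index math as in CUDA
        if (i < n) y[i] = a * x[i] + y[i];
    }

    int main() {
        const int n = 1 << 20;
        std::vector<float> x(n, 1.0f), y(n, 2.0f);

        float *dx = nullptr, *dy = nullptr;
        hipMalloc((void**)&dx, n * sizeof(float));                          // was cudaMalloc
        hipMalloc((void**)&dy, n * sizeof(float));
        hipMemcpy(dx, x.data(), n * sizeof(float), hipMemcpyHostToDevice);  // was cudaMemcpy
        hipMemcpy(dy, y.data(), n * sizeof(float), hipMemcpyHostToDevice);

        const int block = 256;
        const int grid = (n + block - 1) / block;
        saxpy<<<grid, block>>>(n, 2.0f, dx, dy);    // triple-chevron launches are accepted by hipcc
        hipDeviceSynchronize();

        hipMemcpy(y.data(), dy, n * sizeof(float), hipMemcpyDeviceToHost);
        std::printf("y[0] = %.1f (expected 4.0)\n", y[0]);

        hipFree(dx);                                // was cudaFree
        hipFree(dy);
        return 0;
    }

For real applications, tools such as hipify-perl can automate most of this translation; see the LUMI link above for details.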

| Klemming | PDC | 176 628 | — | 500 | 300 000 | GiB | |

More information about project directories in Klemming can be found at https://www.pdc.kth.se/support/documents/data_management/lustre.html.
Project storage for NAISS as well as PDC projects with compute allocations on resources hosted by PDC.
Klemming is designed for fast access from compute resources at PDC. It uses the Lustre parallel file system, which is optimized for handling data from many clients at the same time. The total size of Klemming is 12 PB.

| Rackham | UPPMAX | 2 396 | 10 | — | 1 500 | x 1000 core-h/month | Restrictive policy for NEW projects on Rackham. |

Rackham will be decommissioned on 2024-12-31. No allocations will be made beyond this date. See https://www.uu.se/centrum/uppmax/nyheter/nyheter/2024-02-16-rackham-end-of-life
Only a few new Small-scale projects will be accepted; continuation proposals are welcome. New projects must carefully describe a plan for the project from now until 2024-12-31. UU-affiliated projects can be moved automatically to the new local system. Other projects must have a clear and concrete exit plan.
Rackham provides 9720 cores in the form of 486 nodes with two 10-core Intel Xeon V4 CPUs each. 4 fat nodes have 1 TB of memory, 32 fat nodes have 256 GB, and the rest have 128 GB. The interconnect is InfiniBand.

| Crex 1 | UPPMAX | 33 792 | — | 128 | 100 000 | GiB | Crex will be decommissioned on 2024-12-31. |

Crex will be decommissioned on 2024-12-31. All data remaining on the system on that date will be lost. See https://www.uu.se/centrum/uppmax/nyheter/nyheter/2024-02-16-rackham-end-of-life
Backup is available. Use "nobackup" in directory names to exempt data from backup.
Active data storage for Rackham projects. Primarily for life science projects.