This round has been closed as all proposals have been handled.
Each resource below is listed with its centre, the total capacity requested in this round, the upper limit per proposal, and the capacity available, followed by notes and a description.
Cephyr NOBACKUP (C3SE)
Total requested: 101 248 GiB; upper limit: 5 000 GiB; available: 100 000 GiB.
Storage attached to Alvis and Vera at C3SE.

Project storage based on Ceph with a total usable capacity of 5 PiB:
- 14 storage servers, each with three NVMe drives (for database and journal).
- 7 JBODs, each with 42 × 14 TiB HDDs.

Mimer (C3SE)
Total requested: 23 050 GiB; upper limit: 5 000 GiB; available: 100 000 GiB.
Currently only available from Alvis.

Project storage attached to Alvis and Vera, dedicated to AI/ML. Mimer is an all-flash storage system based on a solution from WEKA IO. It consists of a 0.6 PB all-flash tier and a 7 PB Ceph-based bulk storage tier (with spinning disks).

Nobackup (HPC2N)
Total requested: 78 100 GiB; upper limit: 3 000 GiB; available: 60 000 GiB.
Nobackup is, due to its age, no longer available in SNIC rounds. HPC2N partner sites can apply for time in ‘HPC2N Local’ rounds.

The HPC2N Nobackup resource will, due to its age, not be available in SNIC/NAISS Medium/Small rounds after 2022-12-31, and has therefore been deactivated in those rounds. HPC2N partner sites can instead apply for resources in the ‘HPC2N Local Compute/Storage’ rounds, subject to the caveats listed on the resources in those rounds.
Active project storage without backup for local HPC2N projects.

Centrestorage nobackup (LUNARC)
Total requested: 70 500 GiB; upper limit: 5 000 GiB; available: 500 000 GiB.

Centre Storage (NSC)
Total requested: 198 100 GiB; upper limit: 5 000 GiB; available: 500 000 GiB.
Storage on Centre Storage at NSC for compute projects allocated time on NSC resources.

Centre Storage @ NSC is designed for fast access from compute resources at NSC. Its purpose is to provide storage for active data for SNIC compute projects allocated time on compute resources at NSC. Apply for a file quota of 1 million files unless you have a good motivation for more. Proposals will be evaluated within a few working days.
Project storage for NAISS as well as LiU Local projects with compute allocations on resources hosted by NSC. The system consists of one IBM ESS GL6S building block and one IBM ESS 5000 SC4 building block. In total there are 946 spinning hard disks and a small number of NVRAM devices and SSDs which act as a cache to speed up small writes. The total disk space usable for storing files is approximately 6.9 PiB.

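As a rough way to check how a data set compares to the 1-million-file quota on Centre Storage, files can be counted with standard tools. This is only an illustrative sketch: the directory below is a temporary placeholder, not an actual Centre Storage path.

```shell
# Hypothetical sketch: count files under a directory to compare
# with the requested file quota (1 million by default). $dir is
# a placeholder; on Centre Storage it would be your project dir.
dir=$(mktemp -d)
touch "$dir/a.dat" "$dir/b.dat"
count=$(find "$dir" -type f | wc -l)
echo "files: $count"
```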
Klemming (PDC)
Total requested: 117 000 GiB; upper limit: 5 000 GiB; available: 300 000 GiB.
Storage attached to Dardel at PDC.

When a storage allocation has been approved, any data belonging to the project needs to be moved from the users' 'nobackup' directories into the project directory as soon as possible. Any data left in the 'nobackup' directories after 2022-04-01 will be deleted. More information about project directories in Klemming can be found at https://www.pdc.kth.se/support/documents/data_management/klemming.html.
Project storage for NAISS as well as PDC projects with compute allocations on resources hosted by PDC. Klemming is designed for fast access from compute resources at PDC. It uses the Lustre parallel file system, which is optimized for handling data from many clients at the same time. The total size of Klemming is 12 PB.

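The move from a user's 'nobackup' directory into the project directory can be sketched as below. This is a hypothetical illustration only: the paths are placeholders created in temporary directories, not actual Klemming paths, and `rsync -a` is the more typical tool for large transfers on real systems.

```shell
# Hypothetical sketch of moving project data out of a user's
# 'nobackup' directory into the project directory; the paths
# below are placeholders created in temp dirs, not real
# Klemming paths.
src=$(mktemp -d)/nobackup      # stand-in for a user's nobackup dir
dst=$(mktemp -d)/myproject     # stand-in for the project directory
mkdir -p "$src" "$dst"
echo "results" > "$src/run1.dat"

# Copy preserving attributes (on a real system, `rsync -a` is
# typical), verify the copy, then remove the source.
cp -a "$src/." "$dst/"
diff -r "$src" "$dst" && rm -rf "$src"
```

Verifying with `diff -r` before deleting the source is the key step: data left in 'nobackup' is subject to deletion, so the copy should be confirmed first.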
dCache (Swestore)
Total requested: 8 TiB; upper limit: 10 TiB; available: 500 TiB.

Swestore is a research data storage infrastructure intended for active research data, operated by the National Academic Infrastructure for Supercomputing in Sweden (NAISS). The storage resources provided by Swestore are made available free of charge for academic research funded by VR and Formas through open calls, so that the best Swedish research is supported and new research is facilitated. The purpose of Swestore allocations, granted by the National Allocations Committee (NAC), is to provide large-scale data storage for “live” or “working” research data, also known as active research data.
See the documentation at: https://docs.swestore.se

Crex 1 (UPPMAX)
Total requested: 1 573 984 GiB; upper limit: 10 000 GiB; available: 1 400 000 GiB.
UPPMAX storage system attached to Rackham and Snowy, suitable for active data.

UPPMAX storage system attached to Rackham and Snowy, suitable for active data. Only apply if you have a compute project or proposal at UPPMAX. Not suitable for long-term data storage. Backup is available; use "nobackup" in directory names to exempt data from backup.
Note: Allocations on Crex are decided strictly on a monthly basis; proposals submitted after the 23rd are decided the following month. See https://www.uppmax.uu.se/support/getting-started/applying-for-storage/ for detailed information.
Active data storage for Rackham projects, primarily for life science projects.
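The "nobackup" naming convention on Crex can be illustrated with a minimal sketch. The project path below is a placeholder created in a temporary directory, not a real UPPMAX path.

```shell
# Hypothetical sketch of the Crex naming convention: directories
# with "nobackup" in the name are exempt from backup. $proj is a
# placeholder, not a real UPPMAX project path.
proj=$(mktemp -d)                  # stand-in for a project directory
mkdir -p "$proj/data"              # included in backup
mkdir -p "$proj/nobackup/scratch"  # exempt from backup
ls "$proj"
```

Keeping regeneratable intermediates under a "nobackup" directory avoids consuming backup capacity for data that does not need it.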