This round has been closed as all proposals have been handled.
Proposals are evaluated monthly throughout the year.
To apply, you must be a scientist in Swedish academia, at least at the level of assistant professor.
| Resource | Centre | Total Requested | Upper Limit | Available | Unit | Note |
| Kebnekaise | HPC2N | 8 341 | 200 | 2 000 | x 1000 core-h/month | Proposals will be evaluated at the end of each month. |
This resource is for access to the CPU nodes in Kebnekaise.
For GPU nodes see resource 'Kebnekaise GPU'.
For large memory nodes see resource 'Kebnekaise Large Memory'.
Kebnekaise is a heterogeneous computing resource currently consisting of:
- Compute nodes:
- GPU nodes (separate resource):
  - 10 Intel® Xeon Gold 6132 Processor (Skylake), 2x14 cores, 192 GB/node
  - 2 AMD® EPYC 7413 (Zen3), 2x24 cores, 512 GB/node
  - 1 AMD® EPYC 7413 (Zen3), 2x24 cores, 512 GB/node
  - 1 AMD® EPYC 9254 (Zen4), 2x24 cores, 384 GB/node
  - 10 AMD® EPYC 9254 (Zen4), 2x24 cores, 384 GB/node
  - 2 AMD® EPYC 9454 (Zen4), 2x48 cores, 768 GB/node
  - 1 AMD® EPYC 9334 (Zen4), 2x32 cores, 768 GB/node
  - 2 AMD® EPYC 9334 (Zen4), 2x32 cores, 768 GB/node
- Large Memory nodes (separate resource):
Notes:
- Access to the GPU nodes is handled through the 'Kebnekaise GPU' resource.
- Access to the Large Memory nodes is handled through the 'Kebnekaise Large Memory' resource.
- New nodes will be procured on a semi-regular basis.
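As a rough illustration of what these limits mean in practice, the sketch below converts an allocation expressed in 'x 1000 core-h/month' into the number of nodes it keeps busy around the clock. The 28-core node size and the 720-hour month are assumptions made for the example, not values taken from this listing.

```python
# Rough conversion from an allocation in "x 1000 core-h/month" to the number of
# nodes that allocation keeps fully busy. Assumptions (not from this listing):
# 28 cores per node (2x14, as on the Skylake nodes) and a 720-hour month.

def nodes_kept_busy(allocation_kilo_core_hours: float,
                    cores_per_node: int = 28,
                    hours_per_month: int = 720) -> float:
    """Number of nodes running around the clock that the allocation corresponds to."""
    core_hours = allocation_kilo_core_hours * 1000
    return core_hours / (cores_per_node * hours_per_month)

if __name__ == "__main__":
    # The per-proposal upper limit on Kebnekaise is 200 x 1000 core-h/month.
    print(f"{nodes_kept_busy(200):.1f} nodes busy around the clock")  # about 9.9
```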
| Kebnekaise Large Memory | HPC2N | 121 | 10 | 50 | x 1000 core-h/month | Proposals will be evaluated at the end of each month. |
To get access to the Kebnekaise Large Memory resource, the proposal must clearly show a need for it, including the expected memory size required and a reason why the normal nodes are not suitable.
This resource is for access to the Large Memory nodes in Kebnekaise.
For CPU nodes see resource 'Kebnekaise'.
For GPU nodes see resource 'Kebnekaise GPU'.
Kebnekaise is a heterogeneous computing resource currently consisting of:
- Compute nodes (separate resource):
- GPU nodes (separate resource):
  - 10 Intel® Xeon Gold 6132 Processor (Skylake), 2x14 cores, 192 GB/node
  - 2 AMD® EPYC 7413 (Zen3), 2x24 cores, 512 GB/node
  - 1 AMD® EPYC 7413 (Zen3), 2x24 cores, 512 GB/node
  - 1 AMD® EPYC 9254 (Zen4), 2x24 cores, 384 GB/node
  - 10 AMD® EPYC 9254 (Zen4), 2x24 cores, 384 GB/node
  - 2 AMD® EPYC 9454 (Zen4), 2x48 cores, 768 GB/node
  - 1 AMD® EPYC 9334 (Zen4), 2x32 cores, 768 GB/node
  - 2 AMD® EPYC 9334 (Zen4), 2x32 cores, 768 GB/node
- Large Memory nodes:
Notes:
- Access to the CPU nodes is handled through the 'Kebnekaise' resource.
- Access to the GPU nodes is handled through the 'Kebnekaise GPU' resource.
- New nodes will be procured on a semi-regular basis.
| Tetralith | NSC | 22 441 | 200 | 11 500 | x 1000 core-h/month | Applications are normally evaluated during the last week of each month. |
Submit your proposal at least one week before the end of a month to be considered for an allocation from the first of the following month. Received proposals will be evaluated against each other, and time that becomes available as projects end at the end of a month will be allocated to the proposed projects accordingly.
Tetralith is a general computational resource hosted by NSC at Linköping University.
Tetralith servers have two Intel Xeon Gold 6130 processors, providing 32 cores per server. 1844 of the servers are equipped with 96 GiB of primary memory and 64 servers with 384 GiB. All servers are interconnected with a 100 Gbit/s Intel Omni-Path network, which is also used to connect the existing storage. Each server has a local SSD disk for ephemeral storage (approx. 200 GiB per thin node, 900 GiB per fat node). The centre storage is an IBM Spectrum Scale system. 170 of the Tetralith nodes are equipped with one NVIDIA Tesla T4 GPU each, as well as a high-performance NVMe SSD scratch disk of 2 TB.
| Beskow | PDC | 10 417 | 200 | 7 800 | x 1000 core-h/month | A small allocation on Tegner will be appended for allocations that are granted on Beskow. |
A small allocation on Tegner for pre/post-processing will be appended for allocations that are granted on Beskow. Any data belonging to the project needs to be moved from the users' 'nobackup' directories and into the project directory as soon as possible. 30 days after the storage allocation starts, the 25 GiB quota will be enforced in the 'nobackup' directories. More information about project directories in Klemming can be found at https://www.pdc.kth.se/support/documents/data_management/lustre.html.
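As a small illustration of that step, the sketch below moves the contents of a user's 'nobackup' directory into the project directory. It is a minimal sketch only: both paths are hypothetical placeholders rather than actual Klemming locations, and on the cluster itself a plain mv would do the same job.

```python
# Minimal sketch: move everything from a user's 'nobackup' directory into the
# project directory before the 25 GiB quota is enforced. The paths below are
# hypothetical placeholders; substitute the real Klemming paths for your project.
import shutil
from pathlib import Path

nobackup_dir = Path("/cfs/klemming/nobackup/u/username")      # hypothetical path
project_dir = Path("/cfs/klemming/projects/my_project/data")  # hypothetical path

project_dir.mkdir(parents=True, exist_ok=True)
for item in nobackup_dir.iterdir():
    # shutil.move handles files and directories alike, also across filesystems.
    shutil.move(str(item), str(project_dir / item.name))
```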
| Dardel | PDC | 1 892 | 200 | 10 700 | x 1000 core-h/month | Dardel is the new cluster at PDC and will be available at the end of November 2021. |
Dardel is the new cluster at PDC and will be available at the end of November 2021. Allocations for proposals submitted for Dardel can start at the earliest on 2021-12-01.
Dardel is a Cray EX system from Hewlett Packard Enterprise, based on AMD EPYC processors with an accompanying Lustre storage system.
The nodes are interconnected using Slingshot HPC Ethernet.
| Tegner | PDC | 0 | — | 140 | x 1000 core-h/month | |
Tegner is the pre/post-processing cluster for Beskow.
| Rackham | UPPMAX | 7 033 | 200 | 3 000 | x 1000 core-h/month | |
Rackham provides 9720 cores in the form of 486 nodes with two 10-core Intel Xeon V4 CPUs each. 4 fat nodes have 1 TB of memory, 32 fat nodes have 256 GB, and the rest have 128 GB.
The interconnect is InfiniBand.