NAISS
SUPR
SNIC Small Compute 2021

Decided

This round has been closed as all proposals have been handled.


Resources

Kebnekaise (HPC2N)
Total requested: 846, upper limit: 5, available: 200 (unit: ×1000 core-h/month)
Note: Proposals will be evaluated once per week.

This resource is for access to the CPU nodes in Kebnekaise.

For GPU nodes see resource 'Kebnekaise GPU'.

For large memory nodes see resource 'Kebnekaise Large Memory'.


Kebnekaise is a heterogeneous computing resource consisting of several node types.

Notes:
  1. Access to the GPU nodes is handled through the 'Kebnekaise GPU' resource.
  2. Access to the Large Memory nodes is handled through the 'Kebnekaise Large Memory' resource.
  3. New nodes will be procured on a semi-regular basis.
Kebnekaise Large Memory (HPC2N)
Total requested: 86, upper limit: 5, available: 40 (unit: ×1000 core-h/month)
Note: Proposals will be evaluated once per week.

To get access to the Kebnekaise Large Memory resource, the proposal must clearly show a need for it, including the expected memory size required and a reason why the normal nodes are not suitable.

This resource is for access to the Large Memory nodes in Kebnekaise.

For CPU nodes see resource 'Kebnekaise'.

For GPU nodes see resource 'Kebnekaise GPU'.


Kebnekaise is a heterogeneous computing resource consisting of several node types.

Notes:
  1. Access to the CPU nodes is handled through the 'Kebnekaise' resource.
  2. Access to the GPU nodes is handled through the 'Kebnekaise GPU' resource.
  3. New nodes will be procured on a semi-regular basis.
Tetralith (NSC)
Total requested: 577, upper limit: 5, available: 200 (unit: ×1000 core-h/month)
Note: Access to Tetralith at NSC. Proposals will be evaluated within a few working days. Projects will receive a default 500 GiB storage allocation on Centre Storage at NSC; for additional storage, please apply for a Storage project.

Tetralith is a general computational resource hosted by NSC at Linköping University.

Tetralith servers have two Intel Xeon Gold 6130 processors, providing 32 cores per server. 1844 of the servers are equipped with 96 GiB of primary memory and 64 servers with 384 GiB. All servers are interconnected with a 100 Gbit/s Intel Omni-Path network, which is also used to connect the existing storage. Each server has a local SSD disk for ephemeral storage (approx. 200 GiB per thin node, 900 GiB per fat node). The centre storage is an IBM Spectrum Scale system. 170 of the Tetralith nodes are equipped with one NVIDIA Tesla T4 GPU each, as well as a high-performance 2 TB NVMe SSD scratch disk.
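As a back-of-the-envelope illustration of the allocation units, the listed "available" amount for Tetralith (200 ×1000 core-h/month) can be converted to node-hours using the 32 cores per server quoted above; the figures are taken directly from this listing and the conversion is only a sketch:

```python
# Convert a monthly core-hour allocation into node-hours.
# Figures from the listing above: Tetralith offers 32 cores per server,
# and the "available" amount shown is 200 x 1000 core-h/month.
CORES_PER_NODE = 32
allocation_core_hours = 200 * 1000  # 200 x 1000 core-h/month

node_hours_per_month = allocation_core_hours / CORES_PER_NODE
print(node_hours_per_month)  # 6250.0 full-node hours per month
```

This is simply allocation ÷ cores-per-node; actual scheduling granularity and job limits are set by the centre.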

Dardel (PDC)
Total requested: 68, upper limit: 5, available: 850 (unit: ×1000 core-h/month)
Note: Dardel is the new cluster at PDC and will be available at the end of November 2021. Projects on Dardel will start 2021-12-01 at the earliest.

Dardel is a Cray EX system from Hewlett Packard Enterprise, based on AMD EPYC processors with an accompanying Lustre storage system. The nodes are interconnected using Slingshot HPC Ethernet.
Rackham (UPPMAX)
Total requested: 1 992, upper limit: 5, available: 1 000 (unit: ×1000 core-h/month)
Note: UPPMAX compute resource. Mounts the Crex file system. Projects will receive a default 128 GB storage allocation; for additional storage, please apply for a Storage project.

Rackham provides 9720 cores in the form of 486 nodes with two 10-core Intel Xeon V4 CPUs each. 4 fat nodes have 1 TB of memory, 32 fat nodes have 256 GB, and the rest have 128 GB. The interconnect is Infiniband.
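The core count quoted above follows directly from the node breakdown; a small sanity-check sketch using only the numbers in the description:

```python
# Rackham inventory from the description above.
nodes_total = 486
cores_per_node = 2 * 10  # two 10-core Intel Xeon V4 CPUs per node

total_cores = nodes_total * cores_per_node
print(total_cores)  # 9720, matching the figure quoted above
```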
