SNAC Large, Fall 2018

Decided

This round has already been decided by the committee and is not open for proposals.


Resources

Resource   Centre   Total Requested   Upper Limit   Available   Unit   Note
Hebbe C3SE 1 245 600 x 1000 core-h/month
The Hebbe cluster is built on Intel Xeon E5-2650v3 (code-named "Haswell") CPUs. The system has a total of 323 compute nodes (6480 cores in total) with 27 TiB of RAM and 6 GPUs. More specifically:
  • 260 x 64 GB of RAM (249 of these available for SNIC users)
  • 46 x 128 GB of RAM (31 of these available for SNIC users)
  • 7 x 256 GB of RAM (not available for SNIC users)
  • 3 x 512 GB of RAM (1 of these available for SNIC users)
  • 1 x 1024 GB of RAM
  • 4 x 64 GB of RAM and NVIDIA Tesla K40 GPU (2 of these available for SNIC users)
  • 2 x 256 GB of RAM and NVIDIA K4200 for remote graphics
Each node has two CPUs with 10 cores each. There is a 10 Gigabit Ethernet network used for logins, a dedicated management network, and an InfiniBand high-speed/low-latency network for parallel computations and filesystem access. The nodes are equipped with Mellanox ConnectX-3 FDR InfiniBand 56 Gbps HCAs.
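
As a quick sum-check of the node breakdown above, the listed node counts add up to 323 nodes and 27 TiB of RAM. A minimal Python sketch (the "available for SNIC users" subsets are ignored here):

  # Node counts and per-node RAM (GB) for Hebbe, as listed above.
  node_types = [
      (260, 64),    # 64 GB nodes
      (46, 128),    # 128 GB nodes
      (7, 256),     # 256 GB nodes
      (3, 512),     # 512 GB nodes
      (1, 1024),    # 1024 GB node
      (4, 64),      # 64 GB nodes with a Tesla K40 GPU
      (2, 256),     # 256 GB nodes with a K4200 for remote graphics
  ]
  total_nodes = sum(count for count, _ in node_types)
  total_ram_tib = sum(count * ram for count, ram in node_types) / 1024
  print(total_nodes, total_ram_tib)   # -> 323 27.0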
Kebnekaise HPC2N 8 125 3 200 x 1000 core-h/month

This resource is for access to the CPU nodes in Kebnekaise.

For GPU nodes see resource 'Kebnekaise GPU'.

For large memory nodes see resource 'Kebnekaise Large Memory'.

 

Kebnekaise is a heterogeneous computing resource. Notes:
  1. Access to the GPU nodes is handled through the 'Kebnekaise GPU' resource.
  2. Access to the Large Memory nodes is handled through the 'Kebnekaise Large Memory' resource.
  3. New nodes will be procured on a semi-regular basis.
Kebnekaise Large Memory HPC2N 695 450 x 1000 core-h/month

This resource is for access to the Large Memory nodes in Kebnekaise.

For CPU nodes see resource 'Kebnekaise'.

For GPU nodes see resource 'Kebnekaise GPU'.

 

Kebnekaise is a heterogeneous computing resource. Notes:
  1. Access to the CPU nodes is handled through the 'Kebnekaise' resource.
  2. Access to the GPU nodes is handled through the 'Kebnekaise GPU' resource.
  3. New nodes will be procured on a semi-regular basis.
Aurora LUNARC 600 500 x 1000 core-h/month
Aurora is the Lund University compute resource and is operated by LUNARC.
Tetralith NSC 24 620 14 500 x 1000 core-h/month

Tetralith is a general computational resource hosted by NSC at Linköping University.

Tetralith servers have two Intel Xeon Gold 6130 processors, providing 32 cores per server. 1844 of the servers are equipped with 96 GiB of primary memory and 64 servers with 384 GiB. All servers are interconnected with a 100 Gbit/s Intel Omni-Path network, which is also used to connect the existing storage. Each server has a local SSD disk for ephemeral storage (approx. 200 GiB per thin node, 900 GiB per fat node). The centre storage is an IBM Spectrum Scale system. 170 of the Tetralith nodes are equipped with one NVIDIA Tesla T4 GPU each, as well as a high-performance NVMe SSD scratch disk of 2 TB.

Beskow PDC 20 764 11 200 x 1000 core-h/month
Tegner PDC 0 210 x 1000 core-h/month
Pre/post-processing system for the Beskow cluster. All approved Beskow allocations will get 1/60 of their Beskow core hours on Tegner.

Tegner is the pre/post-processing cluster for Beskow.
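
The 1/60 rule above works out as in this minimal sketch (the Beskow allocation figure is hypothetical, chosen only for illustration):

  # Tegner share under the 1/60 rule quoted above.
  beskow_allocation = 6000                 # x 1000 core-h/month on Beskow (hypothetical)
  tegner_share = beskow_allocation / 60    # automatic Tegner share
  print(f"{tegner_share:.0f} x 1000 core-h/month on Tegner")   # -> 100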
Crex 1 UPPMAX 20 000 1 000 000 GiB Storage resource attached to Rackham
Crex is the centre storage at UPPMAX, attached to the Rackham compute cluster. Proposals requesting Crex storage in SNAC Large must also include requests for compute resources totalling more than the limits of SNAC Medium (100 kch/month), at least part of which must be on Rackham.

Active data storage for Rackham projects. Primarily for life science projects.
Rackham UPPMAX 1 100 1 000 x 1000 core-h/month
Rackham provides 9720 cores in the form of 486 nodes with two 10-core Intel Xeon V4 CPUs each. 4 fat nodes have 1 TB of memory, 32 fat nodes have 256 GB, and the rest have 128 GB. The interconnect is InfiniBand.
