This round has already been decided by the committee and is not open for proposals.
Resource | Centre | Total Requested | Upper Limit | Available | Unit | Note

Hebbe | C3SE | 1 290 | — | 600 | × 1000 core-h/month |

The Hebbe cluster is built on Intel Xeon E5-2650v3 (code-named "Haswell") CPUs.
The system has a total of 323 compute nodes (6480 cores in total) with 27 TiB of RAM and 6 GPUs. More specifically:
- 260 x 64 GB of RAM (249 of these available for SNIC users)
- 46 x 128 GB of RAM (31 of these available for SNIC users)
- 7 x 256 GB of RAM (not available for SNIC users)
- 3 x 512 GB of RAM (1 of these available for SNIC users)
- 1 x 1024 GB of RAM
- 4 x 64 GB of RAM and NVIDIA Tesla K40 GPU (2 of these available for SNIC users)
- 2 x 256 GB of RAM and NVIDIA Quadro K4200 for remote graphics
Each node has 2 CPUs with 10 cores each.
A 10 Gigabit Ethernet network is used for logins, together with a dedicated management network and an InfiniBand high-speed, low-latency network for parallel computation and file system access. The nodes are equipped with Mellanox ConnectX-3 FDR InfiniBand 56 Gb/s HCAs.
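As a quick cross-check, the per-type node counts above add up to the stated totals. A minimal sketch (the list is transcribed from the bullet points above, treating the per-node GB figures as GiB, which is consistent with the stated 27 TiB total):

```python
# Hebbe node inventory from the list above: (node count, RAM per node in GB)
node_types = [
    (260, 64),    # standard nodes
    (46, 128),
    (7, 256),
    (3, 512),
    (1, 1024),
    (4, 64),      # Tesla K40 GPU nodes
    (2, 256),     # remote-graphics nodes
]

total_nodes = sum(count for count, _ in node_types)
total_ram_tib = sum(count * ram for count, ram in node_types) / 1024

print(total_nodes)    # 323, matching the stated number of compute nodes
print(total_ram_tib)  # 27.0, matching the stated 27 TiB of RAM
```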

Abisko | HPC2N | 5 724 | — | 2 500 | × 1000 core-h/month |

Abisko has reached its intended end of life. As Abisko is quite a popular resource, HPC2N has decided to make it available for the Large Fall 2017 allocation round. Allocations may be reduced and/or moved to other resources as parts fail or for other reasons, depending on the availability of free time on those resources.
The cluster has 15744 cores with a peak performance of over 150 Tflop/s. For high parallel performance, the system is equipped with a high-bandwidth, low-latency QDR InfiniBand interconnect with full bisection bandwidth. All nodes have at least 2 GB/core and some nodes have over 8 GB/core. For more information about the system and the available software, see the HPC2N web pages.
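For a sense of scale, the "× 1000 core-h/month" unit used throughout this table converts directly into an average number of continuously busy cores. A minimal sketch (the 30-day month is our simplifying assumption):

```python
def avg_busy_cores(kilo_core_hours_per_month: float, hours_per_month: float = 30 * 24) -> float:
    """Average number of cores kept busy around the clock by a monthly allocation."""
    return kilo_core_hours_per_month * 1000 / hours_per_month

# Abisko's available 2 500 x 1000 core-h/month corresponds to roughly
# 3472 continuously busy cores -- about a fifth of its 15744 cores.
print(round(avg_busy_cores(2500)))  # 3472
```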

Kebnekaise | HPC2N | 7 429 | — | 3 200 | × 1000 core-h/month |

This resource is for access to the CPU nodes in Kebnekaise.
For GPU nodes see resource 'Kebnekaise GPU'.
For large memory nodes see resource 'Kebnekaise Large Memory'.
Kebnekaise is a heterogeneous computing resource currently consisting of:
- Compute nodes:
- GPU nodes (separate resource):
- 10 Intel® Xeon Gold 6132 Processor (Skylake), 2x14 cores, 192 GB/node
- 2 AMD® EPYC 7413 (Zen3), 2x24 cores, 512 GB/node
- 1 AMD® EPYC 7413 (Zen3), 2x24 cores, 512 GB/node
- 1 AMD® EPYC 9254 (Zen4), 2x24 cores, 384 GB/node
- 10 AMD® EPYC 9254 (Zen4), 2x24 cores, 384 GB/node
- 2 AMD® EPYC 9454 (Zen4), 2x48 cores, 768 GB/node
- 1 AMD® EPYC 9334 (Zen4), 2x32 cores, 768 GB/node
- 2 AMD® EPYC 9334 (Zen4), 2x32 cores, 768 GB/node
- Large Memory nodes (separate resource):
Notes:
- Access to the GPU nodes is handled through the 'Kebnekaise GPU' resource.
- Access to the Large Memory nodes is handled through the 'Kebnekaise Large Memory' resource.
- New nodes will be procured on a semi-regular basis.
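The "2x14 cores" notation above means two sockets of 14 cores each, i.e. 28 cores per node. A minimal sketch tallying only the GPU-node host types listed above (the totals are ours, not from HPC2N):

```python
# Kebnekaise GPU-node host types from the list above:
# (node count, sockets, cores per socket, RAM in GB per node)
gpu_node_types = [
    (10, 2, 14, 192),  # Intel Xeon Gold 6132 (Skylake)
    (2, 2, 24, 512),   # AMD EPYC 7413 (Zen3)
    (1, 2, 24, 512),   # AMD EPYC 7413 (Zen3)
    (1, 2, 24, 384),   # AMD EPYC 9254 (Zen4)
    (10, 2, 24, 384),  # AMD EPYC 9254 (Zen4)
    (2, 2, 48, 768),   # AMD EPYC 9454 (Zen4)
    (1, 2, 32, 768),   # AMD EPYC 9334 (Zen4)
    (2, 2, 32, 768),   # AMD EPYC 9334 (Zen4)
]

total_cores = sum(n * s * c for n, s, c, _ in gpu_node_types)
total_ram = sum(n * r for n, _, _, r in gpu_node_types)
print(total_cores, total_ram)  # 1336 cores and 11520 GB across the GPU hosts
```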

Kebnekaise Large Memory | HPC2N | 281 | — | 450 | × 1000 core-h/month |

This resource is for access to the Large Memory nodes in Kebnekaise.
For CPU nodes see resource 'Kebnekaise'.
For GPU nodes see resource 'Kebnekaise GPU'.
Kebnekaise is a heterogeneous computing resource currently consisting of:
- Compute nodes (separate resource):
- GPU nodes (separate resource):
- 10 Intel® Xeon Gold 6132 Processor (Skylake), 2x14 cores, 192 GB/node
- 2 AMD® EPYC 7413 (Zen3), 2x24 cores, 512 GB/node
- 1 AMD® EPYC 7413 (Zen3), 2x24 cores, 512 GB/node
- 1 AMD® EPYC 9254 (Zen4), 2x24 cores, 384 GB/node
- 10 AMD® EPYC 9254 (Zen4), 2x24 cores, 384 GB/node
- 2 AMD® EPYC 9454 (Zen4), 2x48 cores, 768 GB/node
- 1 AMD® EPYC 9334 (Zen4), 2x32 cores, 768 GB/node
- 2 AMD® EPYC 9334 (Zen4), 2x32 cores, 768 GB/node
- Large Memory nodes:
Notes:
- Access to the CPU nodes is handled through the 'Kebnekaise' resource.
- Access to the GPU nodes is handled through the 'Kebnekaise GPU' resource.
- New nodes will be procured on a semi-regular basis.

Aurora | LUNARC | 1 020 | — | 500 | × 1000 core-h/month |

Aurora is the Lund University compute resource and is operated by LUNARC.

Triolith | NSC | 11 529 | — | 3 600 | × 1000 core-h/month | Allocations will be scaled and transferred to a new resource from Q3 2018.

Please note that Triolith will be replaced with a new resource during 2018. NSC currently plans for the replacement to take place in Q3 2018, at which point project allocations will be scaled and transferred to the new resource. Installation of the new system is likely to be done in stages, with an overlap in operation between parts of the new system and parts of Triolith.
Triolith (triolith.nsc.liu.se) was a capability cluster with a total of 24320 cores and a peak performance of 428 Tflop/s. However, Triolith was shrunk by 576 nodes on April 3rd, 2017, as a result of a delay in funding a replacement system, and now has a peak performance of 260 Tflop/s and 15,104 compute cores. It is equipped with a fast interconnect for high performance in parallel applications. The operating system is CentOS 6.x x86_64. Each of the 1520 (now 944) HP SL230s compute servers is equipped with two Intel E5-2660 (2.2 GHz Sandy Bridge) processors with 8 cores each, i.e. 16 cores per compute server. 56 of the compute servers have 128 GiB of memory each and the remaining 888 have 32 GiB each. The fast interconnect is Mellanox InfiniBand (FDR IB, 56 Gb/s) in a 2:1 blocking configuration.
Triolith has since been replaced with a new system, Tetralith, which was made available to users on August 23, 2018. NSC currently plans to keep Triolith in operation and available to users until September 21st, 2018. After that, Triolith will be permanently shut down and decommissioned.
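The shrink figures quoted above are internally consistent, as a quick check shows (all numbers taken from the description):

```python
nodes_before, nodes_removed = 1520, 576
cores_per_node = 2 * 8               # two 8-core E5-2660 processors per server
nodes_after = nodes_before - nodes_removed

print(nodes_after)                   # 944, matching the 56 + 888 memory split
print(nodes_after * cores_per_node)  # 15104 compute cores remaining
print(56 * 128 + 888 * 32)           # 35584 GiB of memory remaining
```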

Beskow | PDC | 26 669 | — | 11 200 | × 1000 core-h/month |

Tegner | PDC | 0 | — | 210 | × 1000 core-h/month |

Tegner is the pre- and post-processing cluster for Beskow.

Rackham | UPPMAX | 1 835 | — | 650 | × 1000 core-h/month |

Rackham provides 9720 cores in the form of 486 nodes with two 10-core Intel Xeon V4 CPUs each. 4 fat nodes have 1 TB of memory, 32 fat nodes have 256 GB, and the rest have 128 GB.
The interconnect is InfiniBand.
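The stated core count and memory distribution are easy to verify. A minimal sketch (the 450 figure for standard nodes is derived from the description, not stated in it):

```python
nodes, cores_per_node = 486, 2 * 10  # two 10-core Xeon V4 CPUs per node
print(nodes * cores_per_node)        # 9720 cores, as stated

fat_1tb, fat_256gb = 4, 32
standard_nodes = nodes - fat_1tb - fat_256gb
print(standard_nodes)                # 450 nodes with 128 GB each
```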