DCS 2018

Decided

This round has been closed as all proposals have been handled.


Resources

Centre Storage (NSC): Total Requested 0 GiB, Available 2 000 000 GiB

Project storage for NAISS as well as LiU Local projects with compute allocations on resources hosted by NSC.

Centre Storage @ NSC is designed for fast access from compute resources at NSC. It consists of one IBM ESS GL6S building block and one IBM ESS 5000 SC4 building block.

In total there are 946 spinning hard disks, plus a small number of NVRAM devices and SSDs that act as a cache to speed up small writes. The total disk space usable for storing files is approximately 6.9 PiB.
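The round's storage figures are listed in GiB while the system capacity above is quoted in PiB. As a point of reference, a minimal Python sketch of the binary-prefix conversion (assuming 1 PiB = 1024 TiB = 1 048 576 GiB; the 6.9 PiB and 2 000 000 GiB values are simply the figures quoted in this section):

```python
# Binary-prefix conversion relating the figures quoted in this section:
# 1 PiB = 1024 TiB = 1024**2 GiB.

GIB_PER_PIB = 1024 ** 2

usable_pib = 6.9           # approximate usable capacity of Centre Storage (PiB)
available_gib = 2_000_000  # amount listed for Centre Storage in this round (GiB)

print(f"Usable capacity:  {usable_pib * GIB_PER_PIB:,.0f} GiB")    # ~7,235,174 GiB
print(f"Round available:  {available_gib / GIB_PER_PIB:.2f} PiB")  # ~1.91 PiB
```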

DCS (NSC): Total Requested 1 100 TiB, Available 2 000 TiB. Note: evaluation of large storage allocations usually coincides with the processing of SNAC large compute allocations.

NSC offers large (>50 TiB) storage allocations on our new high-performance Centre Storage/DCS system. Importantly, these large storage (DCS) allocations are for projects requiring active storage, NOT archiving. Alternative archiving resources are available through SNIC (see e.g. http://docs.snic.se/wiki/SweStore). DCS applications should demonstrate how data stored on the new Centre Storage will be used, e.g. data processing/reduction, data mining, visualization, analytics, etc. Proposals will be evaluated at least twice per year, usually to coincide with the processing of SNAC large compute allocations.
Tetralith (NSC): Total Requested 0, Available 35 x 1000 core-h/month. Note: this is core time for the special analysis nodes on Tetralith.

Tetralith is a general computational resource hosted by NSC at Linköping University.

Tetralith servers have two Intel Xeon Gold 6130 processors, providing 32 cores per server. 1844 of the servers are equipped with 96 GiB of primary memory and 64 servers with 384 GiB. All servers are interconnected with a 100 Gbit/s Intel Omni-Path network, which is also used to connect the existing storage. Each server has a local SSD disk for ephemeral storage (approx. 200 GiB per thin node, 900 GiB per fat node). The centre storage is provided by an IBM Spectrum Scale system. 170 of the Tetralith nodes are equipped with one NVIDIA Tesla T4 GPU each, as well as a high-performance NVMe SSD scratch disk of 2 TB.
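A short worked example based on the node figures above (just arithmetic on the quoted numbers, not additional NSC documentation): the thin/fat split gives the total core count and the memory available per core.

```python
# Arithmetic on the Tetralith figures quoted above:
# 1844 thin nodes with 96 GiB and 64 fat nodes with 384 GiB, 32 cores per node.

CORES_PER_NODE = 32
nodes = {"thin": (1844, 96), "fat": (64, 384)}  # (node count, memory in GiB)

total_cores = sum(count * CORES_PER_NODE for count, _ in nodes.values())
print(f"Total cores: {total_cores}")  # (1844 + 64) * 32 = 61056

for kind, (count, mem_gib) in nodes.items():
    per_core = mem_gib / CORES_PER_NODE
    print(f"{kind} node: {mem_gib} GiB / {CORES_PER_NODE} cores = {per_core:.0f} GiB per core")
```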

Triolith (NSC): Total Requested 0, Available 35 x 1000 core-h/month. Note: Triolith has been replaced by Tetralith.
Triolith (triolith.nsc.liu.se) was a capability cluster with a total of 24320 cores and a peak performance of 428 Tflop/s. However, Triolith was shrunk by 576 nodes on April 3rd, 2017 as a result of a delay in funding a replacement system, and now has a peak performance of 260 Tflop/s and 16,368 compute cores. It is equipped with a fast interconnect for high performance for parallel applications. The operating system is CentOS 6.x x86_64. Each of the 1520 (now 944) HP SL230s compute servers is equipped with two Intel E5-2660 (2.2 GHz Sandy Bridge) processors with 8 cores each (i.e. 16 cores per compute server). 56 of the compute servers have 128 GiB memory each and the remaining 888 have 32 GiB each. The fast interconnect is Infiniband from Mellanox (FDR IB, 56 Gb/s) in a 2:1 blocking configuration.

Triolith has been replaced by a new system, Tetralith, which was made available to users on August 23, 2018. NSC currently plans to keep Triolith in operation and available to users until September 21st, 2018. After that, Triolith will be permanently shut down and decommissioned.
