NAISS SUPR
C3SE Local 2017

Decided

This round has been closed as all proposals have been handled.

Note! This round is only available for certain groups at Chalmers and Gothenburg University connected to C3SE.


Resources

Resource: Glenn (Centre: C3SE)
Total Requested: 5 475, Upper Limit: 1 030, Available: 4 000 (unit: x 1000 core-h/month)
The Glenn cluster is built on AMD Opteron 6220 (code-named "Interlagos") CPUs. The system consists of 379 compute nodes in total (6080 cores) with 18.1 TB of RAM. More specifically:
  • 224 nodes with 16 cores and 32 GB of RAM
  • 135 nodes with 16 cores and 64 GB of RAM
  • 13 nodes with 16 cores and 128 GB of RAM
  • 1 node with 32 cores and 512 GB of RAM
  • 4 nodes with 16 cores, 32 GB of RAM and 1 NVIDIA Fermi M2050 GPU
  • 2 nodes with 16 cores, 32 GB of RAM and 1 PCoIP adapter for remote graphics
There are also 3 system servers used for accessing and managing the cluster. There is a Gigabit Ethernet network used for logins and file system access, a dedicated management network, and an Infiniband high-speed/low-latency network for parallel computations. The nodes are equipped with Mellanox ConnectX-2 QDR Infiniband 40 Gbps HCAs. The server and compute node hardware is built by Supermicro and delivered by South Pole.
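
As a quick sanity check (not part of the original resource description), the headline figures can be reproduced from the per-node-type list above. A minimal Python sketch, assuming RAM is summed in decimal terabytes:

  # Minimal sketch (not from the SUPR page): recompute Glenn's totals
  # from the node-type breakdown listed above.
  glenn_node_types = [
      # (node count, cores per node, RAM per node in GB)
      (224, 16, 32),
      (135, 16, 64),
      (13, 16, 128),
      (1, 32, 512),
      (4, 16, 32),   # nodes with one NVIDIA Fermi M2050 GPU each
      (2, 16, 32),   # nodes with a PCoIP adapter for remote graphics
  ]

  nodes = sum(n for n, _, _ in glenn_node_types)
  cores = sum(n * c for n, c, _ in glenn_node_types)
  ram_tb = sum(n * r for n, _, r in glenn_node_types) / 1000  # GB -> TB (decimal)

  print(f"{nodes} nodes, {cores} cores, {ram_tb:.1f} TB RAM")
  # -> 379 nodes, 6080 cores, 18.2 TB RAM (close to the 18.1 TB quoted above)
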
Resource: Hebbe (Centre: C3SE)
Total Requested: 1 654, Upper Limit: 205, Available: 1 250 (unit: x 1000 core-h/month)
The Hebbe cluster is built on Intel Xeon E5-2650v3 (code-named "Haswell") CPUs. The system has a total of 323 compute nodes (total of 6480 cores) with 27 TiB of RAM and 6 GPUs. More specifically:
  • 260 nodes with 64 GB of RAM (249 of these available for SNIC users)
  • 46 nodes with 128 GB of RAM (31 of these available for SNIC users)
  • 7 nodes with 256 GB of RAM (not available for SNIC users)
  • 3 nodes with 512 GB of RAM (1 of these available for SNIC users)
  • 1 node with 1024 GB of RAM
  • 4 nodes with 64 GB of RAM and an NVIDIA Tesla K40 GPU (2 of these available for SNIC users)
  • 2 nodes with 256 GB of RAM and an NVIDIA K4200 for remote graphics
Each node has 2 CPUs with 10 cores each. There is a 10 Gigabit Ethernet network used for logins, a dedicated management network, and an Infiniband high-speed/low-latency network for parallel computations and filesystem access. The nodes are equipped with Mellanox ConnectX-3 FDR Infiniband 56 Gbps HCAs.
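
The allocation figures above are given in thousands of core-hours per month. As a rough illustration only (assuming an average month of about 730 wall-clock hours, a figure not stated on this page), the "Available" amounts can be compared with each cluster's theoretical monthly capacity:

  # Minimal sketch; HOURS_PER_MONTH is an assumed average, not a SUPR value.
  HOURS_PER_MONTH = 730

  clusters = {
      # name: (total cores, available allocation in 1000 core-h/month)
      "Glenn": (6080, 4000),
      "Hebbe": (6480, 1250),
  }

  for name, (cores, available_k) in clusters.items():
      capacity_k = cores * HOURS_PER_MONTH / 1000  # thousand core-h/month
      share = available_k / capacity_k
      print(f"{name}: capacity ~ {capacity_k:.0f}k core-h/month, "
            f"available {available_k}k ({share:.0%} of theoretical capacity)")
  # Glenn: capacity ~ 4438k core-h/month, available 4000k (90% of theoretical capacity)
  # Hebbe: capacity ~ 4730k core-h/month, available 1250k (26% of theoretical capacity)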
