General information
Summary
Arina has 3,728 cores, distributed as 3,664 Xeon cores and 64 Itanium2 cores, with RAM ranging from 16 to 512 GB per node. Four of the Xeon nodes have two Nvidia Tesla cards each, another node has two Nvidia Kepler 20 cards, and another one has two Nvidia Kepler 40 cards. There are three high performance file systems based on Lustre: one with 4.2 TB for the Itanium2 and Opteron nodes and two more with 22 and 40 TB for the Xeon nodes. All the nodes are connected through a high bandwidth, low latency Infiniband network.
Other resources
Arina has 3 servers, arina (Itanium2), katramila and guinness (Xeon), that researchers can use to access the cluster. These servers work as connection, visualization, compilation and test servers. All the servers mount the same /home file system using NFS, and every night /home is backed up.
When a computing node writes to /home, the data is sent with the NFS protocol over a slow Ethernet connection. We strongly recommend using the local /scratch or the shared /gscratch filesystems to store temporary data during the calculations.
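For example, a minimal sketch of how a job could keep its temporary data on the scratch filesystems and copy only the final results back to /home (the per-user subdirectory layout under /scratch and /gscratch is an assumption, not a documented convention; check the actual site setup):

```python
import os
import shutil

# Assumed layout: a per-user work directory on the scratch filesystems named above.
user = os.environ["USER"]
workdir = os.path.join("/scratch", user, "my_job")      # local disk of the node
# workdir = os.path.join("/gscratch", user, "my_job")   # shared scratch, visible from all nodes
os.makedirs(workdir, exist_ok=True)

# Write intermediate files to scratch, not to the NFS-mounted /home.
with open(os.path.join(workdir, "intermediate.dat"), "w") as f:
    f.write("temporary data\n")

# Copy only the final results back to /home when the calculation finishes.
results_dir = os.path.join(os.environ["HOME"], "results")
os.makedirs(results_dir, exist_ok=True)
shutil.copy2(os.path.join(workdir, "intermediate.dat"), results_dir)
```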
Features of the Xeon nodes
- BullX cluster with Red Hat Linux AS, AS6 and AS7 operating systems.
- 464 Xeon 5420 cores at 2.3 GHz, 648 Xeon 5645 cores at 2.4 GHz, 630 Xeon 2680v2 cores at 2.8 GHz and 32 Xeon 4620v2 cores at 2.6 GHz in the shared memory node (512 GB).
- 1,876 Xeon 2680v4 cores at 2.4 GHz.
- RAM ranging from 24 to 512 GB per node.
- High bandwidth, low latency Infiniband network: 40 Gb/s QDR and 56 Gb/s FDR.
- Main servers for compilation: Guinness and Katramila.
Nodes with GPGPUs
They are similar to the other Xeon nodes:
- Two nodes with Nvidia Tesla 20 C2050 cards, Xeon 5420 processors at 2.3 GHz, 24 GB of RAM and two QDR Infiniband network cards.
- Two nodes with Nvidia Tesla 20 C2070 cards, Xeon 5420 processors at 2.3 GHz, 24 GB of RAM and two QDR Infiniband network cards.
- One node with Nvidia Tesla 20 cards, Xeon 2680v2 processors at 2.8 GHz, 128 GB of RAM and an FDR Infiniband network card.
- One node with Nvidia Tesla 40 cards, Xeon 2680v4 processors at 2.4 GHz, 128 GB of RAM and an FDR Infiniband network card.
Specific features of the nodes:
Amount | Name | Type | Proc | Cores | Speed | RAM (GB) | Disk (GB) | Label |
1 | Guinness | Server | 2 | 8 | 2.3 GHz | 12 | — | — |
1 | Katramila | Server | 2 | 8 | 2.6 GHz | 128 | — | — |
1 | cn3 | Node | 2 | 8 | 2.3 GHz | 96 | 250 | xeon,xeon8 |
17 | cn4-20 | Node | 2 | 8 | 2.3 GHz | 24 | 250 | xeon,xeon8 |
36 | cn21-56 | Node | 2 | 8 | 2.3 GHz | 48 | 250 | xeon,xeon8 |
4 | cn57-60 | GPU node | 2 | 8 | 2.3 GHz | 24 | 250 | xeon,xeon8,gpu |
18 | cn61-78 | Node | 2 | 12 | 2.4 GHz | 48 | 250 | xeon,xeon12 |
1 | cn79 | Node | 2 | 12 | 2.4 GHz | 96 | 250 | xeon,xeon12 |
35 | cn80-cn114 | Node | 2 | 12 | 2.4 GHz | 24 | 250 | xeon,xeon12 |
30 | nb1-nb30 | Node | 2 | 20 | 2.8 GHz | 128 | 128 | xeon,xeon20 |
1 | nb31 | Node | 2 | 32 | 2.6 GHz | 512 | 1000 | xeon,xeon20 |
1 | nb32 | GPU node | 2 | 20 | 2.8 GHz | 128 | 128 | xeon,xeon20,gpu,rh7 |
66 | nd1-nd14,nd16-nd67 | Node | 2 | 28 | 2.4 GHz | 128 | 128 | xeon,xeon28,rh7 |
1 | nd15 | GPU Node | 2 | 28 | 2.4 GHz | 128 | 128 | xeon,xeon28,rh7,gpu |
The labels are used to select specific computing nodes.
Features of the Itanium2 nodes
- HP Integrity Server cluster with Red Hat Linux AS 4 operating system.
- 128 Itanium2 cores at 1.3 and 1.6 GHz.
- Between 4 and 128 GB of RAM per node.
- High bandwidth, low latency Infiniband network: 10 Gb/s SDR and 20 Gb/s DDR.
- Main server for compilation: arina.
Specific features of the nodes:
Amount | Name | Type | Proc | Cores | Speed | RAM (GB) | Disk (GB) | Label |
1 | Arina | Server | 2 | 4 | 1.6 GHz | 8 | — | — |
1 | Arinaz | Server | 2 | 2 | 1.3 GHz | 2 | — | — |
4 | cndXX | Node | 4 | 8 | 1.6 GHz | 16 | 550 | itaniumb |
10 | cndXX | Node | 4 | 8 | 1.6 GHz | 32 | 550 | itaniumb |
1 | cnd43 | Node | 4 | 8 | 1.6 GHz | 64 | 550 | itaniumb |
All the Itanium2 computing nodes have the itanium label. The labels are used to select specific computing nodes.
High performance filesystems
In addition to the local file systems of the nodes (/scratch), there are three shared file systems.
- HP-SFS with 4.7 TB and parallel read/write access to disk using Lustre technology, for the Itanium2 and Opteron nodes. It can write at 400 MB/s and read at 600 MB/s (a normal disk reaches about 40-60 MB/s).
- Lustre with 22 TB for Xeon (xeon8 and xeon12) nodes.
- Lustre with 40 TB, 6.9 GB/s write speed and 5.1 GB/s read speed for Xeon (xeon20) nodes.