
Arina

Computational Resources

Arina is composed of two subclusters: kalk2020 and kalk2017-katramila.

As a whole, Arina features 147 nodes, which provide 296 multicore processors (CPUs) containing a total of 4396 cores.

Arina is equipped with two high-performance file systems, which provide storage with a total net capacity of 120 TB.

Within each subcluster, the nodes are connected to each other via an InfiniBand network characterized by high bandwidth and low latency for internode communication.

Arina specifics

Subcluster kalk2020

The subcluster kalk2020 features the following computing nodes:

In total, this set is composed of 45 computing nodes, which provide 90 multicore processors containing a total of 1800 cores.

This subcluster is endowed with a parallel cluster file system (BeeGFS) with a net capacity of 80 TB. Its storage is shared across all kalk2020 nodes.
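As an aside, your usage of this file system could in principle be checked with the generic BeeGFS quota tool; the call below is a standard beegfs-ctl invocation, not a command confirmed for this service:

beegfs-ctl --getquota --uid $USER     # show disk usage and quota for your user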

Jobs submitted to kalk2020 are managed by the SLURM queue system.
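As an illustration, a minimal SLURM batch script for kalk2020 might look like the sketch below; the resource values and the program name are placeholders, and no partition or module names specific to this service are assumed:

#!/bin/bash
#SBATCH --job-name=test        # job name shown in the queue
#SBATCH --nodes=1              # number of nodes
#SBATCH --ntasks-per-node=4    # tasks per node (placeholder value)
#SBATCH --time=04:00:00        # walltime in hh:mm:ss
#SBATCH --mem=4G               # memory per node

srun ./my_program              # my_program is a placeholder executable

Such a script would be submitted with the sbatch command.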

Internode communication is accelerated by an InfiniBand network with a transfer speed of up to 100 Gb/s (EDR).

Subcluster kalk2017-katramila

This subcluster includes two sets of computing nodes, namely kalk2017 and katramila.

The kalk2017 node compound is composed of:

In total, it features 68 computing nodes, which provide 136 multicore processors containing a total of 1904 cores.

Each Nvidia Tesla K40m GPU features 2880 GPU cores and 12 GB of integrated GPU RAM.

The katramila node compound is composed of:

In total, it features 34 computing nodes, which provide 70 multicore processors containing a total of 692 cores.

Each Nvidia Tesla K20m GPU features 2496 GPU cores and 5 GB of integrated GPU RAM.

These two sets of computing nodes share a parallel cluster file system (Lustre) with a net capacity of 40 TB. Hence, its storage is shared across all kalk2017 and katramila nodes.
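Similarly, assuming the standard Lustre client tools are available, usage on this file system could in principle be checked with lfs; the mount point /lustre below is a placeholder, not the actual path on this service:

lfs quota -u $USER /lustre     # show disk usage and quota for your user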

Jobs submitted to kalk2017-katramila are managed by the TORQUE/MAUI queue system. Unless otherwise specified at job submission, the queue system automatically sends jobs to either kalk2017 or katramila computing nodes depending on node availability and the requested resources.
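For illustration, a minimal TORQUE job script might look like this sketch; the resource values and the program name are placeholders, and the actual queues and limits are those configured on kalk2017-katramila:

#!/bin/bash
#PBS -l nodes=1:ppn=4          # 1 node, 4 processors per node
#PBS -l walltime=04:00:00      # walltime in hh:mm:ss
#PBS -l mem=4gb                # requested memory
cd $PBS_O_WORKDIR              # run from the directory the job was submitted from
./my_program                   # my_program is a placeholder executable

Such a script would be submitted with the qsub command.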

Internode communication is accelerated by an InfiniBand network with a transfer speed of up to 56 Gb/s (FDR).

Espresso

General information

opEn-Source Package for Research in Electronic Structure, Simulation, and Optimization

ESPRESSO is an integrated suite of computer codes for electronic-structure calculations and materials modeling at the nanoscale. It is based on density-functional theory, plane waves, and pseudopotentials (both norm-conserving and ultrasoft).

The 6.1 version is available. The home page of the code is at the DEMOCRITOS National Simulation Center of the Italian INFM.

Quantum ESPRESSO builds onto newly-restructured electronic-structure codes (PWscf, PHONON, CP90, FPMD, Wannier) that have been developed and tested by some of the original authors of novel electronic-structure algorithms – from Car-Parrinello molecular dynamics to density-functional perturbation theory – and applied in the last twenty years by some of the leading materials modeling groups worldwide. Innovation and efficiency are still our main focus.

How to use

See the send_espresso section below.

Monitoring

  • remote_vi: Shows the *.out file of espresso.
  • myjobs: During the execution of a job it shows the CPU and memory (SIZE) usage.
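A hypothetical session is sketched below; both commands are site-specific scripts of this service, and passing the job ID 1234 as an argument to remote_vi is an assumption, not documented syntax:

myjobs            # show CPU and memory (SIZE) usage of your running jobs
remote_vi 1234    # show the *.out file of the espresso job with ID 1234 (assumed syntax)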

Benchmark

We show various benchmark results for ph.x and pw.x on the machines of our service. The Xeon nodes perform best and scale well up to 32 cores. Note that the communication network on the Xeon nodes is better.

Table 1: Execution times for pw.x (version 4.2.1).

System         8 cores   16 cores   32 cores
Xeon             1405        709        378
Itanium2         2614       1368        858
Opteron 2.4      4320       2020       1174
Core2duo 2.1        -          -          -
Table 2: Execution times for ph.x (version 4.2.1).

System         8 cores   16 cores   32 cores
Xeon             2504       1348        809
Itanium2         2968       1934       1391
Opteron 2.4      6240       3501       2033
Core2duo 2.1        -          -          -

More information

ESPRESSO Web page.

Online Documentation.

ESPRESSO Wiki.

send_espresso

The send_espresso script is available to submit Espresso calculations to the queue system. Executing it without arguments, send_espresso [Enter], prints the syntax of the command:

send_espresso input Executable Nodes Procs_per_node Time Mem ["Other queue options"]

Input: name of the Espresso input file, without extension
Executable: name of the Espresso program you want to use: pw.x, ph.x, cp.x, ...
Nodes: number of nodes
Procs_per_node: number of processors per node
Time: the walltime (in hh:mm:ss format) or the queue name
Mem: memory in GB (without the unit)
["Other Torque options"]: optional extra Torque options; see the examples below

Examples

Example 1: send_espresso job1 pw.x 1 4 04:00:00 1
Example 2: send_espresso job2 cp.x 2 4 192:00:00 8 "-W depend=afterany:1234"
Example 3: send_espresso job5 pw.x 4 8 192:00:00 8 "-m bea -M email@address.com"

Traditional way

The executables can be found in /software/Espresso. For instance, to execute pw.x in a queue script, use:

source /software/Espresso/compilervars.sh
/software/Espresso/bin/pw.x -npool ncores < input_file > output_file
In the -npool ncores option, substitute ncores with the number of cores of the job.
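Putting the pieces together, a minimal TORQUE job script for this traditional way might look like the following sketch; the resource values and the file names job1.in and job1.out are placeholders:

#!/bin/bash
#PBS -l nodes=1:ppn=8          # 1 node, 8 processors per node
#PBS -l walltime=04:00:00      # walltime in hh:mm:ss
#PBS -l mem=4gb                # requested memory
cd $PBS_O_WORKDIR              # run from the directory the job was submitted from
source /software/Espresso/compilervars.sh                  # set up the Espresso environment
/software/Espresso/bin/pw.x -npool 8 < job1.in > job1.out  # 8 cores in the job, hence -npool 8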