Category Archives: Software_en
Conda Environments
Qbox
General Information
Version: 1.62.3
Qbox is a C++/MPI scalable parallel implementation of first-principles molecular dynamics (FPMD) based on the plane-wave, pseudopotential formalism. Qbox is designed for operation on large parallel computers.
How to use it:
To send Qbox jobs to the queue we have created the send_qbox utility:
send_qbox JOBNAME NODES PROCS_PER_NODE[property] TIME
Executing send_qbox without arguments, more options will be shown. The program is installed in /software/qbox
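For example, a job could be sent to 2 nodes with 8 processes per node and a 12-hour walltime (the job name job1 is just a placeholder):
send_qbox job1 2 8 12:00:00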
More Information
On the Qbox Web page.
SAMtools, BCFtools and HTSlib 1.2
General Information
Samtools is a suite of programs for interacting with high-throughput sequencing data. It consists of three separate repositories:
Samtools
Reading/writing/editing/indexing/viewing SAM/BAM/CRAM format
BCFtools
Reading/writing BCF2/VCF/gVCF files and calling/filtering/summarising SNP and short indel sequence variants
HTSlib
A C library for reading/writing high-throughput sequencing data
Samtools and BCFtools both use HTSlib internally, but these source packages contain their own copies of htslib so they can be built independently.
How to use it
They are installed in /software/samtools-1.2/, /software/bcftools-1.2/ and /software/htslib-1.2.1, respectively.
Something like this should be added to the PBS script:
export PATH=/software/samtools-1.2/bin:/software/bcftools-1.2/bin:$PATH
export LD_LIBRARY_PATH=/software/htslib-1.2.1/lib:$LD_LIBRARY_PATH
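Once the paths are set, the tools can be called directly later in the same script. The lines below are only an illustrative sketch with placeholder file names:
samtools view -bS aln.sam > aln.bam
samtools flagstat aln.bam
bcftools stats calls.vcf.gz > calls.stats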
More Information
PHENIX
General information
Version dev-2229 (newer than 1.10) of PHENIX (Python-based Hierarchical ENvironment for Integrated Xtallography). PHENIX is a software suite for the automated determination of macromolecular structures using X-ray crystallography and other methods. It is ready to use with [intlink id=”1969″ type=”post”]AMBER[/intlink].
How to use
To launch the graphical interface on Guinness, execute the command:
phenix &
To execute PHENIX in queue system scripts, the PHENIX working environment must first be loaded with the source command. Then execute, for example:
phenix.xtriage my_data.sca [options]
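A minimal PBS script might look like the sketch below; the path of the PHENIX environment file is an assumption and must be replaced with that of the actual installation, and my_data.sca is just the example data file from above:
#!/bin/bash
#PBS -l nodes=1:ppn=1,walltime=04:00:00
cd $PBS_O_WORKDIR
# Load the PHENIX working environment (hypothetical path)
source /software/phenix/phenix_env.sh
# Run the desired PHENIX tool
phenix.xtriage my_data.sca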
More information
PHENIX web page.
Online documentation.
Documentation in pdf.
MCCCS Towhee 7.0.2
Towhee is a Monte Carlo molecular simulation code originally designed for the prediction of fluid phase equilibria using atom-based force fields and the Gibbs ensemble with particular attention paid to algorithms addressing molecule conformation sampling. The code has subsequently been extended to several ensembles, many different force fields, and solid (or at least porous) phases.
General Information
Towhee serves as a useful tool for the molecular simulation community and allows science to move forward more quickly by eliminating the need for individual research groups to rewrite routines that already exist and instead allows them to focus on algorithm advancement, force field development, and application to interesting systems.
Towhee implements several ensembles and Monte Carlo moves, and can also use the different force fields included with the distribution (see here for more information).
How to Use
send_towhee
To send Towhee to the queue system use the send_towhee utility. When executed without arguments, it shows the command syntax, which is summarized below:
send_towhee JOBNAME NODES PROCS_PER_NODE TIME [MEM] [``Other queue options'']
JOBNAME: Name of the output.
NODES: Number of nodes.
PROCS: Number of processors per node.
TIME: Time requested to the queue system, format hh:mm:ss.
MEM: Optional. Memory in GB (1 GB/core will be used if not set).
[``Other Torque options'']: Optional. Additional variables can be passed to the queuing system. See examples below. More information about these options
Examples
We send a Towhee job to 1 node, 4 processors on that node, with a requested time of 4 hours. The results will be in the OUT file.
send_towhee OUT 1 4 04:00:00
We send job2 to 2 computation nodes, 8 processors on each node, with a requested time of 192 hours, 8 GB of RAM and to start running after job 1234.arinab has finished:
send_towhee OUT 2 8 192:00:00 8 ``-W depend=afterany:1234''
We send the input job3 to 4 nodes and 4 processors on each node, with a requested time of 200:00:00 hours, 2 GB of RAM, and we request that an email be sent at the beginning and end of the calculation to the address specified.
send_towhee OUT 4 4 200:00:00 2 ``-m be -M mi.email@ehu.es''
The send_towhee command copies the contents of the directory from which the job is sent to /scratch (or to /gscratch if we use 2 or more nodes), and the calculation is done there.
Jobs Monitoring
To facilitate monitoring and/or control of the Towhee calculations, you can use remote_vi
remote_vi JOBID
It shows us the *.out file (only if the job was sent using send_towhee).
More information
LAMMPS
LAMMPS (“Large-scale Atomic/Molecular Massively Parallel Simulator”) is a molecular dynamics program from Sandia National Laboratories. LAMMPS makes use of MPI for parallel communication and is a free open-source code, distributed under the terms of the GNU General Public License.
LAMMPS was originally developed under a Cooperative Research and Development Agreement (CRADA) between two laboratories of the United States Department of Energy and three laboratories from private-sector firms. It is currently maintained and distributed by researchers at Sandia National Laboratories (taken from Wikipedia). Jun-05-2019 version.
General Information
LAMMPS is a classical molecular dynamics code that models an ensemble of particles in a liquid, solid, or gaseous state. It can model atomic, polymeric, biological, metallic, granular, and coarse-grained systems using a variety of force fields and boundary conditions.
In the most general sense, LAMMPS integrates Newton’s equations of motion for collections of atoms, molecules, or macroscopic particles that interact via short- or long-range forces with a variety of initial and/or boundary conditions. For computational efficiency LAMMPS uses neighbor lists to keep track of nearby particles. The lists are optimized for systems with particles that are repulsive at short distances, so that the local density of particles never becomes too large. On parallel machines, LAMMPS uses spatial-decomposition techniques to partition the simulation domain into small 3d sub-domains, one of which is assigned to each processor. Processors communicate and store “ghost” atom information for atoms that border their sub-domain. LAMMPS is most efficient (in a parallel sense) for systems whose particles fill a 3d rectangular box with roughly uniform density. Papers with technical details of the algorithms used in LAMMPS are listed in this section.
How to Use
send_lmp
To send LAMMPS to the queue system use the send_lmp utility. When executed without arguments, it shows the command syntax, which is summarized below:
send_lmp JOBNAME NODES PROCS_PER_NODE TIME [MEM] [``Other queue options'']
JOBNAME: Name of the input file, with extension.
NODES: Number of nodes.
PROCS: Number of processors per node.
TIME: Time requested to the queue system, format hh:mm:ss.
MEM: Optional. Memory in GB (1 GB/core will be used if not set).
[``Other Torque options'']: Optional. Additional variables can be passed to the queuing system. See examples below. More information about these options
Examples
We send the LAMMPS input job1 to 1 node, 4 processors on that node, with a requested time of 4 hours:
send_lmp job1.in 1 4 04:00:00
We send job2 to 2 computation nodes, 8 processors on each node, with a requested time of 192 hours, 8 GB of RAM and to start running after job 1234.arinab has finished:
send_lmp job2.inp 2 8 192:00:00 8 ``-W depend=afterany:1234''
We send the input job3 to 4 nodes and 4 processors on each node, with a requested time of 200:00:00 hours, 2 GB of RAM, and we request that an email be sent at the beginning and end of the calculation to the address specified.
send_lmp job3.in 4 4 200:00:00 2 ``-m be -M mi.email@ehu.es''
The send_lmp command copies the contents of the directory from which the job is sent to /scratch (or to /gscratch if we use 2 or more nodes), and the calculation is done there.
Jobs Monitoring
To facilitate monitoring and/or control of the LAMMPS calculations, you can use remote_vi
remote_vi JOBID
It shows us the *.out file (only if the job was sent using send_lmp).
More information
Gaussview
Version 5.0.9 of Gaussview, a GUI to create and analyze [intlink id=”12″ type=”post”]Gaussian[/intlink] jobs. To use it, execute:
gv
We strongly recommend using it through an NX client on Guinness. You can find information about how to configure the NX client correctly in the following step-by-step guide.
How to send Turbomole
send_turbo
To launch Turbomole calculations to the queue system, send_turbo is available. Executing send_turbo without arguments, the syntax of the command and examples are shown:
send_turbo "EXEC and Options" JOBNAME TIME[or QUEUE] PROCS[property] MEM [``Other queue options'' ]
- EXEC: Name of the Turbomole program you want to use.
- JOBNAME: Name of the Turbomole control file (usually control).
- TIME[or QUEUE]: the walltime (in hh:mm:ss format) or the queue name.
- PROCS: is the number of processors (you can not include the node type).
- MEM: memory in GB (without the unit).
- [“Other queue options”] see examples below.
Examples
To run Turbomole (jobex) with the control input file on 8 cores, with 1 GB of RAM and a walltime of 4 hours, execute:
send_turbo jobex control 04:00:00 8 1
To run Turbomole (jobex -ri) with the control input file on 16 cores, with 8 GB of RAM, a walltime of 192 hours, and starting after job 1234 has finished, execute:
send_turbo ``jobex -ri'' control 192:00:00 16 8 ``-W depend=afterany:1234''
Turbomole
Presently TURBOMOLE is one of the fastest and most stable codes available for standard quantum chemical applications. Unlike many other programs, the main focus in the development of TURBOMOLE has not been to implement all new methods and functionals, but to provide a fast and stable code which is able to treat molecules of industrial relevance at reasonable time and memory requirements.
General information
- all standard and state of the art methods for ground state calculations (Hartree-Fock, DFT, MP2, CCSD(T))
- excited state calculations at different levels (full RPA, TDDFT, CIS(D), CC2, ADC(2), …)
- geometry optimizations, transition state searches, molecular dynamics calculations
- various properties and spectra (IR, UV/Vis, Raman, CD)
- fast and reliable code; approximations like RI are used to speed up the calculations without introducing uncontrollable or unknown errors
- parallel version for almost all kind of jobs
- free graphical user interface
How to use it
The program is installed on Guinness at /software/TURBOMOLE. We have created the send_turbo script to facilitate sending Turbomole calculations to the queue. See [intlink id=”4755″ type=”post”]How to send Turbomole[/intlink].
TmoleX is also available to help with input creation and analysis of the results. There is a free download of TmoleX that you can install on your PC, or it is available on Guinness. To use TmoleX execute:
TmoleX
To cleanly stop a job after the current iteration, for example the 1234.arina job, use the command:
turbomole_stop 1234
Remember to delete the “stop” file in the directory if you want to resubmit the calculation.
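For example, assuming the stop file sits in the job's working directory, it can be removed with:
rm stop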
More Information
GROMACS
General information
2018 version. GROMACS is a versatile package to perform molecular dynamics, i.e. simulate the Newtonian equations of motion for systems with hundreds to millions of particles.
It is primarily designed for biochemical molecules like proteins, lipids and nucleic acids that have a lot of complicated bonded interactions, but since GROMACS is extremely fast at calculating the nonbonded interactions (that usually dominate simulations) many groups are also using it for research on non-biological systems, e.g. polymers.
How to use
send_gmx
To send GROMACS to the queue system use the send_gmx utility. When executed without arguments, it shows the command syntax, which is summarized below:
send_gmx ``JOB and Options'' NODES PROCS_PER_NODE TIME MEM [``Other queue options'']
``JOB and Options'': options for the calculation and the GROMACS input file name with extension. It is very important to keep the quotes.
NODES: Number of nodes.
PROCS: Number of processors per node.
TIME: Time requested to the queue system, format hh:mm:ss.
MEM: Memory in GB.
[``Other Torque options'']: Optional. Additional variables can be passed to the queuing system. See examples below. [intlink id=”244″ type=”post”]More information about these options[/intlink]
Examples
We send the GROMACS input job1 to 1 node, 4 processors on that node, with a requested time of 4 hours and 1 GB of RAM:
send_gmx ``-s job1.tpr'' 1 4 04:00:00 1
We send job2 to 2 computation nodes, 8 processors on each node, with a requested time of 192 hours, 8 GB of RAM and to start running after job 1234.arinab has finished:
send_gmx ``-s job2.tpr'' 2 8 192:00:00 8 ``-W depend=afterany:1234''
We send the input job3 to 4 nodes and 4 processors on each node, with a requested time of 200:00:00 hours, 2 GB of RAM, and we request that an email be sent at the beginning and end of the calculation to the address specified.
send_gmx ``-s job.tpr'' 4 4 200:00:00 2 ``-m be -M mi.email@ehu.es''
The send_gmx command copies the contents of the directory from which the job is sent to /scratch (or to /gscratch if we use 2 or more nodes), and the calculation is done there.
Jobs Monitoring
To facilitate monitoring and/or control of the GROMACS calculations, you can use remote_vi, which shows the md.log file (only if the job was sent using send_gmx).
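As with the other codes, it is invoked with the job identifier:
remote_vi JOBID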