Commit 05577be7 authored by Marco Govoni

Removed Examples folder

parent 754ccada
These are instructions on how to run the examples for the WEST package.
These examples exercise the main programs and features
of the WEST package.
If you find that any relevant feature isn't being tested,
please contact us (or, even better, write and send us a new example).
To run the examples, you should follow this procedure (a complete
walk-through is sketched after step 4):
1) Edit the "environment_variables" file from THIS directory,
setting the following variables as needed:
BIN_DIR = directory where executables reside
PSEUDO_DIR = directory where pseudopotential files reside
TMP_DIR = directory to be used as temporary storage area
If you have downloaded the full ESPRESSO distribution, you may set
BIN_DIR=$TOPDIR/bin and PSEUDO_DIR=$TOPDIR/pseudo, where $TOPDIR is
the root of the ESPRESSO source tree.
TMP_DIR must be a directory you have read and write access to, with
enough available space to host the temporary files produced by the
example runs, and preferably offering high I/O performance (e.g.,
avoid NFS-mounted directories).
2) Specify a parallel launcher (such as "poe" or "mpirun") and the
number of processors by editing the PARA_PREFIX and PARA_POSTFIX
variables in the "environment_variables" file.
Parallel executables will be run by a command like this:
$PARA_PREFIX pw.x $PARA_POSTFIX < file.in > file.out
For example, if the command line is like this (as for an IBM SP):
poe pw.x -procs 4 < file.in > file.out
you should set PARA_PREFIX="poe", PARA_POSTFIX="-procs 4".
If in doubt, ask your system administrator for instructions.
3) To run a single example, go to the corresponding directory (for
instance, "example/example01") and execute:
./run_example
This will create a subdirectory "results", containing the input and
output files generated by the calculation.
Some examples take only a few seconds to run, while others may
require several minutes depending on your system.
4) In each example's directory, the "reference" subdirectory contains
verified output files that you can check your results against.
The reference results were generated on a local cluster.
On different architectures the precise numbers could be slightly
different, in particular if different FFT dimensions are
automatically selected. For this reason, a plain "diff" of your
results against the reference data is not conclusive by itself;
comparing the two requires some human inspection of the results.
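A complete walk-through, assuming a bash shell (all paths and the MPI
launcher below are placeholders to be adapted to your machine), might
look like this:

   # 1) edit "environment_variables" in this directory so that it
   #    contains settings along these lines:
   #      BIN_DIR=/path/to/espresso/bin
   #      PSEUDO_DIR=/path/to/espresso/pseudo
   #      TMP_DIR=/scratch/$USER/west_examples
   #      PARA_PREFIX="mpirun -n 4"
   #      PARA_POSTFIX=""
   # 2) run one example
   cd example/example01
   ./run_example
   # 3) compare your results with the reference data (small numerical
   #    differences between architectures are expected)
   diff -r results reference | less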
-----------------------------------------------------------------------
LIST AND CONTENT OF THE EXAMPLES
For each example, more detailed information is provided by the README file
contained in the corresponding directory.
example01:
This example shows how to use pw.x, wstat.x and wfreq.x to compute
the GW electronic structure for SiH4.
The input parameters are deliberately under-converged so that the run
completes quickly; the results are intended for testing purposes only.
example02:
This example shows how to use pw.x, wstat.x and wfreq.x to compute
the GW electronic structure for H2O.
The input parameters are deliberately under-converged so that the run
completes quickly; the results are intended for testing purposes only.
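In both examples the run_example script chains the three codes. A
minimal sketch of the sequence follows; the file names and the exact
invocation are illustrative assumptions, so see run_example and the
README in each example directory for the commands actually used:

   $PARA_PREFIX pw.x    $PARA_POSTFIX < pw.in    > pw.out      # DFT ground state (SCF)
   $PARA_PREFIX wstat.x $PARA_POSTFIX < wstat.in > wstat.out   # static dielectric screening (PDEP basis)
   $PARA_PREFIX wfreq.x $PARA_POSTFIX < wfreq.in > wfreq.out   # GW quasiparticle corrections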
#!/bin/bash
# clean_all -- remove the "results" directories produced by previous example runs
\rm -rf */results* >& /dev/null
# environment_variables -- settings for running West examples
######## YOU MAY NEED TO EDIT THIS FILE TO MATCH YOUR CONFIGURATION ########
# BIN_DIR = path of compiled executables
# Usually this is $PREFIX/bin, where $PREFIX is the root of the
# Quantum ESPRESSO source tree.
# PSEUDO_DIR = path of pseudopotentials required by the examples
# if required pseudopotentials are not found in $PSEUDO_DIR,
# example scripts will try to download them from NETWORK_PSEUDO
# TMP_DIR = temporary directory to be used by the examples
# Make sure that it is writable by you and that it doesn't contain
# any valuable data (EVERYTHING THERE WILL BE DESTROYED)
# The following should be good for most cases
PREFIX=`cd ../../.. ; pwd`
BIN_DIR=$PREFIX/bin
PSEUDO_DIR=$PREFIX/pseudo
# Beware: everything in $TMP_DIR will be destroyed !
TMP_DIR=$PREFIX/tempdir
# There should be no need to change anything below this line
NETWORK_PSEUDO=http://www.quantum-espresso.org/wp-content/uploads/upf_files/
# wget or curl needed if some PP has to be downloaded from web site
# a script wizard will surely find a better way to detect what is available
if test "`which curl`" = "" ; then
    if test "`which wget`" = "" ; then
        echo "wget or curl not found: will not be able to download missing PP"
    else
        WGET="wget -O"
        # echo "wget found"
    fi
else
    WGET="curl -o"
    # echo "curl found"
fi
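# Illustrative sketch only (not executed from this file): the run_example
# scripts are expected to fetch a missing pseudopotential roughly like this:
#   $WGET $PSEUDO_DIR/Si.pz-vbc.UPF $NETWORK_PSEUDO/Si.pz-vbc.UPF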
# To run the WEST programs on a parallel machine, you may have to
# add the appropriate commands (poe, mpirun, mpprun...) and/or options
# (specifying number of processors, pools...) before and after the
# executable's name. That depends on how your machine is configured.
# For example on an IBM SP4:
#
# poe pw.x -procs 4 < file.in > file.out
# ^^^ PARA_PREFIX ^^^^^^^^ PARA_POSTFIX
#
# To run on a single processor, you can usually leave them empty.
# BEWARE: most tests and examples are devised to be run serially or on
# a small number of processors; do not use them to benchmark parallelism,
# and do not run them on too many processors.
PARA_PREFIX="mpirun -n 2"
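# Other common choices (illustrative assumptions; uncomment the one that
# matches your system and comment out the line above):
# PARA_PREFIX=""                # serial run, no launcher
# PARA_PREFIX="mpirun -np 4"    # generic MPI launcher
# PARA_PREFIX="srun -n 4"       # SLURM-managed clusters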
#
# available flags:
# -nimage : Number of images
#
PARA_POSTFIX=""
# function to test the exit status of a job
# ($ECHO is expected to be defined by the calling run_example script)
check_failure () {
    # usage: check_failure $?
    if test $1 != 0
    then
        $ECHO "Error condition encountered during test: exit status = $1"
        $ECHO "Aborting"
        exit 1
    fi
}
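# Illustrative usage inside a run_example script (a sketch only; the
# actual scripts may wrap this differently):
#   $PARA_PREFIX $BIN_DIR/pw.x $PARA_POSTFIX < pw.in > pw.out
#   check_failure $?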
Details
This example shows how to use pw.x, wstat.x and wfreq.x to compute
the GW electronic structure for SiH4.
The input parameters are deliberately under-converged so that the run
completes quickly; the results are intended for testing purposes only.
# band E0[eV] Sx[eV] Vxcl[eV] Vxcnl[eV] EHF[eV]
1.000000 -13.273057 -17.544266 -11.085810 0.000000 -19.731513
2.000000 -8.231249 -15.540857 -10.940353 0.000000 -12.831753
# iks ib Eks[eV] Ein[eV] Sc_Ein[eV] Eout[eV] Diff.[eV]
1.000000 1.000000 -13.273057 -15.812042 3.830298 -15.847406 -0.035364
1.000000 2.000000 -8.231249 -11.832966 1.000516 -11.832966 0.000000
# iks ib Eks[eV] Ein[eV] Sc_Ein[eV] Eout[eV] Diff.[eV]
1.000000 1.000000 -13.273057 -13.273057 2.337309 -16.920141 -3.647085
1.000000 2.000000 -8.231249 -8.231249 0.446467 -11.867133 -3.635883
# iks ib Eks[eV] Ein[eV] Sc_Ein[eV] Eout[eV] Diff.[eV]
1.000000 1.000000 -13.273057 -16.920141 5.832835 -15.377353 1.542788
1.000000 2.000000 -8.231249 -11.867133 1.004027 -11.832966 0.034167
# iks ib Eks[eV] Ein[eV] Sc_Ein[eV] Eout[eV] Diff.[eV]
1.000000 1.000000 -13.273057 -15.377353 3.168889 -15.812042 -0.434689
1.000000 2.000000 -8.231249 -11.832966 1.000516 -11.832966 0.000000
# band E0[eV] EHF[eV] Eqp[eV] Eqp-E0[eV] Sc_Eqp[eV] Width[eV]
1.000000 -13.273057 -19.731513 -15.847406 -2.574349 3.830298 0.088256
2.000000 -8.231249 -12.831753 -11.832966 -3.601717 1.000516 0.007226
# iprt eigenv. conv.
1.000000 -1.292726 1.000000
2.000000 -1.231378 1.000000
3.000000 -1.231378 1.000000
4.000000 -1.231378 1.000000
5.000000 -0.819670 1.000000
6.000000 -0.819669 1.000000
7.000000 -0.819669 1.000000
8.000000 -0.638517 1.000000
9.000000 -0.635924 1.000000
10.000000 -0.635924 1.000000
# iprt eigenv. conv.
1.000000 -1.287405 0.000000
2.000000 -1.231072 0.000000
3.000000 -1.230706 0.000000
4.000000 -1.229563 0.000000
5.000000 -0.818134 0.000000
6.000000 -0.813446 0.000000
7.000000 -0.804904 0.000000
8.000000 -0.631856 0.000000
9.000000 -0.626678 0.000000
10.000000 -0.482622 0.000000
# iprt eigenv. conv.
1.000000 -1.292638 0.000000
2.000000 -1.231376 0.000000
3.000000 -1.231372 0.000000
4.000000 -1.231361 0.000000
5.000000 -0.819632 0.000000
6.000000 -0.819439 0.000000
7.000000 -0.818102 0.000000
8.000000 -0.635905 0.000000
9.000000 -0.635442 0.000000
10.000000 -0.565905 0.000000
# iprt eigenv. conv.
1.000000 -1.292726 0.000000
2.000000 -1.231378 1.000000
3.000000 -1.231378 1.000000
4.000000 -1.231378 0.000000
5.000000 -0.819669 0.000000
6.000000 -0.819668 0.000000
7.000000 -0.819649 0.000000
8.000000 -0.637401 0.000000
9.000000 -0.635921 0.000000
10.000000 -0.635756 0.000000
# iprt eigenv. conv.
1.000000 -1.292726 1.000000
2.000000 -1.231378 1.000000
3.000000 -1.231378 1.000000
4.000000 -1.231378 1.000000
5.000000 -0.819669 1.000000
6.000000 -0.819669 1.000000
7.000000 -0.819669 0.000000
8.000000 -0.638490 0.000000
9.000000 -0.635924 1.000000
10.000000 -0.635921 0.000000
# iprt eigenv. conv.
1.000000 -1.292726 1.000000
2.000000 -1.231378 1.000000
3.000000 -1.231378 1.000000
4.000000 -1.231378 1.000000
5.000000 -0.819670 1.000000
6.000000 -0.819669 1.000000
7.000000 -0.819669 1.000000
8.000000 -0.638513 0.000000
9.000000 -0.635924 1.000000
10.000000 -0.635923 1.000000
&control
calculation = 'scf'
restart_mode = 'from_scratch'
pseudo_dir = '/Users/marco/Work/WEST_PROJECT/QE_BEFORE_RELEASE/pseudo/'
outdir = '/Users/marco/Work/WEST_PROJECT/QE_BEFORE_RELEASE/tempdir/'
prefix = 'sih4'
wf_collect = .TRUE.
/
&system
ibrav = 1
celldm(1) = 20
nat = 5
ntyp = 2
ecutwfc = 25.0
nbnd = 10
/
&electrons
conv_thr = 1.d-12
diago_full_acc = .TRUE.
/
ATOMIC_SPECIES
Si 28.0855 Si.pz-vbc.UPF
H 1.00794 H.pz-vbc.UPF
ATOMIC_POSITIONS angstrom
Si 0.0000 0.0000 0.0000
H 0.8544 0.8544 0.8544
H -0.8544 -0.8544 0.8544
H -0.8544 0.8544 -0.8544
H 0.8544 -0.8544 -0.8544
K_POINTS {gamma}
Program PWSCF v.5.2.0 (svn rev. 11583) starts on 18Jun2015 at 17:36:13
This program is part of the open-source Quantum ESPRESSO suite
for quantum simulation of materials; please cite
"P. Giannozzi et al., J. Phys.:Condens. Matter 21 395502 (2009);
URL http://www.quantum-espresso.org",
in publications or presentations arising from this work. More details at
http://www.quantum-espresso.org/quote
Parallel version (MPI & OpenMP), running on 8 processor cores
Number of MPI processes: 2
Threads/MPI process: 4
R & G space division: proc/nbgrp/npool/nimage = 2
Waiting for input...
Reading input from standard input
Current dimensions of program PWSCF are:
Max number of different atomic species (ntypx) = 10
Max number of k-points (npk) = 40000
Max angular momentum in pseudopotentials (lmaxx) = 3
file H.pz-vbc.UPF: wavefunction(s) 1S renormalized
gamma-point specific algorithms are used
Subspace diagonalization in iterative solution of the eigenvalue problem:
a serial algorithm will be used
Parallelization info
--------------------
sticks: dense smooth PW G-vecs: dense smooth PW
Min 1590 1590 395 67521 67521 8437
Max 1591 1591 398 67522 67522 8442
Sum 3181 3181 793 135043 135043 16879
Tot 1591 1591 397
bravais-lattice index = 1
lattice parameter (alat) = 20.0000 a.u.
unit-cell volume = 8000.0000 (a.u.)^3
number of atoms/cell = 5
number of atomic types = 2
number of electrons = 8.00
number of Kohn-Sham states= 10
kinetic-energy cutoff = 25.0000 Ry
charge density cutoff = 100.0000 Ry
convergence threshold = 1.0E-12
mixing beta = 0.7000
number of iterations used = 8 plain mixing
Exchange-correlation = SLA PZ NOGX NOGC ( 1 1 0 0 0 0)
celldm(1)= 20.000000 celldm(2)= 0.000000 celldm(3)= 0.000000
celldm(4)= 0.000000 celldm(5)= 0.000000 celldm(6)= 0.000000
crystal axes: (cart. coord. in units of alat)
a(1) = ( 1.000000 0.000000 0.000000 )
a(2) = ( 0.000000 1.000000 0.000000 )
a(3) = ( 0.000000 0.000000 1.000000 )
reciprocal axes: (cart. coord. in units 2 pi/alat)
b(1) = ( 1.000000 0.000000 0.000000 )
b(2) = ( 0.000000 1.000000 0.000000 )
b(3) = ( 0.000000 0.000000 1.000000 )
PseudoPot. # 1 for Si read from file:
Si.pz-vbc.UPF
MD5 check sum: 6dfa03ddd5817404712e03e4d12deb78
Pseudo is Norm-conserving, Zval = 4.0
Generated by new atomic code, or converted to UPF format
Using radial grid of 431 points, 2 beta functions with:
l(1) = 0
l(2) = 1
PseudoPot. # 2 for H read from file:
H.pz-vbc.UPF
MD5 check sum: 90becb985b714f09656c73597998d266
Pseudo is Norm-conserving, Zval = 1.0
Generated by new atomic code, or converted to UPF format
Using radial grid of 131 points, 0 beta functions with:
atomic species valence mass pseudopotential
Si 4.00 28.08550 Si( 1.00)
H 1.00 1.00794 H ( 1.00)
24 Sym. Ops. (no inversion) found
Cartesian axes
site n. atom positions (alat units)
1 Si tau( 1) = ( 0.0000000 0.0000000 0.0000000 )
2 H tau( 2) = ( 0.0807291 0.0807291 0.0807291 )
3 H tau( 3) = ( -0.0807291 -0.0807291 0.0807291 )
4 H tau( 4) = ( -0.0807291 0.0807291 -0.0807291 )
5 H tau( 5) = ( 0.0807291 -0.0807291 -0.0807291 )
number of k points= 1
cart. coord. in units 2pi/alat
k( 1) = ( 0.0000000 0.0000000 0.0000000), wk = 2.0000000
Dense grid: 67522 G-vectors FFT dimensions: ( 64, 64, 64)
Largest allocated arrays est. size (Mb) dimensions
Kohn-Sham Wavefunctions 0.64 Mb ( 4219, 10)
NL pseudopotentials 0.26 Mb ( 4219, 4)
Each V/rho on FFT grid 2.00 Mb ( 131072)
Each G-vector array 0.26 Mb ( 33761)
G-vector shells 0.01 Mb ( 846)
Largest temporary arrays est. size (Mb) dimensions
Auxiliary wavefunctions 1.29 Mb ( 4219, 40)
Each subspace H/S matrix 0.01 Mb ( 40, 40)
Each <psi_i|beta_j> matrix 0.00 Mb ( 4, 10)
Arrays for rho mixing 16.00 Mb ( 131072, 8)
Initial potential from superposition of free atoms
Check: negative starting charge= -0.010685
starting charge 7.99940, renormalised to 8.00000
negative rho (up, down): 1.069E-02 0.000E+00
Starting wfc are 8 randomized atomic wfcs + 2 random wfc
total cpu time spent up to now is 0.3 secs
Self-consistent Calculation
iteration # 1 ecut= 25.00 Ry beta=0.70
Davidson diagonalization with overlap
ethr = 1.00E-02, avg # of iterations = 5.0
negative rho (up, down): 3.104E-03 0.000E+00
total cpu time spent up to now is 0.9 secs
total energy = -12.37779928 Ry
Harris-Foulkes estimate = -12.51576949 Ry
estimated scf accuracy < 0.28116718 Ry
iteration # 2 ecut= 25.00 Ry beta=0.70
Davidson diagonalization with overlap
ethr = 3.51E-03, avg # of iterations = 4.0
negative rho (up, down): 7.640E-04 0.000E+00
total cpu time spent up to now is 1.3 secs
total energy = -12.40538561 Ry
Harris-Foulkes estimate = -12.40831109 Ry
estimated scf accuracy < 0.00658208 Ry
iteration # 3 ecut= 25.00 Ry beta=0.70
Davidson diagonalization with overlap
ethr = 8.23E-05, avg # of iterations = 9.0
negative rho (up, down): 2.295E-04 0.000E+00
total cpu time spent up to now is 1.9 secs
total energy = -12.40662841 Ry
Harris-Foulkes estimate = -12.40680206 Ry
estimated scf accuracy < 0.00040397 Ry
iteration # 4 ecut= 25.00 Ry beta=0.70
Davidson diagonalization with overlap
WARNING: 1 eigenvalues not converged in regterg
c_bands: 1 eigenvalues not converged
ethr = 5.05E-06, avg # of iterations = 20.0
negative rho (up, down): 5.085E-07 0.000E+00
total cpu time spent up to now is 2.8 secs
total energy = -12.40669945 Ry
Harris-Foulkes estimate = -12.40669757 Ry
estimated scf accuracy < 0.00001184 Ry
iteration # 5 ecut= 25.00 Ry beta=0.70
Davidson diagonalization with overlap
ethr = 1.48E-07, avg # of iterations = 13.0
total cpu time spent up to now is 3.4 secs
total energy = -12.40670016 Ry
Harris-Foulkes estimate = -12.40670023 Ry
estimated scf accuracy < 0.00000017 Ry
iteration # 6 ecut= 25.00 Ry beta=0.70
Davidson diagonalization with overlap
WARNING: 1 eigenvalues not converged in regterg
c_bands: 1 eigenvalues not converged
ethr = 2.12E-09, avg # of iterations = 20.0
total cpu time spent up to now is 4.3 secs
total energy = -12.40670020 Ry
Harris-Foulkes estimate = -12.40670021 Ry
estimated scf accuracy < 8.3E-09 Ry
iteration # 7 ecut= 25.00 Ry beta=0.70
Davidson diagonalization with overlap
WARNING: 1 eigenvalues not converged in regterg
c_bands: 1 eigenvalues not converged
ethr = 1.04E-10, avg # of iterations = 20.0
total cpu time spent up to now is 5.2 secs
total energy = -12.40670020 Ry
Harris-Foulkes estimate = -12.40670020 Ry
estimated scf accuracy < 2.0E-10 Ry
iteration # 8 ecut= 25.00 Ry beta=0.70
Davidson diagonalization with overlap
ethr = 2.53E-12, avg # of iterations = 13.0
total cpu time spent up to now is 5.9 secs
total energy = -12.40670020 Ry
Harris-Foulkes estimate = -12.40670020 Ry
estimated scf accuracy < 5.1E-11 Ry
iteration # 9 ecut= 25.00 Ry beta=0.70
Davidson diagonalization with overlap
ethr = 6.43E-13, avg # of iterations = 4.0
total cpu time spent up to now is 6.4 secs
End of self-consistent calculation
k = 0.0000 0.0000 0.0000 ( 8440 PWs) bands (ev):
-13.2731 -8.2312 -8.2312 -8.2312 -0.4738 0.1471 0.1471 0.1471
0.9164 0.9164
highest occupied, lowest unoccupied level (ev): -8.2312 -0.4738
! total energy = -12.40670020 Ry
Harris-Foulkes estimate = -12.40670020 Ry
estimated scf accuracy < 9.8E-13 Ry
The total energy is the sum of the following terms:
one-electron contribution = -26.03191561 Ry
hartree contribution = 13.45254609 Ry
xc contribution = -4.94684256 Ry
ewald contribution = 5.11951188 Ry
convergence has been achieved in 9 iterations
Writing output data file sih4.save
init_run : 0.36s CPU 0.27s WALL ( 1 calls)
electrons : 9.30s CPU 6.09s WALL ( 1 calls)
Called by init_run:
wfcinit : 0.14s CPU 0.09s WALL ( 1 calls)
potinit : 0.12s CPU 0.08s WALL ( 1 calls)
Called by electrons:
c_bands : 7.48s CPU 4.85s WALL ( 9 calls)
sum_band : 1.09s CPU 0.68s WALL ( 9 calls)
v_of_rho : 0.43s CPU 0.25s WALL ( 10 calls)
mix_rho : 0.31s CPU 0.28s WALL ( 9 calls)
Called by c_bands:
init_us_2 : 0.05s CPU 0.05s WALL ( 19 calls)
regterg : 7.44s CPU 4.81s WALL ( 9 calls)
Called by sum_band:
Called by *egterg:
h_psi : 7.06s CPU 4.36s WALL ( 118 calls)
g_psi : 0.07s CPU 0.07s WALL ( 108 calls)
rdiaghg : 0.03s CPU 0.03s WALL ( 117 calls)
Called by h_psi:
add_vuspsi : 0.01s CPU 0.01s WALL ( 118 calls)
General routines
calbec : 0.02s CPU 0.02s WALL ( 118 calls)
fft : 0.87s CPU 0.56s WALL ( 39 calls)
fftw : 7.09s CPU 4.17s WALL ( 559 calls)
Parallel routines
fft_scatter : 0.83s CPU 0.84s WALL ( 598 calls)
PWSCF : 9.73s CPU 6.45s WALL
This run was terminated on: 17:36:20 18Jun2015
=------------------------------------------------------------------------------=
JOB DONE.
=------------------------------------------------------------------------------=
&input_west
qe_prefix = 'sih4',
west_prefix = 'sih4',
outdir = '/Users/marco/Work/WEST_PROJECT/QE_BEFORE_RELEASE/tempdir/',
/
&wstat_control
wstat_calculation = 'S'
n_pdep_eigen = 10
trev_pdep = 1.d-5
/
&WFREQ_CONTROL
wfreq_calculation = "XWGQ"
n_pdep_eigen_to_use = 10
qp_bandrange(1) = 1
qp_bandrange(2) = 2
macropol_calculation = "N"
n_lanczos = 30
n_imfreq = 100
n_refreq = 100
ecut_imfreq = 120.0
ecut_refreq = 3.0
n_secant_maxiter = 7
/
Program WFREQ v. 1.0.1 svn rev. 15 starts on 18Jun2015 at 17:37:51
This program is part of the open-source West suite
for massively parallel calculations of excited states in materials; please cite
"M. Govoni et al., J. Chem. Theory Comput. 11, 2680 (2015)
URL http://www.west-code.org",
in publications or presentations arising from this work.
Based on the Quantum ESPRESSO v. 5.2.0 svn rev. 11583
--------------------------------------------------------------------------------------------
**MPI** Parallelization Status
--------------------------------------------------------------------------------------------
2 1 1 1 2
--------------------------------------------------------------------------------------------
N = I X P X B X Z