Using device 1 (rank 1, local rank 1, local size 2) : Tesla V100-PCIE-16GB
Using device 0 (rank 0, local rank 0, local size 2) : Tesla V100-PCIE-16GB
 running on    2 total cores
 distrk:  each k-point on    2 cores,    1 groups
 distr:  one band on    1 cores,    2 groups

*******************************************************************************
  You are running the GPU port of VASP! When publishing results obtained with
  this version, please cite:
   - M. Hacene et al., http://dx.doi.org/10.1002/jcc.23096
   - M. Hutchinson and M. Widom, http://dx.doi.org/10.1016/j.cpc.2012.02.017
  in addition to the usual required citations (see manual).
  GPU developers: A. Anciaux-Sedrakian, C. Angerer, and M. Hutchinson.
*******************************************************************************

 -----------------------------------------------------------------------------
|                                                                             |
|           W    W    AA    RRRRR   N    N  II  N    N   GGGG   !!!           |
|           W    W   A  A   R    R  NN   N  II  NN   N  G    G  !!!           |
|           W    W  A    A  R    R  N N  N  II  N N  N  G       !!!           |
|           W WW W  AAAAAA  RRRRR   N  N N  II  N  N N  G  GGG   !            |
|           WW  WW  A    A  R   R   N   NN  II  N   NN  G    G                |
|           W    W  A    A  R    R  N    N  II  N    N   GGGG   !!!           |
|                                                                             |
|     Please note that VASP has recently been ported to GPU by means of      |
|     OpenACC. You are running the CUDA-C GPU-port of VASP, which is          |
|     deprecated and no longer actively developed, maintained, or             |
|     supported. In the near future, the CUDA-C GPU-port of VASP will be      |
|     dropped completely. We encourage you to switch to the OpenACC           |
|     GPU-port of VASP as soon as possible.                                   |
|                                                                             |
 -----------------------------------------------------------------------------

 vasp.6.2.1 16May21 (build Apr 11 2022 11:03:26) complex
 MD_VERSION_INFO: Compiled 2022-04-11T18:25:55-UTC in devlin.sd.materialsdesign.com:/home/medea2/data/build/vasp6.2.1/16685/x86_64/src/src/build/gpu from svn 16685
 This VASP executable licensed from Materials Design, Inc.

 POSCAR found type information on POSCAR  Si O  C  H  N
 POSCAR found :  5 types and  22 ions

 NWRITE = 1
 NWRITE = 1

 LDA part: xc-table for Pade appr. of Perdew

 WARNING: The GPU port of VASP has been extensively tested for:
 ALGO=Normal, Fast, and VeryFast. Other algorithms may produce incorrect
 results or yield suboptimal performance. Handle with care!

 POSCAR, INCAR and KPOINTS ok, starting setup
 creating 32 CUDA streams...
 creating 32 CUDA streams...
 creating 32 CUFFT plans with grid size 80 x 80 x 98...
 creating 32 CUFFT plans with grid size 80 x 80 x 98...
 FFT: planning ...
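The WARNING about tested ALGO settings matters for what follows: the SCF loop below leaves the blocked-Davidson steps (DAV) after five iterations and switches to a steepest-descent/damped minimizer (the SDA/DMP lines), which is consistent with something like ALGO = Damped or All and is outside the set the CUDA-C port is tested for. For reference, a minimal INCAR sketch that stays within the tested set is shown here; ALGO = Fast is an assumed safe choice, not a reproduction of this job's actual INCAR:

    # INCAR sketch (hypothetical): keep the electronic minimizer within
    # the set the CUDA-C GPU port has been tested for (see WARNING above)
    ALGO   = Fast     # GPU-tested choices: Normal, Fast, VeryFast
    NWRITE = 1        # same verbosity as echoed in this log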
 WAVECAR not read
 entering main loop
       N       E                     dE             d eps       ncg     rms          ort
DAV:   1     0.529230387903E+03    0.52923E+03   -0.71049E+03    38   0.634E+02
DAV:   2     0.167822000291E+03   -0.36141E+03   -0.33559E+03    38   0.172E+02
DAV:   3    -0.209520834859E+03   -0.37734E+03   -0.35125E+03    38   0.186E+02
DAV:   4    -0.402012984628E+03   -0.19249E+03   -0.18461E+03    38   0.180E+02
DAV:   5    -0.137042441228E+03    0.26497E+03   -0.56365E+02    38   0.104E+02
 gam= 0.000 g(H,U,f)=  0.117E+03 0.725E+00 0.225-189 ort(H,U,f) = 0.000E+00 0.000E+00 0.000E+00
SDA:   6    -0.108168676491E+03    0.28874E+02   -0.47124E+02    38   0.118E+03 0.000E+00
 gam= 0.382 g(H,U,f)=  0.325E+02 0.159E+00-0.826-160 ort(H,U,f) =-0.280E+02 0.151E+00-0.142-158
DMP:   7    -0.127267881680E+03   -0.19099E+02   -0.88207E+01    38   0.327E+02-0.278E+02
 gam= 0.382 g(H,U,f)=  0.115E+02 0.121E+00-0.982-151 ort(H,U,f) =-0.173E+01 0.171E+00-0.858-151
DMP:   8    -0.131289087079E+03   -0.40212E+01   -0.44164E+01    38   0.116E+02-0.156E+01
 gam= 0.382 g(H,U,f)=  0.399E+01 0.923E-01-0.123-161 ort(H,U,f) =-0.266E+01 0.175E+00-0.116-161
DMP:   9    -0.132943456144E+03   -0.16544E+01   -0.12541E+01    38   0.408E+01-0.248E+01
 gam= 0.382 g(H,U,f)=  0.189E+01 0.724E-01-0.111-141 ort(H,U,f) = 0.144E-01 0.151E+00-0.196-141
DMP:  10    -0.133602157038E+03   -0.65870E+00   -0.81147E+00    38   0.197E+01 0.165E+00
 gam= 0.382 g(H,U,f)=  0.576E+00 0.415E-01-0.268-101 ort(H,U,f) =-0.193E+00 0.104E+00-0.128-100
DMP:  11    -0.133994937103E+03   -0.39278E+00   -0.23332E+00    38   0.617E+00-0.895E-01
 gam= 0.382 g(H,U,f)=  0.249E+00 0.206E-01-0.549E-80 ort(H,U,f) =-0.216E-01 0.608E-01-0.264E-79
DMP:  12    -0.134120524452E+03   -0.12559E+00   -0.11395E+00    38   0.270E+00 0.392E-01

*****************************
Error running VASP parallel with MPI

#!/bin/bash
cd "/home/user/MD/TaskServer/Tasks/172.16.0.10-32000-task34701"
export PATH="/home/user/MD/Linux-x86_64/IntelMPI5/bin:$PATH"
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/home/user/MD/Linux-x86_64/IntelMPI5/lib:/home/user/MD/TaskServer/Tools/vasp-gpu6.2.1/Linux-x86_64"
"/home/user/MD/Linux-x86_64/IntelMPI5/bin/mpirun" -r ssh -np 2 "/home/user/MD/TaskServer/Tools/vasp-gpu6.2.1/Linux-x86_64/vasp_gpu"

 -----------------------------------------------------------------------------
|      _     ____    _    _    _____     _                                    |
|     | |   |  _ \  | |  | |  / ____|   | |                                   |
|     | |   | |_) | | |  | | | |  __    | |                                   |
|     |_|   |  _ <  | |  | | | | |_ |   |_|                                   |
|      _    | |_) | | |__| | | |__| |    _                                    |
|     (_)   |____/   \____/   \_____|   (_)                                   |
|                                                                             |
|     internal error in: rot.F  at line: 793                                  |
|                                                                             |
|     EDWAV: internal error, the gradient is not orthogonal  1  1  -1.421e-4  |
|                                                                             |
|     If you are not a developer, you should not encounter this problem.      |
|     Please submit a bug report.                                             |
|                                                                             |
 -----------------------------------------------------------------------------

application called MPI_Abort(MPI_COMM_WORLD, 1) - process 0
[mpiexec@baba-2] handle_pmi_cmd (../../pm/pmiserv/pmiserv_cb.c:77): Unrecognized PMI command: abort | cleaning up processes
[mpiexec@baba-2] control_cb (../../pm/pmiserv/pmiserv_cb.c:958): unable to process PMI command
[mpiexec@baba-2] HYDT_dmxu_poll_wait_for_event (../../tools/demux/demux_poll.c:76): callback returned error status
[mpiexec@baba-2] HYD_pmci_wait_for_completion (../../pm/pmiserv/pmiserv_pmci.c:500): error waiting for event
[mpiexec@baba-2] main (../../ui/mpich/mpiexec.c:1130): process manager error waiting for completion
*****************************
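The abort originates in VASP itself (the EDWAV gradient-orthogonality check in rot.F); the mpiexec/PMI lines are just Intel MPI's Hydra process manager reporting the resulting MPI_Abort, so they are a symptom rather than the root cause. A hedged rerun sketch follows: the paths and mpirun invocation are copied verbatim from the failing launcher above, while the sed edit of INCAR (forcing ALGO = Fast, one of the GPU-tested algorithms) is an assumed workaround, not part of the original task script. Longer term, the deprecation banner's advice to move to the OpenACC GPU port of VASP applies here as well.

    #!/bin/bash
    # Hypothetical rerun of the failed task; paths copied from the log above.
    cd "/home/user/MD/TaskServer/Tasks/172.16.0.10-32000-task34701"
    export PATH="/home/user/MD/Linux-x86_64/IntelMPI5/bin:$PATH"
    export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/home/user/MD/Linux-x86_64/IntelMPI5/lib:/home/user/MD/TaskServer/Tools/vasp-gpu6.2.1/Linux-x86_64"
    # Assumption: steering the run onto a GPU-tested minimizer avoids the
    # damped-minimizer path that hit the EDWAV orthogonality abort in rot.F.
    sed -i 's/^[[:space:]]*ALGO[[:space:]]*=.*/ALGO = Fast/' INCAR
    "/home/user/MD/Linux-x86_64/IntelMPI5/bin/mpirun" -r ssh -np 2 \
        "/home/user/MD/TaskServer/Tools/vasp-gpu6.2.1/Linux-x86_64/vasp_gpu"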