 Using device 4 (rank 4, local rank 4, local size 6) : Tesla K80
 Using device 1 (rank 1, local rank 1, local size 6) : Tesla K80
 Using device 3 (rank 3, local rank 3, local size 6) : Tesla K80
 Using device 2 (rank 2, local rank 2, local size 6) : Tesla K80
 Using device 0 (rank 0, local rank 0, local size 6) : Tesla K80
 Using device 5 (rank 5, local rank 5, local size 6) : Tesla K80
 running on    6 total cores
 distrk:  each k-point on    6 cores,    1 groups
 distr:  one band on    1 cores,    6 groups

*******************************************************************************
  You are running the GPU port of VASP! When publishing results obtained with
  this version, please cite:
   - M. Hacene et al., http://dx.doi.org/10.1002/jcc.23096
   - M. Hutchinson and M. Widom, http://dx.doi.org/10.1016/j.cpc.2012.02.017
  in addition to the usual required citations (see manual).
  GPU developers: A. Anciaux-Sedrakian, C. Angerer, and M. Hutchinson.
*******************************************************************************

 -----------------------------------------------------------------------------
  W A R N I N G !!!
  Please note that VASP has recently been ported to GPU by means of
  OpenACC. You are running the CUDA-C GPU-port of VASP, which is
  deprecated and no longer actively developed, maintained, or
  supported. In the near future, the CUDA-C GPU-port of VASP will be
  dropped completely. We encourage you to switch to the OpenACC
  GPU-port of VASP as soon as possible.
 -----------------------------------------------------------------------------

 vasp.6.2.1 16May21 (build Apr 11 2022 11:03:26) complex
 MD_VERSION_INFO: Compiled 2022-04-11T18:25:55-UTC in devlin.sd.materialsdesign.com:/home/medea2/data/build/vasp6.2.1/16685/x86_64/src/src/build/gpu from svn 16685
 This VASP executable licensed from Materials Design, Inc.
 POSCAR found type information on POSCAR  Sn O  H
 POSCAR found :  3 types and      22 ions

 NWRITE = 1
 NWRITE = 1
 NWRITE = 1
 NWRITE = 1
 NWRITE = 1
 NWRITE = 1

 -----------------------------------------------------------------------------
  W A R N I N G !!!
  For optimal performance we recommend to set
    NCORE = 2 up to number-of-cores-per-socket
  NCORE specifies how many cores store one orbital (NPAR=cpu/NCORE).
  This setting can greatly improve the performance of VASP for DFT.
  The default, NCORE=1 might be grossly inefficient on modern
  multi-core architectures or massively parallel machines. Do your
  own testing! More info at https://www.vasp.at/wiki/index.php/NCORE
  Unfortunately you need to use the default for GW and RPA
  calculations (for HF NCORE is supported but not extensively tested
  yet).
 -----------------------------------------------------------------------------
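As an aside on the NCORE warning above: the tag is set in the INCAR of the task directory that appears in the mpirun script further down. Below is a minimal sketch of applying that recommendation, assuming the INCAR does not already set the tag; the helper is hypothetical, the value 2 is simply the lower bound quoted by the warning (NPAR then follows as cpu/NCORE), and the CUDA-C GPU port shown in this log may still require the default NCORE=1, so treat this purely as an illustration and benchmark it yourself as the warning suggests.

    #!/bin/bash
    # Hypothetical helper: append the recommended parallelization tag to this
    # job's INCAR if it is not already present. Values between 2 and the number
    # of cores per socket are worth testing for plain DFT runs.
    cd "/home/user/MD/TaskServer/Tasks/140.123.79.184-32000-task43923"
    grep -q '^[[:space:]]*NCORE' INCAR || echo 'NCORE = 2' >> INCAR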
 LDA part: xc-table for Pade appr. of Perdew

 WARNING: The GPU port of VASP has been extensively tested for:
 ALGO=Normal, Fast, and VeryFast. Other algorithms may produce
 incorrect results or yield suboptimal performance. Handle with care!

 -----------------------------------------------------------------------------
  W A R N I N G !!!
  The distance between some ions is very small. Please check the
  nearest-neighbor list in the OUTCAR file.
  I HOPE YOU KNOW WHAT YOU ARE DOING!
 -----------------------------------------------------------------------------

 POSCAR, INCAR and KPOINTS ok, starting setup
 creating 32 CUDA streams...
 creating 32 CUDA streams...
 creating 32 CUDA streams...
 creating 32 CUDA streams...
 creating 32 CUDA streams...
 creating 32 CUDA streams...
 creating 32 CUFFT plans with grid size 90 x 24 x 24...
 creating 32 CUFFT plans with grid size 90 x 24 x 24...
 creating 32 CUFFT plans with grid size 90 x 24 x 24...
 creating 32 CUFFT plans with grid size 90 x 24 x 24...
 creating 32 CUFFT plans with grid size 90 x 24 x 24...
 creating 32 CUFFT plans with grid size 90 x 24 x 24...
 FFT: planning ...
 WAVECAR not read
 entering main loop
       N       E                     dE             d eps       ncg     rms          rms(c)
DAV:   1     0.244662064313E+04    0.24466E+04   -0.69161E+04  1158   0.188E+03
DAV:   2     0.443049042656E+03   -0.20036E+04   -0.19271E+04  1164   0.607E+02
DAV:   3     0.125714138158E+02   -0.43048E+03   -0.41220E+03  1176   0.247E+02
DAV:   4    -0.265750955913E+02   -0.39147E+02   -0.37165E+02  1164   0.844E+01
DAV:   5    -0.284849131380E+02   -0.19098E+01   -0.18995E+01  1380   0.177E+01  0.463E+01
*****************************
Error running VASP parallel with MPI
#!/bin/bash
cd "/home/user/MD/TaskServer/Tasks/140.123.79.184-32000-task43923"
export PATH="/home/user/MD/Linux-x86_64/IntelMPI5/bin:$PATH"
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/home/user/MD/Linux-x86_64/IntelMPI5/lib:/home/user/MD/TaskServer/Tools/vasp-gpu6.2.1/Linux-x86_64"
"/home/user/MD/Linux-x86_64/IntelMPI5/bin/mpirun" -r ssh -np 6 "/home/user/MD/TaskServer/Tools/vasp-gpu6.2.1/Linux-x86_64/vasp_gpu"

forrtl: severe (174): SIGSEGV, segmentation fault occurred
Image              PC                Routine            Line        Source
vasp_gpu           0000000005445AD4  Unknown               Unknown  Unknown
libpthread-2.22.s  00007FAE7DA72C70  Unknown               Unknown  Unknown
vasp_gpu           000000000540E3EB  Unknown               Unknown  Unknown
vasp_gpu           0000000000F00842  Unknown               Unknown  Unknown
vasp_gpu           0000000000F844A5  Unknown               Unknown  Unknown
vasp_gpu           0000000001813C76  Unknown               Unknown  Unknown
vasp_gpu           000000000043FC9E  Unknown               Unknown  Unknown
libc-2.22.so       00007FAE6DF98725  __libc_start_main     Unknown  Unknown
vasp_gpu           000000000043FB29  Unknown               Unknown  Unknown

forrtl: error (69): process interrupted (SIGINT)
Image              PC                Routine            Line        Source
vasp_gpu           0000000005445D70  Unknown               Unknown  Unknown
libpthread-2.22.s  00007FE7BF5E9C70  Unknown               Unknown  Unknown
libmpi.so.12       00007FE7B03EECFB  PMPIDI_CH3I_Progr     Unknown  Unknown
libmpi.so.12       00007FE7B05923D0  Unknown               Unknown  Unknown
libmpi.so.12       00007FE7B039AAA0  Unknown               Unknown  Unknown
libmpi.so.12       00007FE7B039E6E6  PMPI_Allreduce        Unknown  Unknown
libmpifort.so.12   00007FE7B101EFF1  mpi_allreduce_        Unknown  Unknown
vasp_gpu           00000000004A7E64  Unknown               Unknown  Unknown
vasp_gpu           0000000000F00C74  Unknown               Unknown  Unknown
vasp_gpu           0000000000F844A5  Unknown               Unknown  Unknown
vasp_gpu           0000000001813C76  Unknown               Unknown  Unknown
vasp_gpu           000000000043FC9E  Unknown               Unknown  Unknown
libc-2.22.so       00007FE7AFB0F725  __libc_start_main     Unknown  Unknown
vasp_gpu           000000000043FB29  Unknown               Unknown  Unknown
 [forrtl: error (69): process interrupted (SIGINT) is reported by the four
  remaining ranks as well, each with a traceback identical to the one above
  apart from the addresses: libpthread -> PMPIDI_CH3I_Progr -> PMPI_Allreduce
  -> mpi_allreduce_ -> vasp_gpu -> __libc_start_main]
*****************************