Using device 0 (rank 0, local rank 0, local size 6) : Tesla K80
Using device 3 (rank 3, local rank 3, local size 6) : Tesla K80
Using device 2 (rank 2, local rank 2, local size 6) : Tesla K80
Using device 5 (rank 5, local rank 5, local size 6) : Tesla K80
Using device 1 (rank 1, local rank 1, local size 6) : Tesla K80
Using device 4 (rank 4, local rank 4, local size 6) : Tesla K80

 running on 6 total cores
 distrk: each k-point on 6 cores, 1 groups
 distr: one band on 1 cores, 6 groups

*******************************************************************************
 You are running the GPU port of VASP! When publishing results obtained with
 this version, please cite:
  - M. Hacene et al., http://dx.doi.org/10.1002/jcc.23096
  - M. Hutchinson and M. Widom, http://dx.doi.org/10.1016/j.cpc.2012.02.017
 in addition to the usual required citations (see manual).
 GPU developers: A. Anciaux-Sedrakian, C. Angerer, and M. Hutchinson.
*******************************************************************************

 -----------------------------------------------------------------------------
|                                W A R N I N G
|
|   Please note that VASP has recently been ported to GPU by means of
|   OpenACC. You are running the CUDA-C GPU-port of VASP, which is
|   deprecated and no longer actively developed, maintained, or
|   supported. In the near future, the CUDA-C GPU-port of VASP will be
|   dropped completely. We encourage you to switch to the OpenACC
|   GPU-port of VASP as soon as possible.
 -----------------------------------------------------------------------------

 vasp.6.2.1 16May21 (build Apr 11 2022 11:03:26) complex
 MD_VERSION_INFO: Compiled 2022-04-11T18:25:55-UTC in
 devlin.sd.materialsdesign.com:/home/medea2/data/build/vasp6.2.1/16685/x86_64/src/src/build/gpu
 from svn 16685
 This VASP executable licensed from Materials Design, Inc.

 POSCAR found type information on POSCAR  Sn O H
 POSCAR found : 3 types and 22 ions

 NWRITE = 1
 NWRITE = 1
 NWRITE = 1
 NWRITE = 1
 NWRITE = 1
 NWRITE = 1

 -----------------------------------------------------------------------------
|                                W A R N I N G
|
|   For optimal performance we recommend to set
|     NCORE = 2 up to number-of-cores-per-socket
|   NCORE specifies how many cores store one orbital (NPAR=cpu/NCORE).
|   This setting can greatly improve the performance of VASP for DFT.
|   The default, NCORE=1 might be grossly inefficient on modern
|   multi-core architectures or massively parallel machines. Do your
|   own testing! More info at https://www.vasp.at/wiki/index.php/NCORE
|   Unfortunately you need to use the default for GW and RPA
|   calculations (for HF NCORE is supported but not extensively tested
|   yet).
 -----------------------------------------------------------------------------

 LDA part: xc-table for Pade appr. of Perdew

 WARNING: The GPU port of VASP has been extensively tested for:
 ALGO=Normal, Fast, and VeryFast. Other algorithms may produce incorrect
 results or yield suboptimal performance. Handle with care!

 -----------------------------------------------------------------------------
|                                W A R N I N G
|
|   The distance between some ions is very small. Please check the
|   nearest-neighbor list in the OUTCAR file.
|   I HOPE YOU KNOW WHAT YOU ARE DOING!
 -----------------------------------------------------------------------------

 POSCAR, INCAR and KPOINTS ok, starting setup

 creating 32 CUDA streams...
 creating 32 CUDA streams...
 creating 32 CUDA streams...
 creating 32 CUDA streams...
 creating 32 CUDA streams...
 creating 32 CUDA streams...
 creating 32 CUFFT plans with grid size 90 x 24 x 24...
 creating 32 CUFFT plans with grid size 90 x 24 x 24...
 creating 32 CUFFT plans with grid size 90 x 24 x 24...
 creating 32 CUFFT plans with grid size 90 x 24 x 24...
 creating 32 CUFFT plans with grid size 90 x 24 x 24...
 creating 32 CUFFT plans with grid size 90 x 24 x 24...

 FFT: planning ...
 WAVECAR not read
 entering main loop
       N       E                     dE             d eps       ncg     rms          rms(c)
DAV:   1     0.536714442176E+04    0.53671E+04   -0.74524E+04  1422   0.243E+03
DAV:   2     0.329951547644E+04   -0.20676E+04   -0.19538E+04  1032   0.671E+02

 CUDA Error in cuda_mem.cu, line 192: uncorrectable ECC error encountered
 Failed to free device memory!

*****************************
Error running VASP parallel with MPI

#!/bin/bash
cd "/home/user/MD/TaskServer/Tasks/140.123.79.184-32000-task43300"
export PATH="/home/user/MD/Linux-x86_64/IntelMPI5/bin:$PATH"
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/home/user/MD/Linux-x86_64/IntelMPI5/lib:/home/user/MD/TaskServer/Tools/vasp-gpu6.2.1/Linux-x86_64"
"/home/user/MD/Linux-x86_64/IntelMPI5/bin/mpirun" -r ssh -np 6 "/home/user/MD/TaskServer/Tools/vasp-gpu6.2.1/Linux-x86_64/vasp_gpu"

forrtl: severe (174): SIGSEGV, segmentation fault occurred
Image              PC                Routine            Line        Source
vasp_gpu           0000000005445AD4  Unknown            Unknown     Unknown
libpthread-2.22.s  00007FDC75AACC70  Unknown            Unknown     Unknown
vasp_gpu           0000000005412C05  Unknown            Unknown     Unknown
vasp_gpu           00000000006EF90B  Unknown            Unknown     Unknown
vasp_gpu           0000000000F05E4F  Unknown            Unknown     Unknown
vasp_gpu           0000000000F844A5  Unknown            Unknown     Unknown
vasp_gpu           0000000001813C76  Unknown            Unknown     Unknown
vasp_gpu           000000000043FC9E  Unknown            Unknown     Unknown
libc-2.22.so       00007FDC65FD2725  __libc_start_main  Unknown     Unknown
vasp_gpu           000000000043FB29  Unknown            Unknown     Unknown

forrtl: error (69): process interrupted (SIGINT)
Image              PC                Routine            Line        Source
vasp_gpu           0000000005445D70  Unknown            Unknown     Unknown
libpthread-2.22.s  00007F8ABAAE4C70  Unknown            Unknown     Unknown
libmpi.so.12       00007F8AAB8E9CFB  PMPIDI_CH3I_Progr  Unknown     Unknown
libmpi.so.12       00007F8AABA8C243  Unknown            Unknown     Unknown
libmpi.so.12       00007F8AAB8954B8  Unknown            Unknown     Unknown
libmpi.so.12       00007F8AAB8996E6  PMPI_Allreduce     Unknown     Unknown
libmpifort.so.12   00007F8AAC519FF1  mpi_allreduce_     Unknown     Unknown
vasp_gpu           00000000004A7E64  Unknown            Unknown     Unknown
vasp_gpu           0000000000F060A8  Unknown            Unknown     Unknown
vasp_gpu           0000000000F844A5  Unknown            Unknown     Unknown
vasp_gpu           0000000001813C76  Unknown            Unknown     Unknown
vasp_gpu           000000000043FC9E  Unknown            Unknown     Unknown
libc-2.22.so       00007F8AAB00A725  __libc_start_main  Unknown     Unknown
vasp_gpu           000000000043FB29  Unknown            Unknown     Unknown

forrtl: error (69): process interrupted (SIGINT)
Image              PC                Routine            Line        Source
vasp_gpu           0000000005445D70  Unknown            Unknown     Unknown
libpthread-2.22.s  00007F4B99BCEC70  Unknown            Unknown     Unknown
libmpi.so.12       00007F4B8A9D4002  PMPIDI_CH3I_Progr  Unknown     Unknown
libmpi.so.12       00007F4B8AB76243  Unknown            Unknown     Unknown
libmpi.so.12       00007F4B8A97F4B8  Unknown            Unknown     Unknown
libmpi.so.12       00007F4B8A9836E6  PMPI_Allreduce     Unknown     Unknown
libmpifort.so.12   00007F4B8B603FF1  mpi_allreduce_     Unknown     Unknown
vasp_gpu           00000000004A7E64  Unknown            Unknown     Unknown
vasp_gpu           0000000000F060A8  Unknown            Unknown     Unknown
vasp_gpu           0000000000F844A5  Unknown            Unknown     Unknown
vasp_gpu           0000000001813C76  Unknown            Unknown     Unknown
vasp_gpu           000000000043FC9E  Unknown            Unknown     Unknown
libc-2.22.so       00007F4B8A0F4725  __libc_start_main  Unknown     Unknown
vasp_gpu           000000000043FB29  Unknown            Unknown     Unknown

forrtl: error (69): process interrupted (SIGINT)
Image              PC                Routine            Line        Source
vasp_gpu           0000000005445D70  Unknown            Unknown     Unknown
libpthread-2.22.s  00007F4366A68C70  Unknown            Unknown     Unknown
libmpi.so.12       00007F435786E00B  PMPIDI_CH3I_Progr  Unknown     Unknown
libmpi.so.12       00007F4357A10243  Unknown            Unknown     Unknown
libmpi.so.12       00007F43578194B8  Unknown            Unknown     Unknown
libmpi.so.12       00007F435781D6E6  PMPI_Allreduce     Unknown     Unknown
libmpifort.so.12   00007F435849DFF1  mpi_allreduce_     Unknown     Unknown
vasp_gpu           00000000004A7E64  Unknown            Unknown     Unknown
vasp_gpu           0000000000F060A8  Unknown            Unknown     Unknown
vasp_gpu           0000000000F844A5  Unknown            Unknown     Unknown
vasp_gpu           0000000001813C76  Unknown            Unknown     Unknown
vasp_gpu           000000000043FC9E  Unknown            Unknown     Unknown
libc-2.22.so       00007F4356F8E725  __libc_start_main  Unknown     Unknown
vasp_gpu           000000000043FB29  Unknown            Unknown     Unknown

forrtl: error (69): process interrupted (SIGINT)
Image              PC                Routine            Line        Source
vasp_gpu           0000000005445D70  Unknown            Unknown     Unknown
libpthread-2.22.s  00007F550AD5EC70  Unknown            Unknown     Unknown
libmpi.so.12       00007F54FBB63CFB  PMPIDI_CH3I_Progr  Unknown     Unknown
libmpi.so.12       00007F54FBD073D0  Unknown            Unknown     Unknown
libmpi.so.12       00007F54FBB0FAA0  Unknown            Unknown     Unknown
libmpi.so.12       00007F54FBB136E6  PMPI_Allreduce     Unknown     Unknown
libmpifort.so.12   00007F54FC793FF1  mpi_allreduce_     Unknown     Unknown
vasp_gpu           00000000004A7E64  Unknown            Unknown     Unknown
vasp_gpu           0000000000F060A8  Unknown            Unknown     Unknown
vasp_gpu           0000000000F844A5  Unknown            Unknown     Unknown
vasp_gpu           0000000001813C76  Unknown            Unknown     Unknown
vasp_gpu           000000000043FC9E  Unknown            Unknown     Unknown
libc-2.22.so       00007F54FB284725  __libc_start_main  Unknown     Unknown
vasp_gpu           000000000043FB29  Unknown            Unknown     Unknown

forrtl: error (69): process interrupted (SIGINT)
Image              PC                Routine            Line        Source
vasp_gpu           0000000005445D70  Unknown            Unknown     Unknown
libpthread-2.22.s  00007F1B36473C70  Unknown            Unknown     Unknown
libc-2.22.so       00007F1B26A4EBF7  __sched_yield      Unknown     Unknown
libmpi.so.12       00007F1B272792CF  PMPIDI_CH3I_Progr  Unknown     Unknown
libmpi.so.12       00007F1B2741C3D0  Unknown            Unknown     Unknown
libmpi.so.12       00007F1B27224AA0  Unknown            Unknown     Unknown
libmpi.so.12       00007F1B272286E6  PMPI_Allreduce     Unknown     Unknown
libmpifort.so.12   00007F1B27EA8FF1  mpi_allreduce_     Unknown     Unknown
vasp_gpu           00000000004A7E64  Unknown            Unknown     Unknown
vasp_gpu           0000000000F060A8  Unknown            Unknown     Unknown
vasp_gpu           0000000000F844A5  Unknown            Unknown     Unknown
vasp_gpu           0000000001813C76  Unknown            Unknown     Unknown
vasp_gpu           000000000043FC9E  Unknown            Unknown     Unknown
libc-2.22.so       00007F1B26999725  __libc_start_main  Unknown     Unknown
vasp_gpu           000000000043FB29  Unknown            Unknown     Unknown

*****************************
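For context, the NCORE warning quoted in the log above refers to a tag in the INCAR file. The fragment below is only a minimal sketch of what such a setting could look like, not a recommendation tuned for this 6-rank Tesla K80 job: the value 2 is simply the lower end of the range the warning itself suggests, and the deprecated CUDA-C GPU port may require keeping the default NCORE = 1, so check https://www.vasp.at/wiki/index.php/NCORE before changing it for GPU runs.

    NWRITE = 1      ! verbosity level, as echoed in the log above
    NCORE  = 2      ! cores sharing one orbital; with 6 ranks this gives NPAR = 6/2 = 3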