 Using device 4 (rank 4, local rank 4, local size 6) : Tesla K80
 Using device 2 (rank 2, local rank 2, local size 6) : Tesla K80
 Using device 0 (rank 0, local rank 0, local size 6) : Tesla K80
 Using device 1 (rank 1, local rank 1, local size 6) : Tesla K80
 Using device 5 (rank 5, local rank 5, local size 6) : Tesla K80
 Using device 3 (rank 3, local rank 3, local size 6) : Tesla K80
 running on    6 total cores
 distrk:  each k-point on    6 cores,    1 groups
 distr:  one band on    1 cores,    6 groups

*******************************************************************************
  You are running the GPU port of VASP! When publishing results obtained with
  this version, please cite:
   - M. Hacene et al., http://dx.doi.org/10.1002/jcc.23096
   - M. Hutchinson and M. Widom, http://dx.doi.org/10.1016/j.cpc.2012.02.017
  in addition to the usual required citations (see manual).
  GPU developers: A. Anciaux-Sedrakian, C. Angerer, and M. Hutchinson.
*******************************************************************************

 -----------------------------------------------------------------------------
|                            W  A  R  N  I  N  G  !!!                         |
|                                                                             |
|     Please note that VASP has recently been ported to GPU by means of       |
|     OpenACC. You are running the CUDA-C GPU-port of VASP, which is          |
|     deprecated and no longer actively developed, maintained, or             |
|     supported. In the near future, the CUDA-C GPU-port of VASP will be      |
|     dropped completely. We encourage you to switch to the OpenACC           |
|     GPU-port of VASP as soon as possible.                                   |
 -----------------------------------------------------------------------------

 vasp.6.2.1 16May21 (build Apr 11 2022 11:03:26) complex
 MD_VERSION_INFO: Compiled 2022-04-11T18:25:55-UTC in
   devlin.sd.materialsdesign.com:/home/medea2/data/build/vasp6.2.1/16685/x86_64/src/src/build/gpu
   from svn 16685
 This VASP executable licensed from Materials Design, Inc.

 POSCAR found type information on POSCAR  Sn O  H
 POSCAR found :  3 types and      22 ions
 NWRITE =  1
 NWRITE =  1
 NWRITE =  1
 NWRITE =  1
 NWRITE =  1
 NWRITE =  1

 -----------------------------------------------------------------------------
|                            W  A  R  N  I  N  G  !!!                         |
|                                                                             |
|     For optimal performance we recommend to set                             |
|       NCORE = 2 up to number-of-cores-per-socket                            |
|     NCORE specifies how many cores store one orbital (NPAR=cpu/NCORE).      |
|     This setting can greatly improve the performance of VASP for DFT.       |
|     The default, NCORE=1 might be grossly inefficient on modern             |
|     multi-core architectures or massively parallel machines. Do your        |
|     own testing! More info at https://www.vasp.at/wiki/index.php/NCORE      |
|     Unfortunately you need to use the default for GW and RPA                |
|     calculations (for HF NCORE is supported but not extensively tested      |
|     yet).                                                                   |
 -----------------------------------------------------------------------------

 LDA part: xc-table for Pade appr. of Perdew

 WARNING: The GPU port of VASP has been extensively tested for:
          ALGO=Normal, Fast, and VeryFast. Other algorithms may produce
          incorrect results or yield suboptimal performance. Handle with care!

 POSCAR, INCAR and KPOINTS ok, starting setup
 creating 32 CUDA streams...
 creating 32 CUDA streams...
 creating 32 CUDA streams...
 creating 32 CUDA streams...
 creating 32 CUDA streams...
 creating 32 CUDA streams...
 creating 32 CUFFT plans with grid size 90 x 24 x 24...
 creating 32 CUFFT plans with grid size 90 x 24 x 24...
 creating 32 CUFFT plans with grid size 90 x 24 x 24...
 creating 32 CUFFT plans with grid size 90 x 24 x 24...
 creating 32 CUFFT plans with grid size 90 x 24 x 24...
 creating 32 CUFFT plans with grid size 90 x 24 x 24...
 FFT: planning ...
 WAVECAR not read
 entering main loop
       N       E                     dE             d eps       ncg     rms          rms(c)
DAV:   1     0.228789214464E+04    0.22879E+04   -0.68433E+04  1134   0.183E+03
DAV:   2     0.381070186467E+03   -0.19068E+04   -0.18320E+04  1152   0.585E+02
DAV:   3    -0.438805980112E+02   -0.42495E+03   -0.40610E+03  1212   0.240E+02
DAV:   4    -0.823480715698E+02   -0.38467E+02   -0.37109E+02  1224   0.794E+01
DAV:   5    -0.837500589626E+02   -0.14020E+01   -0.13968E+01  1212   0.152E+01    0.334E+01
DAV:   6    -0.690508150866E+02    0.14699E+02   -0.11170E+02  1590   0.763E+01    0.339E+01
DAV:   7    -0.720982087433E+02   -0.30474E+01   -0.15088E+02  1224   0.233E+01    0.241E+01
DAV:   8    -0.630796894525E+02    0.90185E+01   -0.36010E+01  1500   0.278E+01    0.221E+01
DAV:   9    -0.589076852676E+02    0.41720E+01   -0.27209E+01  1578   0.135E+01    0.131E+01
DAV:  10    -0.578695223848E+02    0.10382E+01   -0.85383E+00  1596   0.125E+01    0.100E+01
DAV:  11    -0.577648744409E+02    0.10465E+00   -0.55256E+00  1500   0.969E+00    0.179E+01
DAV:  12    -0.566358446884E+02    0.11290E+01   -0.53662E+00  1680   0.711E+00    0.667E+00
DAV:  13    -0.564423117451E+02    0.19353E+00   -0.22193E+00  1740   0.484E+00    0.478E+00
DAV:  14    -0.562872545682E+02    0.15506E+00   -0.11770E+00  1488   0.377E+00    0.333E+00
DAV:  15    -0.562613598523E+02    0.25895E-01   -0.24669E-01  1500   0.212E+00    0.348E+00
DAV:  16    -0.561883404770E+02    0.73019E-01   -0.10956E-01  1476   0.128E+00    0.114E+00
DAV:  17    -0.561863554540E+02    0.19850E-02   -0.36574E-02  1596   0.858E-01    0.604E-01
DAV:  18    -0.561876423394E+02   -0.12869E-02   -0.14480E-02  1434   0.545E-01    0.555E-01
DAV:  19    -0.561881912423E+02   -0.54890E-03   -0.71690E-03  1338   0.370E-01    0.312E-01
DAV:  20    -0.561884053138E+02   -0.21407E-03   -0.36363E-03  1398   0.286E-01    0.317E-01
DAV:  21    -0.561882758388E+02    0.12948E-03   -0.27060E-03  1290   0.159E-01    0.263E-01
DAV:  22    -0.561881170420E+02    0.15880E-03   -0.11295E-03  1296   0.826E-02    0.939E-02
DAV:  23    -0.561881514927E+02   -0.34451E-04   -0.86476E-05   948   0.462E-02    0.382E-02
DAV:  24    -0.561881835115E+02   -0.32019E-04   -0.36489E-05   912   0.275E-02    0.396E-02

CUDA Error in cuda_main.cu, line 236: uncorrectable ECC error encountered
Failed to synchronize the device!

 *****************************
 Error running VASP parallel with MPI

#!/bin/bash
cd "/home/user/MD/TaskServer/Tasks/140.123.79.184-32000-task43725"
export PATH="/home/user/MD/Linux-x86_64/IntelMPI5/bin:$PATH"
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/home/user/MD/Linux-x86_64/IntelMPI5/lib:/home/user/MD/TaskServer/Tools/vasp-gpu6.2.1/Linux-x86_64"
"/home/user/MD/Linux-x86_64/IntelMPI5/bin/mpirun" -r ssh -np 6 "/home/user/MD/TaskServer/Tools/vasp-gpu6.2.1/Linux-x86_64/vasp_gpu"

forrtl: severe (174): SIGSEGV, segmentation fault occurred
Image              PC                Routine            Line        Source
vasp_gpu           0000000005445AD4  Unknown               Unknown  Unknown
libpthread-2.22.s  00007F35D0415C70  Unknown               Unknown  Unknown
vasp_gpu           0000000005413EA0  Unknown               Unknown  Unknown
vasp_gpu           0000000000EFE7C8  Unknown               Unknown  Unknown
vasp_gpu           0000000000F844A5  Unknown               Unknown  Unknown
vasp_gpu           0000000001813C76  Unknown               Unknown  Unknown
vasp_gpu           000000000043FC9E  Unknown               Unknown  Unknown
libc-2.22.so       00007F35C093B725  __libc_start_main     Unknown  Unknown
vasp_gpu           000000000043FB29  Unknown               Unknown  Unknown

forrtl: error (69): process interrupted (SIGINT)
Image              PC                Routine            Line        Source
vasp_gpu           0000000005445D70  Unknown               Unknown  Unknown
libpthread-2.22.s  00007FBD98425C70  Unknown               Unknown  Unknown
libmpi.so.12       00007FBD891CF89E  Unknown               Unknown  Unknown
libmpi.so.12       00007FBD891D70F9  Unknown               Unknown  Unknown
libmpi.so.12       00007FBD891DA6E6  PMPI_Allreduce        Unknown  Unknown
libmpifort.so.12   00007FBD89E5AFF1  mpi_allreduce_        Unknown  Unknown
vasp_gpu           00000000004A87D4  Unknown               Unknown  Unknown
vasp_gpu           0000000000F0580E  Unknown               Unknown  Unknown
vasp_gpu           0000000000F844A5  Unknown               Unknown  Unknown
vasp_gpu           0000000001813C76  Unknown               Unknown  Unknown
vasp_gpu           000000000043FC9E  Unknown               Unknown  Unknown
libc-2.22.so       00007FBD8894B725  __libc_start_main     Unknown  Unknown
vasp_gpu           000000000043FB29  Unknown               Unknown  Unknown

forrtl: error (69): process interrupted (SIGINT)
Image              PC                Routine            Line        Source
vasp_gpu           0000000005445D70  Unknown               Unknown  Unknown
libpthread-2.22.s  00007F9045AC2C70  Unknown               Unknown  Unknown
libmpi.so.12       00007F903686D2C0  Unknown               Unknown  Unknown
libmpi.so.12       00007F9036874273  Unknown               Unknown  Unknown
libmpi.so.12       00007F90368776E6  PMPI_Allreduce        Unknown  Unknown
libmpifort.so.12   00007F90374F7FF1  mpi_allreduce_        Unknown  Unknown
vasp_gpu           00000000004A87D4  Unknown               Unknown  Unknown
vasp_gpu           0000000000F0580E  Unknown               Unknown  Unknown
vasp_gpu           0000000000F844A5  Unknown               Unknown  Unknown
vasp_gpu           0000000001813C76  Unknown               Unknown  Unknown
vasp_gpu           000000000043FC9E  Unknown               Unknown  Unknown
libc-2.22.so       00007F9035FE8725  __libc_start_main     Unknown  Unknown
vasp_gpu           000000000043FB29  Unknown               Unknown  Unknown

forrtl: error (69): process interrupted (SIGINT)
Image              PC                Routine            Line        Source
vasp_gpu           0000000005445D70  Unknown               Unknown  Unknown
libpthread-2.22.s  00007F31277DFC70  Unknown               Unknown  Unknown
libmpi.so.12       00007F311858A2C0  Unknown               Unknown  Unknown
libmpi.so.12       00007F3118591273  Unknown               Unknown  Unknown
libmpi.so.12       00007F31185946E6  PMPI_Allreduce        Unknown  Unknown
libmpifort.so.12   00007F3119214FF1  mpi_allreduce_        Unknown  Unknown
vasp_gpu           00000000004A87D4  Unknown               Unknown  Unknown
vasp_gpu           0000000000F0580E  Unknown               Unknown  Unknown
vasp_gpu           0000000000F844A5  Unknown               Unknown  Unknown
vasp_gpu           0000000001813C76  Unknown               Unknown  Unknown
vasp_gpu           000000000043FC9E  Unknown               Unknown  Unknown
libc-2.22.so       00007F3117D05725  __libc_start_main     Unknown  Unknown
vasp_gpu           000000000043FB29  Unknown               Unknown  Unknown

forrtl: error (69): process interrupted (SIGINT)
Image              PC                Routine            Line        Source
vasp_gpu           0000000005445D70  Unknown               Unknown  Unknown
libpthread-2.22.s  00007FCFF517DC70  Unknown               Unknown  Unknown
libmpi.so.12       00007FCFE5F282C0  Unknown               Unknown  Unknown
libmpi.so.12       00007FCFE5F2F273  Unknown               Unknown  Unknown
libmpi.so.12       00007FCFE5F326E6  PMPI_Allreduce        Unknown  Unknown
libmpifort.so.12   00007FCFE6BB2FF1  mpi_allreduce_        Unknown  Unknown
vasp_gpu           00000000004A87D4  Unknown               Unknown  Unknown
vasp_gpu           0000000000F0580E  Unknown               Unknown  Unknown
vasp_gpu           0000000000F844A5  Unknown               Unknown  Unknown
vasp_gpu           0000000001813C76  Unknown               Unknown  Unknown
vasp_gpu           000000000043FC9E  Unknown               Unknown  Unknown
libc-2.22.so       00007FCFE56A3725  __libc_start_main     Unknown  Unknown
vasp_gpu           000000000043FB29  Unknown               Unknown  Unknown

forrtl: error (69): process interrupted (SIGINT)
Image              PC                Routine            Line        Source
vasp_gpu           0000000005445D70  Unknown               Unknown  Unknown
libpthread-2.22.s  00007FF0D5CE2C70  Unknown               Unknown  Unknown
libmpi.so.12       00007FF0C6AE6EBF  PMPIDI_CH3I_Progr     Unknown  Unknown
libmpi.so.12       00007FF0C6A8D2B7  Unknown               Unknown  Unknown
libmpi.so.12       00007FF0C6A94273  Unknown               Unknown  Unknown
libmpi.so.12       00007FF0C6A976E6  PMPI_Allreduce        Unknown  Unknown
libmpifort.so.12   00007FF0C7717FF1  mpi_allreduce_        Unknown  Unknown
vasp_gpu           00000000004A87D4  Unknown               Unknown  Unknown
vasp_gpu           0000000000F0580E  Unknown               Unknown  Unknown
vasp_gpu           0000000000F844A5  Unknown               Unknown  Unknown
vasp_gpu           0000000001813C76  Unknown               Unknown  Unknown
vasp_gpu           000000000043FC9E  Unknown               Unknown  Unknown
libc-2.22.so       00007FF0C6208725  __libc_start_main     Unknown  Unknown
vasp_gpu           000000000043FB29  Unknown               Unknown  Unknown
 *****************************
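
The run above died in cuda_main.cu with "uncorrectable ECC error encountered", which points at GPU memory health rather than the input files. As a hedged follow-up (not part of the original log), a minimal sketch of how one might read the per-GPU ECC error counters with `nvidia-smi` on the compute node; the query fields used are standard `nvidia-smi` options, but driver versions vary, so treat this as a starting point:

```shell
#!/bin/sh
# Sketch: list uncorrectable ECC error counts per GPU to identify a failing card.
# Run on the node that hosted the crashed job; requires the NVIDIA driver tools.
if command -v nvidia-smi >/dev/null 2>&1; then
    # volatile = since last reboot, aggregate = lifetime of the board
    nvidia-smi --query-gpu=index,name,ecc.errors.uncorrected.volatile.total,ecc.errors.uncorrected.aggregate.total \
               --format=csv
else
    echo "nvidia-smi not found; run this on the GPU compute node"
fi
```

A nonzero uncorrected count on one of the K80s would explain both the ECC abort on the failing rank and the SIGINT teardown of the other five ranks stuck in `MPI_Allreduce`.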