Dear Psi4 developers,
I was wondering if it’s currently possible to use Psi4 with LibXC density functionals other than the ones listed in:
I am particularly interested in MN15, SCAN0 and SOGGA11-X.
Sure, the other files in that folder build other XC functionals from LibXC primitives. That particular list is just the built-in “full” functionals that LibXC exports. Feel free to build and add any other functional and open a PR.
Thanks for your reply. I got the non-hybrid functionals (such as M06-L, MN15-L, SOGGA11) working. The SCAN functional seems to be segfaulting for some reason. Finally, all the hybrid functionals I’ve tried to implement seem to perform really badly - do I need to set the fraction of E_X,HF manually (as is done for PBE0 in those files)?
Yes, exact exchange needs to be set along with the alpha of the exchange parts of the functional.
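To illustrate what setting "exact exchange along with the alpha of the exchange parts" looks like, here is a sketch assuming Psi4's dictionary-based custom-functional interface; the key names follow current Psi4 conventions and the 25% exact-exchange fraction is SCAN0's published value, but treat the details as assumptions rather than the exact mechanism used in the files discussed here:

```python
# Hedged sketch: assemble a SCAN0-style hybrid from LibXC primitives.
# The dictionary layout follows Psi4's custom "dft_functional" interface;
# the exchange alphas must account for the exact-exchange fraction.
scan0 = {
    "name": "SCAN0",
    "x_functionals": {"MGGA_X_SCAN": {"alpha": 0.75}},  # 75% SCAN exchange
    "x_hf": {"alpha": 0.25},                            # 25% exact (HF) exchange
    "c_functionals": {"MGGA_C_SCAN": {}},               # full SCAN correlation
}

# Hypothetical usage with a working Psi4 build:
#   import psi4
#   psi4.energy("scf", dft_functional=scan0)
```

The point is that the DFT and HF exchange fractions must sum to one; forgetting to scale the LibXC exchange part down is exactly what makes a hand-built hybrid "perform really badly".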
Odd that SCAN is segfaulting; if you run a debugger over the code, do you know where?
Thanks, I got the hybrids working just fine (MN15, SOGGA11-X).
Both scan and scan0 crash (the problem seems related to XC_MGGA_C_SCAN: when I replace it with XC_GGA_C_PBE, there is no segfault). I am struggling to debug this, as when I try to attach GDB to the core dump, it shows “no symbol table info available”.
I have recompiled psi4 with -DCMAKE_BUILD_TYPE=debug, but that doesn’t seem to help. I’m attaching the input and output data: http://lpaste.net/1535407880121876480
Not sure why you are having trouble with GDB, I ran this through a debugger and it looks like the seg fault is coming from LibXC.
* thread #1, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=1, address=0x0)
* frame #0: 0x0000000101c4e54f libxc.dylib`work_mgga_c + 1183
frame #1: 0x0000000101c3d5d3 libxc.dylib`xc_mgga + 787
frame #2: 0x0000000101c3d7a9 libxc.dylib`xc_mgga_exc_vxc + 153
frame #3: 0x000000010484f451 core.so`psi::LibXCFunctional::compute_functional(this=0x000000010192b470, in=size=9, out=size=4, npoints=58, deriv=1) at LibXCfunctional.cc:526
frame #4: 0x0000000104867099 core.so`psi::SuperFunctional::compute_functional(this=0x0000000101923a30, vals=size=9, npoints=58) at superfunctional.cc:607
Hmm, do SCAN functionals need the rho laplacian? If so, this could be it; however, you should get an error message if the laplacian is required.
Wait, why would you need to set the fraction of exact exchange for libxc functionals? Psi4 should just query libxc for the relevant parameters (cam_omega, cam_alpha and cam_beta).
I’m also not really sure why you need such an extensive hardcoded list of functionals - there are routines in libxc that translate between the name and the integer functional identifier.
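As a sketch of the translation routines mentioned above: LibXC's C API provides xc_functional_get_number (and its inverse xc_functional_get_name), exposed through its Python bindings, pylibxc. The fallback table below keeps the example self-contained when pylibxc is not installed; the hardcoded IDs are what I believe recent LibXC versions assign, but treat them as assumptions:

```python
# Hedged sketch of LibXC's name <-> integer-ID translation
# (xc_functional_get_number in the C API).
def functional_number(name):
    try:
        from pylibxc import util  # LibXC's own Python bindings
        return util.xc_functional_get_number(name)
    except ImportError:
        # pylibxc not available: tiny illustrative table instead.
        # IDs assumed from recent LibXC versions.
        table = {"GGA_C_PBE": 130, "MGGA_C_SCAN": 267}
        return table.get(name.upper(), -1)
```

With such a lookup, a driver only needs the functional's name; there is no need to maintain a hardcoded list of identifiers by hand.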
SCAN doesn’t use the laplacian.
Hello Daniel and Susi,
I’m still struggling with the SCAN-based functionals. As I said previously, the “bug” seems to come from the interaction between Psi4 and LibXC’s mgga_c_scan.
My own psi4 core dump is attached: http://lpaste.net/357013
For the record, I see a segmentation fault also with Erkale (with the same grid and starting geometry): http://lpaste.net/6282495255111532544
As a possibly unrelated issue, any functional requesting VV10 (I’ve tried wB97m-v and wB97x-v) seems to segfault too. The backtrace is pretty cryptic: http://lpaste.net/7592921091727687680 My processor is Core i5-4440, it does have avx2.
@tetrahydrofuran Woah, that’s weird for VV10. It looks like something is assuming that the data is aligned when it’s probably not. SIMD trials of that area were not good without some rewriting. What kind of compilers are you using?
Still no idea on SCAN. I double-checked that all arguments are getting plenty of data, so I don’t think it’s an issue on our end (could still be wrong on this). Unfortunately I will not have time to really dig into it for a while yet.
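For readers following along, the alignment hypothesis above can be illustrated with a standalone sketch (this is not Psi4 code): AVX2 aligned loads require 32-byte boundaries, while a generic allocator typically guarantees only 16, so a kernel that assumes alignment can fault on perfectly valid data. The classic fix is to over-allocate and round the base address up:

```python
import ctypes

def is_aligned(addr, boundary=32):
    """AVX2 aligned loads (e.g. vmovapd) require 32-byte alignment."""
    return addr % boundary == 0

# Over-allocate by boundary-1 bytes and round the base address up to
# the next 32-byte multiple: the manual-alignment trick C code uses
# when malloc's typical 16-byte guarantee is not enough for SIMD.
n_doubles = 64
raw = (ctypes.c_char * (n_doubles * 8 + 31))()
base = ctypes.addressof(raw)
aligned_addr = (base + 31) & ~31
assert is_aligned(aligned_addr)
```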
@dgasmith: Re: VV10 functionals
I’ve tried again with wb97m-v (and wb97x-v). I tried to use the official package (Psi4conda-latest-py36-Linux-x86_64.sh), but that package doesn’t include these two functionals.
I have compiled two other versions on a different machine (x86_64, Linux, AMD Barcelona as opposed to x86_64, Linux, Intel Haswell) with GCC 7.1.0 and GCC 4.9.2 (both with cmake 3.8.2) - they both segfault. Unfortunately, on this cluster I can’t seem to get the debug symbols working - gdb shows only ??'s.
Can you give the input you are trying to run? I have tried on both AMD and Intel with Clang, GCC, and ICC and cannot reproduce this issue.
Would you mind making a PR with the new functionals (with tests) when you get a moment?
Hello, the input is below:
H -1.477411 0.000000 0.000000
N -0.482004 0.000000 0.000000
C 0.686542 0.000000 0.000000
set basis aug-cc-pVTZ
What sort of tests would you like me to add? I’ve only looked at the optimised bond distances and some energies in a few molecules, nothing thorough.
I can confirm that it fails on the 3rd iteration, 2nd finite difference. However, I cannot reproduce the error starting from that geometry. Can you reproduce the error in a single computation? If not, this is going to be fun to track down.
When I restart from the last geometry of the crashed run, the calculation finishes successfully. Another case that crashes is:
O -1.14109 1.44521 0.00000
C -0.06175 2.03095 0.00000
H -0.01369 3.13017 0.00000
N 1.14109 1.43588 0.00000
H 1.21769 0.41653 0.00000
H 1.97145 2.00210 0.00000
O 1.14109 -1.44521 0.00000
C 0.06175 -2.03095 0.00000
H 0.01369 -3.13017 0.00000
N -1.14109 -1.43588 0.00000
H -1.21769 -0.41653 0.00000
H -1.97145 -2.00210 0.00000
set basis aug-cc-pVTZ
Ah, thanks. Let me know if you run across anything smaller.
For testing the functionals, simple energies will do. We’re not really testing the XC code itself (that’s done for us), and we’re not testing the gradient/energy machinery since it’s the same for all XC kernels, so energies are sufficient for determining that we have assembled the functional correctly. The test might already be in tests/libxc/input.dat; extending that will be sufficient.
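As a self-contained sketch of the kind of check such a test performs: Psi4 test inputs compare a computed energy against a stored reference with psi4.compare_values; the standalone version below just mirrors that idea, and the names and usage here are illustrative assumptions, not the project's exact code:

```python
def compare_values(expected, computed, digits, label):
    """Fail if computed differs from expected by more than 10**-digits,
    mirroring the comparison a Psi4 test input performs."""
    if abs(expected - computed) > 10.0 ** (-digits):
        raise AssertionError(f"{label}: got {computed}, expected {expected}")
    return True

# Hypothetical usage inside a test input (requires a Psi4 build and a
# previously validated REFERENCE_ENERGY):
#   e = psi4.energy("MN15/aug-cc-pVTZ")
#   compare_values(REFERENCE_ENERGY, e, 6, "MN15 energy")
```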
Note that you’re running a geometry optimization in ERKALE - the single point converges just fine. There seems to be a problem with the analytical force; maybe you should run it with “NumGrad true”.
@tetrahydrofuran FYI in case you missed it - I fixed the VV10 bug. Apologies for how long it took, but a fresh build from psi4/psi4:master should have it fixed.