MPI option useful?

Hi all,

I’m currently trying to compile Psi4 on our cluster … is the MPI option actually worth enabling? I found a similar topic in the forum, and the answer (solution) there was to turn off MPI since it’s superfluous!

Cheers,

Markus

Hi,

As far as I know, MPI is only useful if you want to use GTFock, a massively parallel Fock builder that is currently available for HF computations only. In practice, it only pays off if you want to run HF on a very large number of cores.

Hi,

Thanks for your response. So MPI is only used if I turn on the jkfactory flag at compile time. Up to now I have a running version compiled on our cluster without MPI (and yes, I have plenty of cores available). I just wanted to check whether I can speed up my calculations a bit, since I am mainly interested in simulations of larger structures.

BTW: I found out that libboost_mpi is only built when I compile with the OpenMPI/GCC toolchain. If I use the Intel packages, I get an error that libboost_mpi is not found (because it was never built). Is that a bug, or what is the reason?

@WTU To my knowledge (and @jgontheir, correct me if I’m wrong), MPI is only going to speed up the integral-direct Hartree-Fock part of your computations. In my experience, you’re better off with threaded density fitting unless your system is huge.

As for boost_mpi not being built, that is a bug. Unfortunately, Boost’s build process is arduous, so tracking down why it failed in your case is going to be tricky (I routinely build with the Intel compilers, so I don’t think the cause is simply that you used them). Inside the build directory there should be a boost subdirectory, inside that a boost_1_57_0 directory, and inside that the log files; the commands below show one way to dig through them. In my experience, the relevant error usually appears in the normal log file, not the error log file. Making matters worse, CMake (I think) cleans up the Boost build if it decides the build succeeded, destroying those logs in the process.
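A rough sketch of how you might hunt for the logs, assuming the directory layout described above (the boost_1_57_0 name will differ if your Psi4 version bundles another Boost release):

    # run from the top of the Psi4 build directory; paths follow the layout above
    cd boost/boost_1_57_0

    # list whatever log files the Boost build left behind
    find . -type f -name '*.log'

    # scan them for MPI-related lines and obvious failures; as noted above, the
    # useful message is often in the "normal" log rather than the error log
    grep -inE 'mpi|error|fail' $(find . -type f -name '*.log')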

If you are set on compiling with MPI, another option you may want to explore is building your own copy of Boost with MPI enabled. You can then tell the Psi4 setup script to use that version; a rough sketch is below.
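Something like this, assuming Boost 1.57.0 and MPI compiler wrappers in your PATH; the install prefix is just an example, and exactly how the external Boost gets handed to the Psi4 setup script depends on your version (BOOST_ROOT is the standard hint for CMake’s FindBoost):

    # build Boost 1.57.0 with Boost.MPI into a private prefix (example path)
    tar xjf boost_1_57_0.tar.bz2
    cd boost_1_57_0
    ./bootstrap.sh --prefix=$HOME/opt/boost-1.57-mpi
    echo "using mpi ;" >> project-config.jam    # needs mpicc/mpicxx in your PATH
    ./b2 -j8 install

    # then point the Psi4 build at that prefix; with a CMake-based build the
    # standard FindBoost hint is BOOST_ROOT, e.g. roughly
    #   cmake <psi4-source-dir> -DBOOST_ROOT=$HOME/opt/boost-1.57-mpi
    # but check your setup script's --help for the option it actually exposes.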

@WTU Yes, threaded density fitting is going to do a pretty good job on large systems. However, “large” is relative… My advice would be to try density fitting first (see the example run below). If that does not get you there, you’ll probably need several hundred or several thousand cores, and at that point compiling with MPI starts to look attractive.
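For what it’s worth, a minimal sketch of such a run; the input file name, geometry, and thread count are placeholders (-n is Psi4’s thread-count option, and scf_type df requests density fitting):

    # A minimal density-fitted HF input (say, large_system.dat) would look roughly like:
    #
    #   molecule { ... your geometry ... }
    #   set basis cc-pvdz
    #   set scf_type df        # density-fitted SCF
    #   energy('scf')
    #
    # then run it with several threads; -n sets the number of threads Psi4 uses
    psi4 -n 16 large_system.dat large_system.out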

Like ryanmrichard said, compiling your own Boost may be easier if boost_mpi does not build for some reason. You can also check with your cluster administrator whether they already have Boost installed.