Time performance of SAPT with external point charges

I am interested in using SAPT to obtain the interaction energy of a dimer in the presence of a large number (~10^5) of point charges representing water molecules. My scientific question requires a particular molecular mechanics water model, so using EFP to represent the waters is not an option for me. I am using the EXTERN keyword as described in the manual here. However, the point charges slow the calculation down much more than I expected. Here is the time required for a SAPT0/jun-cc-pVDZ calculation on a dimer system with 296 electrons (850 basis functions) as a function of the number of point charges:

 # charges | Time (min)
       0   |   111.6
       9   |   131.3
      99   |   133.1
     999   |   148.0
    9999   |   250.3
   68592   |   867.7

I am running these calculations on my local computing cluster, on a node with 2 Intel Xeon E5-2695 v2 processors using 24 cores and 124 GB of memory. I have seen similar behavior with smaller systems, with nodes that have more memory (up to 248 GB), and with other methods in Psi4 such as DFT. Looking at the timings for the individual modules, the additional time is accrued in the SCF calculations, not in the SAPT0 module. Is this poor time scaling expected for external point charges? Can you recommend any options in Psi4 to reduce their impact on the run time?

This is what the input file looks like for 9 point charges:

# sSAPT0 energy
memory 124 gb

molecule dimer {
0 1
ATOM_1   x  y  z
# ... atoms 2-32 elided ...
ATOM_33  x  y  z
--
0 1
ATOM_34  x  y  z
# ... atoms 35-65 elided ...
ATOM_66  x  y  z
units angstrom
}

set {
    basis jun-cc-pvdz
    scf_type DF
    freeze_core True
}

MM_charges = QMMM()
MM_charges.extern.addCharge(0.0033955, 0.322, 5.900, 11.195)
MM_charges.extern.addCharge(0.0033955, -0.893, 5.287, 11.359)
MM_charges.extern.addCharge(-0.0067910, -0.415, 5.900, 11.460)
MM_charges.extern.addCharge(0.0033955, 2.140, 8.134, 2.601)
MM_charges.extern.addCharge(0.0033955, 3.159, 8.858, 3.164)
MM_charges.extern.addCharge(-0.0067910, 2.709, 8.217, 3.133)
MM_charges.extern.addCharge(0.0033955, 5.947, -0.727, 3.885)
MM_charges.extern.addCharge(0.0033955, 7.240, -0.639, 4.333)
MM_charges.extern.addCharge(-0.0067910, 6.689, -0.489, 3.796)
psi4.set_global_option_python('EXTERN', MM_charges.extern)

energy('sapt0')


I too would love to see a response from the developers on this issue :ear: Though the benchmark you provided suggests linear scaling, which isn't that bad?
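For what it's worth, a quick least-squares fit to the timings in the table above (a sketch of mine, not from the thread) does come out linear: roughly 0.011 min per charge on top of a ~130 min baseline.

```python
# Timings from the table above: (number of point charges, wall time in min)
data = [(0, 111.6), (9, 131.3), (99, 133.1),
        (999, 148.0), (9999, 250.3), (68592, 867.7)]

n_bar = sum(n for n, _ in data) / len(data)
t_bar = sum(t for _, t in data) / len(data)

# Ordinary least-squares line: time ~ slope * n_charges + intercept
slope = (sum((n - n_bar) * (t - t_bar) for n, t in data)
         / sum((n - n_bar) ** 2 for n, _ in data))
intercept = t_bar - slope * n_bar
print(f"{slope * 1000:.1f} min per 1000 charges, {intercept:.0f} min baseline")
```

The per-charge cost looks small, but at ~7 x 10^4 charges it still ends up dominating the total wall time.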

Sorry, we were all at the Psi4 conference :). Computing the potential of charges on a grid is inherently expensive since, naively, you must evaluate the AO potential integrals for every charge site. There is quite a bit of localization that could be done to speed this up; however, it simply has not been done, as our observed use cases have been a few dozen points at most.
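To spell out the cost (my notation, not from the thread): each external charge $q_c$ at position $\mathbf{R}_c$ contributes a one-electron term to the core Hamiltonian,

```latex
V_{\mu\nu} \mathrel{+}= -\, q_c \int \phi_\mu(\mathbf{r})\,
    \frac{1}{\lvert \mathbf{r} - \mathbf{R}_c \rvert}\,
    \phi_\nu(\mathbf{r})\, \mathrm{d}\mathbf{r} ,
```

so the naive build is $\mathcal{O}(N_\mathrm{chg} \cdot N_\mathrm{AO}^2)$ integral evaluations. With 850 basis functions and ~7 x 10^4 charges that is on the order of 10^10 evaluations before any screening, which is consistent with the extra time showing up in the SCF module.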

This is the first report of someone using such a large number of external points, and something we can certainly look into.

Thanks, that would be great! In the meantime, do you think that a procedure such as using the wavefunction for a vacuum calculation as the initial guess for the calculation with the external charges would improve performance?

Unfortunately, no. The cost is completely in computing the potential matrix which is only ever built once per Wavefunction and does not depend on the number of iterations.

I am hoping to start running production calculations soon. Has there been any progress on implementing the localization for the external point charge potential?

If not, I would be willing to work on this myself. I am a PhD student in computational biophysics, and I have experience programming in both Python and C/C++. My initial thought is to use a multilevel grid approach such as the one described in this paper, implemented either as a plugin or as an additional method in the PotentialInt class. Could you advise on whether this is an appropriate approach or direct me somewhere where I could get further assistance, e.g. the developer Slack channel?
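Roughly what I have in mind, as a minimal sketch (the function `coarsen_charges` is hypothetical, not Psi4 API): keep nearby charges exact and merge each far-away cell into a single effective charge. A real multilevel scheme, like the one in the paper, would also retain each cell's higher multipoles, which matters for near-neutral clusters of water charges; this sketch keeps only the monopole.

```python
import math
from collections import defaultdict

def coarsen_charges(charges, r_cut=15.0, cell=10.0, qm_center=(0.0, 0.0, 0.0)):
    """Hypothetical helper (not Psi4 API): keep charges within r_cut of the
    QM region exact, and lump each far-away cube of side `cell` into a single
    monopole placed at its |q|-weighted centroid.

    `charges` is a list of (q, x, y, z) tuples; returns the same format.
    """
    kept, cells = [], defaultdict(list)
    for q, x, y, z in charges:
        if math.dist((x, y, z), qm_center) < r_cut:
            kept.append((q, x, y, z))          # near charges stay exact
        else:
            key = (math.floor(x / cell), math.floor(y / cell),
                   math.floor(z / cell))
            cells[key].append((q, x, y, z))    # bin far charges by cube
    for group in cells.values():
        q_tot = sum(q for q, *_ in group)
        w_sum = sum(abs(q) for q, *_ in group) or 1.0
        cx = sum(abs(q) * x for q, x, _, _ in group) / w_sum
        cy = sum(abs(q) * y for q, _, y, _ in group) / w_sum
        cz = sum(abs(q) * z for q, _, _, z in group) / w_sum
        kept.append((q_tot, cx, cy, cz))       # one monopole per far cube
    return kept
```

The coarsened list could then be fed to `MM_charges.extern.addCharge` in place of the full set; total charge is conserved exactly, but the accuracy/speed trade-off (choice of `r_cut`, `cell`, and retained multipole order) would need validation against the exact potential.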

@ccavender Happy to help out. Can you direct message me an email address I can add to the Slack?