Development proposal for OEProp

Hi everyone,

I would like to implement a faster way to do ESP fits than what is found here:

The solution linked above writes multiple grids to disk, whereas the strength of Psi4's Python and C++ APIs should be keeping these grids in memory and allowing fast, multithreaded evaluations.

My development proposal would be a little more involved, but it would also give the Psi4 API a lot more power. I would split the existing OEProp class into:

  1. OEPropCalc
    This class would take over all of the existing logic for calculating the requested properties. Most properties are already computed as SharedMatrix objects. This class would not produce any output of its own unless a bulk property (such as a grid) is requested.
  2. OEProp
    This class would keep all output handling: it would only call the respective OEPropCalc functions and then reuse the existing output code. Data would be passed between OEPropCalc and OEProp as shared pointers and therefore incur no copying overhead. The exception would be the grid-writing functions, which would get both an in-memory and a direct-write solution. A rough sketch of this split follows below.
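
To make the split concrete, here is a minimal Python-style sketch of what the two classes could look like. The actual implementation would be in C++, and all method names below are placeholders rather than existing Psi4 API:

```python
# A minimal, hypothetical sketch of the proposed split. The real classes
# would live in C++; every method name here is a placeholder, not existing
# Psi4 API.

class OEPropCalc:
    """Computation only: returns each property as a (Shared)Matrix-like
    object and never prints or writes files on its own."""

    def __init__(self, wfn):
        self.wfn = wfn

    def mulliken_charges(self):
        # ...existing Mulliken logic, returning a matrix of per-atom charges...
        raise NotImplementedError

    def multipoles(self, order=2):
        # ...existing multipole logic, returning the moments as a matrix...
        raise NotImplementedError


class OEProp:
    """Output handling only: delegates all computation to OEPropCalc and
    reuses the existing printing / wfn.set_array code."""

    def __init__(self, wfn):
        self.calc = OEPropCalc(wfn)

    def print_mulliken_charges(self):
        charges = self.calc.mulliken_charges()  # shared data, no copy
        # ...existing output code (stdout/outfile, wfn.set_array) goes here...
        return charges
```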

Reason:
Currently, OEProp sometimes exposes data in memory (e.g. Mulliken charges via wfn.set_array) and sometimes not (e.g. multipoles, which are only available via stdout or the output file). The split above would let internal code use SharedMatrix objects directly (OEPropCalc would get a public Python API) while keeping the existing functionality untouched.
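
As a rough illustration of the difference (the OEPropCalc lines at the end are hypothetical and only show the intended shape of the new API, not existing Psi4 code):

```python
import psi4

psi4.geometry("""
O
H 1 0.96
H 1 0.96 2 104.5
""")
_, wfn = psi4.energy("scf/sto-3g", return_wfn=True)

# Today: everything goes through oeprop, and results are exposed unevenly --
# Mulliken charges land on the wavefunction via wfn.set_array, while the
# multipoles only appear in the output file.
psi4.oeprop(wfn, "MULLIKEN_CHARGES", "DIPOLE")

# Proposed: OEPropCalc would return every property as a SharedMatrix directly.
# calc = OEPropCalc(wfn)              # hypothetical class
# charges = calc.mulliken_charges()   # matrix of per-atom charges
# dipole = calc.multipoles(order=1)   # dipole moment as a matrix
```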

Once the split is done, I would also split the GridEvaluation routines so they can either write a grid-XXX.dat file (the existing functionality) or keep the grids in memory (new functionality), generating them from a grid taken from memory via the existing GridIterator interface.
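
A sketch of how the two grid paths could look from the Python side (again, the in-memory call at the end is a hypothetical placeholder, not existing functionality):

```python
import numpy as np
import psi4

psi4.geometry("""
O
H 1 0.96
H 1 0.96 2 104.5
""")
_, wfn = psi4.energy("scf/sto-3g", return_wfn=True)

# Existing path: psi4.oeprop(wfn, "GRID_ESP") reads the points from a
# grid.dat file and writes the result to grid_esp.dat on disk.

# Proposed path: hand the grid over as an in-memory array and receive the
# ESP values back in memory, evaluated through the GridIterator interface.
points = np.random.default_rng(0).uniform(-2.0, 2.0, size=(1000, 3))
# esp = OEPropCalc(wfn).grid_esp(points)   # hypothetical call
# assert esp.shape == (points.shape[0],)
```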

Are there any objections? I am also happy to discuss this further on Slack. Could I get an invite sent to the email address of my forum account?

Best,
Timo

Thanks for the proposal! Perhaps copy your text into a GitHub issue so the public dev discussion is preserved. A Slack invite has been sent to your email.

Thank you, I have joined the community and opened the issue here.

To everybody looking at this thread: Please join the discussion over here:
