I’m new to Psi4, using it as a Python module, and was wondering whether there is anything I can do to speed up the calculations other than using more cores. I’ve attached my code below, which calculates reorganisation energies; I’ve been testing it on a UFF-optimised pentacene molecule with 20 cores.
Gaussian09 can complete the job in 11 minutes, whilst Psi4 takes over 2 hours, with essentially all the extra time due to optimising the pentacene with a -1 charge and a multiplicity of 2.
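For context, the quantity the script computes is the standard four-point (adiabatic potential) reorganisation energy. A minimal standalone sketch of just that arithmetic, with made-up energies rather than my actual Psi4 calls:

```python
# Four-point estimate of the reorganisation energy.
# Each E(state, geometry) would come from a separate single-point
# calculation in the real workflow; the numbers below are illustrative only.

HARTREE_TO_EV = 27.211386245988  # CODATA 2018 conversion factor

def reorganisation_energy(e_n_at_n, e_n_at_a, e_a_at_a, e_a_at_n):
    """lambda = [E_neutral(anion geom) - E_neutral(neutral geom)]
              + [E_anion(neutral geom) - E_anion(anion geom)], in eV.

    Arguments are total energies in hartree:
      e_n_at_n : neutral state at the neutral geometry
      e_n_at_a : neutral state at the anion geometry
      e_a_at_a : anion state at the anion geometry
      e_a_at_n : anion state at the neutral geometry
    """
    lam_1 = e_n_at_a - e_n_at_n
    lam_2 = e_a_at_n - e_a_at_a
    return (lam_1 + lam_2) * HARTREE_TO_EV

# Made-up energies, not real pentacene values:
print(round(reorganisation_energy(-845.100, -845.098, -845.150, -845.147), 3))  # → 0.136
```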
We’d need output files from Psi4 at least, and preferably from Gaussian as well. You should have a timer.dat file, and that would also help. If you aren’t converging your molecules to the same tightness, that would explain it immediately.
My naive guess is that Gaussian requires fewer geometry steps to optimize the anion, but again, I can’t tell without two output files to compare.
I’ll add that we learned (literally yesterday) about a part of the SCF gradient code with negative threading efficiency due to unintended extra work. Once that’s fixed, the Psi timings will need to be updated.
Thank you for your reply. I have attached all the outputs for the psi4_reorg.py script; the main one is opt_-1_out.txt for the anion optimisation. Also attached is the matching Gaussian09 output for the anion optimisation.
One other question that has come to mind: I believe the B3LYP implementation in Gaussian differs from that in other programs; would that have any impact here?
All the uploaded files have their original file extensions as underscores in the name and are saved as .txt to upload here. The slurm output is for the Psi4 calculation.
Hi, Thank you for your reply. I just realised I reported the wrong timings for my scripts.
Running pentacene through my psi4_reorg.py script took 1:27:07 on 20 cores, not over 2 hours as I previously stated, compared with the 11 minutes for the version of my script that calls Gaussian09 for the calculations. Sorry for being misleading in that regard; the molecule that took over 2 hours (2 hours 50 minutes) with Psi4 and 38 minutes with Gaussian09 was a different one, made using a generative method as part of my workflow.
However, the same optimisation of the -1 charged, multiplicity-2 pentacene did take 68 minutes with Psi4 compared to 6 minutes with Gaussian09, so it was still very slow comparatively.
The memory used in opt_-1_out.txt is the default 500 MB. I think Psi4 is just a little starved for memory here.
We recommend running optimizations in internal coordinates if possible. There are cases where a torsion composed of two linear bends will fail and Cartesians are required, but it’s usually best to optimize in internals until that is encountered and then switch.
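In outline, the two changes look like this in a Psithon-style input; the memory figure, geometry, and method below are placeholders, not the actual file from this thread:

```
memory 8 GB

molecule pentacene_anion {
  -1 2
  # ... pentacene geometry goes here ...
}

set {
  reference uks
  opt_coordinates internal
}

optimize('b3lyp/6-31g*')
```

From the Python module, the equivalent settings go through `psi4.set_memory("8 GB")` and `psi4.set_options({"opt_coordinates": "internal"})` before calling `psi4.optimize`.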
I ran your system quickly on my old laptop with those two changes. The time is down to 11 minutes, and the geometry converged in 4 steps. Here’s the input file I used. Hopefully the comparison will be even better on your hardware.
Thanks for the suggestion. I increased the memory to the maximum on my HPC, and with internal coordinates the time has dropped to 15 minutes on 20 cores. I was using Cartesian coordinates because I had encountered the stated issue with other molecules I was testing, but I will do as you suggested.
Is there an efficiency difference between running Psi4 as an executable, writing the input in the Psithon format, versus importing psi4 as a Python package, which is what I am currently doing?
I will look at the grid and convergence criteria to try to improve timings further.
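In case it helps anyone else reading the thread, those knobs are all reachable through the options interface from the Python module. The values shown are illustrative, not recommendations; `g_convergence: gau` requests Gaussian-style geometry convergence criteria:

```python
import psi4

psi4.set_options({
    # DFT quadrature grid density (radial shells x angular points per shell)
    "dft_radial_points": 75,
    "dft_spherical_points": 302,
    # Use Gaussian-style geometry-optimization convergence criteria,
    # so the two programs converge the structure to a comparable tightness
    "g_convergence": "gau",
})
```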