Psi4 cartesian coordinates precision from same starting point geometry


I’m wondering about the optking optimiser in Psi4: is there a random element to its steps? Ideally I’m looking for each optimisation call to be deterministic to a higher number of significant figures.

I have run some optimisations using B3LYP/6-311G** with default settings other than "G_CONVERGENCE": "GAU", using Psi4 1.7+6ce35a5 and optking 0.2.1 (installed via conda), from the same starting geometry ten times.

The optimisation times range from 2 min 57 s to 4 min 29 s, completing in 5 steps each time, with the same convergence values reached in all runs except RMS Disp = 2.34e-04 in 4 of the 10 optimisations.

	   Step    Total Energy     Delta E     Max Force     RMS Force      Max Disp      RMS Disp   
	  Convergence Criteria              o    4.50e-04 *    3.00e-04 *    1.80e-03 *    1.20e-03 *
	     5    -879.07114345   -1.19e-05 o    2.44e-04 *    5.37e-05 *    8.41e-04 *    2.33e-04 *  ~

Each time, the coordinates in the output .xyz files match to 3, usually 4, decimal places (e.g. 1.234) in the final geometries and then differ beyond that. In comparison, using Gaussian09 from the same starting point, the final geometries in the xyz out of the log file match to the 12th decimal place.

Is there a way to get the same level of precision with Psi4? Setting "G_CONVERGENCE": "GAU_TIGHT" doesn’t seem to have an effect on this.

Code used below:

import psi4

with open("", "r") as r:
    xyz = r.read()

psi4.set_options({"G_CONVERGENCE": "GAU",
                  "D_CONVERGENCE": 6,
                  "E_CONVERGENCE": 6})

for n in range(10):
    mol = psi4.core.Molecule.from_string(xyz, fix_com=True, fix_orientation=True)
    energy = psi4.optimize("B3LYP", molecule=mol)
    xyz_to_save = str(mol.natom()) + "\n"
    xyz_to_save += mol.save_string_xyz()
    with open(f"LGA_optd-{n+1}.xyz", "w") as saved_xyz:
        saved_xyz.write(xyz_to_save)
Is there a reason why you set E_CONVERGENCE and D_CONVERGENCE to 10^-6? Psi4’s default for these values in geometry optimizations is 10^-8, and the behavior you described could be attributable to simply not having enough precision in your SCF solution.
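To spell that out, a minimal sketch of dropping the overrides (so psi4.optimize() falls back to its own tighter 1e-8 defaults for both):

```python
import psi4

# Sketch: keep only the geometry-convergence choice; with no E_/D_CONVERGENCE
# overrides, the optimizer applies its default 1e-8 SCF convergence.
psi4.set_options({"G_CONVERGENCE": "GAU"})
```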

Ahh thank you for spotting that, I think I left them in from some testing I was doing on single point calculations recently. I’ll run the calculations again now and see what that changes.


Even after removing the E_CONVERGENCE and D_CONVERGENCE options, the difference in the final xyz geometries persists to the same degree. The optimisations still complete in 5 steps and the convergence values match each other, again with a few minor differences.

	   Step    Total Energy     Delta E     Max Force     RMS Force      Max Disp      RMS Disp   
	  Convergence Criteria              o    4.50e-04 *    3.00e-04 *    1.80e-03 *    1.20e-03 *
	     5    -879.07114349   -1.18e-05 o    2.42e-04 *    5.34e-05 *    8.37e-04 *    2.34e-04 *  ~

One thing I’ve noticed is that the geometries out of the first optimisation call are already starting to differ.

Geometry questions means @AlexanderH

slurm-2516692.txt (3.3 MB)
Here’s the output for the calculations.

Just following up on this to say I’ve still not managed to resolve the issue; my repeat optimisations of molecules still differ.

I’ve tried a fresh conda environment with just the above-mentioned packages on Python 3.8, default settings, tighter SCF at 10^-10, tighter G_CONVERGENCE options, using Cartesian coordinates, varying the initial geometry’s precision, and setting dertype="energy".

Currently looking at using psi4 as an external calculator with ASE to see if that becomes more reproducible but I was wondering if anyone had any other potential suggestions for me to explore?


Having spent more time on it today, I realised I hadn’t tried running optimisations on a single core, having always used 4 or 8 cores.

With the default settings and the same functional and basis set as before, I set psi4.core.set_num_threads(1); on 21 test molecule geometries they all optimise to exactly the same structure and energy, whereas at higher core counts the variation is introduced.
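For reference, the single-core pinning looks like this (a sketch; everything else as in the earlier script):

```python
import psi4

# Pin Psi4 to one thread: with a fixed thread count of 1 the floating-point
# summation order is deterministic, so repeated runs reproduce exactly.
psi4.core.set_num_threads(1)
```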

I have tried this only on my university’s HPC and will now see if the same problem exists on my local machine and another HPC I have access to.


Checked the calculation locally, and had someone running the same conda env on another HPC check single core vs multicore: the same behaviour persists, with agreement between single-core runs and differences between multicore runs. It looks like the precision issue isn’t in the geometry optimisation itself but potentially in the SCF energy calculation, since multicore single points also disagree with each other while single-core single points match.

I am not sure I understand all the issues in this post.
There are precision problems with the XYZ coordinates? And/Or precision problems with the SCF energies? Concerns about differences to Gaussian?

For tight geometry convergence one needs to increase the default DFT grid:
dft_spherical_points 590
dft_radial_points 99
dft_pruning_scheme robust
is roughly Gaussian’s default UltraFine.
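In Psi4 option syntax, that suggestion would look roughly like this (a sketch; check the option names against your Psi4 version’s manual):

```python
import psi4

# Roughly Gaussian's UltraFine grid: 99 radial shells, 590 angular points
psi4.set_options({
    "DFT_SPHERICAL_POINTS": 590,
    "DFT_RADIAL_POINTS": 99,
    "DFT_PRUNING_SCHEME": "ROBUST",
})
```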

Hi, apologies for the confusion.

Essentially, my issue is that I’m trying to optimise molecules in a reproducible way and calculate their energies. When I use psi4.core.set_num_threads(1), every time I run the optimisation I get the same final geometries to all the decimal places recorded and exactly the same final energies. However, when I run the optimisations with more than one core, the later decimal places do not match. I tested it on a set of 20 molecules I’ve previously benchmarked with; the worst case in that set agreed only to 2 decimal places, with the variation appearing in the 3rd, whereas on a single core they agree exactly.

Before I spotted the core dependence I tried a few different setting options to fix this, and in doing so, looking further into the output files, I spotted that the optimisations were different when using multiple cores but the same on a single core, and the first place I could see a numerical difference was in the initial SCF call. I therefore started doing some standalone single-point calculations on an ethane molecule I drew and optimised using UFF in Avogadro, and saw the behaviour is present not just for optimisations but for single points on their own. Using 1 core gives the same answer each time and each SCF iteration is identical (although the final answer differs by machine, e.g. my uni’s HPC gives a different answer to my local machine), but using multiple cores the values change.

The values are similar to each other, with energies differing only past 10^-8, but my current assumption is that this slight variation causes the optimisations to settle into slightly different final structures. Another assumption I have is that it is the initial guess on multiple cores that causes the problem.
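To put a number on that spread, a small pure-Python helper (hypothetical, not part of Psi4) applied to the final energies collected from repeated runs:

```python
def energy_spread(energies):
    """Largest pairwise difference (hartree) across repeated runs."""
    return max(energies) - min(energies)

# Final SCF energies of the ethane single point on 1 core vs 2 cores:
runs = [-79.85440075100358, -79.85440075101835]
print(f"{energy_spread(runs):.1e}")  # spread is around 1e-11, well below 1e-8
```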

As an example using the below code:

import psi4

with open("", "r") as r:
    xyz = r.read()
mol = psi4.core.Molecule.from_string(xyz, fix_com=True, fix_orientation=True)
energy1 = psi4.energy("B3LYP", molecule=mol)

On single core, on my uni’s HPC, every time I get:

  ==> Iterations <==

                           Total Energy        Delta E     RMS |[F,P]|

   @DF-RKS iter SAD:   -79.28597606028181   -7.92860e+01   0.00000e+00 
   @DF-RKS iter   1:   -79.51533823718793   -2.29362e-01   1.12809e-02 DIIS/ADIIS
   @DF-RKS iter   2:   -79.29626396134302    2.19074e-01   1.36716e-02 DIIS/ADIIS
   @DF-RKS iter   3:   -79.85384690728119   -5.57583e-01   4.52700e-04 DIIS/ADIIS
   @DF-RKS iter   4:   -79.85436888645091   -5.21979e-04   1.14961e-04 DIIS/ADIIS
   @DF-RKS iter   5:   -79.85440013387330   -3.12474e-05   1.48135e-05 DIIS
   @DF-RKS iter   6:   -79.85440075100358   -6.17130e-07   5.32521e-07 DIIS
  Energy and wave function converged.

Whereas if I set psi4.core.set_num_threads(2) it changes marginally every time, such as:

  ==> Iterations <==

                           Total Energy        Delta E     RMS |[F,P]|

   @DF-RKS iter SAD:   -79.28597606029641   -7.92860e+01   0.00000e+00 
   @DF-RKS iter   1:   -79.51533823720237   -2.29362e-01   1.12809e-02 DIIS/ADIIS
   @DF-RKS iter   2:   -79.29626396135734    2.19074e-01   1.36716e-02 DIIS/ADIIS
   @DF-RKS iter   3:   -79.85384690729585   -5.57583e-01   4.52700e-04 DIIS/ADIIS
   @DF-RKS iter   4:   -79.85436888646554   -5.21979e-04   1.14961e-04 DIIS/ADIIS
   @DF-RKS iter   5:   -79.85440013388795   -3.12474e-05   1.48135e-05 DIIS
   @DF-RKS iter   6:   -79.85440075101835   -6.17130e-07   5.32521e-07 DIIS
  Energy and wave function converged.

The Delta E and RMS at each iteration appear to be the same; just the initial starting point from SAD differs on multiple cores. I’ve tried other initial guess methods such as HUCKEL and SAP so far with the same result, and also tried using energy1 = psi4.energy("HF", molecule=mol) to see if it was a DFT issue, and see the same thing: single core gives the same answer each time but multi-core differs.

My end goal is that I need the energies to match to at least 10^-6, which they already do, so that’s fine, but the final geometries also need to match to 6 decimal places, and ideally I need to trust I will always get that level of precision for further calculations. This is to go into a high-throughput workflow, so while I could just run all the calculations on a single core, which is the current plan, using multiple cores would be the ideal in terms of time efficiency.
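One pragmatic option for the geometry side is rounding the saved coordinates; a sketch (pure Python, hypothetical helper, not a Psi4 API) that rounds an XYZ body to 6 decimal places:

```python
def round_xyz(xyz_body, ndigits=6):
    """Round the coordinate columns of an XYZ body (symbol x y z per line)."""
    out = []
    for line in xyz_body.strip().splitlines():
        parts = line.split()
        if len(parts) == 4:  # element symbol plus three coordinates
            sym, *coords = parts
            out.append(sym + "".join(f" {float(c):.{ndigits}f}" for c in coords))
        else:                # charge/multiplicity or comment lines pass through
            out.append(line)
    return "\n".join(out)

print(round_xyz("C -2.279916164007 1.887828600647 0.025471454846"))
# C -2.279916 1.887829 0.025471
```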

Hopefully that makes things clearer. The reason I mentioned Gaussian09 is not that I need the results to agree exactly; I’m moving away from Gaussian for licensing reasons, so I was trying to make sure Psi4 optimises to the same minima. I don’t need them to match necessarily, but with Gaussian09, even on multiple cores, I get exactly the same values each time, so I was using it as an example of the ideal behaviour.

All the best,

Here are the outputs; calc 1 is single core, calc 2 is 2 cores.
calc_2.txt (10.3 KB)
calc_1.txt (10.3 KB)

Also, in terms of optimising:
Optimising on a single core and printing mol.save_string_xyz() always gives:

0 1
 C   -2.279916164007    1.887828600647    0.025471454846
 C   -1.685846340679    0.494311388073   -0.193853962972
 H   -1.391888146039    0.346490827884   -1.236673891198
 H   -2.404912918253   -0.289451599155    0.059914649277
 H   -0.797214227943    0.338629802012    0.424070866722
 H   -3.168548308641    2.043510189461   -0.592453290169
 H   -1.560849604459    2.671591589116   -0.228297236870
 H   -2.573874289980    2.035649201963    1.068291410364

Whilst on multiple cores it varies such as:

0 1
 C   -2.279916164625    1.887828601046    0.025471453760
 C   -1.685846340106    0.494311387716   -0.193853961896
 H   -1.391888146440    0.346490827992   -1.236673890339
 H   -2.404912918659   -0.289451598625    0.059914648473
 H   -0.797214226551    0.338629801714    0.424070866609
 H   -3.168548309986    2.043510189787   -0.592453290010
 H   -1.560849604057    2.671591588546   -0.228297236030
 H   -2.573874289575    2.035649201825    1.068291409432

The differences in this case are around 10^-9, which would be fine if all molecules had this level of precision, as I could happily round each coordinate to 6 decimal places, but as I’ve said, it depends a lot on the molecule being optimised.
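To quantify "around 10^-9", a small comparison helper (pure Python sketch, hypothetical) over two XYZ bodies:

```python
def max_coord_deviation(xyz_a, xyz_b):
    """Largest absolute per-coordinate difference between two XYZ bodies."""
    def coords(body):
        vals = []
        for line in body.strip().splitlines():
            parts = line.split()
            if len(parts) == 4:  # skip charge/multiplicity or comment lines
                vals.extend(float(p) for p in parts[1:])
        return vals
    return max(abs(a - b) for a, b in zip(coords(xyz_a), coords(xyz_b)))

# First carbon from the single-core vs multi-core geometries above:
a = "C -2.279916164007 1.887828600647 0.025471454846"
b = "C -2.279916164625 1.887828601046 0.025471453760"
print(f"{max_coord_deviation(a, b):.1e}")  # on the order of 1e-9
```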