sSAPT0 issues: out of disk

Hi there. Recently I have been using Psi4 to decompose the interaction energy between two molecules, but this system (216 atoms) is too large to calculate because I run out of hard disk space. The disk I am using is 2 TB, while the calculation needs more. Could you give me some suggestions for changing the default configuration to reduce the disk requirement for this system? I would appreciate your help.
Here is the output:

Basis Set: JUN-CC-PVDZ
Number of shells: 1478
Number of basis function: 3230
Number of Cartesian functions: 3394
Spherical Harmonics?: true
Max angular momentum: 2

CHF Iterations converged

Ind20,r (A<-B)      =    -0.044691009802 [Eh]
Ind20,r (B<-A)      =    -0.038255718015 [Eh]
Ind20,r             =    -0.082946727817 [Eh]
Exch-Ind20,r (A<-B) =     0.021884140618 [Eh]
Exch-Ind20,r (B<-A) =     0.037050211884 [Eh]
Exch-Ind20,r        =     0.058934352502 [Eh]
Disp20              =    -0.231213037229 [Eh]
Disp20 (SS)         =    -0.115606518614 [Eh]
Disp20 (OS)         =    -0.115606518614 [Eh]

Traceback (most recent call last):
File "/home/Aridea/psi4conda/bin/psi4", line 269, in
File "", line 258, in
File "/home/Aridea/psi4conda/lib//python3.6/site-packages/psi4/driver/", line 492, in energy
wfn = procedures['energy'][lowername](lowername, molecule=molecule, **kwargs)
File "/home/Aridea/psi4conda/lib//python3.6/site-packages/psi4/driver/procrouting/", line 3438, in run_sapt
e_sapt = core.sapt(dimer_wfn, monomerA_wfn, monomerB_wfn)

Fatal Error: PSIO Error
Error occurred in file: /scratch/psilocaluser/conda-builds/psi4-multiout_1530822628409/work/psi4/src/psi4/libpsio/ on line: 129
The most recent 5 function calls were:

psi::PSIO::rw(unsigned long, char*, psi::psio_address, unsigned long, int)

Psi4 stopped on: Thursday, 07 November 2019 04:33PM
Psi4 wall time for execution: 6 days, 9:36:55.98

*** Psi4 encountered an error. Buy a developer more coffee!
*** Resources and help at

PSIO Error usually indicates that your system has run out of either memory or disk space.

It would be helpful if you posted your input file (everything except the molecular geometry, since your system is large). If you haven't already, include set scf_type df to use memory-efficient density-fitted integrals for the SCF calculation.

Another common problem is failing to tell Psi4 where to write your scratch files. 2 TB of disk space should be plenty for this size molecule (I’ve run systems with ~200 atoms using a ~200 GB scratch directory). But the default behavior is to write to /tmp, which may not be very big. Set the environment variable PSI_SCRATCH to an absolute path to the directory where you want Psi4 to write scratch files. If you are submitting the job to a remote computing cluster, make sure that this directory is on the local node so that the executable is not accessing these files over the network.
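To make the scratch-location advice concrete, here is a minimal sketch (the path is a placeholder, not from this thread) of preparing a scratch directory and exporting PSI_SCRATCH from Python before Psi4 is launched. On a cluster you would point this at a large, node-local disk instead:

```python
import os
import tempfile

# Sketch only: PSI_SCRATCH must be in the environment *before* Psi4
# starts, e.g. set in your shell profile or batch job script.
# The path below is a placeholder; on a cluster, use a large,
# node-local filesystem rather than a network mount.
scratch = os.path.join(tempfile.gettempdir(), "psi4_scratch")
os.makedirs(scratch, exist_ok=True)   # make sure the directory exists
os.environ["PSI_SCRATCH"] = scratch   # Psi4 reads this at startup
print(os.environ["PSI_SCRATCH"])
```

The equivalent shell line is simply export PSI_SCRATCH=/path/to/big/local/disk, which is what Psi4 actually reads when it starts.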


Hi ccavender, I truly appreciate your timely help; many thanks.
These are the environment variables I use:
export PATH="/home/Aridea/psi4conda/bin:$PATH"
export PSI_SCRATCH="/home/Aridea/psi4conda/scratch"

this is my input file:
molecule DIMER {
0 1
C 7.10606394 -3.09166388 -1.67257573
…
--
-2 1
N 6.36131221 -0.26933893 2.22217075

H -5.58215766 -2.30564947 1.85956622
units angstrom
symmetry c1
}

set globals {
basis jun-cc-pVDZ
df_basis_scf jun-cc-pvdz-jkfit
df_basis_sapt jun-cc-pvdz-ri
guess sad
scf_type df
}

set sapt {
print 1
nat_orbs_t2 true
freeze_core true
}

memory 60 GB

During the calculation, I noticed that some of my scratch files grow to 200-400 GiB, which is much larger than yours. I wonder whether we are using different basis sets, so that my 2.1 TB of free space is not enough for the energy decomposition of this system. I don't know what to do next.

I think your problem may be the amount of memory. For my system of 191 atoms, I required 700 GiB of memory for in-core AOs. This was an F-SAPT calculation, so it is strictly a different module in Psi4 than sSAPT0, but the code should be doing similar things.

If you don’t have enough memory for in-core AOs, I think Psi4 will try to write them out to the scratch directory (which is why your scratch files are so large). But other parts of the SAPT calculation must be done in memory, and 60 GB is likely not enough for a system as large as yours.
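A back-of-the-envelope estimate shows why the scratch files reach hundreds of GiB at this system size. The basis-function count below comes from the output earlier in the thread, but the auxiliary-to-primary basis ratio is my rough assumption, not an exact count for jun-cc-pVDZ-RI:

```python
# Sketch: disk footprint of a single density-fitted three-index
# tensor (Q|pq), stored as double-precision floats.
nbf = 3230                      # basis functions (from the output above)
naux = 4 * nbf                  # assumed auxiliary basis size (~4x nbf)
pairs = nbf * (nbf + 1) // 2    # unique pq pairs (symmetric in p,q)
nbytes = naux * pairs * 8       # 8 bytes per double
print(f"~{nbytes / 2**30:.0f} GiB per (Q|pq) tensor")
```

With these assumed numbers, a single such tensor is already around 500 GiB, which is consistent with the 200-400 GiB scratch files reported above and with exceeding 2 TB once several intermediates of this size exist on disk at the same time.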

If you don’t have access to a machine with more memory (my guess is 0.5 TB to 1 TB RAM for your system), then you’ll have to think carefully about how to truncate your system in a way that still lets you interrogate the interaction you are studying.

Hi ccavender, it's very kind of you to help me. I think you are right; that is why reading these scratch files took most of my calculation time. I used to be very confused about this, but now everything is clear. Today I bought a 4 TB hard disk, and I hope I can finish the calculation with it. Finally, I'd like to thank you for all you have done.
Best wishes!

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.