In the 1.5 dev version of Psi4 (installed as per my post here), Psi4 is failing with the error:
PSIO_ERROR: unit = 193, errval = 8
when attempting to run SAPT0 calculations that completed successfully in previous versions of Psi4. In particular, the calculation I am attempting ran previously with Psi4 1.3.2 on WSL.
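For context, the input follows the usual psithon SAPT0 pattern (this is a schematic sketch only; the dimer geometry and basis below are placeholders, not my actual system):

```
memory 4 GB

molecule dimer {
0 1
# monomer A (placeholder geometry)
O  0.00 0.00  0.00
H  0.00 0.00  0.96
H  0.93 0.00 -0.26
--
0 1
# monomer B (placeholder geometry)
He 3.00 0.00  0.00
}

set basis jun-cc-pvdz
energy('sapt0')
```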
The last few lines of the output file are as follows:
SAPT0
Ed Hohenstein
6 June 2009
Orbital Information
NSO = 336
NMO = 336
NRI = 1116
NOCC A = 37
NOCC B = 13
FOCC A = 9
FOCC B = 3
NVIR A = 299
NVIR B = 323
The only error message in the Anaconda prompt is PSIO_ERROR: unit = 193, errval = 8
Any ideas what is going wrong?
Tried changing the scratch directory from the default; no success.
I had memory 4 GB set at the top of my input file. Removing this line entirely, or setting a larger value (5, 6, or 40 GB), allows the calculation to proceed past the point above with no error message. Basically, anything greater than 4 GB of memory works, as does the default 500 MB…
With the default 500 MB, the calculation appears to use ~500 MB. When setting a value larger than 4 GB, memory use tops out at around 1.1 GB.
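For reference, the only change between the failing and working runs is the standard psithon memory directive at the top of the input (the 5 GB value here is one of the settings that worked; this is a fragment, not my full input):

```
# fails:
memory 4 GB

# works (as does removing the line, i.e. the 500 MB default):
memory 5 GB
```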
Although the message you reported is the only error message explicitly written to stderr, there is also a throw statement that would have been triggered, and that is not mentioned in your report.
errval = 8 means Psi4 is trying to close a file that is already closed. Although the file opening and closing logic may depend on memory estimates, an error like this shouldn't depend directly on memory allocation. unit = 193 indicates this concerns the alpha-alpha block of density-fitted integrals.
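As an analogy for what errval = 8 reports (this is plain Python against OS file descriptors, not Psi4/PSIO code): closing a descriptor that has already been closed is an error at the OS level, and PSIO flags the same kind of double close in its own unit bookkeeping.

```python
import os
import tempfile

# Open a scratch file and close it once -- this succeeds.
fd, path = tempfile.mkstemp()
os.close(fd)

# Closing the same descriptor again fails, because it is no longer valid.
try:
    os.close(fd)
except OSError as exc:
    print("double close detected:", exc.errno)
finally:
    os.remove(path)
```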
That’s about as far as I can go without an input or output file that demonstrates the bug.
There is no other error or statement associated with this issue, neither in the Anaconda prompt nor in the output file.
Using --verbose doesn't give anything extra either.
When I try to reproduce this on my Mac, I can’t, but you already said this ran fine on WSL, so this isn’t very informative. Unfortunately, I don’t have a Windows machine, so the best I can do is transfer this to our issue tracker, where the SAPT developers will be more likely to see this.
Can you try again without -n 10? I’m curious if parallelization is involved.
I also have a much more complex SAPT calculation that I'm trying to run. No memory value I have found so far allows it to complete: 4, 8, 25, and 40 GB all fail with the same error.
I have 1.4.1 and 1.5a1 on a Linux box, where this runs without error, so I will switch to that for the time being.