Segmentation fault

Hello!
I’ve installed the Psi4 binary on a cluster with the command `conda create -n p4env psi4 -c psi4`.

The installation appeared to work fine, but when I try to run a job it always ends with a segmentation fault:

```
/var/tmp/slurmd.spool/job1627612/slurm_script: line 42: 10685 Segmentation fault psi4 -i psi4.in -o psi4.out &>psi4.log
```

When I run the same calculation on a desktop, it finishes normally.

My input file `psi4.in` is as follows:

```
import psi4

set_num_threads(64)
set_memory('128000 MB')

set { scf_mem_safety_factor 0.7 }

set { reference rks }

basis = ''
basis += 'assign H 6-311++G**\n'
basis += 'assign C 6-311++G**\n'
basis += 'assign O 6-311++G**\n'
psi4.basis_helper(basis, name='custom')

molecule MOL {
    -1 1
    noreorient
    nocom
    symmetry c1
    C   26.165710 -36.696987 -26.609535
    C   27.428484 -36.084805 -26.573191
    O   28.355934 -36.492313 -25.650085
    C   27.734941 -35.056667 -27.488134
    O   28.967680 -34.466103 -27.468262
    C   26.772093 -34.638523 -28.419567
    C   25.505253 -35.244923 -28.443604
    C   25.196342 -36.279106 -27.540064
    C   23.824694 -36.947140 -27.569092
    C   22.873039 -36.422535 -26.458328
    C   23.175840 -37.051537 -25.069843
    O   24.045397 -36.504314 -24.411991
    O   22.431618 -37.917103 -24.639688
    O   21.477036 -36.571507 -26.868505
    C   20.912132 -37.778606 -27.182039
    O   21.518299 -38.801369 -27.459547
    C   19.395725 -37.628117 -27.306168
    C   18.683258 -38.930668 -27.742458
    C   17.178164 -38.738014 -27.882532
    C   16.645807 -38.163570 -29.052082
    C   15.258584 -37.987232 -29.189877
    O   14.751235 -37.429531 -30.335291
    C   14.394855 -38.387581 -28.149080
    O   13.044326 -38.224342 -28.277393
    C   14.923571 -38.955395 -26.979425
    C   16.310966 -39.129848 -26.845743
    H   25.932665 -37.494396 -25.908901
    H   27.007076 -33.842934 -29.120117
    H   24.763601 -34.911133 -29.163589
    H   23.356653 -36.787712 -28.548212
    H   23.950626 -38.030964 -27.457405
    H   23.067690 -35.350475 -26.336512
    H   17.311459 -37.855583 -29.853333
    H   14.258064 -39.260048 -26.176981
    H   16.711885 -39.568733 -25.935806
    H   29.446209 -34.892387 -26.731764
    H   27.887402 -37.046555 -24.998480
    H   15.491869 -37.190002 -30.919352
    H   12.904418 -37.812252 -29.150951
    H   19.103634 -39.271408 -28.697565
    H   18.898714 -39.720985 -27.011234
    H   19.188303 -36.824551 -28.023443
    H   19.008739 -37.298046 -26.333832
}

energy, wfn = energy('wB97X-D', return_wfn=True)

oeprop(wfn, 'DIPOLE', 'QUADRUPOLE', 'MULLIKEN_CHARGES')
oeprop(wfn, 'GRID_ESP')

with open('psi4out.xyz', 'w') as f:
    f.write('43 ' )
    f.write('%.12f\n' % energy)
    f.write(MOL.save_string_xyz())
```

Thank you for any help.

Please read this topic for best practices on asking questions. In particular, always add triple backticks ``` to input files that you post.

I’ve enabled you to upload files. Please send a copy of the output file and any other error messages.

Also, you can simplify your basis setup to just `set basis 6-311++G**`.
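For example, a minimal sketch of that change, with the rest of the input left untouched:

```
# Equivalent to the per-element assign lines built with basis_helper above,
# since every element is getting the same basis set.
set basis 6-311++G**
```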

Sorry for any inconvenience. My Psi4 version is 1.3.2.

`which conda python psi4` returns:

```
/opt/npad/shared/softwares/python/3.6-anaconda-5.0.1/condabin/conda
~/.conda/envs/p4env/bin/python
~/.conda/envs/p4env/bin/psi4
```

and I’m attaching the result of `conda list` (along with the SLURM output file and my job script): condalist.txt (3.7 KB), slurm-1628478.txt (125 Bytes), submissao_cluster.txt (482 Bytes)

Besides that, when I try to run psi4 on the login node of the cluster (without using sbatch), the program runs until it is terminated (we can’t run computationally expensive jobs on the login node). It appears to be some sort of failed integration with SLURM.

I’ve downloaded all three of those files, but I’m not seeing any output from Psi4. Are you sure there’s not another file you haven’t sent yet?

Psi4 produces no output. The job I submit to the cluster ends immediately with the error I cited at the beginning.

My system is:

```
2.6.32-431.23.3.el6.x86_64 #1 SMP Thu Jul 31 17:20:51 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
```

and the distribution is:

```
LSB Version:    :base-4.0-amd64:base-4.0-noarch:core-4.0-amd64:core-4.0-noarch:graphics-4.0-amd64:graphics-4.0-noarch:printing-4.0-amd64:printing-4.0-noarch
Distributor ID: CentOS
Description:    CentOS release 6.5 (Final)
Release:        6.5
Codename:       Final
```

In this case it is better to contact your cluster administrator; Psi4 does not know about SLURM (or any other queuing system).
Maybe check whether your SLURM configuration has tight memory restrictions and kills Psi4 for demanding more than its allocation.
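For reference, a rough sketch of sbatch directives that would match what your input asks for; the job name, walltime, and memory value here are placeholders you would adapt to your cluster’s limits:

```
#!/bin/bash
#SBATCH --job-name=psi4_test       # placeholder job name
#SBATCH --cpus-per-task=64         # matches set_num_threads(64) in psi4.in
#SBATCH --mem=140G                 # somewhat more than the 128000 MB Psi4 is told to use
#SBATCH --time=24:00:00            # placeholder walltime

psi4 -i psi4.in -o psi4.out &> psi4.log
```

On clusters that enforce memory limits through cgroups, a job that exceeds its `--mem` request is typically killed rather than allowed to swap.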

That scf_mem_safety_factor line is a bit suspicious; it’s messing with the memory allocation. Maybe the problem is that Psi4 ends up allocating more memory than the job was given and is killed by the batch system.
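One way to check that, assuming your site has SLURM accounting enabled, is to compare the requested and peak memory of the failed job (replace `<jobid>` with the actual job ID):

```
sacct -j <jobid> --format=JobID,State,ExitCode,ReqMem,MaxRSS
```

A MaxRSS close to ReqMem (or an out-of-memory state, on newer SLURM versions) would point at the batch system killing the process rather than a crash inside Psi4 itself.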

Thank you for the answers. Hokru, I’ve already contacted the cluster administrators; I just posted here in case someone knows something else that could help solve the problem. Susilehtola, I’ve deleted that line from the input file, but it didn’t help :frowning: