Can GDMA code be run in parallel?

I am trying to run the psi4.gdma module that is available through Psi4's Python API. It requires an fchk file, which I create with psi4.fchk from the wavefunction returned by psi4.gradient. psi4.gradient uses the specified number of processors, but the gdma code runs on a single processor. If I have more than one fchk file to pass to the gdma code, is it possible to run the gdma module in parallel with Python's multiprocessing and collect the results in a variable?

    from multiprocessing import Process

    for i in range(6):
        # Finite-field setup: perturb the Hamiltonian with the i-th dipole field
        options = {'scf_type': 'df', 'g_convergence': 'gau', 'freeze_core': 'true',
                   'mp2_type': 'df', 'df_basis_scf': 'def2-tzvpp-jkfit',
                   'df_basis_mp2': 'def2-tzvppd-ri', 'perturb_h': 'true',
                   'perturb_with': 'dipole', 'perturb_dipole': flag[i][1]}
        psi4.set_options(options)
        # Keep the wavefunction so it can be passed to psi4.gdma later
        grad, wfn[i] = psi4.gradient(theory + "/" + basis, return_wfn=True)
        psi4.fchk(wfn[i], outdir + "/" + str(flag[i][0]) + '.fchk')
        # Write the GDMA control file pointing at the fchk file just produced
        with open(outdir + "/" + str(flag[i][0]) + ".dma", "w") as fdma:
            fdma.write(self.createdma(outdir + "/" + str(flag[i][0]) + '.fchk', dentype, 1))

    if parallel:
        processes = []
        for i in range(6):
            process = Process(target=psi4.gdma,
                              args=(wfn[i], outdir + "/" + str(flag[i][0]) + ".dma"))
            processes.append(process)
            process.start()
        for process in processes:
            process.join()
            print(dir(process))
            # DOES NOT WORK
            dma_results = psi4.variable('DMA DISTRIBUTED MULTIPOLES')
    else:
        # WORKS
        for i in range(6):
            psi4.gdma(wfn[i], datafile=outdir + "/" + str(flag[i][0]) + ".dma")
            dma_results = psi4.variable('DMA DISTRIBUTED MULTIPOLES')
            dma_results = list(map(lambda x: x[0:4], dma_results.np))

I solved my problem by changing the loop structure. Once the fchk files are created, I run the following loop.

    from multiprocessing import Process, Manager
    import numpy as np

    # Managed dict so each worker can hand its multipoles back to the parent
    manager = Manager()
    dmadict = manager.dict()

    def call_proc(i):
        # Runs in a child process: gdma sets the Psi4 variable in that
        # process's own copy of the global state, so read it out here
        psi4.gdma(wfn[i], datafile=outdir + "/" + str(flag[i][0]) + ".dma")
        dma_results = psi4.variable('DMA DISTRIBUTED MULTIPOLES')
        dma_results = list(map(lambda x: x[0:4], dma_results.np))
        dmadict[i] = np.array(dma_results)

    processes = []
    for i in range(6):
        p = Process(target=call_proc, args=(i,))
        processes.append(p)
        p.start()
    for p in processes:
        p.join()
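
After the joins, the multipoles for each perturbation sit in dmadict, keyed by the loop index. A minimal sketch of how they could be gathered back into an ordered list (the `results` name is only illustrative):

    # Reassemble the per-perturbation multipole arrays in loop order
    results = [dmadict[i] for i in range(6)]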

Right, as a note here: processes are a bit tricky with global state like Psi4's variables. In the first example you fork this global state into the separate processes, and each child updates only its own copy; that never updates the primary process's globals. In the second example you read the variable inside the child process itself and pass the result back, so it works as expected.
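
To illustrate that point without Psi4, here is a minimal sketch in plain Python: a module-level dict stands in for Psi4's internal variable store, and only the Manager dict makes it back to the parent process (the names GLOBAL_STORE and worker are just for this example):

    from multiprocessing import Process, Manager

    GLOBAL_STORE = {}  # stands in for psi4's global variable map

    def worker(i, shared):
        GLOBAL_STORE[i] = i * i  # updates only this child's copy of the global
        shared[i] = i * i        # written to the managed dict, visible to the parent

    if __name__ == '__main__':
        manager = Manager()
        shared = manager.dict()
        procs = [Process(target=worker, args=(i, shared)) for i in range(3)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()
        print(GLOBAL_STORE)  # {} -- the children's updates never propagate back
        print(dict(shared))  # e.g. {0: 0, 1: 1, 2: 4}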
