PMP Molscat

PMP Molscat is a parallel version of Sheldon Green and Jeremy Hutson's Molscat quantum inelastic scattering program, version 14. It is useful for Molscat calculations that involve many propagations at different values of JTOT and M. It uses MPI message passing, which is available on most modern clusters. It also provides a utility (SMERGE) that can be used to split a large Molscat calculation over several machines without any message passing harness at all. The modifications to Molscat were made by George C. McBane of Grand Valley State University.

If you are unfamiliar with ordinary Molscat, you need to learn how to use it first; see https://www.giss.nasa.gov/tools/molscat/ .

If all the following conditions are true:

  • You know how to use the regular serial Molscat,
  • you have a problem big enough to take a long time to run, and
  • that problem involves many different JTOT/M combinations,

then you may find PMP Molscat useful. You might want to browse the manual, or scan through a poster I presented at the 2007 Dynamics of Molecular Collisions meeting, to help you decide whether to try the program.

Recent improvements by P. Valiron

Pierre Valiron, of the Laboratoire d'Astrophysique de Grenoble, has recently made useful improvements to PMP Molscat. For most users, the most important ones are

  • He has allowed the "dispatcher" to switch back and forth between dispatching and real computation, so there no longer needs to be an MPI task dedicated to the dispatching job. This improves the parallelization efficiency for dynamic dispatching to the point that the static version of PMP is no longer necessary. It is especially helpful for runs that incorporate only a few MPI tasks, or that use multiple threads to assign several CPUs to a single MPI task.
  • He has incorporated the newer Lapack diagonalizer routine DSYEVR for use in the Airy propagator. Several PMP users had made this modification already, but it is now the default in his distribution.
  • He has added the dummy scheduling scheme "pmpnone", which restores the behavior of serial Molscat, including resonance searching and automatic JTOT convergence checking. With this change a single code base can be used easily for both serial and parallel computations.
  • He added explicit OpenMP parallelization to propagators 6 and 8 and to the COUPLE and WAVVEC routines for ITYPE 3 and 4; I added similar support for ITYPE 1, 2, and 7. With these changes, using shared-memory, multi-core nodes to carry out individual propagations is substantially more efficient than before; in particular, the work required in COUPLE to set up the angular momentum coupling coefficients parallelizes very efficiently.
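In PMP itself the dispatching is done with MPI messages between tasks; the scheduling idea behind Valiron's change, in which every process (including the one holding the task list) pulls the next piece of work as soon as it is free, can be sketched in Python with threads standing in for MPI tasks. The names and task shapes below are illustrative only, not PMP routines:

```python
import queue
import threading

def dynamic_dispatch(tasks, nworkers, work):
    # Fill a shared queue with one entry per (JTOT, M) combination.
    q = queue.Queue()
    for t in tasks:
        q.put(t)

    results = []
    lock = threading.Lock()

    def worker():
        # Each worker pulls the next task the moment it becomes free,
        # so nothing is pre-assigned and the load balances itself; no
        # worker sits idle while tasks remain.
        while True:
            try:
                t = q.get_nowait()
            except queue.Empty:
                return
            r = work(t)
            with lock:
                results.append((t, r))

    threads = [threading.Thread(target=worker) for _ in range(nworkers)]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    return results

# Toy stand-in for one propagation per (JTOT, M) point.
tasks = [(jtot, m) for jtot in range(4) for m in range(2)]
done = dynamic_dispatch(tasks, nworkers=3, work=lambda t: sum(t))
```

Because every thread here both "dispatches" to itself and computes, no thread is dedicated purely to bookkeeping; that is the point of Valiron's modification, which matters most when only a few MPI tasks are available.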

Sadly, Valiron died in 2008. Eventually his PMP and mine will be merged, and these improvements will be part of the standard distribution. The web page that hosted his version is no longer available; should you want a copy, please contact G. McBane directly. The only disadvantage of Valiron's version is a somewhat more complicated setup; it is not really practical to manage compilation and linking "by hand". He provides a Makefile that the user must customize to suit the local environment, with several examples; it works well in Unix (and probably Mac) environments but will require substantial modifications for people operating under Windows.

Getting started

To run PMP Molscat you need one of the following files:

Download whichever package you can unpack more easily; the only difference between them is the end-of-line conventions used in the Fortran files.

Release notes

April 25, 2008. (1) Bug in SMERGE that caused trouble when more than 99 temporary files were required has been fixed; thanks to Brian Stewart for the bug report. (2) SMERGE now gives a useful error message when the array size for storage of integer S-matrix data is inadequate, and comments in sdata.fi have been improved to make the storage requirements clearer. (3) Namelist variables jtotmin and jtotmax have been added to permit exclusion of some JTOT values during assembly of the merged S-matrix file.

April 20, 2005. Modified to give useful error message if the maximum number of temporary files was exceeded. Increased default amount of S-matrix storage space available in distribution version. Thanks to T.-G. Lee of Oak Ridge National Lab for the bug report.

April 2, 2005. Modified SMERGE so it can combine S-matrix files with different ENERGY arrays. The new capabilities are equivalent to IRSTRT=3 of the serial program, and allow easier "filling in" of S-matrices from abnormally terminated runs.

March 18-19, 2005. Fixed bug introduced in SMERGE version of 11 March, and also fixed old bug that gave corrupted output files if input data files had more than 800 channels.

March 11, 2005. (1) The main calculation now sorts the task list by the number of channels, improving the load balancing somewhat. (2) The SMERGE program now handles empty ISAVEU files (like the one produced by pmpdyn.f!) smoothly. (3) If more than one S-matrix for a single JTOT/M/INRG combination appears in the input files, SMERGE now discards all but the first, so that "overlapping" runs that duplicate a few JTOT/M/INRG combinations can be merged without causing problems in the postprocessor programs.
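The benefit of sorting the task list by channel count can be illustrated with a toy greedy-scheduling model (this is not PMP code; the costs are hypothetical stand-ins for per-(JTOT, M) propagation work, for which the channel count is a reasonable proxy). Handing out the largest tasks first avoids leaving one long propagation for last, when the other workers have nothing left to do:

```python
import heapq

def makespan(costs, nworkers):
    # Greedy dynamic assignment: each task goes to whichever worker
    # becomes free first; returns the finishing time of the last worker.
    loads = [0.0] * nworkers
    heapq.heapify(loads)
    for c in costs:
        heapq.heappush(loads, heapq.heappop(loads) + c)
    return max(loads)

# Hypothetical per-task costs (arbitrary units).
costs = [1, 9, 2, 8, 3, 7, 4]
unsorted_time = makespan(costs, nworkers=2)
sorted_time = makespan(sorted(costs, reverse=True), nworkers=2)
```

With two workers the unsorted order finishes at time 18 while the largest-first order finishes at time 17, closer to the ideal of 17 (= 34/2); the improvement is modest, which matches the "somewhat" in the release note above.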

Feb. 20, 2005. Dynamic dispatch version (pmpdyn.f) available. 'autoname' variable added to SMERGE to automatically generate input filenames ISAVEU.0000, ISAVEU.0001, etc.

Please send bug reports and questions to George McBane at [email protected].

[GVSU is committed to making all resources accessible to those with disabilities. If you have a disability that prevents you from using these software packages, please contact me directly ([email protected]) so that I can help you obtain what you need.]



Page last modified December 3, 2018