We will run this example twice - first in demo mode, without invoking VASP, and later with VASP. Here in demo mode we will take two shortcuts:
- Instead of using the HPC queueing system with qsub, we will execute scripts on the local host with bash.
- Instead of calling VASP we will simply copy in the results of a previous VASP run. The results aren't meaningful, but this lets us quickly demonstrate the overall schedMain framework.
To run this example, pick a directory name for testing, say testb. Then:
cp -r schedMain/example.vasp testb
cd testb
# Set up dummy files to be used instead of calling VASP
cp -r demoFiles global/vaspDemo
Make sure your PYTHONPATH includes the directory containing nrelmat/readVasp.py. For example:
export PYTHONPATH=$PYTHONPATH:.../nrelmat
Finally, run the scheduler:
.../schedMain.py -globalDir global -ancDir . -initWork initWork -delaySec 1 -redoAll n
The output format and task status values are documented in schedMain Example A: static files.
In real mode we will use the HPC queueing system and execute VASP. This will run much more slowly than our quick demo. First we need to tell the scheduler about the HPC queueing system and about our VASP installation.
For example, assume your system is named “myHpc”. In the file schedMisc.py you will find a section like:
if hostType == 'peregrine': cmdLine = 'showq'
You need to add a similar section specifying the command used on myHpc to show the queue; something like:
elif hostType == 'myHpc': cmdLine = 'showq'
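If myHpc runs SLURM rather than Moab/Torque, the listing command would be squeue instead of showq; the format string below is only an illustration, so adjust it to whatever columns your parsing code expects:

elif hostType == 'myHpc': cmdLine = 'squeue -h -o "%i %u %t"'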
In schedMisc.py you also will find a section like:
if hostType == 'peregrine':
  if len(qtoks) == 9 and re.match('^\d+$', qtoks[0]):   # ignore headings
    (qjobId, quserId, qstate) = qtoks[0:3]
    if qstate in ['BatchHold', 'Hold', 'Idle', 'SystemHold', 'UserHold']:
      status = ST_WAIT
    elif qstate in ['Running']: status = ST_RUN
    else:
      print 'getPbsMap: unknown status: %s' % (qline,)
      status = ST_WAIT        # assume it's some variant of wait
    pbsMap[qjobId] = status
You need to add a similar section for myHpc. The new section must extract the job ID and job status from the queue listing output, and translate the status to one of the ST_* constants defined at the top of schedMisc.py.
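As a sketch only, assuming the queue listing on myHpc also puts the job ID, user ID, and state in the first three columns, the new section could look like the following; adjust the column test and the state names to whatever your queue listing actually prints:

elif hostType == 'myHpc':
  # Assumption: job ID, user ID, and state are the first three columns.
  if len(qtoks) >= 3 and re.match('^\d+$', qtoks[0]):   # ignore headings
    (qjobId, quserId, qstate) = qtoks[0:3]
    if qstate in ['Idle', 'Hold']: status = ST_WAIT
    elif qstate in ['Running']: status = ST_RUN
    else:
      print 'getPbsMap: unknown status: %s' % (qline,)
      status = ST_WAIT        # assume it's some variant of wait
    pbsMap[qjobId] = status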
Similarly, search for hostType and add sections appropriate for myHpc in the following files (a generic sketch follows the list):
* taskClass.py
* example.vasp/global/cmd/magSetup.py
* example.vasp/global/cmd/nonmagSetup.py
* example.vasp/global/cmd/runVaspChain.py
* example.vasp/global/cmd/rvmisc.py
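In each file the pattern is the same: find the existing peregrine branch and add a parallel branch for myHpc. The sketch below is generic; the variable name and values are placeholders only, so copy the real settings from the peregrine branch and adjust them for your system:

if hostType == 'peregrine':
  numCoresPerNode = 24          # existing branch (name and value illustrative)
elif hostType == 'myHpc':
  numCoresPerNode = 16          # new branch: settings for your system
else:
  raise Exception('unknown hostType: %s' % (hostType,))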
In your home directory create a file named pyladaExec.sh, and use an ASCII text editor (vim, emacs, etc.) to add content like the following:
#!/bin/bash
# Add any setup you like, such as export PATH,
# export LD_LIBRARY_PATH, etc.
# Start VASP
.../my/path/to/VASP
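For instance, on a PBS-style cluster the finished file might resemble the sketch below; the module name, MPI launcher, and core count are assumptions that will differ on your system:

#!/bin/bash
# Illustrative site setup only; replace with whatever your system requires.
module load vasp
export OMP_NUM_THREADS=1
# Launch VASP under MPI; the launcher and core count are assumptions.
mpirun -np 16 .../my/path/to/VASP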
Make sure the file is executable:
chmod u+x pyladaExec.sh
Delete the dummy VASP files we used in the demo above:
rm -r global/vaspDemo
Finally, run the scheduler:
.../schedMain.py -globalDir global -ancDir . -initWork initWork -delaySec 5 -redoAll n -hostType myHpc
This could run for a few minutes to a few days, depending on your HPC queues. Using -delaySec 5 instead of 1 makes the display less busy, but the run completes essentially as fast.