Abaqus

This script for running Abaqus is a bit more involved than the other samples here: it generates an sbatch file and then submits that file to SLURM for you.

A copy of this script is available in the shared folder on the Condo cluster. It can be copied from that location into your work group directory with the following command:

cp /shared/hpc/sample-job-scripts/abaqus/abaqus-2017.sh /work/<your group/path>

Once this file has been copied to the same location as your input file, edit the PARAMETERS section of the script (marked by the # comments) to match your job.
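For example, a filled-in PARAMETERS section might look like the fragment below. Every value here is hypothetical; substitute your own group, project, input file, and email:

```shell
# Hypothetical example values -- replace with your own.
JOBNAME=beam-bend-test
WORKDIR=/work/my-group/beam-project
INPUTFILE=beam.inp
PARTITION=compute
NUM_NODES=1
PROCS_PER_NODE=16
MAX_TIME=1:00:00
EMAIL=my-netid@iastate.edu
MAIL_TYPES=BEGIN,FAIL,END
```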

When the changes are saved, make the file executable:

chmod +x abaqus-2017.sh

Then submit your job by running the script:

./abaqus-2017.sh

Sample script below:


#!/bin/bash

# Sample script that creates an sbatch script that submits an Abaqus 2017 job to
# SLURM.
# Instructions:
#   1.  Modify the items under PARAMETERS below. Save the file. 
#   2.  Make this file executable (e.g. chmod +x abaqus-2017.sh)
#   3.  Run this file  (e.g.   ./abaqus-2017.sh )

# PARAMETERS:  (modify as needed)
JOBNAME=abaqus-job1
WORKDIR=/work/some-group/some-project
INPUTFILE=input.inp
# the USERFILE value is optional.  It is used to provide the name of a user-supplied routine.
# If you need it, uncomment the line below.

#USERFILE=my-subroutine.for

PARTITION=compute
NUM_NODES=2
PROCS_PER_NODE=16
MAX_TIME=3:00:00
EMAIL=some-netid@iastate.edu
MAIL_TYPES=BEGIN,FAIL,END
# end of PARAMETERS   

TOTAL_PROCS=$((NUM_NODES*PROCS_PER_NODE))
INPUTFILEPATH=${INPUTFILE}
ERROR_FILE=${JOBNAME}.%j.error
OUTPUT_FILE=${JOBNAME}.%j.output

# if a USERFILE is specified, set USERSTRING and call abaqus with it.
if [[ -n "${USERFILE}" ]]; then
   USERSTRING="user=${USERFILE}"
fi

# Everything below from 'cat ..' until END_OF_SCRIPT gets passed to sbatch.  Edit carefully.
# Note that the regular shell variables (i.e.  $var,  ${var} ) are 
# filled in by bash when you run this script.
# The escaped variables (i.e.  \$var ) are filled in by SLURM at run time.
cat <<END_OF_SCRIPT > ${JOBNAME}.sbatch
#!/bin/bash
#SBATCH -J $JOBNAME
#SBATCH -D $WORKDIR
#SBATCH -N $NUM_NODES
#SBATCH -n $TOTAL_PROCS
#SBATCH --partition=$PARTITION
#SBATCH --ntasks-per-node=$PROCS_PER_NODE
# it's a good idea to tell SLURM to use a large amount of memory. 120000 = 120GB.
#SBATCH --mem=120000
#SBATCH --time=$MAX_TIME
####SBATCH -C compute
#SBATCH --error=$ERROR_FILE
#SBATCH --output=$OUTPUT_FILE
#SBATCH --mail-type=$MAIL_TYPES
#SBATCH --mail-user=$EMAIL
cd $WORKDIR

# Load the Intel compiler and Abaqus software.
module load intel/17.4
module load abaqus/2017

# Abaqus doesn't support SLURM natively.  So, the script below gets the list of
# allocated hosts from SLURM and uses it to construct the mp_host_list[] variable.  
# It copies the global custom_v6.env file from the global Abaqus "site" directory and 
# adds the mp_host_list[] line to the bottom of the abaqus_v6.env file in the current folder.

create_abaqus_mp_host_list.sh

unset SLURM_GTIDS

export I_MPI_HYDRA_BOOTSTRAP=ssh

abaqus interactive analysis job=${JOBNAME} input=${INPUTFILEPATH} cpus=${TOTAL_PROCS} mp_mode=mpi memory="80 %" ${USERSTRING} scratch=${WORKDIR}

END_OF_SCRIPT

# Now send the sbatch script created above to sbatch.
echo "running: sbatch ./${JOBNAME}.sbatch"
sbatch ./${JOBNAME}.sbatch
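The two expansion times used in the heredoc above (plain $var filled in when you run this script, escaped \$var left in the generated file for expansion at job run time) can be seen in a minimal sketch. The file and variable names below are made up for illustration:

```shell
#!/bin/bash
# Generate a tiny file the same way abaqus-2017.sh generates its sbatch script.
JOBNAME=demo-job
cat <<END_OF_SCRIPT > demo.sbatch
#!/bin/bash
echo "Filled in at generation time: $JOBNAME"
echo "Filled in at run time: \$HOSTNAME"
END_OF_SCRIPT

# demo.sbatch now contains the literal text 'demo-job' (already expanded)
# and the literal text '$HOSTNAME' (still an unexpanded variable reference).
grep 'demo-job' demo.sbatch
grep 'HOSTNAME' demo.sbatch
```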