Ansys-EDT

Ansys-EDT (Electronics Desktop) is used for electromagnetic, circuit, and system simulation. Below is a sample script that can run the program.

The file below is an sbatch script. A copy of this file is stored in /shared/hpc/sample-job-scripts/Ansys-EDT-sample.sh

Copy this script to your work directory and rename it, then edit the file to suit your needs. You can also adjust the number of processors, nodes, and the run time requested by this script.
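For example, assuming your work directory is /work/ccresearch/myUNID/ansysEM (a placeholder; substitute your own path), you would copy, rename, and submit the script like this:

cp /shared/hpc/sample-job-scripts/Ansys-EDT-sample.sh /work/ccresearch/myUNID/ansysEM/ansys-edt-job.sh
cd /work/ccresearch/myUNID/ansysEM
sbatch ansys-edt-job.sh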

 

#!/bin/bash
# Sbatch job script for running Ansys EDT (Electromagnetics) jobs.
#
# The original version of this script was found at:
#   https://www.chpc.utah.edu/documentation/software/ansys-edt.php
#
#
#SBATCH --time=4:00:00 # walltime, abbreviated by -t
# modify number of nodes to your needs
#SBATCH --nodes=2      # number of cluster nodes, abbreviated by -N
#SBATCH -o slurm-%j.output # name of the stdout file, using the job number (%j)
# modify the total number of MPI tasks to your needs (here 16 tasks across 2 nodes, i.e. 8 tasks per node)
#SBATCH --ntasks=16    # number of MPI tasks, abbreviated by -n
# additional information for allocated clusters
#>>SBATCH --account=owner-guest     # account - abbreviated by -A
#SBATCH --partition=compute  # partition, abbreviated by -p

module load ansysEM/19.5

# specify work directory and input file names
# change the path below to your own input file location
export WORKDIR=/work/ccresearch/jedicker/ansysEM
# change myInput.aedt to your input name
export INPUTNAME=myInput.aedt

# cd to the work directory
cd $WORKDIR

# How many cores to use per HFSS task:
CORES_PER_HFSS_TASK=2

# Get number of slurm tasks per node (usually this is simply the number of cores per machine):
TPN=$( echo $SLURM_TASKS_PER_NODE | cut -f 1 -d \()
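# e.g. with --nodes=2 and --ntasks=16 on identical nodes, SLURM_TASKS_PER_NODE is "8(x2)" and TPN becomes 8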

# find number of CPU cores per node
PPN=$( echo $SLURM_JOB_CPUS_PER_NODE | cut -f 1 -d \()

# define the number of HFSS threads per node which is simply the number of slurm tasks requested per node divided
# by the number of cores per HFSS task you want to use.
HFSS_THREADS_PER_NODE=$(( $TPN / $CORES_PER_HFSS_TASK ))
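# e.g. TPN=8 and CORES_PER_HFSS_TASK=2 give HFSS_THREADS_PER_NODE=4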

# figure out what nodes we run on
#srun hostname | sort -u > nodefile
scontrol show hostname $SLURM_NODELIST | sort -u > nodefile
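# scontrol expands the compressed node list (e.g. "node[01-02]") into one hostname per line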

# distributed parallel run setup.  The RSM code bits are copied to this working directory.  Unfortunately, we can't
# start the Ansoft RSM service from its default location since it can't write to the log file owned by hpcapps.  Even
# specifying the logfile option doesn't work:
#  $installdir/hfss/rsm/Linux/ansoftrsmservice start -logfile ~/ansoftrsmservice.log

# So we just copy the rsm directory here and press on.

rsync -av /shared/hpc/ansysEM/19.5/rsm .

# and loop over all nodes in the job to start the service
for anode in  $(cat nodefile); do
  # start the RSM service
  ssh $anode $WORKDIR/rsm/Linux64/ansoftrsmservice start
  # register engines with RSM (otherwise it'll complain that it can't find it)

  ssh $anode /shared/hpc/ansysEM/19.5/AnsysEM19.5/Linux64/RegisterEnginesWithRSM.pl add
done
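# note: starting the service this way assumes ssh to the nodes allocated to this job is permitted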

# create list of hosts:tasks:cores
RSMHOSTS=""
a=1
for anode in $( cat nodefile ); do
  if [[ $a -eq 1 ]]
  then
    export RSMHOSTS="${RSMHOSTS}${anode}:${CORES_PER_HFSS_TASK}:${HFSS_THREADS_PER_NODE}"
  else
    export RSMHOSTS="${RSMHOSTS},${anode}:${CORES_PER_HFSS_TASK}:${HFSS_THREADS_PER_NODE}"
  fi
  (( a = a + 1 ))
done
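# e.g. with nodes node01 and node02 this yields RSMHOSTS=node01:2:4,node02:2:4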

# create batch options file
# this is necessary for correct license type
export OptFile=batch.cfg
echo \$begin \'Config\' > ${OptFile}
echo \'HFSS/NumCoresPerDistributedTask\'=${CORES_PER_HFSS_TASK} >> ${OptFile}
echo \'HFSS/HPCLicenseType\'=\'Pool\' >> ${OptFile}
echo \'HFSS/SolveAdaptiveOnly\'=0 >> ${OptFile}
echo \'HFSS/MPIVendor\'=\'Intel\' >> ${OptFile}
echo \$end \'Config\' >> ${OptFile}
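# with the settings above, batch.cfg ends up containing:
#   $begin 'Config'
#   'HFSS/NumCoresPerDistributedTask'=2
#   'HFSS/HPCLicenseType'='Pool'
#   'HFSS/SolveAdaptiveOnly'=0
#   'HFSS/MPIVendor'='Intel'
#   $end 'Config'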

ansysedt -ng -batchsolve -distributed -machinelist list="${RSMHOSTS}" -batchoptions $OptFile $INPUTNAME
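# in the command above, -ng runs without the GUI, -batchsolve solves the project in batch mode, and -distributed
# spreads the solve across the machines in RSMHOSTS using the options in batch.cfg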

# stop the RSM service when done
for anode in $(cat nodefile); do
  ssh $anode $WORKDIR/rsm/Linux64/ansoftrsmservice stop
done
# remove directory with the RSM files
#/bin/rm -rf $WORKDIR/rsm
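Once the job is submitted, squeue -u $USER shows its status, and the job's standard output (including solver messages) is written to slurm-<jobid>.output in the directory you submitted from.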