Storage

 

Home directories

Currently each user has 5 GB of home directory space (/home/<username>). This space is intended for configuration and login files. It is quite a bit slower than the other disk resources, so it should not be used for high-volume access.

 

Long-term group storage

Each group has a directory shared by all members of the group (and only that group). These directories are links under the directory /work. Issue "ls /work/" to see all group directories. Unless you're in the LAS group, the "cdw" command will cd to /work/<your_group_working_directory>/<NetID>. To find your group, issue the "groups | grep condo" command. Normally, groups in the Free Tier have names of the form <NetID>-free, where <NetID> is the NetID of the PI for that group.

Group working directories are available on all nodes, and any user in the group can create files there against the group's quota. The quota for this space is based on the group's shares and is shown at login. The usage and quota can also be found in the file /work/<your_group_working_directory>/group_storage_usage, which is updated once every hour.
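
A minimal sketch of these commands, assuming the usage file is plain text readable with cat (replace <your_group_working_directory> and <NetID> with your own values):

    groups | grep condo        # list the condo group(s) you belong to
    ls /work/                  # list all group directories
    cdw                        # cd to /work/<your_group_working_directory>/<NetID>
    cat /work/<your_group_working_directory>/group_storage_usage    # current usage and quota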

 

Large short-term storage

For large short-term storage, 220 TB is currently available in the Gluster space /ptmp. Please use this space only for large files. Using it for small files (less than 8 MB) is likely to be slower than the NFS storage due to the lack of parallelism and increased complexity.

Unless you're in the LAS group, the "cds" command will cd to /ptmp/<your_group_working_directory>/<NetID>.

Since this is short-term scratch storage, files in /ptmp are subject to deletion to make space for other users. We expect to delete files that are 90 days past their creation date. Note that when extracting tar files, one should use the -m flag so that the extracted files carry the date they were untarred rather than the date at which the author of the tar file created them.
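
For example, a minimal sketch of extracting a gzipped archive into your /ptmp directory (archive.tar.gz is a hypothetical file name):

    cd /ptmp/<your_group_working_directory>/<NetID>
    tar -xzmf archive.tar.gz    # -m sets file times to the extraction time, not the archive's original dates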

 

Large short-term storage for Free Tier users

For Free Tier users, 41 TB of large short-term storage is currently available in the Gluster space /freetmp. Please use this space only for large files. Using it for small files (less than 8 MB) is likely to be slower than the NFS storage due to the lack of parallelism and increased complexity.

Since this is short-term scratch storage, files in /freetmp are subject to deletion to make space for other users. We expect to delete files that are 6 weeks past their creation date or 3 weeks past their modification date. As with /ptmp, use the -m flag when extracting tar files so that the extracted files carry the date they were untarred rather than the date at which the author of the tar file created them.

 

Temporary local storage

One can use the storage on the local disk drive of each compute node by reading and writing to $TMPDIR (about 2.5 TB on regular compute nodes). This is temporary storage that can be used only during the execution of your program, and only processes running on a node have access to that node's drive. You must ensure that multiple processes executing on the same compute node don't accidentally access files in $TMPDIR meant for other processes; one way to accomplish this is to include MPI process ranks in temporary filenames, as sketched below.
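
A minimal sketch of this approach under Slurm, assuming $SLURM_PROCID matches the MPI rank of each task started by srun, and where my_program and its --scratch option are hypothetical:

    # each MPI task writes to a rank-specific file under the node-local $TMPDIR
    srun bash -c 'my_program --scratch "$TMPDIR/scratch.rank$SLURM_PROCID"'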

 

BETA: Temporary parallel storage

Multi-node jobs will build a parallel file system on demand that utilizes the local storage on each compute node assigned to the job. This provides a large, fast-access scratch space for temporary files that exists alongside the local storage available in $TMPDIR. The filesystem is mounted at /mnt/scratch for the duration of the multi-node job.

To use this parallel storage, follow the workflow below. These steps may be taken interactively (when salloc'd to a compute node) or in batch mode; in batch mode, add the copy commands to the job script, as in the sketch below.

  1. Copy the calculation input to the parallel filesystem.
    e.g., cp /ptmp/<user>/<input files> /mnt/scratch, where <user> is your user name and <input files> contains the folders/files to be used in your calculation (to copy a whole folder, use the "-r" option).
  2. Run your code, reading input from files located in /mnt/scratch and writing output to /mnt/scratch.
  3. Copy the final results to a permanent storage location.
    e.g., cp /mnt/scratch/<final results> /ptmp/<user>/<final results>

Note that this parallel filesystem will disappear at the conclusion of your job. Any data that is not copied out of /mnt/scratch cannot be recovered after the job has finished.
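
A minimal sketch of a batch script using this workflow; the resource requests are examples, my_parallel_prog is a hypothetical executable assumed to reside in your group's /work directory, and <user>, <group>, <input files>, and <final results> are placeholders as above:

    #!/bin/bash
    #SBATCH --nodes=2
    #SBATCH --ntasks-per-node=16
    #SBATCH --time=02:00:00

    # 1. stage input onto the job's parallel filesystem
    cp -r /ptmp/<user>/<input files> /mnt/scratch

    # 2. run the calculation with input and output in /mnt/scratch
    cd /mnt/scratch
    srun /work/<group>/my_parallel_prog

    # 3. copy final results back before the job (and /mnt/scratch) disappears
    cp -r /mnt/scratch/<final results> /ptmp/<user>/<final results>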

 

MyFiles

MyFiles is mounted on the Condo cluster. To access your directory, ssh to condodtn (you won't be prompted for a password). Once on condodtn, issue:

kinit

and enter your ISU password when prompted. To access your MyFiles directory, use /myfiles/Users/<your_user_name> or /myfiles/<your_dept>/users/<your_user_name> (depending on the department, the path may differ from the ones above). Additional information on MyFiles can be found on the Information Technology website.
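
A minimal sketch of the full sequence (the path shown is one of the examples above and may differ for your department):

    ssh condodtn                            # from a Condo login node; no password prompt
    kinit                                   # enter your ISU password when prompted
    ls /myfiles/Users/<your_user_name>      # or /myfiles/<your_dept>/users/<your_user_name>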

 

Submitting an MPI job

When submitting an MPI job, make sure that the executable resides in one of the shared locations:
       /home/<user>     (where <user> is your user name)
       /work/<group>    (where <group> is your group name)
       /ptmp/<group>    (where <group> is your group name)
All these locations are mounted on each of the compute nodes; see the sketch below.
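
A minimal launch sketch, assuming Slurm's srun starts the MPI tasks (mpirun may also be used, depending on the loaded MPI module) and my_mpi_prog is a hypothetical executable kept in your group's /work directory:

    cd /work/<group>         # shared location visible on all compute nodes
    srun ./my_mpi_prog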