Storage

Home directories (/home)

Each user currently has 10 GB of home directory space (/home/<username>). This space is intended for configuration and login files. It is considerably slower than the other disk resources, so it should not be used for high-volume access.
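
To get a rough idea of how much of the 10 GB you are using, a standard disk-usage query is usually enough (a minimal sketch; Nova may also provide its own quota-reporting tools):

du -sh $HOME     # total size of your home directory, to compare against the 10 GB limit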


Long-term Group storage (/work)

Each group has a directory shared by all members of the group (and only that group). These directories are links under the directory /work; issue "ls /work/" to see all group directories. Unless you're in the LAS group, the "cdw" command will cd to /work/<your_group_working_directory>/<NetID>. ("cdw" will also create the directory <NetID> in /work/<your_group_working_directory> if it does not exist.)

To find your group, issue the "groups | grep nova" command. Group names normally have the form its-hpc-nova-<NetID>, where <NetID> is the NetID of the group's PI. LAS users can cd to /work/LAS/<PI_name>-lab .
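
Putting these commands together, a typical first visit to your group working directory looks like this (non-LAS example; the comments describe each step):

groups | grep nova     # shows the its-hpc-nova-<NetID> group you belong to
cdw                    # creates (if needed) and changes to /work/<your_group_working_directory>/<NetID>
pwd                    # confirm the current directory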

Group working directories are available on all nodes, and any user in the group can create files there using the group's quota. The group quota for this space is based on the group's shares and is shown at login. The usage and quota can also be found in the file /work/<your_group_working_directory>/group_storage_usage, which is updated once every hour.
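
For example, to check the current numbers between logins:

cat /work/<your_group_working_directory>/group_storage_usage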

The group working directories are backed up daily using ZFS snapshots. A snapshot contains the previous version of a file, or, in the case of a deleted file, its last version. Snapshots allow incremental backups, which are needed because each server holds several million files. They also help with filesystem consistency, since a file can be reverted to its previous version by the filesystem administrator. Snapshots are copied to backup storage and deleted from the primary storage after 5 days. Because the snapshots on the primary storage count against the group's allocation, removing files frees up space only after 5 days.

ZFS on the storage servers uses automatic compression, so users can store more data in the same amount of physical space; the compression ratio depends on the data. Compression not only saves space but also improves performance. However, any file system performs worse when it is close to full. For this reason, the group's PI receives an email when the group's data usage exceeds 70% of its quota. It is good practice to keep the file system less than 70% full.


Large short-term storage (/ptmp)

For large short-term storage, 209 TB is currently available in the BeeGFS space /ptmp . Please use this space only for large files. Using it for small files (less than 8 MB) is likely to be slower than the NFS storage, since small files gain nothing from the parallelism and incur the extra overhead of the parallel file system.
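
To see the overall capacity and current usage of /ptmp, a standard df query works (the figures reported will change over time):

df -h /ptmp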

Since this is short-term scratch storage, files in /ptmp are subject to deletion to make space for other users. We expect to delete files that are 60 days past their creation date. Note that when extracting tar files, one should use the -m flag so that the extracted files carry the date they were extracted rather than the date at which the author of the tar file created them.
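
For example, to extract an archive into /ptmp with extraction-time timestamps (the archive name and destination are placeholders):

tar -xmf <archive>.tar -C /ptmp/<destination_directory>     # -m sets each file's timestamp to the time of extraction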


Temporary local storage ($TMPDIR)

Jobs can use the local disk drive on each compute node by reading and writing to $TMPDIR (about 1.5 TB on most of the compute nodes, 11 TB on the high-memory node). This is temporary storage that can be used only during the execution of your job. Only processes executing on a node have access to its local drive. You must ensure that multiple processes executing on the same compute node don't accidentally access files in $TMPDIR meant for other processes. When running an MPI program on a single node, one way to accomplish this is to include the MPI process rank in temporary filenames, as in the example below.
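
For example, when tasks are launched with srun, Slurm sets SLURM_PROCID to each task's rank, which can be used to build per-rank filenames (the program name and its --output option here are hypothetical):

srun bash -c '<your_program> --output $TMPDIR/scratch.rank${SLURM_PROCID}'     # each task writes to a file named after its own rank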

To use this local storage, follow the workflow below. These steps may be taken interactively (when salloc'd to a compute node) or in batch mode; in batch mode, the copy commands below should be added to the job script.

1. Copy calculation input to the local drive.
e.g., cp /work/<your_group_working_directory>/<user>/<input files> $TMPDIR, where <input files> contains the folders/files to be used in your calculation (to copy a whole folder, use the "-r" option).

2. Run your code, getting input from files located in $TMPDIR and writing output to $TMPDIR.

3. Copy final results to their permanent storage location.
e.g., cp $TMPDIR/<final results> /work/<your_group_working_directory>/<user>/<final results>

Note that the directory at $TMPDIR will disappear at the conclusion of your job. Any data not copied out of $TMPDIR cannot be recovered after your job has finished.
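
Putting the steps together, a batch job script might look like the following sketch (the resource requests and program name are placeholders to adapt to your own workflow):

#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --time=02:00:00

# 1. Copy the input from the group working directory to the node-local disk
cp -r /work/<your_group_working_directory>/<user>/<input files> $TMPDIR

# 2. Run the calculation, reading from and writing to $TMPDIR
cd $TMPDIR
<your_program> <input files>

# 3. Copy the results back before the job ends and $TMPDIR disappears
cp -r $TMPDIR/<final results> /work/<your_group_working_directory>/<user>/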


LSS (/lss)

Large Scale Storage (LSS) is a research file storage service for faculty and staff. LSS is a software-defined storage solution that currently provides approximately 3 petabytes of usable storage and can continue to grow.

LSS is designed to provide an extremely low-cost, reliable solution for storing large quantities of research data. The primary audience for this system is research labs that need to store terabytes of data for long periods of time. LSS is useful for backups, archiving, and storing large files (e.g., videos, sequencing data, images).

LSS is mounted on the Nova cluster. To access your directory, ssh to novadtn (you will not be prompted for a password). Once on novadtn, issue:

kinit

and enter your ISU password when prompted. To access your group LSS directory, use /lss/research/<PI>-lab. Additional information on LSS can be found at https://researchit.las.iastate.edu/guides/lss/.
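
The full sequence, starting from a Nova login node, is:

ssh novadtn                     # hop to the data transfer node
kinit                           # obtain a Kerberos ticket; enter your ISU password when prompted
cd /lss/research/<PI>-lab       # your group's LSS directory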


File Transfers

When transferring files to or from Nova storage, please use the Data Transfer Node, novadtn.its.iastate.edu. All of the storage options are available there, and using it helps keep the head node from becoming overburdened.
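
For example, to copy a directory from your local machine into your group working directory through the data transfer node (assuming rsync is available on your machine; scp follows the same pattern):

rsync -av <local_directory>/ <NetID>@novadtn.its.iastate.edu:/work/<your_group_working_directory>/<NetID>/<local_directory>/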