HPC Quick Start

To get started with High Performance Computing, first apply for an account. Your login ID is your WSU AccessID, and your password is the same one you use for your WSU email. If you do not know your WSU AccessID or password, please follow the steps in this Knowledge Base article.

Once you have an account, you can log on to the WSU Grid using an SSH-2 client. Your SSH client must be set to use 'Keyboard Interactive' authentication and the SSH-2 protocol. Please do not share your AccessID or password with anyone. If a colleague requires an account on the Grid, please have them fill out the online application.
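
For example, with a command-line SSH client such as OpenSSH on macOS, Linux, or recent Windows, a login looks like the sketch below. The hostname is a placeholder, not the Grid's actual address; use the login address provided when your account is created.

    # Log on to the Warrior login node with your WSU AccessID.
    # The hostname below is a placeholder; substitute the real login address.
    ssh your_accessid@<warrior-login-hostname>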

Policies

Please read the following information before attempting to use the Grid.

  1. The WSU Grid uses Slurm to schedule jobs. You must run all processes on compute nodes by requesting a node through Slurm, either with an interactive job or with a job script (see the interactive and batch sketches after this list). The login node, known as Warrior, is solely for Slurm job submission, job status checks, and file transfers. Any processes found running on Warrior will be terminated immediately.
  2. All home directories are shared across the main clusters on a global filesystem that spans all Grid nodes and the master node. Your home directory is where you should place your files and work. By default, your home directory is accessible only by you. If you would like to share files with other Grid users, please contact us and we will set up a group account for you and your group. If you would like to share files with users outside of Wayne State, use Globus.
  3. Shared group directories are placed in /home/groups/. If you require a shared group directory, please let us know the name of the group to create and the AccessIDs of the users who should belong to it. If you need access to a group that already exists, have a current member of the group email us requesting that you be added. A sketch of sharing a file through a group directory appears after this list.
  4. There are two types of temporary scratch space on nodes. The first is /tmp, whose size varies by node. This directory is local to each node and is not globally accessible across nodes; use it for files that are read and written only by processes on that node. The second is /wsu/tmp, which is about 10 TB in size and is globally shared across all nodes. Both directories are cleaned out on a weekly basis, so if your applications store files there temporarily, clean them up and move anything you need to keep to your home directory (see the scratch-space sketch after this list). We do not back up /tmp or /wsu/tmp, so do not store anything there that you cannot afford to lose.
  5. SSH-2 is enabled on all nodes, and SSH keys let you move between nodes without re-authenticating each time. The first time you log in, keys are generated automatically. We suggest you let the system auto-generate your keys and use a blank passphrase for the SSH key pair; this will allow you to move around the Grid easily.
  6. You cannot SSH directly to a node on the Grid unless you already have a job running on that node through Slurm. Slurm is the job scheduler on the Grid and guarantees that everyone who uses the Grid gets fair time on the system. If you need a shell on an individual node and you do not have a job running on it, use an interactive Slurm job to open a shell (see the example after this list).
  7. To get started, visit our tutorials page at tech.wayne.edu/hpc-tutorials.
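
As a rough sketch of item 1, the two ways to reach a compute node through Slurm look like the following. The resource amounts, time limits, and file names are illustrative assumptions rather than the Grid's actual defaults or limits; see the tutorials page for site-specific values.

    # Interactive job: request resources and open a shell on a compute node.
    srun --ntasks=1 --mem=2G --time=01:00:00 --pty bash

    # Batch job: put your work in a job script (here called myjob.sh) ...
    #!/bin/bash
    #SBATCH --job-name=example
    #SBATCH --ntasks=1
    #SBATCH --mem=2G
    #SBATCH --time=01:00:00
    #SBATCH --output=example_%j.out
    ./my_program                  # replace with your actual command

    # ... then, from the Warrior login node, submit and monitor it.
    sbatch myjob.sh
    squeue -u $USER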
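
For item 3, once a shared group directory exists, making a file visible to the rest of the group is ordinary Unix group ownership and permissions. The group name and paths below are hypothetical examples.

    # Copy a file into the (hypothetical) shared group directory.
    cp results.csv /home/groups/mylab/

    # Make sure the group owns the file and can read it.
    chgrp mylab /home/groups/mylab/results.csv
    chmod g+r /home/groups/mylab/results.csv

    # List the groups you currently belong to.
    groups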
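
For item 4, a common pattern is to write intermediate files to scratch during a job and copy anything worth keeping back to your home directory before the job ends, since neither scratch area is backed up and both are cleaned weekly. This is a sketch; the program name and paths are placeholders.

    # Work in node-local scratch, keyed by your job ID to avoid collisions.
    SCRATCH=/tmp/$USER/$SLURM_JOB_ID
    mkdir -p "$SCRATCH"
    cd "$SCRATCH"

    ./my_program > output.dat     # replace with your actual command

    # Copy results back to your home directory, then clean up scratch.
    cp output.dat "$HOME/"
    rm -rf "$SCRATCH"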
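
For item 6, you can check which node a running job is using and SSH to it; if you have no job on a node, request an interactive shell through Slurm instead. The node name shown is a hypothetical example.

    # Show your running jobs and the nodes they occupy.
    squeue -u $USER

    # SSH to a node works only while one of your jobs is running there.
    ssh node123                   # hypothetical node name from squeue output

    # With no job on the node, open a shell through an interactive job instead.
    srun --ntasks=1 --time=00:30:00 --pty bash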

More information