Locations & Resources

Login Node

The login node (hopper.auburn.edu) is your sole interface to the Hopper Cluster and is accessed remotely over the network using ssh. It is where you issue commands, submit jobs, view results, and manage files.
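For example, to open a session from your local machine (replace `<userid>` with your Auburn user ID; you will be prompted for your password):

```shell
ssh <userid>@hopper.auburn.edu
```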

Although it is acceptable to run short, non-intensive jobs on the login node during testing, the login node is not intended to be used for computationally intensive work.

Running intense computations on the login node affects the performance and availability of the cluster for all other users, and is therefore not allowed.

Any processes that violate this policy will be killed automatically. You can check the impact of your running processes, or those of other users, with:

$ top

Such processes are terminated automatically; however, please notify hpcadmin@auburn.edu if you see any activity that affects your ability to access or work effectively on the cluster.

Compute Nodes

Your jobs run on the cluster's compute nodes, which you use by submitting work to the Workload Manager, traditionally known as a queue system. The Workload Manager assigns your job to compute nodes based on the attributes you specify in your job submission and the resources available to you.

Hopper has the following compute nodes:

| Category    | Count | Processor | Speed    | Cores | Memory | Queue   | Restricted | qsub Example                           |
|-------------|-------|-----------|----------|-------|--------|---------|------------|----------------------------------------|
| Standard    | 190   | E5-2660   | 2.60 GHz | 20    | 128 GB | general | N          | qsub -l nodes=3:ppn=20 job.sh          |
| Standard 28 | 55    | E5-2680   | 2.40 GHz | 28    | 128 GB | gen28   | N          | qsub -q gen28 -l nodes=3:ppn=28 job.sh |
| Fast Fat    | 13    | E5-2667   | 3.20 GHz | 16    | 256 GB | fastfat | Y          | qsub -q fastfat -l nodes=2:ppn=16 job.sh |
| GPU K80     | 2     | E5-2660   | 2.60 GHz | 20    | 128 GB | gpu     | N          | qsub -q gpu -l nodes=1:ppn=20 job.sh   |
| Phi 7120P   | 2     | E5-2660   | 2.60 GHz | 20    | 128 GB | phi     | N          | qsub -q phi -l nodes=1:ppn=20 job.sh   |
| Super       | 1     | E7-4809   | 2.00 GHz | 64    | 1 TB   | super   | Y          | qsub -q super -l nodes=1:ppn=64 job.sh |
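The resource requests in the qsub one-liners above can also be written as directives inside the job script itself. Below is a minimal sketch of such a script (job.sh) for the standard nodes; the job name, walltime, and program invocation are placeholders, not site requirements:

```shell
#!/bin/bash
#PBS -N example_job            # job name (placeholder)
#PBS -l nodes=3:ppn=20         # 3 standard nodes, 20 cores each (see table above)
#PBS -l walltime=01:00:00      # assumed time limit; adjust for your workload

# Torque/PBS starts jobs in your home directory; move to the directory
# the job was submitted from.
cd "${PBS_O_WORKDIR:-.}"

# Replace with your actual program invocation, e.g.:
# ./my_program input.dat
echo "job ran on $(hostname)"
```

With the directives embedded this way, the script can be submitted with a plain `qsub job.sh`.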

File Storage

Users are provided a high-performance GPFS file system that hosts users' home directories, the scratch directory, and the tools directory.

Home Directory

Each user has their own directory, called a home directory, at /home/<userid>. Your home directory is the primary location for your datasets, output, and custom software, and is limited to 2 TB.

Home directories are backed up (snapshotted) daily, and snapshots are kept for 90 days. However, it is the user's responsibility to transfer their data from their home directory back to their own computer for permanent storage.

Scratch Directory

All users have access to a large, temporary, work-in-progress directory for storing data, called the scratch directory, located in /scratch.

Use this directory to store very large datasets for short periods of time and to run your jobs. Although you can submit jobs to run from your home directory, scratch is a better option, as it is much larger at 1.4 PB. Files in this location are purged after 30 days of inactivity (based on access time) and are not backed up.

How to Use Scratch for your Data

1. Create a directory for your job in scratch.
2. Copy your input data to this directory in scratch.
3. Run your job using the files in that directory.
4. Within a week, copy any needed results back to your home directory.
5. Delete your directory in scratch.

Create a new directory in scratch for every job you run.
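The steps above can be sketched as a shell session. The job and file names here are stand-ins, and a SCRATCH_BASE variable is used so the sketch also runs outside the cluster; on Hopper the base would simply be /scratch:

```shell
# Per-job scratch workflow; all names are illustrative.
SCRATCH_BASE="${SCRATCH_BASE:-/tmp}"        # on Hopper this would be /scratch
JOB_DIR="$SCRATCH_BASE/$USER/demo_job"

mkdir -p "$JOB_DIR"                         # 1. create a directory for the job
echo "demo input" > "$JOB_DIR/input.dat"    # 2. copy your input data there
cd "$JOB_DIR"
# qsub job.sh                               # 3. run the job from this directory
echo "demo results" > results.out           #    (stand-in for real job output)
cp results.out "$HOME/"                     # 4. copy needed results back home
cd "$HOME" && rm -rf "$JOB_DIR"             # 5. delete the scratch directory
```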

Warning: Scratch Directory

Warning: Do not use scratch as long-term storage.

Any data left on scratch is automatically erased after 30 days of inactivity, based on the last access time of the file(s). Each time the data is accessed, the time window renews, so as long as you are using your data it will remain available. There is no backup of scratch, so any data files left on /scratch must be transferred elsewhere promptly if they need to be kept.

Tools Directory

Each user has access to a directory for installed software called the tools directory located in /tools. Many of the most popular software packages, compilers, and libraries are installed here.

File Storage Summary

| Name    | Directory      | Purpose                                     | Quota  | Retention | Backup |
|---------|----------------|---------------------------------------------|--------|-----------|--------|
| Home    | /home/<userid> | Small datasets, output, and custom software | 2 TB   | Long      | Y      |
| Scratch | /scratch       | Large datasets and output                   | 1.4 PB | Short     | N      |
| Tools   | /tools         | Software packages, compilers, and libraries | N/A    | Long      | Y      |

Data Transfer

To transfer files to your home directory from your local machine:

$ scp -r <source_filename> <userid>@hopper.auburn.edu:~/<target_filename>

To transfer files to your local machine from your home directory:

$ scp -r <userid>@hopper.auburn.edu:~/<source_filename> <target_filename>