Locations & Resources

Login Node

The login node (easley.auburn.edu) is your sole interface to the Easley Cluster and is accessed remotely over the network using ssh. It is where you issue commands, submit jobs, view results, and manage files.

Although it is acceptable to run short, non-intensive programs on the login node during testing, the login node is not intended for computationally intensive work. Running intense computations on the login node degrades the performance and availability of the cluster for all users and is therefore not allowed. Any processes that violate this policy will be killed automatically. You can check the impact of your running processes, or those of other users, with:

top
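For a non-interactive snapshot of just your own processes, the standard procps `ps` command can stand in for `top` (a sketch; `$USER` is your login id):

```shell
# List this user's processes with CPU and memory usage, highest CPU
# first (falls back to root if $USER happens to be unset).
ps -u "${USER:-root}" -o pid,pcpu,pmem,comm --sort=-pcpu
```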

Please notify hpcadmin@auburn.edu if you see any activity that affects your ability to access or effectively work on the cluster.

Compute Nodes

Your research software runs on the compute nodes, and there are a number of different types on the Easley cluster. Some are special purpose, but most are general purpose, suited to a variety of workloads.

Types of compute nodes:

| Category | Count | Processor | Speed | Cores | Memory | Partition |
|---|---|---|---|---|---|---|
| Standard | 126 | Intel Xeon Gold 6248R | 3.00 GHz | 48 | 192 GB | general |
| Bigmem2 | 21 | Intel Xeon Gold 6248R | 3.00 GHz | 48 | 384 GB | bigmem2 |
| Bigmem4 | 9 | Intel Xeon Gold 6248R | 3.00 GHz | 48 | 768 GB | bigmem4 |
| AMD | 11 | AMD EPYC 7662 | 2.00 GHz | 128 | 256 GB | amd |
| 2xGPU | 9 | Intel Xeon Gold 6248R/Tesla T4 | 3.00 GHz | 48 | 384 GB | gpu2 |
| 4xGPU+ | 2 | Intel Xeon Gold 6248R/Tesla T4 | 3.00 GHz | 48 | 768 GB | gpu4 |
| Nova 20 | 61 | Intel Xeon E5-2660 v3 | 2.60 GHz | 20 | 128 GB | nova |
| Nova 28 | 47 | Intel Xeon E5-2680 v4 | 2.60 GHz | 28 | 128 GB | nova |
| Nova fastfat | 12 | Intel Xeon E5-2667 v3 | 3.20 GHz | 16 | 256 GB | nova_ff |
| Nova GPU | 2 | Intel Xeon E5-2660 v3/Tesla K80 | 2.60 GHz | 28 | 128 GB | nova_gpu |
| Nova Super | 1 | Intel Xeon E7-4809 v3 | 2.00 GHz | 64 | 1 TB | nova_super |

Recommendation

If you don’t know which partition to use, choose general or nova; they are best suited to general-purpose workloads.

Note

Although your jobs run on compute nodes in the cluster, your primary interface with them is through the scheduler. The scheduler assigns your job to the appropriate compute nodes based on what you indicate in your job submission and the resources which are available to you.
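As a concrete sketch, assuming the scheduler is Slurm (the partition names in the table above follow Slurm conventions), a minimal job script might look like the following; the script name, program name, and resource numbers are hypothetical examples:

```shell
# Write a minimal Slurm batch script (myjob.sh is a placeholder name).
cat > myjob.sh <<'EOF'
#!/bin/bash
#SBATCH --job-name=myjob        # name shown in the queue
#SBATCH --partition=general     # partition from the table above
#SBATCH --nodes=1               # number of compute nodes
#SBATCH --ntasks=48             # cores to use (48 per Standard node)
#SBATCH --time=01:00:00         # wall-clock limit (1 hour)

srun ./my_program               # my_program stands in for your software
EOF
```

On a Slurm cluster you would then submit it with `sbatch myjob.sh` and check its status with `squeue`.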

File Storage

Users are provided a high-performance GPFS file system, which hosts the home directories, the scratch directory, and the tools directory.

Home Directory

Each user has their own directory, called a home directory, at /home/<userid>. Your home directory is the primary location for your datasets, output, and custom software, and is limited to 2 TB.
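To see how much of the 2 TB limit you are currently using, a generic check (this works on any Linux system, though it can be slow on large directories) is:

```shell
# Report the total disk usage of your home directory,
# in human-readable units (-s: summary, -h: human-readable).
du -sh "$HOME"
```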

Home directories are backed up daily via snapshots, which are kept for 90 days. However, it is the user’s responsibility to transfer data from their home directory back to their own computer for permanent storage.

Scratch Directory

All users have access to a large, temporary, work-in-progress directory called the scratch directory in /scratch.

Use this directory to store very large datasets for a short period of time and to run your jobs. Although you can submit jobs from your home directory, scratch is a better option because it is much larger (2 PB). Files in this location are purged after 30 days of inactivity (based on access time) and are not backed up.

How to Use Scratch for your Data

1. Create a directory for your job in scratch.
2. Copy your input data to that directory.
3. Run your job using the files in that directory.
4. Within a week, copy any needed results back to your home directory.
5. Delete your directory in scratch.

Create a new directory in scratch for every job you run.
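The steps above can be sketched as shell commands. Here SCRATCH defaults to a temporary directory so the sketch is self-contained; on Easley it would be /scratch, and the job, file, and directory names are placeholders:

```shell
# On Easley, set SCRATCH=/scratch; the default here is a stand-in.
SCRATCH="${SCRATCH:-$(mktemp -d)}"
JOB_DIR="$SCRATCH/${USER:-user}/myjob_001"

# 1. Create a per-job directory in scratch.
mkdir -p "$JOB_DIR"

# 2. Copy your input data into it (input.dat is a placeholder file).
echo "example input" > input.dat
cp input.dat "$JOB_DIR/"

# 3. Run the job from that directory (a real job would go through the
#    scheduler; 'wc' stands in for the actual computation).
( cd "$JOB_DIR" && wc -w input.dat > results.txt )

# 4. Copy any needed results back to your home directory
#    (the current directory stands in for it here).
cp "$JOB_DIR/results.txt" .

# 5. Clean up the scratch directory.
rm -rf "$JOB_DIR"
```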

Warning! Scratch Directory

Do not use scratch as long term storage.

Any data left on scratch is automatically purged after 30 days of inactivity, based on the last access time of the file. Each time the data is accessed, the window is renewed, so data you are actively using remains available. There is no backup of scratch: once a file is deleted or purged, it cannot be recovered. Any data on /scratch that needs to be kept must therefore be transferred elsewhere within a few days.
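Since the purge is driven by access time, you can inspect a file's last-access timestamp directly (GNU coreutils `stat`; the file name here is a placeholder):

```shell
# Create an example file, then print its last-access time.
# '%x' is GNU stat's human-readable access-time format.
echo data > example.txt
stat -c 'accessed: %x' example.txt
```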

Tools Directory

Each user has access to a directory for installed software called the tools directory located in /tools. Many popular software packages, compilers, and libraries are installed here.

File Storage Summary

| Name | Directory | Purpose | Quota | Retention | Backup |
|---|---|---|---|---|---|
| Home | /home/userid | Small datasets, output, and custom software | 2 TB | long | Y |
| Scratch | /scratch | Large datasets and output | 20 TB | short | N |
| Tools | /tools | Software packages, compilers, and libraries | NA | long | Y |

Data Transfer

To transfer files to your home directory from your local machine:

scp -r <source_filename> <userid>@easley.auburn.edu:~/<target_filename>

To transfer files to your local machine from your home directory:

scp -r <userid>@easley.auburn.edu:~/<source_filename> <target_filename>