
A&S Research Cluster - Basic Info

Intended audience:
All users

Getting Started

The cluster can be accessed only through the login node, either via SSH or via the web interface. The host name for SSH connections is holly.ascs.uky.edu. Log in with your Linkblue username (without the AD\ or MC\ prefix) and password.
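For example, a typical SSH connection from a terminal looks like the following, where linkblueID is a placeholder for your own Linkblue username:

ssh linkblueID@holly.ascs.uky.edu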

New users will need to have their accounts activated before use. If you have used A&S Linux systems previously, please try logging in once to check whether your account is already active. Faculty members and graduate students in the College of Arts and Sciences can request an account on the cluster using this form. Student accounts require a faculty sponsor.

Use of the University’s VPN is required to access the cluster from off-campus locations. For information on how to download and set up the VPN client, please see our tutorials.

Basic Instructions for Batch Job Submission

Jobs on this system may not be run directly on the login node – they must be submitted through the batch job scheduling system. Jobs run on the login node interfere with the operation of the cluster for other users and may be killed at any time.

The cluster uses Slurm to schedule jobs and monitor resource usage. Jobs are generally submitted as shell scripts containing #SBATCH directive lines with parameters for the scheduler. Sample scripts are available in the /share/apps/examples/ directory on the system. A basic script is below:

#!/bin/bash
# Request one node, one task, 2 GB of memory, and 30 minutes of walltime
#SBATCH --nodes=1 --ntasks-per-node=1 --mem=2G --time=00:30:00
# Send email notifications about the job (start, end, failure) to this address
#SBATCH --mail-user=user@uky.edu
#SBATCH --mail-type=ALL

# Load the NAMD module, then run NAMD on the example input file
module load NAMD/2.9

namd2 /home/user/apoa1/apoa1.namd

To submit a job, use the sbatch command like this:

sbatch simple.sh
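If the submission is accepted, sbatch normally prints the ID that Slurm has assigned to the job, for example (the job number will differ):

Submitted batch job 12345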

The sjstat, sinfo, and showq commands provide information about the state of the queues and the jobs currently in them. scancel is used to cancel jobs. More information about these commands is available via their man pages, and more complex examples of the available options can be found online.
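As a quick sketch (the job ID 12345 below is only an example), you can list your own jobs with the standard Slurm squeue command and cancel one with scancel:

squeue -u $USER
scancel 12345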

Queues

The cluster has several available queues, which should be selected based on the estimated time needed for a job.  The generally available queues are as follows:

pcompute (default) – up to 48 hours
long – up to 30 days
short – up to 24 hours

The queues have varying resource limits for jobs, which may be adjusted from time to time based on utilization patterns. Generally, if your job can run within the limits of the default queue, use the default queue.
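For example, to request a queue other than the default, add a partition directive to your batch script or pass it on the sbatch command line; the queue name here assumes the long queue listed above:

#SBATCH --partition=long

or, equivalently:

sbatch --partition=long simple.sh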

Additional queues are available to users associated with certain departments or faculty research groups. These queues will not accept jobs from users who are not in those groups.

stats_short
stats_medium
stats_long
manon
nguyen
wang

More Information

For a good source of information about Slurm and its options, please check this site (hosted by Harvard University).  Most of the information is fairly generalized and should be useful on any system running Slurm.

Cluster Hardware Information
