Armis2 Cluster Defaults
Cluster Defaults | Default Value |
---|---|
Default Walltime | 60 minutes |
Default Memory Per CPU | 768 MB |
Default Number of CPUs | No memory specified: 1 core; memory specified: requested memory / 768 MB = number of cores, rounded down (see the example below this table) |
/scratch file deletion policy | 60 days without being accessed (see Scratch Storage Policies below) |
/scratch quota per root account | 10 TB storage limit (see Scratch Storage Policies below) |
/home quota per user | 80 GB |
Max queued jobs per user per account | 5,000 |
Shell timeout if idle | 15 minutes |
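As a hypothetical illustration of the default-CPU rule above (the script name and memory request are placeholders, not values from this documentation):

```bash
# Request 4 GB of memory without specifying a CPU count.
# Per the table above, the scheduler derives the core count from memory:
# 4096 MB / 768 MB per CPU = 5.33, rounded down to 5 cores.
sbatch --mem=4096m --time=60 my_job.sh
```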
Armis2 Partition Limits
Partition Limit | standard | gpu | largemem |
---|---|---|---|
Max Walltime | 2 weeks | 2 weeks | 2 weeks |
Max running memory per root account | 5160 GB | 2210 GB | |
Max running CPUs per root account | 1032 cores | 84 cores | |
Max running GPUs per root account | n/a | 10 | n/a |
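As a sketch of how a partition is typically selected at submission time (the account and script names are placeholders; check the cluster documentation for the exact GPU request syntax in use):

```bash
# Run on the standard partition; the per-root-account limits above apply.
sbatch --account=myproject1 -p standard my_job.sh

# Request one GPU on the gpu partition; running GPU jobs count
# toward the per-root-account limit of 10 GPUs shown above.
sbatch --account=myproject1 -p gpu --gres=gpu:1 my_gpu_job.sh
```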
Armis2 Login Limits
NOTICE: The login nodes are for interacting with the Slurm scheduler and for code and data management; they are not for running workloads. For sessions on the login nodes, users are limited to:
- 2 cores
- 4 GB of memory
If you need additional resources for testing:
- Submit an interactive job with salloc: `salloc --account=[your account] -p debug --time=10:00 --nodes=1 --ntasks-per-node=4` (this creates an interactive session on one node with four processors for 10 minutes)
- Use Open OnDemand.
- Use the debug partition if your testing does not require more than 8 CPUs and 40 GB of memory (see the sketch after this list).
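As a sketch, a debug-partition request at those stated maximums could look like this (the account name is a placeholder):

```bash
# Interactive session on the debug partition with the full allowance
# noted above: 8 CPUs and 40 GB of memory, here for 15 minutes.
salloc --account=myproject1 -p debug --time=15:00 --cpus-per-task=8 --mem=40g
```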
Armis2 Storage: /home and /scratch
On the Armis2 cluster, users have access to two main types of storage: /home and /scratch. Each serves a different purpose:
- /home is for personal, non-project work like scripts, small data files, and software builds.
- /scratch is for active project data needed while running jobs. It’s larger but temporary.
Both directories have specific rules about who can access your files.
File Permissions:
To protect your data and the system, files or folders that are set to be readable or writable by “other” (users outside your account or group) will have those permissions automatically removed. Always set appropriate permissions to limit access to just yourself or your project group. If you need a group to manage access to a directory or files, let us know.
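A minimal sketch of tightening permissions yourself with standard Linux commands (the directory names are hypothetical):

```bash
# Strip read/write/execute from "other" across a project tree,
# leaving access to you and your group only.
chmod -R o-rwx ~/my_project

# Make a directory private to you alone.
chmod 700 ~/private_data
```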
Home Directories
Every user is assigned a /home directory for non-project work. This space is intended for storing scripts, small data files, software builds, and other personal files unrelated to specific funded projects.
- Each /home directory has a hard quota of 80 GB.
- Files stored here should only be accessible by the user and their primary group(s).
- Files or directories with global (other) read or write permissions will have those permissions automatically removed to maintain system security and data integrity.
Use this space for development, light workflows, and other general-purpose computing needs. For large datasets or collaborative work, use the /scratch directory structure described below.
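To see how much of the 80 GB quota you are using, standard tools are enough; note that `du` reports usage only, not the quota itself:

```bash
# Summarize the total size of your home directory
# (this may take a while if you have many files).
du -sh $HOME
```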
Scratch Directories
Every user has a /scratch folder for each Slurm account they belong to. This space is designed for fast access to active data and should only be used for files needed to run current jobs on the cluster.
- /scratch uses the Turbo high-performance file system.
- It is not backed up, so do not store important or long-term files here. Move important data to a durable location when your jobs are done.
- Each Slurm account also has a shared_data folder where all users in the account can share files.
- The folders are set up so that files can be accessed by anyone in the same Slurm account group, to make working together easier.
Example:
```
/scratch/myproject_root/myproject1
/scratch/myproject_root/myproject1/bob
/scratch/myproject_root/myproject1/shared_data
```
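A sketch of the intended workflow under this layout (the paths follow the example above; the file and destination names are hypothetical):

```bash
# Share an input file with everyone in the Slurm account.
cp inputs.tar.gz /scratch/myproject_root/myproject1/shared_data/

# After a job finishes, copy results off /scratch to durable storage,
# since /scratch is not backed up and idle files are purged after 60 days.
rsync -av /scratch/myproject_root/myproject1/bob/results/ ~/archive/results/
```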
Slurm Accounts and Funding:
Each Slurm account may correspond to a different source of funding:
- myproject0 → UMRCP (free allocation)
- myproject1 → Research Allocation X (paid)
- myproject2 → Research Allocation Y (paid)
Make sure to use the correct account for each project so your usage is tracked properly.
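For instance, to charge a job to a specific paid allocation rather than the UMRCP account (the script name is a placeholder):

```bash
# Explicitly select the Slurm account so usage is tracked
# against the intended funding source.
sbatch --account=myproject1 my_job.sh
```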
Please see the section on Storage Policies for more details.