Armis2

Armis2 is a campus-wide HPC cluster suitable for export-controlled and HIPAA-regulated data; users remain responsible for security and compliance related to any sensitive code and/or data. The cluster runs Slurm as its workload manager, allowing users to work interactively or submit batch jobs.
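As a minimal sketch of batch submission (the account, partition, and module names below are placeholder assumptions, not Armis2-specific values):

```bash
#!/bin/bash
# Minimal Slurm batch script; replace the placeholder account and
# partition with values valid for your allocation.
#SBATCH --job-name=example
#SBATCH --account=example_account
#SBATCH --partition=standard
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --mem-per-cpu=1g
#SBATCH --time=00:10:00

module load python   # illustrative module name; run 'module avail' for real names
python --version
```

Submit the script with `sbatch example.sh` and monitor it with `squeue -u $USER`.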
Researchers are urged to acknowledge ARC in any publication, presentation, report, or proposal on research that involved ARC hardware (Great Lakes or other resources) and/or staff expertise. Suggested text: “This research was supported in part through computational resources and services provided by Advanced Research Computing at the University of Michigan, Ann Arbor.”
Armis2 Cluster Defaults
- Default walltime: 60 minutes
- Default memory per CPU: 768 MB
- Default number of CPUs: no memory specified: 1 core; memory specified: memory / 768 MB = number of cores (rounded down)
- /scratch file deletion policy:
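To illustrate the CPU default with concrete numbers: a job that requests 4 GB of memory and no CPU count gets floor(4096 / 768) = 5 cores. A hedged interactive example (the account name is a placeholder):

```bash
# Requests 4 GB; per the defaults above, the job is assigned
# floor(4096 MB / 768 MB) = 5 cores and a 60-minute walltime.
srun --mem=4g --account=example_account --pty bash
```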
HARDWARE
- Standard: 95 nodes; 2x 2.5 GHz Intel Haswell (Xeon E5-2680v3)
- Large Memory: 5 nodes; 2x 3.0 GHz Intel Skylake (Xeon Gold 6154)
- GPU (TitanV): 3 nodes; 2x 2.1 GHz Intel Broadwell (Xeon E5-2620v4)
- GPU (V100): 1 node; 2x 2.5 GHz Intel Cascade Lake (Xeon Gold 6248)
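To land on a particular node type, request the matching partition and, for GPU nodes, a GPU via `--gres`. The partition name below is an assumption; confirm the actual names with `sinfo`:

```bash
# Interactive session on an assumed 'gpu' partition with one GPU
# ('example_account' is a placeholder).
srun --partition=gpu --gres=gpu:1 --mem=8g --time=1:00:00 \
     --account=example_account --pty bash
```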
Welcome to using the cluster from the command line, where you can do things that Open OnDemand does not support. If you are working with data types that may need special consideration, be sure to visit the Safe Computing Data Guide.

Why use the command line?
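One answer: direct access to Slurm and the software module system. A minimal session might look like the following (the login hostname follows ARC's usual naming and should be verified against the ARC documentation):

```bash
# Connect to an Armis2 login node (hostname assumed; verify before use).
ssh uniqname@armis2.arc-ts.umich.edu

# Discover and load installed software through Lmod modules.
module avail
module load python   # illustrative module name
```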
Open OnDemand is an alternative way for users to run interactive jobs on Great Lakes, Armis2, and Lighthouse: you can start computing immediately, and its simple interface makes it easy to learn and use.
Partition Policies

Slurm partitions represent collections of nodes dedicated to a computational purpose and are equivalent to Torque queues. For more Armis2 hardware specifications, see the Configuration page.
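The available partitions and their limits can be inspected directly on the cluster with standard Slurm commands (the partition name in the second command is an assumption):

```bash
# Summarize partitions: names, availability, time limits, node counts.
sinfo -s

# Show the full policy for one partition, e.g. walltime and size limits.
scontrol show partition standard
```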