Installation

Choose a Python distribution


Then install PyTorch using the accompanying package manager.

Using mamba 

To install using mamba:

module load mamba/py3.12

mamba create --name pytorch_tutorial python=3.12

mamba activate pytorch_tutorial

mamba install pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch -c nvidia

Using conda 

To install using conda:

module load python3.11-anaconda

conda create --name pytorch_tutorial python=3.11

conda activate pytorch_tutorial

conda install pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch -c nvidia

Standard Python approach

To install from PyPI:

module load python

python -m venv pytorch_tutorial

source ./pytorch_tutorial/bin/activate

pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
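Whichever method you used, a short Python check confirms the environment is working. This is a minimal sketch (the guarded import is our addition, so it also reports a missing or inactive environment rather than raising):

```python
# Quick sanity check: confirm torch imports and whether CUDA is visible.
import importlib.util

def check_torch():
    """Return a short status string for the current environment."""
    if importlib.util.find_spec("torch") is None:
        return "torch not found -- activate the pytorch_tutorial environment first"
    import torch
    cuda = "CUDA available" if torch.cuda.is_available() else "CPU only"
    return f"PyTorch {torch.__version__} ({cuda})"

print(check_torch())
```

Run this inside the activated environment; on a login node without a GPU it is normal to see "CPU only" even for a CUDA build.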

Advanced (compiling PyTorch from source)

The following Bash script automates the download and compile steps. Save it as torch_installer.sh, and edit DEST_DIR to point at a directory you own before running it.

#!/bin/bash
set -e  # Exit immediately if a command exits with a non-zero status
set -u  # Treat unset variables as an error
set -o pipefail  # Prevents errors in a pipeline from being masked

# Define paths
CUSPARSELT_URL="https://developer.download.nvidia.com/compute/cusparselt/redist/libcusparse_lt/linux-x86_64/libcusparse_lt-linux-x86_64-0.7.0.0-archive.tar.xz"
CUDSS_URL="https://developer.download.nvidia.com/compute/cudss/redist/libcudss/linux-x86_64/libcudss-linux-x86_64-0.4.0.2_cuda12-archive.tar.xz"
PYTORCH_URL="https://github.com/pytorch/pytorch/releases/download/v2.6.0/pytorch-v2.6.0.tar.gz"
DEST_DIR="/to/path/destination"  # Edit: destination directory for the downloaded libraries
LIB_DIR="$DEST_DIR/lib"
LIB_CUDSS_DIR="$DEST_DIR/libcudss"
PYTORCH_DIR="pytorch-v2.6.0"

# Ensure required commands exist
for cmd in wget tar python module; do
    if ! command -v "$cmd" &> /dev/null; then
        echo "Error: $cmd command not found. Please install it first."
        exit 1
    fi
done

# Download and extract cuSPARSELt
wget -O libcusparse_lt.tar.xz "$CUSPARSELT_URL"
mkdir -p "$LIB_DIR"
tar -xf libcusparse_lt.tar.xz -C "$LIB_DIR"

# Download and extract cuDSS
wget -O libcudss.tar.xz "$CUDSS_URL"
mkdir -p "$LIB_CUDSS_DIR"
tar -xf libcudss.tar.xz -C "$LIB_CUDSS_DIR"

# Load required modules
module load gcc/10.3.0 cuda/12.6 cudnn openmpi/5.0.3-cuda

# Download and extract PyTorch
wget -O pytorch.tar.gz "$PYTORCH_URL"
tar -xf pytorch.tar.gz
cd "$PYTORCH_DIR"

# Ensure the correct CMake paths
if [[ ! -d "$LIB_DIR" || ! -d "$LIB_CUDSS_DIR" ]]; then
    echo "Error: Required library directories not found. Check extraction paths."
    exit 1
fi

export CMAKE_PREFIX_PATH="$LIB_DIR:$LIB_CUDSS_DIR:/sw/pkgs/arc/stacks/gcc/13.2.0/openmpi/5.0.3-cuda"

# Configure the build (--cmake-only stops after the CMake step so settings can be inspected)
python setup.py build --cmake-only

# Build and install PyTorch into the user site-packages
python setup.py install --user

# Verify installation
python -c "import torch; print(torch.__config__.show()); print('CUDA Available:', torch.cuda.is_available())"
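Compiling PyTorch is CPU- and memory-intensive, so on a shared cluster it is usually better to run the installer as a batch job than on a login node. A hedged sketch of a Slurm submission (the account, partition, and resource values below are placeholders to adjust for your cluster):

```shell
# Submit torch_installer.sh as a batch job; all values are placeholders.
sbatch --job-name=torch_build \
       --account=your_account \
       --partition=standard \
       --cpus-per-task=8 \
       --mem=48g \
       --time=08:00:00 \
       --wrap="bash torch_installer.sh"
```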

After the install completes, verify it interactively:

$ python
Python 3.12.1 (main, Jan 15 2024, 10:35:30) [GCC 13.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> print(torch.__config__.show()) 
PyTorch built with:
  - GCC 10.3
  - C++ Version: 201703
  - Intel(R) MKL-DNN v3.5.3 (Git Hash N/A)
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - LAPACK is enabled (usually provided by MKL)
  - NNPACK is enabled
  - CPU capability usage: AVX512
  - Build settings: BLAS_INFO=open, BUILD_TYPE=Release, COMMIT_SHA=Unknown, CUDA_VERSION=12.6, CUDNN_VERSION=9.6.0, CXX_COMPILER=/sw/pkgs/arc/gcc/10.3.0/bin/g++, CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=1 -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DLIBKINETO_NOXPUPTI=ON -DUSE_FBGEMM -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-strict-overflow -Wno-strict-aliasing -Wno-stringop-overflow -Wsuggest-override -Wno-psabi -Wno-error=old-style-cast -Wno-missing-braces -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, FORCE_FALLBACK_CUDA_MPI=1, LAPACK_INFO=open, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, TORCH_VERSION=2.6.0, USE_CUDA=ON, USE_CUDNN=ON, USE_CUSPARSELT=ON, USE_EIGEN_FOR_BLAS=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_GLOO=ON, USE_MKL=OFF, USE_MKLDNN=ON, USE_MPI=ON, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, USE_ROCM_KERNEL_ASSERT=OFF, 

>>> torch.cuda.is_available()
True
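Beyond the import check, a small matrix multiply exercises the compute path. This sketch (our addition, not part of the build output above) falls back to CPU when no GPU is visible:

```python
# Minimal compute smoke test; uses the GPU when available, CPU otherwise.
import importlib.util

def smoke_test(n=64):
    """Multiply an n x n random matrix by itself and report the device used."""
    if importlib.util.find_spec("torch") is None:
        return "torch not installed"
    import torch
    device = "cuda" if torch.cuda.is_available() else "cpu"
    x = torch.randn(n, n, device=device)
    y = x @ x
    return f"matmul of shape {tuple(y.shape)} OK on {device}"

print(smoke_test())
```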