These applications are not officially maintained; that is, HPC does not update this software or provide official support or documentation.
Sponsored applications are sponsored by users who have volunteered to share their software installations with users outside their project group and to field basic questions about usage. There is no expectation that the applications will be updated to the latest version, that the installations have been rigorously tested, or that the volunteer fielding questions will have the capacity to provide extensive or time-sensitive troubleshooting or training.
These applications are currently not maintained and not available. However, here are some general guidelines for installing some popular packages yourself.
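Every environment in the steps below lives under the same prefix, /usr/local/usrapps/$GROUP/$USER/&lt;env_name&gt;. A stdlib-only Python sketch of how that path is composed (the group/user values here are illustrative placeholders, not real accounts):

```python
import os

def env_prefix(env_name, group=None, user=None):
    """Compose /usr/local/usrapps/$GROUP/$USER/<env_name>, the prefix
    convention used for the conda environments in the steps below."""
    group = group or os.environ.get("GROUP", "mygroup")  # placeholder fallback
    user = user or os.environ.get("USER", "me")          # placeholder fallback
    return f"/usr/local/usrapps/{group}/{user}/{env_name}"

print(env_prefix("pytorch_conda_env", group="hpcgrp", user="jdoe"))
# → /usr/local/usrapps/hpcgrp/jdoe/pytorch_conda_env
```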
conda create --prefix /usr/local/usrapps/$GROUP/$USER/pytorch_conda_env pip

4) Activate the conda environment:
conda activate /usr/local/usrapps/$GROUP/$USER/pytorch_conda_env

5) Go to: pytorch. Scroll to the bottom of the page to design your pip install command for pytorch (select "Stable" for Build, "Linux" for OS, "Pip" for Package, "Python" for Language, "CUDA 12.6" for Compute Platform).
bsub -Is -n 1 -W 30 -q short_gpu -gpu "num=1:mode=shared:mps=yes" bash

Once on the interactive compute node, activate the conda environment and start python:
conda activate /usr/local/usrapps/$GROUP/$USER/pytorch_conda_env
python
# run these python commands at the python command prompt
import torch
torch.cuda.is_available()  # If "True" is returned, pytorch is configured to run properly on the HPC GPUs
# At the time this was published to our website, tensorflow required python version < 3.14, hence the selection of version 3.13.9.
conda create --prefix /usr/local/usrapps/$GROUP/$USER/tf_conda_env pip python==3.13.9
conda activate /usr/local/usrapps/$GROUP/$USER/tf_conda_env
pip install tensorflow[and-cuda]
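The "python < 3.14" constraint mentioned above can be sanity-checked from inside the activated env before installing; a stdlib-only sketch (the 3.14 ceiling is the one stated in the comment above and may change):

```python
import sys

def version_ok(info=sys.version_info, ceiling=(3, 14)):
    """True if the interpreter's major.minor is below the tensorflow ceiling."""
    return tuple(info[:2]) < ceiling

# Report whether this interpreter satisfies the constraint.
print(sys.version.split()[0],
      "meets the < 3.14 constraint" if version_ok() else "too new for tensorflow")
```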
bsub -Is -n 1 -W 30 -q short_gpu -gpu "num=1:mode=exclusive_process:mps=yes" bash

Once on the interactive compute node, activate the conda environment and start python:
conda activate /usr/local/usrapps/$GROUP/$USER/tf_conda_env
python
# run these python commands using python command prompt
import tensorflow as tf
gpus = tf.config.list_physical_devices('GPU')
if gpus:
    print("GPUs available:")
    for gpu in gpus:
        print(f" {gpu}")
else:
    print("No GPUs found by TensorFlow.")
#!/bin/bash
#BSUB -n "4,64"  # counts down from 64 to 4 until it can find a node with that many cores
#BSUB -W 120
#BSUB -q short_gpu  # gets you a GPU node fast, but max wall clock time is 120 minutes (good for debugging); for production runs, switch to "-q gpu" and a longer wall clock time
#BSUB -gpu "mps=yes:mode=exclusive_process:num=2"  # tensorflow by default uses all VRAM (GPU memory) on its GPU, so it is best to use exclusive_process mode
#BSUB -R "rusage[mem=32]"
#BSUB -R "select[a10 || a30 || a100 || l40 || l40s || h100 || h200]"  # more modern GPUs
#BSUB -o out.%J
#BSUB -e err.%J

echo "Starting to run commands"
source ~/.bashrc
conda activate /usr/local/usrapps/$GROUP/$USER/tf_conda_env
# run your code
conda deactivate
bsub < submit.sh
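If you submit many variants of this job, the #BSUB directives can be templated rather than hand-edited. A stdlib-only sketch (queue name, directives, and env path copied from the script above; the parameter defaults are illustrative):

```python
def make_submit_script(env_path, queue="short_gpu", minutes=120, ngpus=2):
    """Render an LSF submission script mirroring the example above.
    Defaults are illustrative; adjust to your site's queues and limits."""
    lines = [
        "#!/bin/bash",
        '#BSUB -n "4,64"',
        f"#BSUB -W {minutes}",
        f"#BSUB -q {queue}",
        f'#BSUB -gpu "mps=yes:mode=exclusive_process:num={ngpus}"',
        '#BSUB -R "rusage[mem=32]"',
        "#BSUB -o out.%J",
        "#BSUB -e err.%J",
        "source ~/.bashrc",
        f"conda activate {env_path}",
        "# run your code",
        "conda deactivate",
    ]
    return "\n".join(lines) + "\n"

# Write the rendered script; submit it with `bsub < submit.sh` as above.
script = make_submit_script("/usr/local/usrapps/$GROUP/$USER/tf_conda_env")
print(script)
```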
Last modified: December 22 2025 18:33:44.