Start by reading the Submitting CPU Jobs page.
GPU jobs are submitted to the gpu partition. To ask for one GPU card, use the #SBATCH -G 1 directive, or the -G 1 option on the command line. The submitted job has CUDA_VISIBLE_DEVICES set appropriately, so all CUDA applications will use only the allocated GPUs.
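A minimal batch script might look as follows; the script and file names are illustrative, only the Slurm directives come from this page:

 #!/bin/bash
 #SBATCH -p gpu       # submit to the gpu partition
 #SBATCH -G 1         # ask for one GPU card
 #SBATCH --mem=16G    # RAM requirement (see the rules below)
 #SBATCH -t 4:00:00   # approximate runtime (see the rules below)
 # CUDA_VISIBLE_DEVICES is set by Slurm, so the script sees only the allocated GPU
 python train.py

You would then submit it with sbatch job.sh.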
Rules
- Always use GPUs via sbatch (or srun), never via ssh. You can ssh to any machine, e.g. to run nvidia-smi or htop, but not to start computations on a GPU.
- Don't forget to specify your RAM requirements, e.g. --mem=10G.
- Always specify the number of GPU cards (e.g. -G 1), for example: srun -p gpu --mem=64G -G 2 --pty bash
- For interactive jobs, you can use srun, but make sure to end your job as soon as you don't need the GPU (so don't use srun for long training).
- In general: don't reserve a GPU (as described above) without actually using it for extended periods. For example, try separating the steps which need a GPU from the steps which do not, and execute them on our GPU and CPU clusters, respectively (see the sketch after this list).
- If you know the approximate runtime of your job, please specify it with -t. Acceptable time formats include "minutes", "minutes:seconds", "hours:minutes:seconds", "days-hours", "days-hours:minutes" and "days-hours:minutes:seconds".
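As a sketch of the GPU/CPU separation suggested above, you can chain a CPU-only job and a GPU job with a Slurm dependency. The script names are hypothetical, and the first job is submitted to the default (CPU) partition; see the Submitting CPU Jobs page for the actual partition names:

 # run the CPU-only preprocessing first (hypothetical script names)
 jid=$(sbatch --parsable --mem=10G -t 2:00:00 preprocess.sh)
 # start the GPU training only after the preprocessing finishes successfully
 sbatch --dependency=afterok:$jid -p gpu -G 1 --mem=16G -t 1-00:00 train.sh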
CUDA and cuDNN
Available CUDA versions are installed in
 /lnet/aic/opt/cuda/
and as of Apr 2023, the available versions are 10.1, 10.2, 11.2, 11.7 and 11.8.
The cuDNN library is also available in the subdirectory cudnn/VERSION/lib64 of the respective CUDA directories.
Therefore, to use CUDA 11.2 with cuDNN 8.1.1, you should add the following to your .profile:
 export PATH="/lnet/aic/opt/cuda/cuda-11.2/bin:$PATH"
 export LD_LIBRARY_PATH="/lnet/aic/opt/cuda/cuda-11.2/lib64:/lnet/aic/opt/cuda/cuda-11.2/cudnn/8.1.1/lib64:/lnet/aic/opt/cuda/cuda-11.2/extras/CUPTI/lib64:$LD_LIBRARY_PATH"
 export XLA_FLAGS=--xla_gpu_cuda_data_dir=/lnet/aic/opt/cuda/cuda-11.2  # XLA configuration if you are using TensorFlow
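After logging in again (or sourcing the file), you can check that the toolkit is picked up; this is just a sanity check, not a required step:

 source ~/.profile   # or log out and back in
 which nvcc          # should point into /lnet/aic/opt/cuda/cuda-11.2/bin
 nvcc --version      # should report CUDA 11.2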
CUDA modules
CUDA 11.2 and later can also be loaded as modules. Loading a module sets the various environment variables for you, so you should be able to use CUDA without further configuration. On a GPU node, you can do the following (a short example follows the module list below):
- list available modules with: module avail
- load the version you need (possibly specifying the version of cuDNN): module load <modulename>
- you can unload the module with: module unload <modulename>
As of Apr 2023, the available modules are:
- cuda/11.2
- cuda/11.2-cudnn8.1
- cuda/11.7
- cuda/11.7-cudnn8.5
- cuda/11.8
- cuda/11.8-cudnn8.5
- cuda/11.8-cudnn8.6
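For example, a typical session with one of the modules listed above might look like this:

 module avail                      # list available modules
 module load cuda/11.8-cudnn8.6    # CUDA 11.8 with cuDNN 8.6
 nvcc --version                    # verify the loaded CUDA version
 module unload cuda/11.8-cudnn8.6  # unload when done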