<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://aic.ufal.mff.cuni.cz/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Admin</id>
	<title>UFAL AIC - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://aic.ufal.mff.cuni.cz/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Admin"/>
	<link rel="alternate" type="text/html" href="https://aic.ufal.mff.cuni.cz/index.php/Special:Contributions/Admin"/>
	<updated>2026-04-07T13:50:47Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.32.1</generator>
	<entry>
		<id>https://aic.ufal.mff.cuni.cz/index.php?title=Submitting_GPU_Jobs&amp;diff=123</id>
		<title>Submitting GPU Jobs</title>
		<link rel="alternate" type="text/html" href="https://aic.ufal.mff.cuni.cz/index.php?title=Submitting_GPU_Jobs&amp;diff=123"/>
		<updated>2025-01-24T11:04:47Z</updated>

		<summary type="html">&lt;p&gt;Admin: /* List of installed GPUs */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Start by reading the [[Submitting CPU Jobs]] page.&lt;br /&gt;
&lt;br /&gt;
GPU jobs are submitted to the &amp;lt;code&amp;gt;gpu&amp;lt;/code&amp;gt; partition.&lt;br /&gt;
&lt;br /&gt;
To request one GPU card, use the &amp;lt;code&amp;gt;#SBATCH -G 1&amp;lt;/code&amp;gt; directive or the &amp;lt;code&amp;gt;-G 1&amp;lt;/code&amp;gt; option on the command line. The submitted job has &amp;lt;code&amp;gt;CUDA_VISIBLE_DEVICES&amp;lt;/code&amp;gt; set appropriately, so all CUDA applications should use only the allocated GPUs.&lt;br /&gt;
&lt;br /&gt;
== Rules ==&lt;br /&gt;
&lt;br /&gt;
* Always use GPUs via ''sbatch'' (or ''srun''), never via ''ssh''. You can ssh to any machine, e.g., to run ''nvidia-smi'' or ''htop'', but not to start computing on a GPU.&lt;br /&gt;
* Don't forget to specify your RAM requirements, e.g., with ''--mem=10G''.&lt;br /&gt;
* Always specify the number of GPU cards (e.g., ''-G 1''), for example: &amp;lt;code&amp;gt;srun -p gpu --mem=64G -G 2 --pty bash&amp;lt;/code&amp;gt;&lt;br /&gt;
* For interactive jobs, you can use ''srun'', but make sure to end your job as soon as you no longer need the GPU (so don't use srun for long training runs).&lt;br /&gt;
* In general, don't keep a GPU reserved (as described above) for a long time without actually using it. For example, try separating the steps that need a GPU from those that do not, and run them on the GPU and CPU clusters, respectively.&lt;br /&gt;
* If you know the approximate runtime of your job, please specify it with ''-t &amp;lt;time&amp;gt;''. Acceptable time formats include &amp;quot;minutes&amp;quot;, &amp;quot;minutes:seconds&amp;quot;, &amp;quot;hours:minutes:seconds&amp;quot;, &amp;quot;days-hours&amp;quot;, &amp;quot;days-hours:minutes&amp;quot; and &amp;quot;days-hours:minutes:seconds&amp;quot;.&lt;br /&gt;
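The rules above can be combined into a single submission script. The following is a minimal sketch, not an official template: the job name and the echo placeholder are hypothetical, while the #SBATCH options are the ones described on this page.

```shell
#!/bin/bash
# Hypothetical GPU submission script combining the rules above.
#SBATCH -J my-gpu-job         # job name (hypothetical)
#SBATCH -p gpu                # GPU partition
#SBATCH -G 1                  # number of GPU cards
#SBATCH --mem=10G             # RAM requirement
#SBATCH -t 1-12:00            # approximate runtime (1 day, 12 hours)

# Slurm sets CUDA_VISIBLE_DEVICES for the allocated cards, so CUDA
# applications started here see only the GPUs assigned to this job.
echo "allocated GPUs: ${CUDA_VISIBLE_DEVICES:-none (not inside a Slurm job)}"
```

Submit the script with ''sbatch''; replace the final echo with the real training command.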
&lt;br /&gt;
== CUDA and cuDNN ==&lt;br /&gt;
&lt;br /&gt;
Available CUDA versions are in&lt;br /&gt;
 /lnet/aic/opt/cuda/&lt;br /&gt;
and as of Apr 2023, the available versions are 10.1, 10.2, 11.2, 11.7, and 11.8.&lt;br /&gt;
&lt;br /&gt;
The cuDNN library is also available in the subdirectory &amp;lt;code&amp;gt;cudnn/VERSION/lib64&amp;lt;/code&amp;gt; of the respective CUDA directories.&lt;br /&gt;
&lt;br /&gt;
Therefore, to use CUDA 11.2 with cuDNN 8.1.1, you should add the following to your &amp;lt;code&amp;gt;.profile&amp;lt;/code&amp;gt;:&lt;br /&gt;
 export PATH=&amp;quot;/lnet/aic/opt/cuda/cuda-11.2/bin:$PATH&amp;quot;&lt;br /&gt;
 export LD_LIBRARY_PATH=&amp;quot;/lnet/aic/opt/cuda/cuda-11.2/lib64:/lnet/aic/opt/cuda/cuda-11.2/cudnn/8.1.1/lib64:/lnet/aic/opt/cuda/cuda-11.2/extras/CUPTI/lib64:$LD_LIBRARY_PATH&amp;quot;&lt;br /&gt;
 export XLA_FLAGS=&amp;quot;--xla_gpu_cuda_data_dir=/lnet/aic/opt/cuda/cuda-11.2&amp;quot; # XLA configuration if you are using TensorFlow&lt;br /&gt;
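As a quick sanity check that the prepend took effect (a generic shell illustration; the directory is the CUDA 11.2 path from above):

```shell
# After the exports above, the CUDA bin directory should be the
# first entry on PATH, so its nvcc is found before any other.
export PATH="/lnet/aic/opt/cuda/cuda-11.2/bin:$PATH"
echo "$PATH" | cut -d: -f1    # prints /lnet/aic/opt/cuda/cuda-11.2/bin
```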
&lt;br /&gt;
=== CUDA modules ===&lt;br /&gt;
CUDA 11.2 and later can also be loaded as modules. Loading a module sets the relevant environment variables for you, so CUDA should work without further setup.&lt;br /&gt;
&lt;br /&gt;
On a GPU node, you can do the following:&lt;br /&gt;
# list the available modules: &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt;&lt;br /&gt;
# load the version you need (possibly specifying the cuDNN version): &amp;lt;code&amp;gt;module load &amp;lt;modulename&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
# unload the module when you are done: &amp;lt;code&amp;gt;module unload &amp;lt;modulename&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As of Apr 2023, the available modules are&lt;br /&gt;
 cuda/11.2&lt;br /&gt;
 cuda/11.2-cudnn8.1&lt;br /&gt;
 cuda/11.7&lt;br /&gt;
 cuda/11.7-cudnn8.5&lt;br /&gt;
 cuda/11.8&lt;br /&gt;
 cuda/11.8-cudnn8.5&lt;br /&gt;
 cuda/11.8-cudnn8.6&lt;br /&gt;
 cuda/11.8-cudnn8.9&lt;br /&gt;
&lt;br /&gt;
=== List of installed GPUs ===&lt;br /&gt;
&lt;br /&gt;
==== GPU types and memory size ====&lt;br /&gt;
* 2080 - 11G GPU RAM&lt;br /&gt;
* A4000 - 16G GPU RAM&lt;br /&gt;
* 3090 - 24G GPU RAM&lt;br /&gt;
&lt;br /&gt;
 root@gpu-node1:~# nvidia-smi -L&lt;br /&gt;
 GPU 0: NVIDIA RTX A4000 (UUID: GPU-5b111b2e-ff0d-25f7-2e08-f4065c510832)&lt;br /&gt;
 GPU 1: NVIDIA RTX A4000 (UUID: GPU-9e4fa6ca-e3fa-d404-eac2-026295fbd076)&lt;br /&gt;
 GPU 2: NVIDIA RTX A4000 (UUID: GPU-189c4e93-0ebe-2c7b-aa61-270d08db5a9c)&lt;br /&gt;
 GPU 3: NVIDIA RTX A4000 (UUID: GPU-2f06bc8b-0ef4-6bd9-4385-69c76d73daae)&lt;br /&gt;
 GPU 4: NVIDIA RTX A4000 (UUID: GPU-818b6a31-6d23-39a5-2139-c2c6c8a1174e)&lt;br /&gt;
 GPU 5: NVIDIA GeForce RTX 3090 (UUID: GPU-ba293e60-32f9-6907-705b-e053d1bf453b)&lt;br /&gt;
 GPU 6: NVIDIA RTX A4000 (UUID: GPU-edbbd8a2-f618-070b-8fce-b9a5fa10ccb2)&lt;br /&gt;
 GPU 7: NVIDIA RTX A4000 (UUID: GPU-82956c50-ec17-6fb7-7898-d3c920c1b1f7)&lt;br /&gt;
 GPU 8: NVIDIA RTX A4000 (UUID: GPU-ae27887e-5198-ebc6-fa9c-1f4b66d91b46)&lt;br /&gt;
&lt;br /&gt;
 root@gpu-node2:~# nvidia-smi -L&lt;br /&gt;
 GPU 0: NVIDIA GeForce RTX 2080 Ti (UUID: GPU-15b17780-d818-bcd2-566c-564aa1dfc38e)&lt;br /&gt;
 GPU 1: NVIDIA GeForce RTX 2080 Ti (UUID: GPU-e184b0d4-7147-af43-041b-caa7f597363a)&lt;br /&gt;
 GPU 2: NVIDIA GeForce RTX 2080 Ti (UUID: GPU-ac1a453e-1c30-3fe0-e246-dd07c7645066)&lt;br /&gt;
 GPU 3: NVIDIA GeForce RTX 2080 Ti (UUID: GPU-4d19d859-d044-fdc8-17e0-e84fef4a8a13)&lt;br /&gt;
 GPU 4: NVIDIA GeForce RTX 2080 Ti (UUID: GPU-8035e3f3-76c9-124f-c5ea-d1dd4369f2a8)&lt;br /&gt;
 GPU 5: NVIDIA GeForce RTX 2080 Ti (UUID: GPU-670d0788-a048-8eef-ad1b-1eb77b18980b)&lt;br /&gt;
 GPU 6: NVIDIA GeForce RTX 2080 Ti (UUID: GPU-18d030c6-5956-f45f-7d15-ab53cffa813e)&lt;br /&gt;
 GPU 7: NVIDIA GeForce RTX 2080 Ti (UUID: GPU-f7940219-84a7-8c9c-386f-14e4043c9884)&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
		
	</entry>
	<entry>
		<id>https://aic.ufal.mff.cuni.cz/index.php?title=Submitting_GPU_Jobs&amp;diff=122</id>
		<title>Submitting GPU Jobs</title>
		<link rel="alternate" type="text/html" href="https://aic.ufal.mff.cuni.cz/index.php?title=Submitting_GPU_Jobs&amp;diff=122"/>
		<updated>2025-01-24T11:04:11Z</updated>

		<summary type="html">&lt;p&gt;Admin: /* List of installed GPUs */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Start by reading the [[Submitting CPU Jobs]] page.&lt;br /&gt;
&lt;br /&gt;
GPU jobs are submitted to the &amp;lt;code&amp;gt;gpu&amp;lt;/code&amp;gt; partition.&lt;br /&gt;
&lt;br /&gt;
To request one GPU card, use the &amp;lt;code&amp;gt;#SBATCH -G 1&amp;lt;/code&amp;gt; directive or the &amp;lt;code&amp;gt;-G 1&amp;lt;/code&amp;gt; option on the command line. The submitted job has &amp;lt;code&amp;gt;CUDA_VISIBLE_DEVICES&amp;lt;/code&amp;gt; set appropriately, so all CUDA applications should use only the allocated GPUs.&lt;br /&gt;
&lt;br /&gt;
== Rules ==&lt;br /&gt;
&lt;br /&gt;
* Always use GPUs via ''sbatch'' (or ''srun''), never via ''ssh''. You can ssh to any machine, e.g., to run ''nvidia-smi'' or ''htop'', but not to start computing on a GPU.&lt;br /&gt;
* Don't forget to specify your RAM requirements, e.g., with ''--mem=10G''.&lt;br /&gt;
* Always specify the number of GPU cards (e.g., ''-G 1''), for example: &amp;lt;code&amp;gt;srun -p gpu --mem=64G -G 2 --pty bash&amp;lt;/code&amp;gt;&lt;br /&gt;
* For interactive jobs, you can use ''srun'', but make sure to end your job as soon as you no longer need the GPU (so don't use srun for long training runs).&lt;br /&gt;
* In general, don't keep a GPU reserved (as described above) for a long time without actually using it. For example, try separating the steps that need a GPU from those that do not, and run them on the GPU and CPU clusters, respectively.&lt;br /&gt;
* If you know the approximate runtime of your job, please specify it with ''-t &amp;lt;time&amp;gt;''. Acceptable time formats include &amp;quot;minutes&amp;quot;, &amp;quot;minutes:seconds&amp;quot;, &amp;quot;hours:minutes:seconds&amp;quot;, &amp;quot;days-hours&amp;quot;, &amp;quot;days-hours:minutes&amp;quot; and &amp;quot;days-hours:minutes:seconds&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
== CUDA and cuDNN ==&lt;br /&gt;
&lt;br /&gt;
Available CUDA versions are in&lt;br /&gt;
 /lnet/aic/opt/cuda/&lt;br /&gt;
and as of Apr 2023, the available versions are 10.1, 10.2, 11.2, 11.7, and 11.8.&lt;br /&gt;
&lt;br /&gt;
The cuDNN library is also available in the subdirectory &amp;lt;code&amp;gt;cudnn/VERSION/lib64&amp;lt;/code&amp;gt; of the respective CUDA directories.&lt;br /&gt;
&lt;br /&gt;
Therefore, to use CUDA 11.2 with cuDNN 8.1.1, you should add the following to your &amp;lt;code&amp;gt;.profile&amp;lt;/code&amp;gt;:&lt;br /&gt;
 export PATH=&amp;quot;/lnet/aic/opt/cuda/cuda-11.2/bin:$PATH&amp;quot;&lt;br /&gt;
 export LD_LIBRARY_PATH=&amp;quot;/lnet/aic/opt/cuda/cuda-11.2/lib64:/lnet/aic/opt/cuda/cuda-11.2/cudnn/8.1.1/lib64:/lnet/aic/opt/cuda/cuda-11.2/extras/CUPTI/lib64:$LD_LIBRARY_PATH&amp;quot;&lt;br /&gt;
 export XLA_FLAGS=&amp;quot;--xla_gpu_cuda_data_dir=/lnet/aic/opt/cuda/cuda-11.2&amp;quot; # XLA configuration if you are using TensorFlow&lt;br /&gt;
&lt;br /&gt;
=== CUDA modules ===&lt;br /&gt;
CUDA 11.2 and later can also be loaded as modules. Loading a module sets the relevant environment variables for you, so CUDA should work without further setup.&lt;br /&gt;
&lt;br /&gt;
On a GPU node, you can do the following:&lt;br /&gt;
# list the available modules: &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt;&lt;br /&gt;
# load the version you need (possibly specifying the cuDNN version): &amp;lt;code&amp;gt;module load &amp;lt;modulename&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
# unload the module when you are done: &amp;lt;code&amp;gt;module unload &amp;lt;modulename&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As of Apr 2023, the available modules are&lt;br /&gt;
 cuda/11.2&lt;br /&gt;
 cuda/11.2-cudnn8.1&lt;br /&gt;
 cuda/11.7&lt;br /&gt;
 cuda/11.7-cudnn8.5&lt;br /&gt;
 cuda/11.8&lt;br /&gt;
 cuda/11.8-cudnn8.5&lt;br /&gt;
 cuda/11.8-cudnn8.6&lt;br /&gt;
 cuda/11.8-cudnn8.9&lt;br /&gt;
&lt;br /&gt;
=== List of installed GPUs ===&lt;br /&gt;
&lt;br /&gt;
==== GPU types and memory size ====&lt;br /&gt;
* 2080 - 11G GPU RAM&lt;br /&gt;
* A4000 - 16G GPU RAM&lt;br /&gt;
* 3090 - 24G GPU RAM&lt;br /&gt;
&lt;br /&gt;
 root@gpu-node1:~# nvidia-smi -L&lt;br /&gt;
 GPU 0: NVIDIA RTX A4000 (UUID: GPU-5b111b2e-ff0d-25f7-2e08-f4065c510832)&lt;br /&gt;
 GPU 1: NVIDIA RTX A4000 (UUID: GPU-9e4fa6ca-e3fa-d404-eac2-026295fbd076)&lt;br /&gt;
 GPU 2: NVIDIA RTX A4000 (UUID: GPU-189c4e93-0ebe-2c7b-aa61-270d08db5a9c)&lt;br /&gt;
 GPU 3: NVIDIA RTX A4000 (UUID: GPU-2f06bc8b-0ef4-6bd9-4385-69c76d73daae)&lt;br /&gt;
 GPU 4: NVIDIA RTX A4000 (UUID: GPU-818b6a31-6d23-39a5-2139-c2c6c8a1174e)&lt;br /&gt;
 GPU 5: NVIDIA GeForce RTX 3090 (UUID: GPU-ba293e60-32f9-6907-705b-e053d1bf453b)&lt;br /&gt;
 GPU 6: NVIDIA RTX A4000 (UUID: GPU-edbbd8a2-f618-070b-8fce-b9a5fa10ccb2)&lt;br /&gt;
 GPU 7: NVIDIA RTX A4000 (UUID: GPU-82956c50-ec17-6fb7-7898-d3c920c1b1f7)&lt;br /&gt;
 GPU 8: NVIDIA RTX A4000 (UUID: GPU-ae27887e-5198-ebc6-fa9c-1f4b66d91b46)&lt;br /&gt;
&lt;br /&gt;
 root@gpu-node2:~# nvidia-smi -L&lt;br /&gt;
 GPU 0: NVIDIA GeForce RTX 2080 Ti (UUID: GPU-15b17780-d818-bcd2-566c-564aa1dfc38e)&lt;br /&gt;
 GPU 1: NVIDIA GeForce RTX 2080 Ti (UUID: GPU-e184b0d4-7147-af43-041b-caa7f597363a)&lt;br /&gt;
 GPU 2: NVIDIA GeForce RTX 2080 Ti (UUID: GPU-ac1a453e-1c30-3fe0-e246-dd07c7645066)&lt;br /&gt;
 GPU 3: NVIDIA GeForce RTX 2080 Ti (UUID: GPU-4d19d859-d044-fdc8-17e0-e84fef4a8a13)&lt;br /&gt;
 GPU 4: NVIDIA GeForce RTX 2080 Ti (UUID: GPU-8035e3f3-76c9-124f-c5ea-d1dd4369f2a8)&lt;br /&gt;
 GPU 5: NVIDIA GeForce RTX 2080 Ti (UUID: GPU-670d0788-a048-8eef-ad1b-1eb77b18980b)&lt;br /&gt;
 GPU 6: NVIDIA GeForce RTX 2080 Ti (UUID: GPU-18d030c6-5956-f45f-7d15-ab53cffa813e)&lt;br /&gt;
 GPU 7: NVIDIA GeForce RTX 2080 Ti (UUID: GPU-f7940219-84a7-8c9c-386f-14e4043c9884)&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
		
	</entry>
	<entry>
		<id>https://aic.ufal.mff.cuni.cz/index.php?title=Submitting_GPU_Jobs&amp;diff=121</id>
		<title>Submitting GPU Jobs</title>
		<link rel="alternate" type="text/html" href="https://aic.ufal.mff.cuni.cz/index.php?title=Submitting_GPU_Jobs&amp;diff=121"/>
		<updated>2025-01-24T10:59:27Z</updated>

		<summary type="html">&lt;p&gt;Admin: /* List of installed GPUs */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Start by reading the [[Submitting CPU Jobs]] page.&lt;br /&gt;
&lt;br /&gt;
GPU jobs are submitted to the &amp;lt;code&amp;gt;gpu&amp;lt;/code&amp;gt; partition.&lt;br /&gt;
&lt;br /&gt;
To request one GPU card, use the &amp;lt;code&amp;gt;#SBATCH -G 1&amp;lt;/code&amp;gt; directive or the &amp;lt;code&amp;gt;-G 1&amp;lt;/code&amp;gt; option on the command line. The submitted job has &amp;lt;code&amp;gt;CUDA_VISIBLE_DEVICES&amp;lt;/code&amp;gt; set appropriately, so all CUDA applications should use only the allocated GPUs.&lt;br /&gt;
&lt;br /&gt;
== Rules ==&lt;br /&gt;
&lt;br /&gt;
* Always use GPUs via ''sbatch'' (or ''srun''), never via ''ssh''. You can ssh to any machine, e.g., to run ''nvidia-smi'' or ''htop'', but not to start computing on a GPU.&lt;br /&gt;
* Don't forget to specify your RAM requirements, e.g., with ''--mem=10G''.&lt;br /&gt;
* Always specify the number of GPU cards (e.g., ''-G 1''), for example: &amp;lt;code&amp;gt;srun -p gpu --mem=64G -G 2 --pty bash&amp;lt;/code&amp;gt;&lt;br /&gt;
* For interactive jobs, you can use ''srun'', but make sure to end your job as soon as you no longer need the GPU (so don't use srun for long training runs).&lt;br /&gt;
* In general, don't keep a GPU reserved (as described above) for a long time without actually using it. For example, try separating the steps that need a GPU from those that do not, and run them on the GPU and CPU clusters, respectively.&lt;br /&gt;
* If you know the approximate runtime of your job, please specify it with ''-t &amp;lt;time&amp;gt;''. Acceptable time formats include &amp;quot;minutes&amp;quot;, &amp;quot;minutes:seconds&amp;quot;, &amp;quot;hours:minutes:seconds&amp;quot;, &amp;quot;days-hours&amp;quot;, &amp;quot;days-hours:minutes&amp;quot; and &amp;quot;days-hours:minutes:seconds&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
== CUDA and cuDNN ==&lt;br /&gt;
&lt;br /&gt;
Available CUDA versions are in&lt;br /&gt;
 /lnet/aic/opt/cuda/&lt;br /&gt;
and as of Apr 2023, the available versions are 10.1, 10.2, 11.2, 11.7, and 11.8.&lt;br /&gt;
&lt;br /&gt;
The cuDNN library is also available in the subdirectory &amp;lt;code&amp;gt;cudnn/VERSION/lib64&amp;lt;/code&amp;gt; of the respective CUDA directories.&lt;br /&gt;
&lt;br /&gt;
Therefore, to use CUDA 11.2 with cuDNN 8.1.1, you should add the following to your &amp;lt;code&amp;gt;.profile&amp;lt;/code&amp;gt;:&lt;br /&gt;
 export PATH=&amp;quot;/lnet/aic/opt/cuda/cuda-11.2/bin:$PATH&amp;quot;&lt;br /&gt;
 export LD_LIBRARY_PATH=&amp;quot;/lnet/aic/opt/cuda/cuda-11.2/lib64:/lnet/aic/opt/cuda/cuda-11.2/cudnn/8.1.1/lib64:/lnet/aic/opt/cuda/cuda-11.2/extras/CUPTI/lib64:$LD_LIBRARY_PATH&amp;quot;&lt;br /&gt;
 export XLA_FLAGS=&amp;quot;--xla_gpu_cuda_data_dir=/lnet/aic/opt/cuda/cuda-11.2&amp;quot; # XLA configuration if you are using TensorFlow&lt;br /&gt;
&lt;br /&gt;
=== CUDA modules ===&lt;br /&gt;
CUDA 11.2 and later can also be loaded as modules. Loading a module sets the relevant environment variables for you, so CUDA should work without further setup.&lt;br /&gt;
&lt;br /&gt;
On a GPU node, you can do the following:&lt;br /&gt;
# list the available modules: &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt;&lt;br /&gt;
# load the version you need (possibly specifying the cuDNN version): &amp;lt;code&amp;gt;module load &amp;lt;modulename&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
# unload the module when you are done: &amp;lt;code&amp;gt;module unload &amp;lt;modulename&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As of Apr 2023, the available modules are&lt;br /&gt;
 cuda/11.2&lt;br /&gt;
 cuda/11.2-cudnn8.1&lt;br /&gt;
 cuda/11.7&lt;br /&gt;
 cuda/11.7-cudnn8.5&lt;br /&gt;
 cuda/11.8&lt;br /&gt;
 cuda/11.8-cudnn8.5&lt;br /&gt;
 cuda/11.8-cudnn8.6&lt;br /&gt;
 cuda/11.8-cudnn8.9&lt;br /&gt;
&lt;br /&gt;
=== List of installed GPUs ===&lt;br /&gt;
 root@gpu-node1:~# nvidia-smi -L&lt;br /&gt;
 GPU 0: NVIDIA RTX A4000 (UUID: GPU-5b111b2e-ff0d-25f7-2e08-f4065c510832)&lt;br /&gt;
 GPU 1: NVIDIA RTX A4000 (UUID: GPU-9e4fa6ca-e3fa-d404-eac2-026295fbd076)&lt;br /&gt;
 GPU 2: NVIDIA RTX A4000 (UUID: GPU-189c4e93-0ebe-2c7b-aa61-270d08db5a9c)&lt;br /&gt;
 GPU 3: NVIDIA RTX A4000 (UUID: GPU-2f06bc8b-0ef4-6bd9-4385-69c76d73daae)&lt;br /&gt;
 GPU 4: NVIDIA RTX A4000 (UUID: GPU-818b6a31-6d23-39a5-2139-c2c6c8a1174e)&lt;br /&gt;
 GPU 5: NVIDIA GeForce RTX 3090 (UUID: GPU-ba293e60-32f9-6907-705b-e053d1bf453b)&lt;br /&gt;
 GPU 6: NVIDIA RTX A4000 (UUID: GPU-edbbd8a2-f618-070b-8fce-b9a5fa10ccb2)&lt;br /&gt;
 GPU 7: NVIDIA RTX A4000 (UUID: GPU-82956c50-ec17-6fb7-7898-d3c920c1b1f7)&lt;br /&gt;
 GPU 8: NVIDIA RTX A4000 (UUID: GPU-ae27887e-5198-ebc6-fa9c-1f4b66d91b46)&lt;br /&gt;
&lt;br /&gt;
 root@gpu-node2:~# nvidia-smi -L&lt;br /&gt;
 GPU 0: NVIDIA GeForce RTX 2080 Ti (UUID: GPU-15b17780-d818-bcd2-566c-564aa1dfc38e)&lt;br /&gt;
 GPU 1: NVIDIA GeForce RTX 2080 Ti (UUID: GPU-e184b0d4-7147-af43-041b-caa7f597363a)&lt;br /&gt;
 GPU 2: NVIDIA GeForce RTX 2080 Ti (UUID: GPU-ac1a453e-1c30-3fe0-e246-dd07c7645066)&lt;br /&gt;
 GPU 3: NVIDIA GeForce RTX 2080 Ti (UUID: GPU-4d19d859-d044-fdc8-17e0-e84fef4a8a13)&lt;br /&gt;
 GPU 4: NVIDIA GeForce RTX 2080 Ti (UUID: GPU-8035e3f3-76c9-124f-c5ea-d1dd4369f2a8)&lt;br /&gt;
 GPU 5: NVIDIA GeForce RTX 2080 Ti (UUID: GPU-670d0788-a048-8eef-ad1b-1eb77b18980b)&lt;br /&gt;
 GPU 6: NVIDIA GeForce RTX 2080 Ti (UUID: GPU-18d030c6-5956-f45f-7d15-ab53cffa813e)&lt;br /&gt;
 GPU 7: NVIDIA GeForce RTX 2080 Ti (UUID: GPU-f7940219-84a7-8c9c-386f-14e4043c9884)&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
		
	</entry>
	<entry>
		<id>https://aic.ufal.mff.cuni.cz/index.php?title=Submitting_CPU_Jobs&amp;diff=120</id>
		<title>Submitting CPU Jobs</title>
		<link rel="alternate" type="text/html" href="https://aic.ufal.mff.cuni.cz/index.php?title=Submitting_CPU_Jobs&amp;diff=120"/>
		<updated>2024-04-23T14:49:45Z</updated>

		<summary type="html">&lt;p&gt;Admin: /* Selected submit options */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;CPU jobs should be submitted to the &amp;lt;code&amp;gt;cpu&amp;lt;/code&amp;gt; partition.&lt;br /&gt;
&lt;br /&gt;
You can submit a non-interactive job using the '''sbatch''' command.&lt;br /&gt;
To submit an interactive job, use the '''srun''' command:&lt;br /&gt;
&lt;br /&gt;
 srun --pty bash&lt;br /&gt;
&lt;br /&gt;
== Resource specification ==&lt;br /&gt;
&lt;br /&gt;
You should specify your memory and CPU requirements (if they are higher than the defaults) and must not exceed them.&lt;br /&gt;
If your job needs more than one CPU thread (on a single machine) for most of its runtime, reserve the required number of CPU threads with the &amp;lt;code&amp;gt;--cpus-per-task&amp;lt;/code&amp;gt; option and the memory with the &amp;lt;code&amp;gt;--mem&amp;lt;/code&amp;gt; option:&lt;br /&gt;
&lt;br /&gt;
 srun -p cpu --cpus-per-task=4 --mem=8G --pty bash&lt;br /&gt;
 &lt;br /&gt;
This will give you an interactive shell with 4 threads and 8G RAM on the ''cpu'' partition.&lt;br /&gt;
&lt;br /&gt;
== Monitoring and interaction ==&lt;br /&gt;
&lt;br /&gt;
=== Job monitoring ===&lt;br /&gt;
You should be able to see what is going on when you run a job. The following examples show some typical commands:&lt;br /&gt;
* &amp;lt;code&amp;gt;squeue -a&amp;lt;/code&amp;gt; - show the jobs in all partitions&lt;br /&gt;
* &amp;lt;code&amp;gt;squeue -u user&amp;lt;/code&amp;gt; - print a list of running/waiting jobs of the given user&lt;br /&gt;
* &amp;lt;code&amp;gt;squeue -j &amp;lt;JOB_ID&amp;gt;&amp;lt;/code&amp;gt; - show detailed info about the job with the given JOB_ID (if it is still running)&lt;br /&gt;
* &amp;lt;code&amp;gt;sinfo&amp;lt;/code&amp;gt; - print available/total resources&lt;br /&gt;
&lt;br /&gt;
=== Job interaction ===&lt;br /&gt;
* &amp;lt;code&amp;gt;scontrol show job JOBID&amp;lt;/code&amp;gt; - show details of the running job with the given JOBID&lt;br /&gt;
* &amp;lt;code&amp;gt;scancel JOBID&amp;lt;/code&amp;gt; - delete the job from the queue&lt;br /&gt;
&lt;br /&gt;
=== Selected submit options ===&lt;br /&gt;
The complete list of available options for the &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; commands can be found in the [https://slurm.schedmd.com/man_index.html SLURM documentation]. Most of the options listed here can be given either as command-line parameters or as &amp;lt;code&amp;gt;#SBATCH&amp;lt;/code&amp;gt; directives inside a script.&lt;br /&gt;
&lt;br /&gt;
  -J helloWorld         # name of job&lt;br /&gt;
  --chdir /job/path/    # path where the job will be executed&lt;br /&gt;
  -p gpu                # name of partition or queue (if not specified default partition is used)&lt;br /&gt;
  -q normal             # QOS level (sets priority of the job)&lt;br /&gt;
  -c 4                  # reserve 4 CPU threads&lt;br /&gt;
  --gres=gpu:1          # reserve 1 GPU card&lt;br /&gt;
  -o script.out         # name of output file for the job &lt;br /&gt;
  -e script.err         # name of error file for the job&lt;br /&gt;
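For illustration, here is how these options could appear as directives inside a batch script (a sketch only; the job name and file names come from the table above, while the echo placeholder is made up):

```shell
#!/bin/bash
# Hypothetical batch script using the submit options above as #SBATCH
# directives; equivalent to passing them on the sbatch command line.
#SBATCH -J helloWorld         # name of job
#SBATCH -q normal             # QOS level
#SBATCH -c 4                  # reserve 4 CPU threads
#SBATCH -o script.out         # output file for the job
#SBATCH -e script.err         # error file for the job

# SLURM_CPUS_PER_TASK is set by Slurm inside the job; the fallback
# "?" only appears when the script is run outside of Slurm.
echo "running with ${SLURM_CPUS_PER_TASK:-?} CPU threads"
```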
&lt;br /&gt;
== Array jobs ==&lt;br /&gt;
If you need to submit a rather large number of similar jobs (e.g., processing a large number of input files), you should consider launching an ''array job''.&lt;br /&gt;
&lt;br /&gt;
For example, one might need to process 1000 files named &amp;lt;code&amp;gt;file_N.txt&amp;lt;/code&amp;gt; (where N is a number between 1 and 1000).&lt;br /&gt;
A program called &amp;lt;code&amp;gt;crunchFile&amp;lt;/code&amp;gt; can process one file and takes a single argument - the name of the file to process. Instead of calling 1000 times:&lt;br /&gt;
   sbatch crunchFile file_N.txt&lt;br /&gt;
&lt;br /&gt;
we can write a wrapper script &amp;lt;code&amp;gt;crunchScript.sh&amp;lt;/code&amp;gt; referring to the SLURM variable &amp;lt;code&amp;gt;SLURM_ARRAY_TASK_ID&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
   #!/bin/bash&lt;br /&gt;
   #SBATCH -p cpu&lt;br /&gt;
   #SBATCH --mem 2G&lt;br /&gt;
   &lt;br /&gt;
   crunchFile file_${SLURM_ARRAY_TASK_ID}.txt&lt;br /&gt;
&lt;br /&gt;
and submit all the jobs at once as an ''array job'':&lt;br /&gt;
&lt;br /&gt;
  sbatch --array=1-1000%20 crunchScript.sh&lt;br /&gt;
&lt;br /&gt;
The option &amp;lt;code&amp;gt;--array=1-1000%20&amp;lt;/code&amp;gt; tells SLURM to:&lt;br /&gt;
* launch 1000 instances of &amp;lt;code&amp;gt;crunchScript.sh&amp;lt;/code&amp;gt;&lt;br /&gt;
* launch each instance with &amp;lt;code&amp;gt;SLURM_ARRAY_TASK_ID&amp;lt;/code&amp;gt; set to a number in the specified range&lt;br /&gt;
* run at most 20 tasks in parallel at any time. This is useful for a larger number of tasks - this way we ensure that we do not flood the cluster with requests.&lt;br /&gt;
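The parameter expansion used in the wrapper can be tried directly in a shell. Here SLURM_ARRAY_TASK_ID is hard-coded purely for illustration; in a real array job Slurm exports it for every task:

```shell
# Fake the variable that Slurm would set for, say, the 7th array task.
SLURM_ARRAY_TASK_ID=7
input="file_${SLURM_ARRAY_TASK_ID}.txt"
echo "$input"    # prints file_7.txt
```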
&lt;br /&gt;
You can read more about ''array jobs'' from the [https://slurm.schedmd.com/job_array.html SLURM documentation].&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
		
	</entry>
	<entry>
		<id>https://aic.ufal.mff.cuni.cz/index.php?title=Submitting_CPU_Jobs&amp;diff=119</id>
		<title>Submitting CPU Jobs</title>
		<link rel="alternate" type="text/html" href="https://aic.ufal.mff.cuni.cz/index.php?title=Submitting_CPU_Jobs&amp;diff=119"/>
		<updated>2024-04-23T14:49:04Z</updated>

		<summary type="html">&lt;p&gt;Admin: /* Selected submit options */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;CPU jobs should be submitted to the &amp;lt;code&amp;gt;cpu&amp;lt;/code&amp;gt; partition.&lt;br /&gt;
&lt;br /&gt;
You can submit a non-interactive job using the '''sbatch''' command.&lt;br /&gt;
To submit an interactive job, use the '''srun''' command:&lt;br /&gt;
&lt;br /&gt;
 srun --pty bash&lt;br /&gt;
&lt;br /&gt;
== Resource specification ==&lt;br /&gt;
&lt;br /&gt;
You should specify your memory and CPU requirements (if they are higher than the defaults) and must not exceed them.&lt;br /&gt;
If your job needs more than one CPU thread (on a single machine) for most of its runtime, reserve the required number of CPU threads with the &amp;lt;code&amp;gt;--cpus-per-task&amp;lt;/code&amp;gt; option and the memory with the &amp;lt;code&amp;gt;--mem&amp;lt;/code&amp;gt; option:&lt;br /&gt;
&lt;br /&gt;
 srun -p cpu --cpus-per-task=4 --mem=8G --pty bash&lt;br /&gt;
 &lt;br /&gt;
This will give you an interactive shell with 4 threads and 8G RAM on the ''cpu'' partition.&lt;br /&gt;
&lt;br /&gt;
== Monitoring and interaction ==&lt;br /&gt;
&lt;br /&gt;
=== Job monitoring ===&lt;br /&gt;
You should be able to see what is going on when you run a job. The following examples show some typical commands:&lt;br /&gt;
* &amp;lt;code&amp;gt;squeue -a&amp;lt;/code&amp;gt; - show the jobs in all partitions&lt;br /&gt;
* &amp;lt;code&amp;gt;squeue -u user&amp;lt;/code&amp;gt; - print a list of running/waiting jobs of the given user&lt;br /&gt;
* &amp;lt;code&amp;gt;squeue -j &amp;lt;JOB_ID&amp;gt;&amp;lt;/code&amp;gt; - show detailed info about the job with the given JOB_ID (if it is still running)&lt;br /&gt;
* &amp;lt;code&amp;gt;sinfo&amp;lt;/code&amp;gt; - print available/total resources&lt;br /&gt;
&lt;br /&gt;
=== Job interaction ===&lt;br /&gt;
* &amp;lt;code&amp;gt;scontrol show job JOBID&amp;lt;/code&amp;gt; - show details of the running job with the given JOBID&lt;br /&gt;
* &amp;lt;code&amp;gt;scancel JOBID&amp;lt;/code&amp;gt; - delete the job from the queue&lt;br /&gt;
&lt;br /&gt;
=== Selected submit options ===&lt;br /&gt;
The complete list of available options for the &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; commands can be found in the [https://slurm.schedmd.com/man_index.html SLURM documentation]. Most of the options listed here can be given either as command-line parameters or as &amp;lt;code&amp;gt;#SBATCH&amp;lt;/code&amp;gt; directives inside a script.&lt;br /&gt;
&lt;br /&gt;
  -J helloWorld         # name of job&lt;br /&gt;
  --chdir /path/where/the/job/will/be/executed&lt;br /&gt;
  -p gpu                # name of partition or queue (if not specified default partition is used)&lt;br /&gt;
  -q normal             # QOS level (sets priority of the job)&lt;br /&gt;
  -c 4                  # reserve 4 CPU threads&lt;br /&gt;
  --gres=gpu:1          # reserve 1 GPU card&lt;br /&gt;
  -o script.out         # name of output file for the job &lt;br /&gt;
  -e script.err         # name of error file for the job&lt;br /&gt;
&lt;br /&gt;
== Array jobs ==&lt;br /&gt;
If you need to submit a rather large number of similar jobs (e.g., processing a large number of input files), you should consider launching an ''array job''.&lt;br /&gt;
&lt;br /&gt;
For example, one might need to process 1000 files named &amp;lt;code&amp;gt;file_N.txt&amp;lt;/code&amp;gt; (where N is a number between 1 and 1000).&lt;br /&gt;
A program called &amp;lt;code&amp;gt;crunchFile&amp;lt;/code&amp;gt; can process one file and takes a single argument - the name of the file to process. Instead of calling 1000 times:&lt;br /&gt;
   sbatch crunchFile file_N.txt&lt;br /&gt;
&lt;br /&gt;
we can write a wrapper script &amp;lt;code&amp;gt;crunchScript.sh&amp;lt;/code&amp;gt; referring to the SLURM variable &amp;lt;code&amp;gt;SLURM_ARRAY_TASK_ID&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
   #!/bin/bash&lt;br /&gt;
   #SBATCH -p cpu&lt;br /&gt;
   #SBATCH --mem 2G&lt;br /&gt;
   &lt;br /&gt;
   crunchFile file_${SLURM_ARRAY_TASK_ID}.txt&lt;br /&gt;
&lt;br /&gt;
and submit all the jobs at once as an ''array job'':&lt;br /&gt;
&lt;br /&gt;
  sbatch --array=1-1000%20 crunchScript.sh&lt;br /&gt;
&lt;br /&gt;
The option &amp;lt;code&amp;gt;--array=1-1000%20&amp;lt;/code&amp;gt; tells SLURM to:&lt;br /&gt;
* launch 1000 instances of &amp;lt;code&amp;gt;crunchScript.sh&amp;lt;/code&amp;gt;&lt;br /&gt;
* launch each instance with &amp;lt;code&amp;gt;SLURM_ARRAY_TASK_ID&amp;lt;/code&amp;gt; set to a number in the specified range&lt;br /&gt;
* run at most 20 tasks in parallel at any time. This is useful for a larger number of tasks - this way we ensure that we do not flood the cluster with requests.&lt;br /&gt;
&lt;br /&gt;
You can read more about ''array jobs'' from the [https://slurm.schedmd.com/job_array.html SLURM documentation].&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
		
	</entry>
	<entry>
		<id>https://aic.ufal.mff.cuni.cz/index.php?title=Submitting_CPU_Jobs&amp;diff=118</id>
		<title>Submitting CPU Jobs</title>
		<link rel="alternate" type="text/html" href="https://aic.ufal.mff.cuni.cz/index.php?title=Submitting_CPU_Jobs&amp;diff=118"/>
		<updated>2024-04-23T14:22:48Z</updated>

		<summary type="html">&lt;p&gt;Admin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;CPU jobs should be submitted to the &amp;lt;code&amp;gt;cpu&amp;lt;/code&amp;gt; partition.&lt;br /&gt;
&lt;br /&gt;
You can submit a non-interactive job using the '''sbatch''' command.&lt;br /&gt;
To submit an interactive job, use the '''srun''' command:&lt;br /&gt;
&lt;br /&gt;
 srun --pty bash&lt;br /&gt;
&lt;br /&gt;
== Resource specification ==&lt;br /&gt;
&lt;br /&gt;
You should specify your memory and CPU requirements (if they are higher than the defaults) and must not exceed them.&lt;br /&gt;
If your job needs more than one CPU thread (on a single machine) for most of its runtime, reserve the required number of CPU threads with the &amp;lt;code&amp;gt;--cpus-per-task&amp;lt;/code&amp;gt; option and the memory with the &amp;lt;code&amp;gt;--mem&amp;lt;/code&amp;gt; option:&lt;br /&gt;
&lt;br /&gt;
 srun -p cpu --cpus-per-task=4 --mem=8G --pty bash&lt;br /&gt;
 &lt;br /&gt;
This will give you an interactive shell with 4 threads and 8G RAM on the ''cpu'' partition.&lt;br /&gt;
&lt;br /&gt;
== Monitoring and interaction ==&lt;br /&gt;
&lt;br /&gt;
=== Job monitoring ===&lt;br /&gt;
You should be able to see what is going on when you run a job. The following examples show some typical commands:&lt;br /&gt;
* &amp;lt;code&amp;gt;squeue -a&amp;lt;/code&amp;gt; - show the jobs in all partitions&lt;br /&gt;
* &amp;lt;code&amp;gt;squeue -u user&amp;lt;/code&amp;gt; - print a list of running/waiting jobs of the given user&lt;br /&gt;
* &amp;lt;code&amp;gt;squeue -j &amp;lt;JOB_ID&amp;gt;&amp;lt;/code&amp;gt; - show detailed info about the job with the given JOB_ID (if it is still running)&lt;br /&gt;
* &amp;lt;code&amp;gt;sinfo&amp;lt;/code&amp;gt; - print available/total resources&lt;br /&gt;
&lt;br /&gt;
=== Job interaction ===&lt;br /&gt;
* &amp;lt;code&amp;gt;scontrol show job JOBID&amp;lt;/code&amp;gt; - show details of the running job with the given JOBID&lt;br /&gt;
* &amp;lt;code&amp;gt;scancel JOBID&amp;lt;/code&amp;gt; - delete the job from the queue&lt;br /&gt;
&lt;br /&gt;
=== Selected submit options ===&lt;br /&gt;
The complete list of available options for the &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; commands can be found in the [https://slurm.schedmd.com/man_index.html SLURM documentation]. Most of the options listed here can be given either as command-line parameters or as &amp;lt;code&amp;gt;#SBATCH&amp;lt;/code&amp;gt; directives inside a script.&lt;br /&gt;
&lt;br /&gt;
  -J helloWorld         # name of job&lt;br /&gt;
  -p gpu                # name of partition or queue (if not specified default partition is used)&lt;br /&gt;
  -q normal             # QOS level (sets priority of the job)&lt;br /&gt;
  -c 4                  # reserve 4 CPU threads&lt;br /&gt;
  --gres=gpu:1          # reserve 1 GPU card&lt;br /&gt;
  -o script.out         # name of output file for the job &lt;br /&gt;
  -e script.err         # name of error file for the job&lt;br /&gt;
&lt;br /&gt;
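Putting the options above together, a submission script might look like the following sketch (the final &amp;lt;code&amp;gt;python train.py&amp;lt;/code&amp;gt; line is only an illustration - substitute your own program):&lt;br /&gt;

```shell
#!/bin/bash
#SBATCH -J helloWorld         # name of job
#SBATCH -p gpu                # partition
#SBATCH -q normal             # QOS level
#SBATCH -c 4                  # reserve 4 CPU threads
#SBATCH --gres=gpu:1          # reserve 1 GPU card
#SBATCH -o script.out         # stdout of the job
#SBATCH -e script.err         # stderr of the job

# The actual computation; "train.py" is a hypothetical program.
echo "Running on $(hostname)"
python train.py
```

When submitted with &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt;, the directives are read by SLURM while bash treats them as ordinary comments.&lt;br /&gt;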
== Array jobs ==&lt;br /&gt;
If you need to submit a rather large number of similar jobs (e.g. processing a large number of input files), you should consider launching an ''array job''.&lt;br /&gt;
&lt;br /&gt;
For example, one might need to process 1000 files named &amp;lt;code&amp;gt;file_N.txt&amp;lt;/code&amp;gt; (where N is a number between 1 and 1000).&lt;br /&gt;
A program that can process one file is called &amp;lt;code&amp;gt;crunchFile&amp;lt;/code&amp;gt; and takes only one argument - the name of the file to process. Instead of calling 1000 times:&lt;br /&gt;
   sbatch crunchFile file_N.txt&lt;br /&gt;
&lt;br /&gt;
we can write a wrapper script &amp;lt;code&amp;gt;crunchScript.sh&amp;lt;/code&amp;gt; referring to the SLURM variable &amp;lt;code&amp;gt;SLURM_ARRAY_TASK_ID&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
   #!/bin/bash&lt;br /&gt;
   #SBATCH -p cpu&lt;br /&gt;
   #SBATCH --mem 2G&lt;br /&gt;
   &lt;br /&gt;
   crunchFile file_${SLURM_ARRAY_TASK_ID}.txt&lt;br /&gt;
&lt;br /&gt;
and submit all the jobs at once as an ''array job'':&lt;br /&gt;
&lt;br /&gt;
  sbatch --array=1-1000%20 crunchScript.sh&lt;br /&gt;
&lt;br /&gt;
The option &amp;lt;code&amp;gt;--array=1-1000%20&amp;lt;/code&amp;gt; tells SLURM to:&lt;br /&gt;
* launch 1000 instances of &amp;lt;code&amp;gt;crunchScript.sh&amp;lt;/code&amp;gt;&lt;br /&gt;
* launch each instance with &amp;lt;code&amp;gt;SLURM_ARRAY_TASK_ID&amp;lt;/code&amp;gt; set to a distinct number in the specified range&lt;br /&gt;
* run at most 20 tasks in parallel at once. This is useful for larger numbers of tasks - this way we ensure that we do not flood the cluster with requests.&lt;br /&gt;
&lt;br /&gt;
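The task-ID-to-filename mapping can be checked locally without SLURM by setting the variable by hand (a minimal sketch; &amp;lt;code&amp;gt;echo&amp;lt;/code&amp;gt; stands in for the hypothetical &amp;lt;code&amp;gt;crunchFile&amp;lt;/code&amp;gt;):&lt;br /&gt;

```shell
#!/bin/bash
# SLURM sets this variable for every task of an array job; we set it manually here.
SLURM_ARRAY_TASK_ID=42

# Build the input file name exactly as crunchScript.sh does.
input="file_${SLURM_ARRAY_TASK_ID}.txt"

# Stand-in for crunchFile: just report what would be processed.
echo "would process ${input}"
```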
You can read more about ''array jobs'' in the [https://slurm.schedmd.com/job_array.html SLURM documentation].&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
		
	</entry>
	<entry>
		<id>https://aic.ufal.mff.cuni.cz/index.php?title=Environment&amp;diff=117</id>
		<title>Environment</title>
		<link rel="alternate" type="text/html" href="https://aic.ufal.mff.cuni.cz/index.php?title=Environment&amp;diff=117"/>
		<updated>2024-04-09T13:34:19Z</updated>

		<summary type="html">&lt;p&gt;Admin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Bash ===&lt;br /&gt;
&lt;br /&gt;
There are several useful environment variables provided by SLURM when submitting jobs:&lt;br /&gt;
  SLURM_JOB_ID - Job ID.&lt;br /&gt;
  SLURM_JOB_NAME - Job name.&lt;br /&gt;
  CUDA_VISIBLE_DEVICES - Specifies the GPU devices for the job allocation.&lt;br /&gt;
  TMPDIR - Local temporary directory reserved for the job. This should be used instead of /tmp. &lt;br /&gt;
  &lt;br /&gt;
Refer to the [https://slurm.schedmd.com/prolog_epilog.html SLURM docs] for the complete list.&lt;br /&gt;
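A small sanity-check script using these variables (the &amp;lt;code&amp;gt;:-unset&amp;lt;/code&amp;gt; defaults are added here so the script is also runnable outside a job):&lt;br /&gt;

```shell
#!/bin/bash
# Inside a job these variables are filled in by SLURM; outside, the defaults apply.
echo "Job ID:   ${SLURM_JOB_ID:-unset}"
echo "Job name: ${SLURM_JOB_NAME:-unset}"
echo "GPUs:     ${CUDA_VISIBLE_DEVICES:-unset}"

# Prefer the job-local scratch space over /tmp, falling back when absent.
scratch="${TMPDIR:-/tmp}"
echo "Scratch:  ${scratch}"
```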
&lt;br /&gt;
=== Python ===&lt;br /&gt;
&lt;br /&gt;
If you need a different version of Python than the default one installed with the node OS, you can use the pre-installed versions found at:&lt;br /&gt;
&lt;br /&gt;
 /opt/python&lt;br /&gt;
&lt;br /&gt;
If you need some specific version not found in this directory, please contact the [mailto:it@ufal.mff.cuni.cz AIC administrators].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Virtual environment ===&lt;br /&gt;
The recommended way is to use Python through a virtual environment.&lt;br /&gt;
You need to decide which PYTHON_VERSION you want to use and where you want to store your virtual environment (VENV_PATH). Then you can create it:&lt;br /&gt;
&lt;br /&gt;
 /opt/python/&amp;lt;PYTHON_VERSION&amp;gt;/bin/python3 -m venv &amp;lt;VENV_PATH&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You need to activate the environment before you actually use it:&lt;br /&gt;
&lt;br /&gt;
 source &amp;lt;VENV_PATH&amp;gt;/bin/activate&lt;br /&gt;
&lt;br /&gt;
Then you should be able to use the Python version of your choice.&lt;br /&gt;
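The whole workflow in one place (a sketch - the version directory &amp;lt;code&amp;gt;/opt/python/3.11.2&amp;lt;/code&amp;gt; and the location &amp;lt;code&amp;gt;~/venvs/myproject&amp;lt;/code&amp;gt; are illustrative assumptions; check which versions are actually installed):&lt;br /&gt;

```shell
#!/bin/bash
# Illustrative paths - substitute the Python version and venv location you want.
PYBIN="/opt/python/3.11.2/bin/python3"
[ -x "$PYBIN" ] || PYBIN=python3        # fall back to the system python3

VENV_PATH="${VENV_PATH:-$HOME/venvs/myproject}"
"$PYBIN" -m venv "$VENV_PATH"           # create the environment once
source "$VENV_PATH/bin/activate"        # activate it in every new shell
python --version                        # the interpreter of your choice
# From here on, "pip install ..." installs into the virtual environment.
```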
More about virtual environments can be found [https://docs.python.org/3/library/venv.html here].&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
		
	</entry>
	<entry>
		<id>https://aic.ufal.mff.cuni.cz/index.php?title=Main_Page&amp;diff=116</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://aic.ufal.mff.cuni.cz/index.php?title=Main_Page&amp;diff=116"/>
		<updated>2024-03-19T15:06:41Z</updated>

		<summary type="html">&lt;p&gt;Admin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;div style='text-align: center;'&amp;gt;CZ.02.2.69/0.0/0.0/17_044/0008562&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;div style='text-align: center;'&amp;gt;Podpora rozvoje studijního prostředí na Univerzitě Karlově - VRR&amp;lt;/div&amp;gt; &lt;br /&gt;
[[File:OP_VVV_logo.jpg|frameless|center|upright=2.5]]&lt;br /&gt;
&lt;br /&gt;
== Welcome to AIC ==&lt;br /&gt;
&lt;br /&gt;
AIC (Artificial Intelligence Cluster) is a computational grid with sufficient computational capacity for research in the field of [https://en.wikipedia.org/wiki/Deep_learning deep learning] using both CPUs and GPUs. It is built on top of the [https://slurm.schedmd.com/ SLURM] scheduling system. MFF students of Bc. and Mgr. degrees can use it to run their experiments and learn the proper ways of grid computing in the process.&lt;br /&gt;
&lt;br /&gt;
=== Access ===&lt;br /&gt;
AIC is dedicated to UFAL students, who are granted an account when requested by an authorized lecturer.&lt;br /&gt;
&lt;br /&gt;
To change your password, use this link: https://aic.ufal.mff.cuni.cz/pw-manager&lt;br /&gt;
&lt;br /&gt;
There is a limit on the resources one user in the group '''students''' can allocate at a given time.&lt;br /&gt;
By default, this is set to a maximum of 4 CPUs and 1 GPU.&lt;br /&gt;
&lt;br /&gt;
=== Jupyterlab ===&lt;br /&gt;
AIC also provides a JupyterLab portal on top of your AIC account and HOME directory. It can be found at https://aic.ufal.mff.cuni.cz/jlab . Pre-installed extensions: R, IPython, RStudio (community), Slurm Queue Manager.&lt;br /&gt;
&lt;br /&gt;
=== Connecting to the Cluster (directly) ===&lt;br /&gt;
Use SSH to connect to the cluster:&lt;br /&gt;
  ssh LOGIN@aic.ufal.mff.cuni.cz&lt;br /&gt;
&lt;br /&gt;
=== Basic HOWTO ===&lt;br /&gt;
&lt;br /&gt;
The following HOWTO provides only a simplified overview of cluster usage. It is strongly recommended to read the further documentation ([[Submitting_CPU_Jobs|CPU]] or [[Submitting_GPU_Jobs|GPU]]) before running serious experiments.&lt;br /&gt;
More serious experiments tend to take more resources; to avoid unexpected failures, please make sure your [[Quotas|quota]] is not exceeded.&lt;br /&gt;
&lt;br /&gt;
'''Rule 0: NEVER RUN JOBS DIRECTLY ON aic.ufal.mff.cuni.cz HEADNODE. Use &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; to get computational node shell!'''&lt;br /&gt;
&lt;br /&gt;
Suppose we want to run some computations described by a script called &amp;lt;code&amp;gt;job_script.sh&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #SBATCH -J helloWorld					  # name of job&lt;br /&gt;
 #SBATCH -p cpu 	 		       		  # name of partition or queue (default is cpu)&lt;br /&gt;
 #SBATCH -o helloWorld.out				  # name of output file for this submission script&lt;br /&gt;
 #SBATCH -e helloWorld.err				  # name of error file for this submission script&lt;br /&gt;
 # run my job (some executable)&lt;br /&gt;
 sleep 5&lt;br /&gt;
 echo &amp;quot;Hello I am running on cluster!&amp;quot;&lt;br /&gt;
&lt;br /&gt;
We need to ''submit'' the job to the cluster, which is done by logging in to the submit host &amp;lt;code&amp;gt;aic.ufal.mff.cuni.cz&amp;lt;/code&amp;gt; and issuing the command:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;sbatch job_script.sh&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will enqueue our ''job'' to the default ''partition'' (or ''queue'') which is &amp;lt;code&amp;gt;cpu&amp;lt;/code&amp;gt;. The scheduler decides which particular machine in the specified queue has ''resources'' needed to run the job. Typically we will see a message which tells us the ID of our job (3 in this example):&lt;br /&gt;
&lt;br /&gt;
 Submitted batch job 3&lt;br /&gt;
&lt;br /&gt;
The options used in this example are specified inside the script using the ''#SBATCH'' directive. Any option can be specified either in the script or as a command line parameter (see ''man sbatch'' for details).&lt;br /&gt;
&lt;br /&gt;
We can specify custom arguments '''before''' the name of the script:&lt;br /&gt;
&lt;br /&gt;
 sbatch --export=ARG1='firstArg',ARG2='secondArg' job_script.sh&lt;br /&gt;
&lt;br /&gt;
These can be accessed in the job script as &amp;lt;code&amp;gt;$ARG1&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;$ARG2&amp;lt;/code&amp;gt;.&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
		
	</entry>
	<entry>
		<id>https://aic.ufal.mff.cuni.cz/index.php?title=Submitting_GPU_Jobs&amp;diff=113</id>
		<title>Submitting GPU Jobs</title>
		<link rel="alternate" type="text/html" href="https://aic.ufal.mff.cuni.cz/index.php?title=Submitting_GPU_Jobs&amp;diff=113"/>
		<updated>2023-12-07T10:37:40Z</updated>

		<summary type="html">&lt;p&gt;Admin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Start by reading [[Submitting CPU Jobs]] page.&lt;br /&gt;
&lt;br /&gt;
GPU jobs are submitted to the &amp;lt;code&amp;gt;gpu&amp;lt;/code&amp;gt; partition.&lt;br /&gt;
&lt;br /&gt;
To ask for one GPU card, use &amp;lt;code&amp;gt;#SBATCH -G 1&amp;lt;/code&amp;gt; directive or &amp;lt;code&amp;gt;-G 1&amp;lt;/code&amp;gt; option on the command line. The submitted job has &amp;lt;code&amp;gt;CUDA_VISIBLE_DEVICES&amp;lt;/code&amp;gt; set appropriately, so all CUDA applications should use only the allocated GPUs.&lt;br /&gt;
&lt;br /&gt;
== Rules ==&lt;br /&gt;
&lt;br /&gt;
* Always use GPUs via ''sbatch'' (or ''srun''), never via ''ssh''. You can ssh to any machine e.g. to run ''nvidia-smi'' or ''htop'', but not to start computations on a GPU.&lt;br /&gt;
* Don't forget to specify your RAM requirements with e.g. ''--mem=10G''.&lt;br /&gt;
* Always specify the number of GPU cards (e.g. ''-G 1''), for example &amp;lt;code&amp;gt;srun -p gpu --mem=64G -G 2 --pty bash&amp;lt;/code&amp;gt;.&lt;br /&gt;
* For interactive jobs, you can use ''srun'', but make sure to end your job as soon as you no longer need the GPU (so don't use srun for long training runs).&lt;br /&gt;
* In general: don't reserve a GPU (as described above) without actually using it for a longer time; for example, try separating the steps which need a GPU from the steps which do not, and execute them separately on the GPU and CPU cluster, respectively.&lt;br /&gt;
* If you know an approximate runtime of your job, please specify it with ''-t &amp;lt;time&amp;gt;''. Acceptable time formats include &amp;quot;minutes&amp;quot;, &amp;quot;minutes:seconds&amp;quot;, &amp;quot;hours:minutes:seconds&amp;quot;, &amp;quot;days-hours&amp;quot;, &amp;quot;days-hours:minutes&amp;quot; and &amp;quot;days-hours:minutes:seconds&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
== CUDA and cuDNN ==&lt;br /&gt;
&lt;br /&gt;
Available CUDA versions are in&lt;br /&gt;
 /lnet/aic/opt/cuda/&lt;br /&gt;
and as of Apr 2023, the available versions are 10.1, 10.2, 11.2, 11.7 and 11.8.&lt;br /&gt;
&lt;br /&gt;
The cuDNN library is also available in the subdirectory &amp;lt;code&amp;gt;cudnn/VERSION/lib64&amp;lt;/code&amp;gt; of the respective CUDA directories.&lt;br /&gt;
&lt;br /&gt;
Therefore, to use CUDA 11.2 with cuDNN 8.1.1, you should add the following to your &amp;lt;code&amp;gt;.profile&amp;lt;/code&amp;gt;:&lt;br /&gt;
 export PATH=&amp;quot;/lnet/aic/opt/cuda/cuda-11.2/bin:$PATH&amp;quot;&lt;br /&gt;
 export LD_LIBRARY_PATH=&amp;quot;/lnet/aic/opt/cuda/cuda-11.2/lib64:/lnet/aic/opt/cuda/cuda-11.2/cudnn/8.1.1/lib64:/lnet/aic/opt/cuda/cuda-11.2/extras/CUPTI/lib64:$LD_LIBRARY_PATH&amp;quot;&lt;br /&gt;
 export XLA_FLAGS=--xla_gpu_cuda_data_dir=/lnet/aic/opt/cuda/cuda-11.2 # XLA configuration if you are using TensorFlow&lt;br /&gt;
&lt;br /&gt;
=== CUDA modules ===&lt;br /&gt;
CUDA 11.2 and later can also be loaded as modules. Loading a module sets the various environment variables for you, so you should be able to use CUDA easily.&lt;br /&gt;
&lt;br /&gt;
On a GPU node, you can do the following:&lt;br /&gt;
# list available modules with: &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt;&lt;br /&gt;
# load the version you need (possibly specifying the version of cuDNN): &amp;lt;code&amp;gt;module load &amp;lt;modulename&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
# you can unload the module with: &amp;lt;code&amp;gt;module unload &amp;lt;modulename&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As of Apr 2023, the available modules are&lt;br /&gt;
 cuda/11.2&lt;br /&gt;
 cuda/11.2-cudnn8.1&lt;br /&gt;
 cuda/11.7&lt;br /&gt;
 cuda/11.7-cudnn8.5&lt;br /&gt;
 cuda/11.8&lt;br /&gt;
 cuda/11.8-cudnn8.5&lt;br /&gt;
 cuda/11.8-cudnn8.6&lt;br /&gt;
 cuda/11.8-cudnn8.9&lt;br /&gt;
&lt;br /&gt;
=== List of installed GPUs ===&lt;br /&gt;
 root@gpu-node1:~# nvidia-smi -L&lt;br /&gt;
 GPU 0: NVIDIA GeForce RTX 3090 (UUID: GPU-ba293e60-32f9-6907-705b-e053d1bf453b)&lt;br /&gt;
 GPU 1: NVIDIA GeForce GTX 1080 Ti (UUID: GPU-b29fe79f-6192-5ece-6f91-e59d97ab304e)&lt;br /&gt;
 GPU 2: NVIDIA GeForce GTX 1080 Ti (UUID: GPU-fbcba5d0-61bd-cc4c-810e-c80cbd9cd563)&lt;br /&gt;
 GPU 3: NVIDIA GeForce GTX 1080 Ti (UUID: GPU-76ae3ae7-0a2d-ea68-3070-94c919f40169)&lt;br /&gt;
 GPU 4: NVIDIA GeForce GTX 1080 Ti (UUID: GPU-18174817-13c3-b930-1d68-37c47b41dc0b)&lt;br /&gt;
 GPU 5: NVIDIA GeForce GTX 1080 Ti (UUID: GPU-3af8a5c5-9e07-9468-e9dc-e1259f3e7890)&lt;br /&gt;
 GPU 6: NVIDIA GeForce GTX 1080 Ti (UUID: GPU-51e376db-189b-b11d-bd27-bbbb6470ff26)&lt;br /&gt;
&lt;br /&gt;
 root@gpu-node2:~# nvidia-smi -L&lt;br /&gt;
 GPU 0: NVIDIA GeForce RTX 2080 Ti (UUID: GPU-15b17780-d818-bcd2-566c-564aa1dfc38e)&lt;br /&gt;
 GPU 1: NVIDIA GeForce RTX 2080 Ti (UUID: GPU-e184b0d4-7147-af43-041b-caa7f597363a)&lt;br /&gt;
 GPU 2: NVIDIA GeForce RTX 2080 Ti (UUID: GPU-ac1a453e-1c30-3fe0-e246-dd07c7645066)&lt;br /&gt;
 GPU 3: NVIDIA GeForce RTX 2080 Ti (UUID: GPU-4d19d859-d044-fdc8-17e0-e84fef4a8a13)&lt;br /&gt;
 GPU 4: NVIDIA GeForce RTX 2080 Ti (UUID: GPU-8035e3f3-76c9-124f-c5ea-d1dd4369f2a8)&lt;br /&gt;
 GPU 5: NVIDIA GeForce RTX 2080 Ti (UUID: GPU-670d0788-a048-8eef-ad1b-1eb77b18980b)&lt;br /&gt;
 GPU 6: NVIDIA GeForce RTX 2080 Ti (UUID: GPU-18d030c6-5956-f45f-7d15-ab53cffa813e)&lt;br /&gt;
 GPU 7: NVIDIA GeForce RTX 2080 Ti (UUID: GPU-f7940219-84a7-8c9c-386f-14e4043c9884)&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
		
	</entry>
	<entry>
		<id>https://aic.ufal.mff.cuni.cz/index.php?title=Main_Page&amp;diff=105</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://aic.ufal.mff.cuni.cz/index.php?title=Main_Page&amp;diff=105"/>
		<updated>2023-10-25T10:36:42Z</updated>

		<summary type="html">&lt;p&gt;Admin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;div style='text-align: center;'&amp;gt;CZ.02.2.69/0.0/0.0/17_044/0008562&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;div style='text-align: center;'&amp;gt;Podpora rozvoje studijního prostředí na Univerzitě Karlově - VRR&amp;lt;/div&amp;gt; &lt;br /&gt;
[[File:OP_VVV_logo.jpg|frameless|center|upright=2.5]]&lt;br /&gt;
&lt;br /&gt;
== Welcome to AIC ==&lt;br /&gt;
&lt;br /&gt;
AIC (Artificial Intelligence Cluster) is a computational grid with sufficient computational capacity for research in the field of [https://en.wikipedia.org/wiki/Deep_learning deep learning] using both CPU and GPU. It was built on top of [https://slurm.schedmd.com/ SLURM] scheduling system. MFF students of Bc. and Mgr. degrees can use it to run their experiments and learn the proper ways of grid computing in the process.&lt;br /&gt;
&lt;br /&gt;
=== Access ===&lt;br /&gt;
AIC is dedicated to UFAL students who will get an account if requested by authorized lector.&lt;br /&gt;
&lt;br /&gt;
There is a restriction on resources allocated by one user in group '''students''' at a given time.&lt;br /&gt;
By default, this is set to a maximum of 4 CPU and 1 GPU.&lt;br /&gt;
&lt;br /&gt;
=== Jupyterlab ===&lt;br /&gt;
AIC provides also Jupyterlab portal on top of your AIC account and HOME directory. It can be found at https://aic.ufal.mff.cuni.cz/jlab . Pre-installed extensions: R, ipython, Rstudio (community), Slurm Queue Manager.&lt;br /&gt;
&lt;br /&gt;
=== Connecting to the Cluster. ===&lt;br /&gt;
Use SSH to connect to the cluster:&lt;br /&gt;
  ssh LOGIN@aic.ufal.mff.cuni.cz&lt;br /&gt;
&lt;br /&gt;
=== Basic HOWTO ===&lt;br /&gt;
&lt;br /&gt;
Following HOWTO is meant to provide only a simplified overview of the cluster usage. It is strongly recommended to read some further documentation ([[Submitting_CPU_Jobs|CPU]] or [[Submitting_GPU_Jobs|GPU]]) before running some serious experiments.&lt;br /&gt;
More serious experiments tend to take more resources. In order to avoid unexpected failures please make sure your [[Quotas|quota]] is not exceeded.&lt;br /&gt;
&lt;br /&gt;
'''Rule 0: NEVER RUN JOBS DIRECTLY ON aic.ufal.mff.cuni.cz HEADNODE. Use &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; to get computational node shell!'''&lt;br /&gt;
&lt;br /&gt;
Suppose we want to run some computations described by a script called &amp;lt;code&amp;gt;job_script.sh&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #SBATCH -J helloWorld					  # name of job&lt;br /&gt;
 #SBATCH -p cpu 	 		       		  # name of partition or queue (default is cpu)&lt;br /&gt;
 #SBATCH -o helloWorld.out				  # name of output file for this submission script&lt;br /&gt;
 #SBATCH -e helloWorld.err				  # name of error file for this submission script&lt;br /&gt;
 # run my job (some executable)&lt;br /&gt;
 sleep 5&lt;br /&gt;
 echo &amp;quot;Hello I am running on cluster!&amp;quot;&lt;br /&gt;
&lt;br /&gt;
We need to ''submit'' the job to the cluster, which is done by logging in to the submit host &amp;lt;code&amp;gt;aic.ufal.mff.cuni.cz&amp;lt;/code&amp;gt; and issuing the command:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;sbatch job_script.sh&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will enqueue our ''job'' to the default ''partition'' (or ''queue'') which is &amp;lt;code&amp;gt;cpu&amp;lt;/code&amp;gt;. The scheduler decides which particular machine in the specified queue has ''resources'' needed to run the job. Typically we will see a message which tells us the ID of our job (3 in this example):&lt;br /&gt;
&lt;br /&gt;
 Submitted batch job 3&lt;br /&gt;
&lt;br /&gt;
The options used in this example are specified inside the script using the ''#SBATCH'' directive. Any option can be given either in the script or as a command-line parameter (see ''man sbatch'' for details).&lt;br /&gt;
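In wrapper scripts it is handy to capture the job ID that ''sbatch'' prints on submission. A minimal sketch; the ''sbatch'' confirmation line is simulated here so the snippet runs without a cluster:&lt;br /&gt;

```shell
# Sketch: extract the numeric job ID from sbatch's confirmation line.
# The message is simulated; on the cluster you would instead use:
#   submit_msg=$(sbatch job_script.sh)
submit_msg="Submitted batch job 3"
job_id="${submit_msg##* }"   # strip everything up to the last space
echo "$job_id"               # prints: 3
```

The captured ID can then be passed to ''squeue'' or ''scancel'' from the same script.&lt;br /&gt;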
&lt;br /&gt;
We can specify custom arguments '''before''' the name of the script:&lt;br /&gt;
&lt;br /&gt;
 sbatch --export=ARG1='firstArg',ARG2='secondArg' job_script.sh&lt;br /&gt;
&lt;br /&gt;
These can be accessed in the job script as &amp;lt;code&amp;gt;$ARG1&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;$ARG2&amp;lt;/code&amp;gt;.&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
		
	</entry>
	<entry>
		<id>https://aic.ufal.mff.cuni.cz/index.php?title=Main_Page&amp;diff=104</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://aic.ufal.mff.cuni.cz/index.php?title=Main_Page&amp;diff=104"/>
		<updated>2023-10-25T10:35:34Z</updated>

		<summary type="html">&lt;p&gt;Admin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;div style='text-align: center;'&amp;gt;CZ.02.2.69/0.0/0.0/17_044/0008562&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;div style='text-align: center;'&amp;gt;Podpora rozvoje studijního prostředí na Univerzitě Karlově - VRR&amp;lt;/div&amp;gt; &lt;br /&gt;
[[File:OP_VVV_logo.jpg|frameless|center|upright=2.5]]&lt;br /&gt;
&lt;br /&gt;
== Welcome to AIC ==&lt;br /&gt;
&lt;br /&gt;
AIC (Artificial Intelligence Cluster) is a computational grid with sufficient computational capacity for research in the field of [https://en.wikipedia.org/wiki/Deep_learning deep learning] using both CPU and GPU. It was built on top of [https://slurm.schedmd.com/ SLURM] scheduling system. MFF students of Bc. and Mgr. degrees can use it to run their experiments and learn the proper ways of grid computing in the process.&lt;br /&gt;
&lt;br /&gt;
=== Access ===&lt;br /&gt;
AIC is dedicated to UFAL students, who are given an account when requested by an authorized lecturer.&lt;br /&gt;
&lt;br /&gt;
There is a limit on the resources allocated by a single user in the '''students''' group at any given time.&lt;br /&gt;
By default, this limit is 4 CPUs and 1 GPU.&lt;br /&gt;
&lt;br /&gt;
=== Connecting to the Cluster ===&lt;br /&gt;
Use SSH to connect to the cluster:&lt;br /&gt;
  ssh LOGIN@aic.ufal.mff.cuni.cz&lt;br /&gt;
&lt;br /&gt;
=== JupyterLab ===&lt;br /&gt;
AIC also provides a JupyterLab portal on top of your AIC account and HOME directory. It is available at https://aic.ufal.mff.cuni.cz/jlab . Pre-installed extensions: R, IPython, RStudio (community), Slurm Queue Manager.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Basic HOWTO ===&lt;br /&gt;
&lt;br /&gt;
The following HOWTO provides only a simplified overview of cluster usage. It is strongly recommended to read the further documentation ([[Submitting_CPU_Jobs|CPU]] or [[Submitting_GPU_Jobs|GPU]]) before running serious experiments.&lt;br /&gt;
Serious experiments tend to require more resources; to avoid unexpected failures, make sure your [[Quotas|quota]] is not exceeded.&lt;br /&gt;
&lt;br /&gt;
'''Rule 0: NEVER RUN JOBS DIRECTLY ON THE aic.ufal.mff.cuni.cz HEADNODE. Use &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; to get a shell on a computational node!'''&lt;br /&gt;
&lt;br /&gt;
Suppose we want to run some computations described by a script called &amp;lt;code&amp;gt;job_script.sh&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #SBATCH -J helloWorld					  # name of job&lt;br /&gt;
 #SBATCH -p cpu 	 		       		  # name of partition or queue (default is cpu)&lt;br /&gt;
 #SBATCH -o helloWorld.out				  # name of output file for this submission script&lt;br /&gt;
 #SBATCH -e helloWorld.err				  # name of error file for this submission script&lt;br /&gt;
 # run my job (some executable)&lt;br /&gt;
 sleep 5&lt;br /&gt;
 echo &amp;quot;Hello I am running on cluster!&amp;quot;&lt;br /&gt;
&lt;br /&gt;
We need to ''submit'' the job to the cluster, which is done by logging in to the submit host &amp;lt;code&amp;gt;aic.ufal.mff.cuni.cz&amp;lt;/code&amp;gt; and issuing the command:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;sbatch job_script.sh&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will enqueue our ''job'' to the default ''partition'' (or ''queue'') which is &amp;lt;code&amp;gt;cpu&amp;lt;/code&amp;gt;. The scheduler decides which particular machine in the specified queue has ''resources'' needed to run the job. Typically we will see a message which tells us the ID of our job (3 in this example):&lt;br /&gt;
&lt;br /&gt;
 Submitted batch job 3&lt;br /&gt;
&lt;br /&gt;
The options used in this example are specified inside the script using the ''#SBATCH'' directive. Any option can be given either in the script or as a command-line parameter (see ''man sbatch'' for details).&lt;br /&gt;
&lt;br /&gt;
We can specify custom arguments '''before''' the name of the script:&lt;br /&gt;
&lt;br /&gt;
 sbatch --export=ARG1='firstArg',ARG2='secondArg' job_script.sh&lt;br /&gt;
&lt;br /&gt;
These can be accessed in the job script as &amp;lt;code&amp;gt;$ARG1&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;$ARG2&amp;lt;/code&amp;gt;.&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
		
	</entry>
	<entry>
		<id>https://aic.ufal.mff.cuni.cz/index.php?title=Submitting_GPU_Jobs&amp;diff=98</id>
		<title>Submitting GPU Jobs</title>
		<link rel="alternate" type="text/html" href="https://aic.ufal.mff.cuni.cz/index.php?title=Submitting_GPU_Jobs&amp;diff=98"/>
		<updated>2022-12-02T09:55:13Z</updated>

		<summary type="html">&lt;p&gt;Admin: /* CUDA modules */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Start by reading [[Submitting CPU Jobs]] page.&lt;br /&gt;
&lt;br /&gt;
GPU jobs are submitted to the &amp;lt;code&amp;gt;gpu&amp;lt;/code&amp;gt; partition.&lt;br /&gt;
&lt;br /&gt;
To ask for one GPU card, use &amp;lt;code&amp;gt;#SBATCH --gres=gpu:1&amp;lt;/code&amp;gt; directive or &amp;lt;code&amp;gt;--gres=gpu:1&amp;lt;/code&amp;gt; option on the command line. The submitted job has &amp;lt;code&amp;gt;CUDA_VISIBLE_DEVICES&amp;lt;/code&amp;gt; set appropriately, so all CUDA applications should use only the allocated GPUs.&lt;br /&gt;
&lt;br /&gt;
== Rules ==&lt;br /&gt;
&lt;br /&gt;
* Always use GPUs via ''sbatch'' (or ''srun''), never via ''ssh''. You can ssh to any machine e.g. to run ''nvidia-smi'' or ''htop'', but not to start computing on GPU.&lt;br /&gt;
* Don't forget to specify your RAM requirements with e.g. ''--mem=10G''.&lt;br /&gt;
* Always specify the number of GPU cards (e.g. ''--gres=gpu:1''). Thus e.g. &amp;lt;code&amp;gt;srun -p gpu --mem=64G --gres=gpu:2 --pty bash&amp;lt;/code&amp;gt;&lt;br /&gt;
* For interactive jobs, you can use ''srun'', but make sure to end your job as soon as you don't need the GPU (so don't use srun for long training).&lt;br /&gt;
* In general, don't reserve a GPU for a long time without actually using it; e.g., try separating the steps which need a GPU from those which do not, and run them on the GPU and CPU cluster respectively.&lt;br /&gt;
* If you know an approximate runtime of your job, please specify it with ''-t &amp;lt;time&amp;gt;''. Acceptable time formats include &amp;quot;minutes&amp;quot;, &amp;quot;minutes:seconds&amp;quot;, &amp;quot;hours:minutes:seconds&amp;quot;, &amp;quot;days-hours&amp;quot;, &amp;quot;days-hours:minutes&amp;quot; and &amp;quot;days-hours:minutes:seconds&amp;quot;.&lt;br /&gt;
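Putting the rules above together, a GPU submission script might look as follows (the job name, memory, and time limit are illustrative values, not site defaults). Since ''#SBATCH'' lines are ordinary comments to bash, the script also runs outside SLURM, where ''CUDA_VISIBLE_DEVICES'' is simply unset:&lt;br /&gt;

```shell
#!/bin/bash
#SBATCH -J gpuHello          # illustrative job name
#SBATCH -p gpu               # submit to the gpu partition
#SBATCH --gres=gpu:1         # reserve one GPU card
#SBATCH --mem=10G            # illustrative RAM requirement
#SBATCH -t 30                # illustrative 30-minute time limit
# On an allocated node SLURM sets CUDA_VISIBLE_DEVICES to the
# granted card(s); elsewhere the variable is unset.
msg="CUDA_VISIBLE_DEVICES=${CUDA_VISIBLE_DEVICES:-unset}"
echo "$msg"
```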
&lt;br /&gt;
== CUDA and cuDNN ==&lt;br /&gt;
&lt;br /&gt;
Available CUDA versions are in&lt;br /&gt;
  /opt/cuda&lt;br /&gt;
&lt;br /&gt;
=== CUDA modules ===&lt;br /&gt;
You can load recent versions of CUDA as modules. This sets various environment variables for you, so you should be able to use CUDA easily.&lt;br /&gt;
&lt;br /&gt;
# list available modules with: &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt;&lt;br /&gt;
# load the version you need (possibly specifying the version of CuDNN): &amp;lt;code&amp;gt;module load &amp;lt;modulename&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
# you can unload the module with: &amp;lt;code&amp;gt;module unload &amp;lt;modulename&amp;gt;&amp;lt;/code&amp;gt;&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
		
	</entry>
	<entry>
		<id>https://aic.ufal.mff.cuni.cz/index.php?title=Submitting_GPU_Jobs&amp;diff=97</id>
		<title>Submitting GPU Jobs</title>
		<link rel="alternate" type="text/html" href="https://aic.ufal.mff.cuni.cz/index.php?title=Submitting_GPU_Jobs&amp;diff=97"/>
		<updated>2022-12-02T09:51:46Z</updated>

		<summary type="html">&lt;p&gt;Admin: /* CUDA modules */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Start by reading [[Submitting CPU Jobs]] page.&lt;br /&gt;
&lt;br /&gt;
GPU jobs are submitted to the &amp;lt;code&amp;gt;gpu&amp;lt;/code&amp;gt; partition.&lt;br /&gt;
&lt;br /&gt;
To ask for one GPU card, use &amp;lt;code&amp;gt;#SBATCH --gres=gpu:1&amp;lt;/code&amp;gt; directive or &amp;lt;code&amp;gt;--gres=gpu:1&amp;lt;/code&amp;gt; option on the command line. The submitted job has &amp;lt;code&amp;gt;CUDA_VISIBLE_DEVICES&amp;lt;/code&amp;gt; set appropriately, so all CUDA applications should use only the allocated GPUs.&lt;br /&gt;
&lt;br /&gt;
== Rules ==&lt;br /&gt;
&lt;br /&gt;
* Always use GPUs via ''sbatch'' (or ''srun''), never via ''ssh''. You can ssh to any machine e.g. to run ''nvidia-smi'' or ''htop'', but not to start computing on GPU.&lt;br /&gt;
* Don't forget to specify your RAM requirements with e.g. ''--mem=10G''.&lt;br /&gt;
* Always specify the number of GPU cards (e.g. ''--gres=gpu:1''). Thus e.g. &amp;lt;code&amp;gt;srun -p gpu --mem=64G --gres=gpu:2 --pty bash&amp;lt;/code&amp;gt;&lt;br /&gt;
* For interactive jobs, you can use ''srun'', but make sure to end your job as soon as you don't need the GPU (so don't use srun for long training).&lt;br /&gt;
* In general, don't reserve a GPU for a long time without actually using it; e.g., try separating the steps which need a GPU from those which do not, and run them on the GPU and CPU cluster respectively.&lt;br /&gt;
* If you know an approximate runtime of your job, please specify it with ''-t &amp;lt;time&amp;gt;''. Acceptable time formats include &amp;quot;minutes&amp;quot;, &amp;quot;minutes:seconds&amp;quot;, &amp;quot;hours:minutes:seconds&amp;quot;, &amp;quot;days-hours&amp;quot;, &amp;quot;days-hours:minutes&amp;quot; and &amp;quot;days-hours:minutes:seconds&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
== CUDA and cuDNN ==&lt;br /&gt;
&lt;br /&gt;
Available CUDA versions are in&lt;br /&gt;
  /opt/cuda&lt;br /&gt;
&lt;br /&gt;
=== CUDA modules ===&lt;br /&gt;
You can load recent versions of CUDA as modules. This sets various environment variables for you, so you should be able to use CUDA easily.&lt;br /&gt;
&lt;br /&gt;
# list available modules with:&lt;br /&gt;
  module avail&lt;br /&gt;
&amp;lt;li value=2&amp;gt; load the version you need (possibly specifying the version of CuDNN):&lt;br /&gt;
  module load &amp;lt;modulename&amp;gt;&lt;br /&gt;
&amp;lt;li value=3&amp;gt; you can unload the module with:&lt;br /&gt;
  module unload &amp;lt;modulename&amp;gt;&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
		
	</entry>
	<entry>
		<id>https://aic.ufal.mff.cuni.cz/index.php?title=Submitting_GPU_Jobs&amp;diff=96</id>
		<title>Submitting GPU Jobs</title>
		<link rel="alternate" type="text/html" href="https://aic.ufal.mff.cuni.cz/index.php?title=Submitting_GPU_Jobs&amp;diff=96"/>
		<updated>2022-12-02T09:49:27Z</updated>

		<summary type="html">&lt;p&gt;Admin: /* CUDA and cuDNN */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Start by reading [[Submitting CPU Jobs]] page.&lt;br /&gt;
&lt;br /&gt;
GPU jobs are submitted to the &amp;lt;code&amp;gt;gpu&amp;lt;/code&amp;gt; partition.&lt;br /&gt;
&lt;br /&gt;
To ask for one GPU card, use &amp;lt;code&amp;gt;#SBATCH --gres=gpu:1&amp;lt;/code&amp;gt; directive or &amp;lt;code&amp;gt;--gres=gpu:1&amp;lt;/code&amp;gt; option on the command line. The submitted job has &amp;lt;code&amp;gt;CUDA_VISIBLE_DEVICES&amp;lt;/code&amp;gt; set appropriately, so all CUDA applications should use only the allocated GPUs.&lt;br /&gt;
&lt;br /&gt;
== Rules ==&lt;br /&gt;
&lt;br /&gt;
* Always use GPUs via ''sbatch'' (or ''srun''), never via ''ssh''. You can ssh to any machine e.g. to run ''nvidia-smi'' or ''htop'', but not to start computing on GPU.&lt;br /&gt;
* Don't forget to specify your RAM requirements with e.g. ''--mem=10G''.&lt;br /&gt;
* Always specify the number of GPU cards (e.g. ''--gres=gpu:1''). Thus e.g. &amp;lt;code&amp;gt;srun -p gpu --mem=64G --gres=gpu:2 --pty bash&amp;lt;/code&amp;gt;&lt;br /&gt;
* For interactive jobs, you can use ''srun'', but make sure to end your job as soon as you don't need the GPU (so don't use srun for long training).&lt;br /&gt;
* In general, don't reserve a GPU for a long time without actually using it; e.g., try separating the steps which need a GPU from those which do not, and run them on the GPU and CPU cluster respectively.&lt;br /&gt;
* If you know an approximate runtime of your job, please specify it with ''-t &amp;lt;time&amp;gt;''. Acceptable time formats include &amp;quot;minutes&amp;quot;, &amp;quot;minutes:seconds&amp;quot;, &amp;quot;hours:minutes:seconds&amp;quot;, &amp;quot;days-hours&amp;quot;, &amp;quot;days-hours:minutes&amp;quot; and &amp;quot;days-hours:minutes:seconds&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
== CUDA and cuDNN ==&lt;br /&gt;
&lt;br /&gt;
Available CUDA versions are in&lt;br /&gt;
  /opt/cuda&lt;br /&gt;
&lt;br /&gt;
=== CUDA modules ===&lt;br /&gt;
You can load recent versions of CUDA as modules. This sets various environment variables for you, so you should be able to use CUDA easily.&lt;br /&gt;
&lt;br /&gt;
# list available modules with:&lt;br /&gt;
  module avail&lt;br /&gt;
# load the version you need (possibly specifying the version of CuDNN):&lt;br /&gt;
  module load &amp;lt;modulename&amp;gt;&lt;br /&gt;
# you can unload the module with:&lt;br /&gt;
  module unload &amp;lt;modulename&amp;gt;&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
		
	</entry>
	<entry>
		<id>https://aic.ufal.mff.cuni.cz/index.php?title=Submitting_CPU_Jobs&amp;diff=95</id>
		<title>Submitting CPU Jobs</title>
		<link rel="alternate" type="text/html" href="https://aic.ufal.mff.cuni.cz/index.php?title=Submitting_CPU_Jobs&amp;diff=95"/>
		<updated>2022-12-01T13:17:25Z</updated>

		<summary type="html">&lt;p&gt;Admin: /* Monitoring and interaction */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The CPU jobs should be submitted to &amp;lt;code&amp;gt;cpu&amp;lt;/code&amp;gt; partition.&lt;br /&gt;
&lt;br /&gt;
You can submit a non-interactive job using the '''sbatch''' command.&lt;br /&gt;
To submit an interactive job, use the '''srun''' command:&lt;br /&gt;
&lt;br /&gt;
 srun --pty bash&lt;br /&gt;
&lt;br /&gt;
== Resource specification ==&lt;br /&gt;
&lt;br /&gt;
You should specify your memory and CPU requirements (if higher than the defaults) and not exceed them.&lt;br /&gt;
If your job needs more than one CPU thread (on a single machine) for most of its runtime, reserve the required number of CPU threads with the &amp;lt;code&amp;gt;--cpus-per-task&amp;lt;/code&amp;gt; option and memory with the &amp;lt;code&amp;gt;--mem&amp;lt;/code&amp;gt; option.&lt;br /&gt;
&lt;br /&gt;
 srun -p cpu --cpus-per-task=4 --mem=8G --pty bash&lt;br /&gt;
 &lt;br /&gt;
This will give you an interactive shell with 4 threads and 8G RAM on the ''cpu'' partition.&lt;br /&gt;
&lt;br /&gt;
== Monitoring and interaction ==&lt;br /&gt;
&lt;br /&gt;
=== Job monitoring ===&lt;br /&gt;
We should be able to see what is going on when we run a job. The following examples show the usage of some typical commands:&lt;br /&gt;
* &amp;lt;code&amp;gt;squeue -a&amp;lt;/code&amp;gt; - this shows the jobs in all partitions.&lt;br /&gt;
* &amp;lt;code&amp;gt;squeue -u user&amp;lt;/code&amp;gt; - print a list of running/waiting jobs of a given user&lt;br /&gt;
* &amp;lt;code&amp;gt;squeue -j&amp;lt;JOB_ID&amp;gt;&amp;lt;/code&amp;gt; - this shows detailed info about the job with given JOB_ID (if it is still running).&lt;br /&gt;
* &amp;lt;code&amp;gt;sinfo&amp;lt;/code&amp;gt; - print available/total resources&lt;br /&gt;
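For scripted monitoring, the tabular ''squeue'' output can be filtered with standard tools. A sketch with simulated output; the column layout (JOBID first, state fourth) is an assumption about the default format, and on the cluster you would pipe real ''squeue'' output instead:&lt;br /&gt;

```shell
# Sketch: list the IDs of running (state "R") jobs from squeue-style
# output. The sample text stands in for `squeue -u user`.
sample="JOBID PARTITION NAME ST
42 cpu train R
43 gpu eval PD"
running=$(echo "$sample" | awk 'NR > 1 && $4 == "R" { print $1 }')
echo "$running"   # prints: 42
```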
&lt;br /&gt;
=== Job interaction ===&lt;br /&gt;
* &amp;lt;code&amp;gt;scontrol show job JOBID&amp;lt;/code&amp;gt; - this shows details of running job with JOBID&lt;br /&gt;
* &amp;lt;code&amp;gt;scancel JOBID&amp;lt;/code&amp;gt; - delete job from the queue&lt;br /&gt;
&lt;br /&gt;
=== Selected submit options ===&lt;br /&gt;
The complete list of available options for the &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; commands can be found in the [https://slurm.schedmd.com/man_index.html SLURM documentation]. Most of the options listed here can be given either as command-line parameters or as SBATCH directives inside a script.&lt;br /&gt;
&lt;br /&gt;
  -J helloWorld         # name of job&lt;br /&gt;
  -p gpu                # name of partition or queue (if not specified default partition is used)&lt;br /&gt;
  -q normal             # QOS level (sets priority of the job)&lt;br /&gt;
  -c 4                  # reserve 4 CPU threads&lt;br /&gt;
  --gres=gpu:1          # reserve 1 GPU card&lt;br /&gt;
  -o script.out         # name of output file for the job &lt;br /&gt;
  -e script.err         # name of error file for the job&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
		
	</entry>
	<entry>
		<id>https://aic.ufal.mff.cuni.cz/index.php?title=Submitting_CPU_Jobs&amp;diff=94</id>
		<title>Submitting CPU Jobs</title>
		<link rel="alternate" type="text/html" href="https://aic.ufal.mff.cuni.cz/index.php?title=Submitting_CPU_Jobs&amp;diff=94"/>
		<updated>2022-12-01T13:09:30Z</updated>

		<summary type="html">&lt;p&gt;Admin: /* Selected submit options */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The CPU jobs should be submitted to &amp;lt;code&amp;gt;cpu&amp;lt;/code&amp;gt; partition.&lt;br /&gt;
&lt;br /&gt;
You can submit a non-interactive job using the '''sbatch''' command.&lt;br /&gt;
To submit an interactive job, use the '''srun''' command:&lt;br /&gt;
&lt;br /&gt;
 srun --pty bash&lt;br /&gt;
&lt;br /&gt;
== Resource specification ==&lt;br /&gt;
&lt;br /&gt;
You should specify your memory and CPU requirements (if higher than the defaults) and not exceed them.&lt;br /&gt;
If your job needs more than one CPU thread (on a single machine) for most of its runtime, reserve the required number of CPU threads with the &amp;lt;code&amp;gt;--cpus-per-task&amp;lt;/code&amp;gt; option and memory with the &amp;lt;code&amp;gt;--mem&amp;lt;/code&amp;gt; option.&lt;br /&gt;
&lt;br /&gt;
 srun -p cpu --cpus-per-task=4 --mem=8G --pty bash&lt;br /&gt;
 &lt;br /&gt;
This will give you an interactive shell with 4 threads and 8G RAM on the ''cpu'' partition.&lt;br /&gt;
&lt;br /&gt;
== Monitoring and interaction ==&lt;br /&gt;
&lt;br /&gt;
=== Job monitoring ===&lt;br /&gt;
We should be able to see what is going on when we run a job. The following examples show the usage of some typical commands:&lt;br /&gt;
* &amp;lt;code&amp;gt;squeue -a&amp;lt;/code&amp;gt; - this shows the jobs in all partitions.&lt;br /&gt;
* &amp;lt;code&amp;gt;squeue -u user&amp;lt;/code&amp;gt; - print a list of running/waiting jobs of a given user&lt;br /&gt;
* &amp;lt;code&amp;gt;squeue -j&amp;lt;JOB_ID&amp;gt;&amp;lt;/code&amp;gt; - this shows detailed info about the job with given JOB_ID (if it is still running).&lt;br /&gt;
* &amp;lt;code&amp;gt;sinfo&amp;lt;/code&amp;gt; - print available/total resources&lt;br /&gt;
&lt;br /&gt;
=== Output monitoring ===&lt;br /&gt;
The standard output of the job is written to the file specified with the option &amp;lt;code&amp;gt;-o&amp;lt;/code&amp;gt;. Similarly the errors are logged in the file specified with the option &amp;lt;code&amp;gt;-e&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== Selected submit options ===&lt;br /&gt;
&lt;br /&gt;
The complete list of available options for the &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; commands can be found in the [https://slurm.schedmd.com/man_index.html SLURM documentation]. Most of the options listed here can be given either as command-line parameters or as SBATCH directives inside a script.&lt;br /&gt;
&lt;br /&gt;
  -J helloWorld         # name of job&lt;br /&gt;
  -p gpu                # name of partition or queue (if not specified default partition is used)&lt;br /&gt;
  -q normal             # QOS level (sets priority of the job)&lt;br /&gt;
  -c 4                  # reserve 4 CPU threads&lt;br /&gt;
  --gres=gpu:1          # reserve 1 GPU card&lt;br /&gt;
  -o script.out         # name of output file for the job &lt;br /&gt;
  -e script.err         # name of error file for the job&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
		
	</entry>
	<entry>
		<id>https://aic.ufal.mff.cuni.cz/index.php?title=Submitting_CPU_Jobs&amp;diff=93</id>
		<title>Submitting CPU Jobs</title>
		<link rel="alternate" type="text/html" href="https://aic.ufal.mff.cuni.cz/index.php?title=Submitting_CPU_Jobs&amp;diff=93"/>
		<updated>2022-12-01T12:48:13Z</updated>

		<summary type="html">&lt;p&gt;Admin: /* Selected submit options */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The CPU jobs should be submitted to &amp;lt;code&amp;gt;cpu&amp;lt;/code&amp;gt; partition.&lt;br /&gt;
&lt;br /&gt;
You can submit a non-interactive job using the '''sbatch''' command.&lt;br /&gt;
To submit an interactive job, use the '''srun''' command:&lt;br /&gt;
&lt;br /&gt;
 srun --pty bash&lt;br /&gt;
&lt;br /&gt;
== Resource specification ==&lt;br /&gt;
&lt;br /&gt;
You should specify your memory and CPU requirements (if higher than the defaults) and not exceed them.&lt;br /&gt;
If your job needs more than one CPU thread (on a single machine) for most of its runtime, reserve the required number of CPU threads with the &amp;lt;code&amp;gt;--cpus-per-task&amp;lt;/code&amp;gt; option and memory with the &amp;lt;code&amp;gt;--mem&amp;lt;/code&amp;gt; option.&lt;br /&gt;
&lt;br /&gt;
 srun -p cpu --cpus-per-task=4 --mem=8G --pty bash&lt;br /&gt;
 &lt;br /&gt;
This will give you an interactive shell with 4 threads and 8G RAM on the ''cpu'' partition.&lt;br /&gt;
&lt;br /&gt;
== Monitoring and interaction ==&lt;br /&gt;
&lt;br /&gt;
=== Job monitoring ===&lt;br /&gt;
We should be able to see what is going on when we run a job. The following examples show the usage of some typical commands:&lt;br /&gt;
* &amp;lt;code&amp;gt;squeue -a&amp;lt;/code&amp;gt; - this shows the jobs in all partitions.&lt;br /&gt;
* &amp;lt;code&amp;gt;squeue -u user&amp;lt;/code&amp;gt; - print a list of running/waiting jobs of a given user&lt;br /&gt;
* &amp;lt;code&amp;gt;squeue -j&amp;lt;JOB_ID&amp;gt;&amp;lt;/code&amp;gt; - this shows detailed info about the job with given JOB_ID (if it is still running).&lt;br /&gt;
* &amp;lt;code&amp;gt;sinfo&amp;lt;/code&amp;gt; - print available/total resources&lt;br /&gt;
&lt;br /&gt;
=== Output monitoring ===&lt;br /&gt;
The standard output of the job is written to the file specified with the option &amp;lt;code&amp;gt;-o&amp;lt;/code&amp;gt;. Similarly the errors are logged in the file specified with the option &amp;lt;code&amp;gt;-e&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== Selected submit options ===&lt;br /&gt;
&lt;br /&gt;
The complete list of available options for the &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; commands can be found in the [https://slurm.schedmd.com/man_index.html SLURM documentation]. Most of the options listed here can be given either as command-line parameters or as SBATCH directives inside a script.&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
		
	</entry>
	<entry>
		<id>https://aic.ufal.mff.cuni.cz/index.php?title=Submitting_CPU_Jobs&amp;diff=92</id>
		<title>Submitting CPU Jobs</title>
		<link rel="alternate" type="text/html" href="https://aic.ufal.mff.cuni.cz/index.php?title=Submitting_CPU_Jobs&amp;diff=92"/>
		<updated>2022-12-01T12:46:01Z</updated>

		<summary type="html">&lt;p&gt;Admin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The CPU jobs should be submitted to &amp;lt;code&amp;gt;cpu&amp;lt;/code&amp;gt; partition.&lt;br /&gt;
&lt;br /&gt;
You can submit a non-interactive job using the '''sbatch''' command.&lt;br /&gt;
To submit an interactive job, use the '''srun''' command:&lt;br /&gt;
&lt;br /&gt;
 srun --pty bash&lt;br /&gt;
&lt;br /&gt;
== Resource specification ==&lt;br /&gt;
&lt;br /&gt;
You should specify your memory and CPU requirements (if higher than the defaults) and not exceed them.&lt;br /&gt;
If your job needs more than one CPU thread (on a single machine) for most of its runtime, reserve the required number of CPU threads with the &amp;lt;code&amp;gt;--cpus-per-task&amp;lt;/code&amp;gt; option and memory with the &amp;lt;code&amp;gt;--mem&amp;lt;/code&amp;gt; option.&lt;br /&gt;
&lt;br /&gt;
 srun -p cpu --cpus-per-task=4 --mem=8G --pty bash&lt;br /&gt;
 &lt;br /&gt;
This will give you an interactive shell with 4 threads and 8G RAM on the ''cpu'' partition.&lt;br /&gt;
&lt;br /&gt;
== Monitoring and interaction ==&lt;br /&gt;
&lt;br /&gt;
=== Job monitoring ===&lt;br /&gt;
We should be able to see what is going on when we run a job. The following examples show the usage of some typical commands:&lt;br /&gt;
* &amp;lt;code&amp;gt;squeue -a&amp;lt;/code&amp;gt; - this shows the jobs in all partitions.&lt;br /&gt;
* &amp;lt;code&amp;gt;squeue -u user&amp;lt;/code&amp;gt; - print a list of running/waiting jobs of a given user&lt;br /&gt;
* &amp;lt;code&amp;gt;squeue -j&amp;lt;JOB_ID&amp;gt;&amp;lt;/code&amp;gt; - this shows detailed info about the job with given JOB_ID (if it is still running).&lt;br /&gt;
* &amp;lt;code&amp;gt;sinfo&amp;lt;/code&amp;gt; - print available/total resources&lt;br /&gt;
&lt;br /&gt;
=== Output monitoring ===&lt;br /&gt;
The standard output of the job is written to the file specified with the option &amp;lt;code&amp;gt;-o&amp;lt;/code&amp;gt;. Similarly the errors are logged in the file specified with the option &amp;lt;code&amp;gt;-e&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== Selected submit options ===&lt;br /&gt;
&lt;br /&gt;
The complete list of available options for the commands &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; can be found in [https://slurm.schedmd.com/man_index.html SLURM documentation].&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
		
	</entry>
	<entry>
		<id>https://aic.ufal.mff.cuni.cz/index.php?title=Main_Page&amp;diff=91</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://aic.ufal.mff.cuni.cz/index.php?title=Main_Page&amp;diff=91"/>
		<updated>2022-12-01T12:40:32Z</updated>

		<summary type="html">&lt;p&gt;Admin: /* Access */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;div style='text-align: center;'&amp;gt;CZ.02.2.69/0.0/0.0/17_044/0008562&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;div style='text-align: center;'&amp;gt;Podpora rozvoje studijního prostředí na Univerzitě Karlově - VRR&amp;lt;/div&amp;gt; &lt;br /&gt;
[[File:OP_VVV_logo.jpg|frameless|center|upright=2.5]]&lt;br /&gt;
&lt;br /&gt;
== Welcome to AIC ==&lt;br /&gt;
&lt;br /&gt;
AIC (Artificial Intelligence Cluster) is a computational grid with sufficient computational capacity for research in the field of [https://en.wikipedia.org/wiki/Deep_learning deep learning] using both CPU and GPU. It was built on top of [https://slurm.schedmd.com/ SLURM] scheduling system. MFF students of Bc. and Mgr. degrees can use it to run their experiments and learn the proper ways of grid computing in the process.&lt;br /&gt;
&lt;br /&gt;
=== Access ===&lt;br /&gt;
AIC is dedicated to UFAL students, who are given an account when requested by an authorized lecturer.&lt;br /&gt;
&lt;br /&gt;
There is a limit on the resources allocated by a single user in the '''students''' group at any given time.&lt;br /&gt;
By default, this limit is 4 CPUs and 1 GPU.&lt;br /&gt;
&lt;br /&gt;
=== Connecting to the Cluster ===&lt;br /&gt;
Use SSH to connect to the cluster:&lt;br /&gt;
  ssh LOGIN@aic.ufal.mff.cuni.cz&lt;br /&gt;
&lt;br /&gt;
=== Basic HOWTO ===&lt;br /&gt;
&lt;br /&gt;
The following HOWTO provides only a simplified overview of cluster usage. It is strongly recommended to read the further documentation ([[Submitting_CPU_Jobs|CPU]] or [[Submitting_GPU_Jobs|GPU]]) before running serious experiments.&lt;br /&gt;
Serious experiments tend to require more resources; to avoid unexpected failures, make sure your [[Quotas|quota]] is not exceeded.&lt;br /&gt;
&lt;br /&gt;
'''Rule 0: NEVER RUN JOBS DIRECTLY ON THE aic.ufal.mff.cuni.cz HEADNODE. Use &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; to get a shell on a computational node!'''&lt;br /&gt;
&lt;br /&gt;
Suppose we want to run some computations described by a script called &amp;lt;code&amp;gt;job_script.sh&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #SBATCH -J helloWorld					  # name of job&lt;br /&gt;
 #SBATCH -p cpu 	 		       		  # name of partition or queue (default is cpu)&lt;br /&gt;
 #SBATCH -o helloWorld.out				  # name of output file for this submission script&lt;br /&gt;
 #SBATCH -e helloWorld.err				  # name of error file for this submission script&lt;br /&gt;
 # run my job (some executable)&lt;br /&gt;
 sleep 5&lt;br /&gt;
 echo &amp;quot;Hello I am running on cluster!&amp;quot;&lt;br /&gt;
&lt;br /&gt;
We need to ''submit'' the job to the cluster, which is done by logging in to the submit host &amp;lt;code&amp;gt;aic.ufal.mff.cuni.cz&amp;lt;/code&amp;gt; and issuing the command:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;sbatch job_script.sh&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will enqueue our ''job'' to the default ''partition'' (or ''queue'') which is &amp;lt;code&amp;gt;cpu&amp;lt;/code&amp;gt;. The scheduler decides which particular machine in the specified queue has ''resources'' needed to run the job. Typically we will see a message which tells us the ID of our job (3 in this example):&lt;br /&gt;
&lt;br /&gt;
 Submitted batch job 3&lt;br /&gt;
&lt;br /&gt;
The options used in this example are specified inside the script using the ''#SBATCH'' directive. Any option can be given either in the script or as a command-line parameter (see ''man sbatch'' for details).&lt;br /&gt;
&lt;br /&gt;
We can specify custom arguments '''before''' the name of the script:&lt;br /&gt;
&lt;br /&gt;
 sbatch --export=ARG1='firstArg',ARG2='secondArg' job_script.sh&lt;br /&gt;
&lt;br /&gt;
These can be accessed in the job script as &amp;lt;code&amp;gt;$ARG1&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;$ARG2&amp;lt;/code&amp;gt;.&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
		
	</entry>
	<entry>
		<id>https://aic.ufal.mff.cuni.cz/index.php?title=Main_Page&amp;diff=90</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://aic.ufal.mff.cuni.cz/index.php?title=Main_Page&amp;diff=90"/>
		<updated>2022-12-01T12:22:59Z</updated>

		<summary type="html">&lt;p&gt;Admin: /* Basic HOWTO */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;div style='text-align: center;'&amp;gt;CZ.02.2.69/0.0/0.0/17_044/0008562&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;div style='text-align: center;'&amp;gt;Podpora rozvoje studijního prostředí na Univerzitě Karlově - VRR&amp;lt;/div&amp;gt; &lt;br /&gt;
[[File:OP_VVV_logo.jpg|frameless|center|upright=2.5]]&lt;br /&gt;
&lt;br /&gt;
== Welcome to AIC ==&lt;br /&gt;
&lt;br /&gt;
AIC (Artificial Intelligence Cluster) is a computational grid with sufficient computational capacity for research in the field of [https://en.wikipedia.org/wiki/Deep_learning deep learning] using both CPUs and GPUs. It is built on top of the [https://slurm.schedmd.com/ SLURM] scheduling system. MFF Bc. and Mgr. students can use it to run their experiments and, in the process, learn the proper ways of grid computing.&lt;br /&gt;
&lt;br /&gt;
=== Access ===&lt;br /&gt;
AIC is dedicated to UFAL students, who are given an account when requested by an authorized lecturer.&lt;br /&gt;
&lt;br /&gt;
=== Connecting to the Cluster ===&lt;br /&gt;
Use SSH to connect to the cluster:&lt;br /&gt;
  ssh LOGIN@aic.ufal.mff.cuni.cz&lt;br /&gt;
&lt;br /&gt;
=== Basic HOWTO ===&lt;br /&gt;
&lt;br /&gt;
The following HOWTO provides only a simplified overview of cluster usage. It is strongly recommended to read the further documentation ([[Submitting_CPU_Jobs|CPU]] or [[Submitting_GPU_Jobs|GPU]]) before running serious experiments.&lt;br /&gt;
Serious experiments tend to take more resources; to avoid unexpected failures, please make sure your [[Quotas|quota]] is not exceeded.&lt;br /&gt;
&lt;br /&gt;
'''Rule 0: NEVER RUN JOBS DIRECTLY ON THE aic.ufal.mff.cuni.cz HEADNODE. Use &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; to get a shell on a computational node!'''&lt;br /&gt;
&lt;br /&gt;
Suppose we want to run some computations described by a script called &amp;lt;code&amp;gt;job_script.sh&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #SBATCH -J helloWorld					  # name of job&lt;br /&gt;
 #SBATCH -p cpu 	 		       		  # name of partition or queue (default is cpu)&lt;br /&gt;
 #SBATCH -o helloWorld.out				  # name of output file for this submission script&lt;br /&gt;
 #SBATCH -e helloWorld.err				  # name of error file for this submission script&lt;br /&gt;
 # run my job (some executable)&lt;br /&gt;
 sleep 5&lt;br /&gt;
 echo &amp;quot;Hello I am running on cluster!&amp;quot;&lt;br /&gt;
&lt;br /&gt;
We need to ''submit'' the job to the cluster, which is done by logging in to the submit host &amp;lt;code&amp;gt;aic.ufal.mff.cuni.cz&amp;lt;/code&amp;gt; and issuing the command:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;sbatch job_script.sh&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will enqueue our ''job'' to the default ''partition'' (or ''queue'') which is &amp;lt;code&amp;gt;cpu&amp;lt;/code&amp;gt;. The scheduler decides which particular machine in the specified queue has ''resources'' needed to run the job. Typically we will see a message which tells us the ID of our job (3 in this example):&lt;br /&gt;
&lt;br /&gt;
 Submitted batch job 3&lt;br /&gt;
&lt;br /&gt;
The options used in this example are specified inside the script using the ''#SBATCH'' directive. Any option can be specified either in the script or as a command line parameter (see ''man sbatch'' for details).&lt;br /&gt;
&lt;br /&gt;
We can specify custom arguments '''before''' the name of the script:&lt;br /&gt;
&lt;br /&gt;
 sbatch --export=ARG1='firstArg',ARG2='secondArg' job_script.sh&lt;br /&gt;
&lt;br /&gt;
These can be accessed in the job script as &amp;lt;code&amp;gt;$ARG1&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;$ARG2&amp;lt;/code&amp;gt;.&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
		
	</entry>
	<entry>
		<id>https://aic.ufal.mff.cuni.cz/index.php?title=Main_Page&amp;diff=89</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://aic.ufal.mff.cuni.cz/index.php?title=Main_Page&amp;diff=89"/>
		<updated>2022-12-01T12:22:21Z</updated>

		<summary type="html">&lt;p&gt;Admin: /* Basic HOWTO */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;div style='text-align: center;'&amp;gt;CZ.02.2.69/0.0/0.0/17_044/0008562&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;div style='text-align: center;'&amp;gt;Podpora rozvoje studijního prostředí na Univerzitě Karlově - VRR&amp;lt;/div&amp;gt; &lt;br /&gt;
[[File:OP_VVV_logo.jpg|frameless|center|upright=2.5]]&lt;br /&gt;
&lt;br /&gt;
== Welcome to AIC ==&lt;br /&gt;
&lt;br /&gt;
AIC (Artificial Intelligence Cluster) is a computational grid with sufficient computational capacity for research in the field of [https://en.wikipedia.org/wiki/Deep_learning deep learning] using both CPUs and GPUs. It is built on top of the [https://slurm.schedmd.com/ SLURM] scheduling system. MFF Bc. and Mgr. students can use it to run their experiments and, in the process, learn the proper ways of grid computing.&lt;br /&gt;
&lt;br /&gt;
=== Access ===&lt;br /&gt;
AIC is dedicated to UFAL students, who are given an account when requested by an authorized lecturer.&lt;br /&gt;
&lt;br /&gt;
=== Connecting to the Cluster ===&lt;br /&gt;
Use SSH to connect to the cluster:&lt;br /&gt;
  ssh LOGIN@aic.ufal.mff.cuni.cz&lt;br /&gt;
&lt;br /&gt;
=== Basic HOWTO ===&lt;br /&gt;
&lt;br /&gt;
The following HOWTO provides only a simplified overview of cluster usage. It is strongly recommended to read the further documentation ([[Submitting_CPU_Jobs|CPU]] or [[Submitting_GPU_Jobs|GPU]]) before running serious experiments.&lt;br /&gt;
Serious experiments tend to take more resources; to avoid unexpected failures, please make sure your [[Quotas|quota]] is not exceeded.&lt;br /&gt;
&lt;br /&gt;
'''Rule 0: NEVER RUN JOBS DIRECTLY ON THE aic.ufal.mff.cuni.cz HEADNODE. Use &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; to get a shell on a computational node!'''&lt;br /&gt;
&lt;br /&gt;
Suppose we want to run some computations described by a script called &amp;lt;code&amp;gt;job_script.sh&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #SBATCH -J helloWorld					  # name of job&lt;br /&gt;
 #SBATCH -p cpu 	 		       		  # name of partition or queue (default is cpu)&lt;br /&gt;
 #SBATCH -o helloWorld.out				  # name of output file for this submission script&lt;br /&gt;
 #SBATCH -e helloWorld.err				  # name of error file for this submission script&lt;br /&gt;
 # run my job (some executable)&lt;br /&gt;
 sleep 5&lt;br /&gt;
 echo &amp;quot;Hello I am running on cluster!&amp;quot;&lt;br /&gt;
&lt;br /&gt;
We need to ''submit'' the job to the cluster, which is done by logging in to the submit host &amp;lt;code&amp;gt;aic.ufal.mff.cuni.cz&amp;lt;/code&amp;gt; and issuing the command:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;sbatch job_script.sh&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will enqueue our ''job'' to the default ''partition'' (or ''queue'') which is &amp;lt;code&amp;gt;cpu&amp;lt;/code&amp;gt;. The scheduler decides which particular machine in the specified queue has ''resources'' needed to run the job. Typically we will see a message which tells us the ID of our job (3 in this example):&lt;br /&gt;
&lt;br /&gt;
 Submitted batch job 3&lt;br /&gt;
&lt;br /&gt;
The options used in this example are specified inside the script using the ''#SBATCH'' directive. Any option can be specified either in the script or as a command line parameter (see ''man sbatch'' for details).&lt;br /&gt;
&lt;br /&gt;
We can specify custom arguments '''before''' the name of the script:&lt;br /&gt;
&lt;br /&gt;
 sbatch --export=ARG1='firstArg',ARG2='secondArg' job_script.sh&lt;br /&gt;
&lt;br /&gt;
These can be accessed in the job script as &amp;lt;code&amp;gt;$ARG1&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;$ARG2&amp;lt;/code&amp;gt;.&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
		
	</entry>
	<entry>
		<id>https://aic.ufal.mff.cuni.cz/index.php?title=Main_Page&amp;diff=88</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://aic.ufal.mff.cuni.cz/index.php?title=Main_Page&amp;diff=88"/>
		<updated>2022-11-22T10:34:43Z</updated>

		<summary type="html">&lt;p&gt;Admin: /* Welcome to AIC */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;div style='text-align: center;'&amp;gt;CZ.02.2.69/0.0/0.0/17_044/0008562&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;div style='text-align: center;'&amp;gt;Podpora rozvoje studijního prostředí na Univerzitě Karlově - VRR&amp;lt;/div&amp;gt; &lt;br /&gt;
[[File:OP_VVV_logo.jpg|frameless|center|upright=2.5]]&lt;br /&gt;
&lt;br /&gt;
== Welcome to AIC ==&lt;br /&gt;
&lt;br /&gt;
AIC (Artificial Intelligence Cluster) is a computational grid with sufficient computational capacity for research in the field of [https://en.wikipedia.org/wiki/Deep_learning deep learning] using both CPUs and GPUs. It is built on top of the [https://slurm.schedmd.com/ SLURM] scheduling system. MFF Bc. and Mgr. students can use it to run their experiments and, in the process, learn the proper ways of grid computing.&lt;br /&gt;
&lt;br /&gt;
=== Access ===&lt;br /&gt;
AIC is dedicated to UFAL students, who are given an account when requested by an authorized lecturer.&lt;br /&gt;
&lt;br /&gt;
=== Connecting to the Cluster ===&lt;br /&gt;
Use SSH to connect to the cluster:&lt;br /&gt;
  ssh LOGIN@aic.ufal.mff.cuni.cz&lt;br /&gt;
&lt;br /&gt;
=== Basic HOWTO ===&lt;br /&gt;
&lt;br /&gt;
The following HOWTO provides only a simplified overview of cluster usage. It is strongly recommended to read the further documentation ([[Submitting_CPU_Jobs|CPU]] or [[Submitting_GPU_Jobs|GPU]]) before running serious experiments.&lt;br /&gt;
Serious experiments tend to take more resources; to avoid unexpected failures, please make sure your [[Quotas|quota]] is not exceeded.&lt;br /&gt;
&lt;br /&gt;
'''Rule 0: NEVER RUN JOBS DIRECTLY ON THE aic.ufal.mff.cuni.cz HEADNODE. Use &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; to get a shell on a computational node!'''&lt;br /&gt;
&lt;br /&gt;
Suppose we want to run some computations described by a script called &amp;lt;code&amp;gt;job_script.sh&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #SBATCH -J helloWorld					  # name of job&lt;br /&gt;
 #SBATCH -p cpu 	 		       		  # name of partition or queue (default is cpu)&lt;br /&gt;
 #SBATCH -o helloWorld.out				  # name of output file for this submission script&lt;br /&gt;
 #SBATCH -e helloWorld.err				  # name of error file for this submission script&lt;br /&gt;
 # run my job (some executable)&lt;br /&gt;
 sleep 5&lt;br /&gt;
 echo &amp;quot;Hello I am running on cluster!&amp;quot;&lt;br /&gt;
&lt;br /&gt;
We need to ''submit'' the job to the cluster, which is done by logging in to the submit host &amp;lt;code&amp;gt;aic.ufal.mff.cuni.cz&amp;lt;/code&amp;gt; and issuing the command:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;sbatch job_script.sh&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will enqueue our ''job'' to the default ''partition'' (or ''queue'') which is &amp;lt;code&amp;gt;cpu&amp;lt;/code&amp;gt;. The scheduler decides which particular machine in the specified queue has ''resources'' needed to run the job. Typically we will see a message which tells us the ID of our job (3 in this example):&lt;br /&gt;
&lt;br /&gt;
 Submitted batch job 3&lt;br /&gt;
&lt;br /&gt;
The options used in this example are specified inside the script using the ''#SBATCH'' directive. Any option can be specified either in the script or as a command line parameter (see ''man sbatch'' for details).&lt;br /&gt;
&lt;br /&gt;
We can specify custom arguments '''before''' the name of the script:&lt;br /&gt;
&lt;br /&gt;
 sbatch --export=ARG1='firstArg',ARG2='secondArg' job_script.sh&lt;br /&gt;
&lt;br /&gt;
These can be accessed in the job script as &amp;lt;code&amp;gt;$ARG1&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;$ARG2&amp;lt;/code&amp;gt;.&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
		
	</entry>
	<entry>
		<id>https://aic.ufal.mff.cuni.cz/index.php?title=Submitting_GPU_Jobs&amp;diff=87</id>
		<title>Submitting GPU Jobs</title>
		<link rel="alternate" type="text/html" href="https://aic.ufal.mff.cuni.cz/index.php?title=Submitting_GPU_Jobs&amp;diff=87"/>
		<updated>2022-11-16T15:23:06Z</updated>

		<summary type="html">&lt;p&gt;Admin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Start by reading [[Submitting CPU Jobs]] page.&lt;br /&gt;
&lt;br /&gt;
GPU jobs are submitted to the &amp;lt;code&amp;gt;gpu&amp;lt;/code&amp;gt; partition.&lt;br /&gt;
&lt;br /&gt;
To ask for one GPU card, use &amp;lt;code&amp;gt;#SBATCH --gres=gpu:1&amp;lt;/code&amp;gt; directive or &amp;lt;code&amp;gt;--gres=gpu:1&amp;lt;/code&amp;gt; option on the command line. The submitted job has &amp;lt;code&amp;gt;CUDA_VISIBLE_DEVICES&amp;lt;/code&amp;gt; set appropriately, so all CUDA applications should use only the allocated GPUs.&lt;br /&gt;
&lt;br /&gt;
== Rules ==&lt;br /&gt;
&lt;br /&gt;
* Always use GPUs via ''sbatch'' (or ''srun''), never via ''ssh''. You can ssh to any machine e.g. to run ''nvidia-smi'' or ''htop'', but not to start computing on GPU.&lt;br /&gt;
* Don't forget to specify your RAM requirements with e.g. ''--mem=10G''.&lt;br /&gt;
* Always specify the number of GPU cards (e.g. ''--gres=gpu:1''). For example: &amp;lt;code&amp;gt;srun -p gpu --mem=64G --gres=gpu:2 --pty bash&amp;lt;/code&amp;gt;&lt;br /&gt;
* For interactive jobs, you can use ''srun'', but make sure to end your job as soon as you no longer need the GPU (so don't use srun for long training runs).&lt;br /&gt;
* In general: don't reserve a GPU (as described above) without actually using it for a longer time; e.g. try separating the steps which need a GPU from the steps which do not, and execute them on the GPU and CPU clusters respectively.&lt;br /&gt;
* If you know an approximate runtime of your job, please specify it with ''-t &amp;lt;time&amp;gt;''. Acceptable time formats include &amp;quot;minutes&amp;quot;, &amp;quot;minutes:seconds&amp;quot;, &amp;quot;hours:minutes:seconds&amp;quot;, &amp;quot;days-hours&amp;quot;, &amp;quot;days-hours:minutes&amp;quot; and &amp;quot;days-hours:minutes:seconds&amp;quot;.&lt;br /&gt;
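Putting the rules above together, a minimal GPU job script might look like this (a sketch; the job name, memory amount and time limit are illustrative, not recommended values):&lt;br /&gt;

```shell
#!/bin/bash
#SBATCH -J gpuDemo               # job name (illustrative)
#SBATCH -p gpu                   # GPU partition
#SBATCH --gres=gpu:1             # ask for one GPU card
#SBATCH --mem=10G                # RAM requirement
#SBATCH -t 2:00:00               # approximate runtime (illustrative)
# SLURM sets CUDA_VISIBLE_DEVICES to the allocated GPUs;
# outside SLURM the fallback prints "none".
echo "Allocated GPUs: ${CUDA_VISIBLE_DEVICES:-none}"
```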
&lt;br /&gt;
== CUDA and cuDNN ==&lt;br /&gt;
&lt;br /&gt;
Default CUDA (currently 11.2 as of Nov 2021) is available in&lt;br /&gt;
  /opt/cuda&lt;br /&gt;
Specific versions can be found in&lt;br /&gt;
  /lnet/aic/opt/cuda/cuda-{9.0,9.2,10.0,10.1,10.2,11.2,...}&lt;br /&gt;
Depending on what version you need, you should add &amp;lt;code&amp;gt;LD_LIBRARY_PATH=&amp;quot;/lnet/aic/opt/cuda/cuda-X.Y/lib64:$LD_LIBRARY_PATH&amp;quot;&amp;lt;/code&amp;gt; to your configuration.&lt;br /&gt;
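For example, to select CUDA 10.2 in a job script or shell (10.2 is just an illustrative choice from the versions listed above):&lt;br /&gt;

```shell
# Prepend the chosen CUDA version's libraries to the search path:
export LD_LIBRARY_PATH="/lnet/aic/opt/cuda/cuda-10.2/lib64:${LD_LIBRARY_PATH:-}"
# The chosen version's lib64 is now searched first:
echo "${LD_LIBRARY_PATH%%:*}"
```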
&lt;br /&gt;
Regarding cuDNN:&lt;br /&gt;
* for CUDA 9.0, 9.2 and 10.0, cuDNN is available directly in the ''lib64'' directory of the respective CUDA, so there is no need to configure it specifically;&lt;br /&gt;
* for CUDA 10.1 and later, cuDNN is available in the ''cudnn/VERSION/lib64'' subdirectory of the respective CUDA, so you need to add &amp;lt;code&amp;gt;LD_LIBRARY_PATH=&amp;quot;/lnet/aic/opt/cuda/cuda-X.Y/cudnn/VERSION/lib64:$LD_LIBRARY_PATH&amp;quot;&amp;lt;/code&amp;gt; to your configuration.&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
		
	</entry>
	<entry>
		<id>https://aic.ufal.mff.cuni.cz/index.php?title=Submitting_CPU_Jobs&amp;diff=86</id>
		<title>Submitting CPU Jobs</title>
		<link rel="alternate" type="text/html" href="https://aic.ufal.mff.cuni.cz/index.php?title=Submitting_CPU_Jobs&amp;diff=86"/>
		<updated>2022-11-16T15:13:39Z</updated>

		<summary type="html">&lt;p&gt;Admin: modified for SLURM - first draft&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The CPU jobs should be submitted to &amp;lt;code&amp;gt;cpu&amp;lt;/code&amp;gt; partition.&lt;br /&gt;
&lt;br /&gt;
You can submit a non-interactive job using the '''sbatch''' command.&lt;br /&gt;
To submit an interactive job, use the '''srun''' command:&lt;br /&gt;
&lt;br /&gt;
 srun --pty bash&lt;br /&gt;
&lt;br /&gt;
== Resource specification ==&lt;br /&gt;
&lt;br /&gt;
You should specify your memory and CPU requirements (if higher than the defaults) and not exceed them.&lt;br /&gt;
If your job needs more than one CPU thread (on a single machine) for most of the time, reserve the given number of CPU threads with the &amp;lt;code&amp;gt;--cpus-per-task&amp;lt;/code&amp;gt; option and memory with the &amp;lt;code&amp;gt;--mem&amp;lt;/code&amp;gt; option.&lt;br /&gt;
&lt;br /&gt;
 srun -p cpu --cpus-per-task=4 --mem=8G --pty bash&lt;br /&gt;
 &lt;br /&gt;
This will give you an interactive shell with 4 threads and 8G RAM on the ''cpu'' partition.&lt;br /&gt;
&lt;br /&gt;
== Monitoring and interaction ==&lt;br /&gt;
&lt;br /&gt;
=== Job monitoring ===&lt;br /&gt;
We should be able to see what is going on when we run a job. The following examples show the usage of some typical commands:&lt;br /&gt;
* &amp;lt;code&amp;gt;squeue -a&amp;lt;/code&amp;gt; - show the jobs in all partitions&lt;br /&gt;
* &amp;lt;code&amp;gt;squeue -u user&amp;lt;/code&amp;gt; - print a list of running/waiting jobs of the given user&lt;br /&gt;
* &amp;lt;code&amp;gt;squeue -j &amp;lt;JOB_ID&amp;gt;&amp;lt;/code&amp;gt; - show detailed info about the job with the given JOB_ID (if it is still running)&lt;br /&gt;
* &amp;lt;code&amp;gt;sinfo&amp;lt;/code&amp;gt; - print available/total resources&lt;br /&gt;
&lt;br /&gt;
=== Output monitoring ===&lt;br /&gt;
The standard output of the job is written to the file specified with the &amp;lt;code&amp;gt;-o&amp;lt;/code&amp;gt; option. Similarly, errors are logged in the file specified with the &amp;lt;code&amp;gt;-e&amp;lt;/code&amp;gt; option.&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
		
	</entry>
	<entry>
		<id>https://aic.ufal.mff.cuni.cz/index.php?title=Quotas&amp;diff=85</id>
		<title>Quotas</title>
		<link rel="alternate" type="text/html" href="https://aic.ufal.mff.cuni.cz/index.php?title=Quotas&amp;diff=85"/>
		<updated>2022-11-16T14:51:09Z</updated>

		<summary type="html">&lt;p&gt;Admin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page lists resource quotas for the AIC cluster users.&lt;br /&gt;
&lt;br /&gt;
== Disk Quotas ==&lt;br /&gt;
&lt;br /&gt;
Everyone has a disk quota set (the default was 50G as of Jan 2020).&lt;br /&gt;
&lt;br /&gt;
You can find out the quota and your current disk usage by running&lt;br /&gt;
&lt;br /&gt;
 lfs quota -h /lnet/aic&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
		
	</entry>
	<entry>
		<id>https://aic.ufal.mff.cuni.cz/index.php?title=Main_Page&amp;diff=84</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://aic.ufal.mff.cuni.cz/index.php?title=Main_Page&amp;diff=84"/>
		<updated>2022-11-16T14:49:20Z</updated>

		<summary type="html">&lt;p&gt;Admin: /* Basic HOWTO - modified for SLURM */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;div style='text-align: center;'&amp;gt;CZ.02.2.69/0.0/0.0/17_044/0008562&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;div style='text-align: center;'&amp;gt;Podpora rozvoje studijního prostředí na Univerzitě Karlově - VRR&amp;lt;/div&amp;gt; &lt;br /&gt;
[[File:OP_VVV_logo.jpg|frameless|center|upright=2.5]]&lt;br /&gt;
&lt;br /&gt;
== Welcome to AIC ==&lt;br /&gt;
&lt;br /&gt;
AIC (Artificial Intelligence Cluster) is a computational grid with sufficient computational capacity for research in the field of [https://en.wikipedia.org/wiki/Deep_learning deep learning] using both CPUs and GPUs. It is built on top of the [https://arc.liv.ac.uk/trac/SGE SGE] scheduling system. MFF Bc. and Mgr. students can use it to run their experiments and, in the process, learn the proper ways of grid computing.&lt;br /&gt;
&lt;br /&gt;
=== Access ===&lt;br /&gt;
AIC is dedicated to UFAL students, who are given an account when requested by an authorized lecturer.&lt;br /&gt;
&lt;br /&gt;
=== Connecting to the Cluster ===&lt;br /&gt;
Use SSH to connect to the cluster:&lt;br /&gt;
  ssh LOGIN@aic.ufal.mff.cuni.cz&lt;br /&gt;
&lt;br /&gt;
=== Basic HOWTO ===&lt;br /&gt;
&lt;br /&gt;
The following HOWTO provides only a simplified overview of cluster usage. It is strongly recommended to read the further documentation ([[Submitting_CPU_Jobs|CPU]] or [[Submitting_GPU_Jobs|GPU]]) before running serious experiments.&lt;br /&gt;
Serious experiments tend to take more resources; to avoid unexpected failures, please make sure your [[Quotas|quota]] is not exceeded.&lt;br /&gt;
&lt;br /&gt;
'''Rule 0: NEVER RUN JOBS DIRECTLY ON THE aic.ufal.mff.cuni.cz HEADNODE. Use &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; to get a shell on a computational node!'''&lt;br /&gt;
&lt;br /&gt;
Suppose we want to run some computations described by a script called &amp;lt;code&amp;gt;job_script.sh&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #SBATCH -J helloWorld					  # name of job&lt;br /&gt;
 #SBATCH -p cpu 	 		       		  # name of partition or queue (default is cpu)&lt;br /&gt;
 #SBATCH -o helloWorld.out				  # name of output file for this submission script&lt;br /&gt;
 #SBATCH -e helloWorld.err				  # name of error file for this submission script&lt;br /&gt;
 # run my job (some executable)&lt;br /&gt;
 sleep 5&lt;br /&gt;
 echo &amp;quot;Hello I am running on cluster!&amp;quot;&lt;br /&gt;
&lt;br /&gt;
We need to ''submit'' the job to the cluster, which is done by logging in to the submit host &amp;lt;code&amp;gt;aic.ufal.mff.cuni.cz&amp;lt;/code&amp;gt; and issuing the command:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;sbatch job_script.sh&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will enqueue our ''job'' to the default ''partition'' (or ''queue'') which is &amp;lt;code&amp;gt;cpu&amp;lt;/code&amp;gt;. The scheduler decides which particular machine in the specified queue has ''resources'' needed to run the job. Typically we will see a message which tells us the ID of our job (3 in this example):&lt;br /&gt;
&lt;br /&gt;
 Submitted batch job 3&lt;br /&gt;
&lt;br /&gt;
The options used in this example are specified inside the script using the ''#SBATCH'' directive. Any option can be specified either in the script or as a command line parameter (see ''man sbatch'' for details).&lt;br /&gt;
&lt;br /&gt;
We can specify custom arguments '''before''' the name of the script:&lt;br /&gt;
&lt;br /&gt;
 sbatch --export=ARG1='firstArg',ARG2='secondArg' job_script.sh&lt;br /&gt;
&lt;br /&gt;
These can be accessed in the job script as &amp;lt;code&amp;gt;$ARG1&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;$ARG2&amp;lt;/code&amp;gt;.&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
		
	</entry>
	<entry>
		<id>https://aic.ufal.mff.cuni.cz/index.php?title=Main_Page&amp;diff=83</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://aic.ufal.mff.cuni.cz/index.php?title=Main_Page&amp;diff=83"/>
		<updated>2022-11-16T14:10:21Z</updated>

		<summary type="html">&lt;p&gt;Admin: /* Basic HOWTO */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;div style='text-align: center;'&amp;gt;CZ.02.2.69/0.0/0.0/17_044/0008562&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;div style='text-align: center;'&amp;gt;Podpora rozvoje studijního prostředí na Univerzitě Karlově - VRR&amp;lt;/div&amp;gt; &lt;br /&gt;
[[File:OP_VVV_logo.jpg|frameless|center|upright=2.5]]&lt;br /&gt;
&lt;br /&gt;
== Welcome to AIC ==&lt;br /&gt;
&lt;br /&gt;
AIC (Artificial Intelligence Cluster) is a computational grid with sufficient computational capacity for research in the field of [https://en.wikipedia.org/wiki/Deep_learning deep learning] using both CPUs and GPUs. It is built on top of the [https://arc.liv.ac.uk/trac/SGE SGE] scheduling system. MFF Bc. and Mgr. students can use it to run their experiments and, in the process, learn the proper ways of grid computing.&lt;br /&gt;
&lt;br /&gt;
=== Access ===&lt;br /&gt;
AIC is dedicated to UFAL students, who are given an account when requested by an authorized lecturer.&lt;br /&gt;
&lt;br /&gt;
=== Connecting to the Cluster ===&lt;br /&gt;
Use SSH to connect to the cluster:&lt;br /&gt;
  ssh LOGIN@aic.ufal.mff.cuni.cz&lt;br /&gt;
&lt;br /&gt;
=== Basic HOWTO ===&lt;br /&gt;
&lt;br /&gt;
The following HOWTO provides only a simplified overview of cluster usage. It is strongly recommended to read the further documentation ([[Submitting_CPU_Jobs|CPU]] or [[Submitting_GPU_Jobs|GPU]]) before running serious experiments.&lt;br /&gt;
Serious experiments tend to take more resources; to avoid unexpected failures, please make sure your [[Quotas|quota]] is not exceeded.&lt;br /&gt;
&lt;br /&gt;
'''Rule 0: NEVER RUN JOBS DIRECTLY ON THE aic.ufal.mff.cuni.cz HEADNODE. Use &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; to get a shell on a computational node!'''&lt;br /&gt;
&lt;br /&gt;
Suppose we want to run some computations described by a script called &amp;lt;code&amp;gt;job_script.sh&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #SBATCH -J helloWorld					  # name of job&lt;br /&gt;
 #SBATCH -p cpu 					  # name of partition or queue (default is cpu)&lt;br /&gt;
 #SBATCH -o helloWorld.out				  # name of output file for this submission script&lt;br /&gt;
 #SBATCH -e helloWorld.err				  # name of error file for this submission script&lt;br /&gt;
 # run my job (some executable)&lt;br /&gt;
 sleep 5&lt;br /&gt;
 echo &amp;quot;Hello I am running on cluster!&amp;quot;&lt;br /&gt;
&lt;br /&gt;
We need to ''submit'' the job to the cluster, which is done by logging in to the submit host &amp;lt;code&amp;gt;aic.ufal.mff.cuni.cz&amp;lt;/code&amp;gt; and issuing the command:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;sbatch job_script.sh&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will enqueue our ''job'' to the default ''partition'' (or ''queue'') which is &amp;lt;code&amp;gt;cpu&amp;lt;/code&amp;gt;. The scheduler decides which particular machine in the specified queue has ''resources'' needed to run the job. Typically we will see a message which tells us the ID of our job (82 in this example):&lt;br /&gt;
&lt;br /&gt;
 Your job 82 (&amp;quot;job_script.sh&amp;quot;) has been submitted&lt;br /&gt;
&lt;br /&gt;
The basic options used in this example are:&lt;br /&gt;
* &amp;lt;code&amp;gt;-cwd&amp;lt;/code&amp;gt; - the script is executed in the current directory (the default is your &amp;lt;code&amp;gt;$HOME&amp;lt;/code&amp;gt;)&lt;br /&gt;
* &amp;lt;code&amp;gt;-j y&amp;lt;/code&amp;gt; - ''stdout'' and ''stderr'' outputs are merged and redirected to a file (&amp;lt;code&amp;gt;job_script.sh.o82&amp;lt;/code&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
We have specified two parameters &amp;lt;code&amp;gt;Hello&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;World&amp;lt;/code&amp;gt;. The output of the script will be located in your &amp;lt;code&amp;gt;$HOME&amp;lt;/code&amp;gt; directory after the script is executed. It will be merged with ''stderr'' and it should look like this:&lt;br /&gt;
&lt;br /&gt;
 AIC:ubuntu 18.04: SGE 8.1.9 configured...                                                                                              &lt;br /&gt;
 This is just a test.&lt;br /&gt;
 printing parameter1: Hello&lt;br /&gt;
 printing parameter2: World&lt;br /&gt;
 ======= EPILOG: Tue Jun 4 12:41:07 CEST 2019&lt;br /&gt;
 == Limits:   &lt;br /&gt;
 == Usage:    cpu=00:00:00, mem=0.00000 GB s, io=0.00000 GB, vmem=N/A, maxvmem=N/A&lt;br /&gt;
 == Duration: 00:00:00 (0 s)&lt;br /&gt;
 == Server name: cpu-node13&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
		
	</entry>
	<entry>
		<id>https://aic.ufal.mff.cuni.cz/index.php?title=Main_Page&amp;diff=79</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://aic.ufal.mff.cuni.cz/index.php?title=Main_Page&amp;diff=79"/>
		<updated>2022-03-22T08:21:52Z</updated>

		<summary type="html">&lt;p&gt;Admin: /* Basic HOWTO */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;div style='text-align: center;'&amp;gt;CZ.02.2.69/0.0/0.0/17_044/0008562&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;div style='text-align: center;'&amp;gt;Podpora rozvoje studijního prostředí na Univerzitě Karlově - VRR&amp;lt;/div&amp;gt; &lt;br /&gt;
[[File:OP_VVV_logo.jpg|frameless|center|upright=2.5]]&lt;br /&gt;
&lt;br /&gt;
== Welcome to AIC ==&lt;br /&gt;
&lt;br /&gt;
AIC (Artificial Intelligence Cluster) is a computational grid with sufficient computational capacity for research in the field of [https://en.wikipedia.org/wiki/Deep_learning deep learning] using both CPUs and GPUs. It is built on top of the [https://arc.liv.ac.uk/trac/SGE SGE] scheduling system. MFF Bc. and Mgr. students can use it to run their experiments and, in the process, learn the proper ways of grid computing.&lt;br /&gt;
&lt;br /&gt;
=== Access ===&lt;br /&gt;
AIC is dedicated to UFAL students, who are given an account when requested by an authorized lecturer.&lt;br /&gt;
&lt;br /&gt;
=== Connecting to the Cluster ===&lt;br /&gt;
Use SSH to connect to the cluster:&lt;br /&gt;
  ssh LOGIN@aic.ufal.mff.cuni.cz&lt;br /&gt;
&lt;br /&gt;
=== Basic HOWTO ===&lt;br /&gt;
&lt;br /&gt;
The following HOWTO provides only a simplified overview of cluster usage. It is strongly recommended to read the further documentation ([[Submitting_CPU_Jobs|CPU]] or [[Submitting_GPU_Jobs|GPU]]) before running serious experiments.&lt;br /&gt;
Serious experiments tend to take more resources; to avoid unexpected failures, please make sure your [[Quotas|quota]] is not exceeded.&lt;br /&gt;
&lt;br /&gt;
'''Rule 0: NEVER RUN JOBS DIRECTLY ON THE aic.ufal.mff.cuni.cz HEADNODE. Use &amp;lt;code&amp;gt;qrsh&amp;lt;/code&amp;gt; to get a shell on a computational node!'''&lt;br /&gt;
&lt;br /&gt;
Suppose we want to run some computations described by a script called &amp;lt;code&amp;gt;job_script.sh&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 echo &amp;quot;This is just a test.&amp;quot;&lt;br /&gt;
 echo &amp;quot;printing parameter1: $1&amp;quot;&lt;br /&gt;
 echo &amp;quot;printing parameter2: $2&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We need to ''submit'' the job to the grid, which is done by logging in to the submit host &amp;lt;code&amp;gt;aic.ufal.mff.cuni.cz&amp;lt;/code&amp;gt; and issuing the command:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;qsub -cwd -j y job_script.sh Hello World&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will enqueue our ''job'' to the default ''queue'' which is &amp;lt;code&amp;gt;cpu.q@*&amp;lt;/code&amp;gt;. The scheduler decides which particular machine in the specified queue has ''resources'' needed to run the job. Typically we will see a message which tells us the ID of our job (82 in this example):&lt;br /&gt;
&lt;br /&gt;
 Your job 82 (&amp;quot;job_script.sh&amp;quot;) has been submitted&lt;br /&gt;
&lt;br /&gt;
The basic options used in this example are:&lt;br /&gt;
* &amp;lt;code&amp;gt;-cwd&amp;lt;/code&amp;gt; - the script is executed in the current directory (the default is your &amp;lt;code&amp;gt;$HOME&amp;lt;/code&amp;gt;)&lt;br /&gt;
* &amp;lt;code&amp;gt;-j y&amp;lt;/code&amp;gt; - ''stdout'' and ''stderr'' outputs are merged and redirected to a file (&amp;lt;code&amp;gt;job_script.sh.o82&amp;lt;/code&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
We have specified two parameters &amp;lt;code&amp;gt;Hello&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;World&amp;lt;/code&amp;gt;. The output of the script will be located in your &amp;lt;code&amp;gt;$HOME&amp;lt;/code&amp;gt; directory after the script is executed. It will be merged with ''stderr'' and it should look like this:&lt;br /&gt;
&lt;br /&gt;
 AIC:ubuntu 18.04: SGE 8.1.9 configured...                                                                                              &lt;br /&gt;
 This is just a test.&lt;br /&gt;
 printing parameter1: Hello&lt;br /&gt;
 printing parameter2: World&lt;br /&gt;
 ======= EPILOG: Tue Jun 4 12:41:07 CEST 2019&lt;br /&gt;
 == Limits:   &lt;br /&gt;
 == Usage:    cpu=00:00:00, mem=0.00000 GB s, io=0.00000 GB, vmem=N/A, maxvmem=N/A&lt;br /&gt;
 == Duration: 00:00:00 (0 s)&lt;br /&gt;
 == Server name: cpu-node13&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
		
	</entry>
	<entry>
		<id>https://aic.ufal.mff.cuni.cz/index.php?title=Submitting_CPU_Jobs&amp;diff=69</id>
		<title>Submitting CPU Jobs</title>
		<link rel="alternate" type="text/html" href="https://aic.ufal.mff.cuni.cz/index.php?title=Submitting_CPU_Jobs&amp;diff=69"/>
		<updated>2020-01-29T10:26:27Z</updated>

		<summary type="html">&lt;p&gt;Admin: Admin moved page Submitting Jobs to Submitting CPU Jobs without leaving a redirect: Preserve history of original page&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;CPU jobs should be submitted to the &amp;lt;code&amp;gt;cpu.q&amp;lt;/code&amp;gt; queue.&lt;br /&gt;
&lt;br /&gt;
== Resource specification ==&lt;br /&gt;
&lt;br /&gt;
You should specify the memory and CPU requirements (if higher than the defaults) and must not exceed them.&lt;br /&gt;
If your job needs more than one CPU (on a single machine) for most of the time, reserve the given number of CPU cores (and SGE slots) with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;qsub -pe smp &amp;lt;number-of-CPU-cores&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The maximum for the AIC cluster is 4 cores. If your job needs e.g. up to 110% CPU most of the time and only occasionally 200%, it is OK to reserve just one core (so you don't waste resources).&lt;br /&gt;
&lt;br /&gt;
If you are sure your job needs less than 1GB RAM, then you can skip this. Otherwise, if you need e.g. 8 GiB, you must always use &amp;lt;code&amp;gt;qsub&amp;lt;/code&amp;gt; (or &amp;lt;code&amp;gt;qrsh&amp;lt;/code&amp;gt;) with &amp;lt;code&amp;gt;-l mem_free=8G&amp;lt;/code&amp;gt;. You should also specify &amp;lt;code&amp;gt;act_mem_free&amp;lt;/code&amp;gt; with the same value and &amp;lt;code&amp;gt;h_vmem&amp;lt;/code&amp;gt; with a slightly bigger value. See [[#Memory]] for details.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;qsub -l mem_free=8G,act_mem_free=8G,h_vmem=12G&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Monitoring and interaction ==&lt;br /&gt;
&lt;br /&gt;
=== Job monitoring ===&lt;br /&gt;
We should be able to see what is going on when we run a job. The following examples show the usage of some typical commands:&lt;br /&gt;
* &amp;lt;code&amp;gt;qstat&amp;lt;/code&amp;gt; - this way we inspect all our jobs (both waiting in the queue and scheduled, i.e. running).&lt;br /&gt;
* &amp;lt;code&amp;gt;qstat [-u user]&amp;lt;/code&amp;gt; - print a list of running/waiting jobs of a given user&lt;br /&gt;
* &amp;lt;code&amp;gt;qstat -u '*' | less&amp;lt;/code&amp;gt; - this shows the jobs of all users.&lt;br /&gt;
* &amp;lt;code&amp;gt;qstat -j 121144&amp;lt;/code&amp;gt; - this shows detailed info about the job with this number (if it is still running).&lt;br /&gt;
* &amp;lt;code&amp;gt;qhost&amp;lt;/code&amp;gt; - print available/total resources&lt;br /&gt;
* &amp;lt;code&amp;gt;qacct -j job_id&amp;lt;/code&amp;gt; - print info even for ended job (for which ''qstat -j job_id'' does not work). See &amp;lt;code&amp;gt;man qacct&amp;lt;/code&amp;gt; for more.&lt;br /&gt;
&lt;br /&gt;
=== Output monitoring ===&lt;br /&gt;
If we need to see output produced by our job (suppose the ID is 121144), we can inspect the job's output (in our case stored in &amp;lt;code&amp;gt;job_script.sh.o121144&amp;lt;/code&amp;gt;) with:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;less job_script.sh.o*&amp;lt;/code&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
''Hint:'' if the job is still running, press '''F''' in &amp;lt;code&amp;gt;less&amp;lt;/code&amp;gt; to simulate &amp;lt;code&amp;gt;tail -f&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==== How to read output epilog ====&lt;br /&gt;
The epilog section contains some interesting pieces of information. However, it can sometimes get confusing.&lt;br /&gt;
&lt;br /&gt;
 ======= EPILOG: Tue Jun 4 12:41:07 CEST 2019&lt;br /&gt;
 == Limits:   &lt;br /&gt;
 == Usage:    cpu=00:00:00, mem=0.00000 GB s, io=0.00000 GB, vmem=N/A, maxvmem=N/A&lt;br /&gt;
 == Duration: 00:00:00 (0 s)&lt;br /&gt;
 == Server name: cpu-node13&lt;br /&gt;
&lt;br /&gt;
* ''Limits'' - on this line you can see job limits specified through &amp;lt;code&amp;gt;qsub&amp;lt;/code&amp;gt; options&lt;br /&gt;
* ''Usage'' - resource usage during computation&lt;br /&gt;
** ''cpu=HH:MM:SS'' - the accumulated CPU time usage&lt;br /&gt;
** ''mem=XY GB'' - gigabytes of RAM used multiplied by the duration of the job in seconds, so don't be alarmed that XY is usually a very high number (unlike in this toy example)&lt;br /&gt;
** ''io=XY GB'' - the amount of data transferred in input/output operations in GB&lt;br /&gt;
** ''vmem=XY'' - actual virtual memory consumption when the job finished&lt;br /&gt;
** ''maxvmem=XY'' - peak virtual memory consumption&lt;br /&gt;
* ''Duration'' - total execution time&lt;br /&gt;
* ''Server name'' - name of the executing server&lt;br /&gt;
&lt;br /&gt;
=== Job interaction ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;qdel 121144&amp;lt;/code&amp;gt;&lt;br /&gt;
This way you can delete (''kill'') a job with a given number, or a comma- or space-separated list of job numbers.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;qdel \*&amp;lt;/code&amp;gt;&lt;br /&gt;
This way you can delete all your jobs. Don't be afraid - you cannot delete other users' jobs.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;qalter&amp;lt;/code&amp;gt;&lt;br /&gt;
You can change some properties of already submitted jobs (both waiting in the queue and running). Changeable properties are listed in &amp;lt;code&amp;gt;man qsub&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== Advanced usage ==&lt;br /&gt;
&amp;lt;code&amp;gt;qsub '''-q''' cpu.q&amp;lt;/code&amp;gt;&lt;br /&gt;
This way your job is submitted to the CPU queue, which is the default. If you need a GPU, use &amp;lt;code&amp;gt;gpu.q&amp;lt;/code&amp;gt; instead.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;qsub '''-l''' ...&amp;lt;/code&amp;gt;&lt;br /&gt;
See &amp;lt;code&amp;gt;man complex&amp;lt;/code&amp;gt; (run it on aic) for a list of possible resources you may require (in addition to &amp;lt;code&amp;gt;mem_free&amp;lt;/code&amp;gt; etc. discussed above).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;qsub '''-p''' -200&amp;lt;/code&amp;gt;&lt;br /&gt;
Define the priority of your job as a number between -1024 and 0. Only SGE admins may use a number higher than 0. Default is set to TODO. You should ask for a lower priority (-1024..-101) if you submit many jobs at once or if the jobs are not urgent. SGE uses the priority to decide when to start which pending job in the queue (it computes a real number called &amp;lt;code&amp;gt;prior&amp;lt;/code&amp;gt;, reported in &amp;lt;code&amp;gt;qstat&amp;lt;/code&amp;gt;, which grows as the job waits in the queue). Note that once a job is started, you cannot ''unschedule'' it, so from that moment on it is irrelevant what its priority was.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;qsub '''-o''' LOG.stdout '''-e''' LOG.stderr&amp;lt;/code&amp;gt;&lt;br /&gt;
redirect std{out,err} to separate files with given names, instead of the defaults &amp;lt;code&amp;gt;$JOB_NAME.o$JOB_ID&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;$JOB_NAME.e$JOB_ID&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;qsub '''-@''' optionfile&amp;lt;/code&amp;gt;&lt;br /&gt;
Instead of specifying all the &amp;lt;code&amp;gt;qsub&amp;lt;/code&amp;gt; options on the command line, you can store them in a file (you can use # comments in the file).&lt;br /&gt;
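For illustration, such an option file might look like this (all values here are hypothetical – adjust them to your job):&lt;br /&gt;

```text
# optionfile: common qsub options for a batch of experiments (hypothetical values)
-cwd
-j y
-l mem_free=8G,act_mem_free=8G,h_vmem=12G
-pe smp 2
```

The submission then shortens to &amp;lt;code&amp;gt;qsub -@ optionfile job_script.sh&amp;lt;/code&amp;gt;.&lt;br /&gt;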
&lt;br /&gt;
&amp;lt;code&amp;gt;qsub '''-a''' 12312359&amp;lt;/code&amp;gt;&lt;br /&gt;
Execute your job no sooner than at the given time (in &amp;lt;code&amp;gt;[YY]MMDDhhmm&amp;lt;/code&amp;gt; format). An alternative to &amp;lt;code&amp;gt;sleep 3600 &amp;amp;&amp;amp; qsub ... &amp;amp;&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;qsub '''-b''' y&amp;lt;/code&amp;gt;&lt;br /&gt;
Treat &amp;lt;code&amp;gt;script.sh&amp;lt;/code&amp;gt; (or whatever is the name of the command you execute) as a binary, i.e. don't search the file for in-script options and don't transfer it to the ''qmaster'' and then to the execution node. This makes the execution a bit faster and it may prevent some rare but hard-to-detect errors caused by SGE interpreting the script. The script must be available on the execution node via Lustre (which is our case), etc. With &amp;lt;code&amp;gt;-b y&amp;lt;/code&amp;gt; (shortcut for &amp;lt;code&amp;gt;-b yes&amp;lt;/code&amp;gt;), &amp;lt;code&amp;gt;script.sh&amp;lt;/code&amp;gt; can be a script or a binary. With &amp;lt;code&amp;gt;-b n&amp;lt;/code&amp;gt; (which is the default for &amp;lt;code&amp;gt;qsub&amp;lt;/code&amp;gt;), &amp;lt;code&amp;gt;script.sh&amp;lt;/code&amp;gt; must be a script (text file).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;qsub '''-M''' person1@email.somewhere.cz,person2@email.somewhere.cz '''-m''' beas&amp;lt;/code&amp;gt;&lt;br /&gt;
Specify the emails where you want to be notified when the job '''b''' begins, '''e''' ends, is '''a''' aborted or rescheduled, or is '''s''' suspended.&lt;br /&gt;
The default is &amp;lt;code&amp;gt;-m a&amp;lt;/code&amp;gt; and the default email address forwards to you, so there is no need to use '''-M'''. You can use &amp;lt;code&amp;gt;-m n&amp;lt;/code&amp;gt; to override the defaults and send no emails.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;qsub '''-hold_jid''' 121144,121145&amp;lt;/code&amp;gt; (or &amp;lt;code&amp;gt;qsub '''-hold_jid''' get_src.sh,get_tgt.sh&amp;lt;/code&amp;gt;)&lt;br /&gt;
The current job is not executed before all the specified jobs are completed.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;qsub '''-now''' y&amp;lt;/code&amp;gt;&lt;br /&gt;
Start the job immediately or not at all, i.e. don't put it as pending to the queue. This is the default for &amp;lt;code&amp;gt;qrsh&amp;lt;/code&amp;gt;, but you can change it with &amp;lt;code&amp;gt;-now n&amp;lt;/code&amp;gt; (which is the default for &amp;lt;code&amp;gt;qsub&amp;lt;/code&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;qsub '''-N''' my-name&amp;lt;/code&amp;gt;&lt;br /&gt;
By default the name of a job (which you can see e.g. in &amp;lt;code&amp;gt;qstat&amp;lt;/code&amp;gt;) is the name of the submitted script, e.g. &amp;lt;code&amp;gt;script.sh&amp;lt;/code&amp;gt;. This way you can override it.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;qsub '''-S''' /bin/bash&amp;lt;/code&amp;gt;&lt;br /&gt;
The hashbang (&amp;lt;code&amp;gt;#!/bin/bash&amp;lt;/code&amp;gt;) in your &amp;lt;code&amp;gt;script.sh&amp;lt;/code&amp;gt; is ignored, but you can change the interpreter with ''-S''. The default interpreter is &amp;lt;code&amp;gt;/bin/bash&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;qsub '''-v''' PATH[=value]&amp;lt;/code&amp;gt;&lt;br /&gt;
Export a given environment variable from the current shell to the job.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;qsub '''-V'''&amp;lt;/code&amp;gt;&lt;br /&gt;
Export all environment variables. (This is not so needed now, when bash is the default interpreter and it seems your &amp;lt;code&amp;gt;~/.bashrc&amp;lt;/code&amp;gt; is always sourced.)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;qsub '''-soft''' -l ... '''-hard''' -l ... -q ...&amp;lt;/code&amp;gt;&lt;br /&gt;
By default, all the resource requirements (specified with &amp;lt;code&amp;gt;-l&amp;lt;/code&amp;gt;) and queue requirements (specified with ''-q'') are '''hard''', i.e. your job won't be scheduled unless they can be fulfilled. You can use &amp;lt;code&amp;gt;-soft&amp;lt;/code&amp;gt; to mark all following requirements as nice-to-have. And with &amp;lt;code&amp;gt;-hard&amp;lt;/code&amp;gt; you can switch back to hard requirements.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;qsub '''-sync''' y&amp;lt;/code&amp;gt;&lt;br /&gt;
This causes qsub to wait for the job to complete before exiting (with the same exit code as the job). Useful in scripts.&lt;br /&gt;
&lt;br /&gt;
== Memory ==&lt;br /&gt;
&lt;br /&gt;
* There are three commonly used options for specifying memory requirements: '''mem_free''', '''act_mem_free''' and '''h_vmem'''. Each has a different purpose.&lt;br /&gt;
* '''mem_free=1G''' means 1024×1024×1024 bytes, i.e. one [https://en.wikipedia.org/wiki/Gibibyte GiB (gibibyte)]. '''mem_free=1g''' means 1000×1000×1000 bytes, i.e. one gigabyte. Similarly for the other options and other prefixes (k, K, m, M).&lt;br /&gt;
* '''mem_free''' (or '''mf''') specifies a ''consumable resource'' tracked by SGE and it affects job scheduling. Each machine has an initial value assigned (slightly lower than the real total physical RAM capacity). When you specify &amp;lt;code&amp;gt;qsub -l mem_free=4G&amp;lt;/code&amp;gt;, SGE finds a machine with '''mem_free''' &amp;gt;= 4GB, and subtracts 4GB from it. This limit is not enforced, so if a job exceeds this limit, ''it is not automatically killed'' and thus the SGE value of '''mem_free''' may not represent the real free memory. The default value is 1G. If you skip this option and use more than 1 GiB, you are breaking the rules.&lt;br /&gt;
* '''act_mem_free''' (or '''amf''') is a ÚFAL-specific option, which specifies the real amount of free memory (at the time of scheduling). You can specify it when submitting a job and it will be scheduled to a machine with at least this amount of memory free. In an ideal world, where no jobs are exceeding their ''mem_free'' requirements, we would not need this option. However, in the real world, it is recommended to use this option with the same value as ''mem_free'' to protect your job from failing with out-of-memory error (because of naughty jobs of other users).&lt;br /&gt;
* '''h_vmem''' is equivalent to setting '''ulimit -v''', i.e. it is a hard limit on the size of virtual memory (see RLIMIT_AS in &amp;lt;code&amp;gt;man setrlimit&amp;lt;/code&amp;gt;). If your job exceeds this limit, memory allocation fails (i.e., malloc or mmap will return NULL), and your job will probably crash on SIGSEGV. TODO: according to &amp;lt;code&amp;gt;man queue_conf&amp;lt;/code&amp;gt;, the job is killed with SIGKILL, not with SIGSEGV. Note that '''h_vmem''' specifies the maximal size of ''allocated_memory'', not ''used_memory'', in other words it is the VIRT column in &amp;lt;code&amp;gt;top&amp;lt;/code&amp;gt;, not the RES column. SGE does not use this parameter in any other way. Notably, job scheduling is not affected by this parameter and therefore there is no guarantee that there will be this amount of memory on the chosen machine. The problem is that some programs (e.g. Java with the default setting) allocate much more (virtual) memory than they actually use in the end. If we want to be ultra conservative, we should set '''h_vmem''' to the same value as '''mem_free'''. If we want to be only moderately conservative, we should specify something like '''h_vmem=1.5*mem_free''', because some jobs will not use the whole mem_free requested, but still our job will be killed if it allocates much more than declared. The default effectively means that your job has no limits.&lt;br /&gt;
* For GPU jobs, it is usually better to use '''h_data''' instead of '''h_vmem'''. The CUDA driver allocates a lot of ''unused'' virtual memory (tens of GB per card), which is counted in '''h_vmem''', but not in '''h_data'''. All usual allocations (''malloc'', ''new'', Python allocations) seem to be included in '''h_data'''.&lt;br /&gt;
* It is recommended to ''profile your task first'' (see [[#Profiling]] below), so you can estimate reasonable memory requirements before submitting many jobs with the same task (varying in parameters which do not affect memory consumption). So the first time, declare '''mem_free''' with much more memory than expected, ssh to the machine where the job runs, and check &amp;lt;code&amp;gt;htop&amp;lt;/code&amp;gt; (sum all processes of your job) or (if the job finishes quickly) check the epilog. When running other jobs of this type, set '''mem_free''' (and '''act_mem_free''' and '''h_vmem''') so you are not wasting resources, but still have some reserve.&lt;br /&gt;
* '''s_vmem''' is similar to '''h_vmem''', but instead of SIGSEGV/SIGKILL, the job is sent a SIGXCPU signal which can be caught by the job and exit gracefully before it is killed. So if you need it, set '''s_vmem''' to a lower value than '''h_vmem''' and implement SIGXCPU handling and cleanup.&lt;br /&gt;
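A minimal bash sketch of such a handler follows; since the real signal comes from SGE, it is simulated here with &amp;lt;code&amp;gt;kill&amp;lt;/code&amp;gt; for demonstration only:&lt;br /&gt;

```shell
#!/bin/bash
# Sketch: catch SIGXCPU so the job can checkpoint and exit gracefully.
# In a real job SGE sends SIGXCPU when s_vmem is exceeded; here we
# simulate it by signalling ourselves.
got_xcpu=0
on_xcpu() {
  got_xcpu=1
  echo "SIGXCPU received: saving checkpoint"
  # ... write partial results to disk here, then exit gracefully ...
}
trap on_xcpu XCPU

kill -XCPU $$   # simulation only; delete this line in a real job

# A real main loop would periodically test got_xcpu and shut down cleanly:
echo "got_xcpu=$got_xcpu"
```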
&lt;br /&gt;
== Profiling ==&lt;br /&gt;
As stated above, you should always specify the exact memory limits when running your tasks, so that you neither waste RAM nor starve others of memory by using more than you requested. However, memory requirements can be difficult to estimate in advance. That's why you should profile your tasks first.&lt;br /&gt;
&lt;br /&gt;
A simple method is to run the task and observe the memory usage reported in the epilog, but SGE may not record transient allocations. As documented in &amp;lt;code&amp;gt;man 5 accounting&amp;lt;/code&amp;gt; and observed in &amp;lt;code&amp;gt;qconf -sconf&amp;lt;/code&amp;gt;, SGE only collects stats every '''accounting_flush_time'''. If this is not set, it defaults to '''flush_time''', which is preset to 15 seconds. But the kernel records all info immediately without polling, and you can view these exact stats by looking into &amp;lt;code&amp;gt;/proc/$PID/status&amp;lt;/code&amp;gt; while the task is running.&lt;br /&gt;
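For example, to read the kernel's view of a process (demonstrated here on the current shell – substitute the PID of your job's process on the execution node):&lt;br /&gt;

```shell
# VmPeak = peak virtual memory, VmHWM = peak resident set, VmRSS = current
# resident set; all reported directly by the kernel, no polling involved.
grep -E 'VmPeak|VmHWM|VmRSS' /proc/$$/status
```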
&lt;br /&gt;
You can still miss allocations made shortly before the program exits – which often happens when trying to debug why your program gets killed by SGE after exhausting the reserved space. To record these, use &amp;lt;code&amp;gt;/usr/bin/time -v&amp;lt;/code&amp;gt; (the actual binary, not the shell-builtin command &amp;lt;code&amp;gt;time&amp;lt;/code&amp;gt;). Be aware that unlike the builtin, it cannot measure shell functions and behaves differently on pipelines.&lt;br /&gt;
&lt;br /&gt;
Obtaining the peak usage of multiprocess applications is trickier. Detached and backgrounded processes are ignored completely by &amp;lt;code&amp;gt;time -v&amp;lt;/code&amp;gt;, and you get the maximum footprint of any single child, not the sum of all maximal footprints nor the largest total footprint at any instant.&lt;br /&gt;
&lt;br /&gt;
If you program in C and need to know the peak memory usage of your children, you can also use the '''wait4()''' syscall and calculate the stats yourself.&lt;br /&gt;
&lt;br /&gt;
If your job is the only one on a given machine, you can also look at how much free memory is left while the job is running (e.g. with &amp;lt;code&amp;gt;htop&amp;lt;/code&amp;gt;, if you know when the peak occurs).&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
		
	</entry>
	<entry>
		<id>https://aic.ufal.mff.cuni.cz/index.php?title=Main_Page&amp;diff=68</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://aic.ufal.mff.cuni.cz/index.php?title=Main_Page&amp;diff=68"/>
		<updated>2020-01-29T10:13:12Z</updated>

		<summary type="html">&lt;p&gt;Admin: Removed link to original documentation&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;div style='text-align: center;'&amp;gt;CZ.02.2.69/0.0/0.0/17_044/0008562&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;div style='text-align: center;'&amp;gt;Podpora rozvoje studijního prostředí na Univerzitě Karlově - VRR&amp;lt;/div&amp;gt; &lt;br /&gt;
[[File:OP_VVV_logo.jpg|frameless|center|upright=2.5]]&lt;br /&gt;
&lt;br /&gt;
== Welcome to AIC ==&lt;br /&gt;
&lt;br /&gt;
AIC (Artificial Intelligence Cluster) is a computational grid with sufficient computational capacity for research in the field of [https://en.wikipedia.org/wiki/Deep_learning deep learning] using both CPU and GPU. It was built on top of [https://arc.liv.ac.uk/trac/SGE SGE] scheduling system. MFF students of Bc. and Mgr. degrees can use it to run their experiments and learn the proper ways of grid computing in the process.&lt;br /&gt;
&lt;br /&gt;
=== Access ===&lt;br /&gt;
AIC is dedicated to UFAL students, who will get an account if it is requested by an authorized lecturer.&lt;br /&gt;
&lt;br /&gt;
=== Basic HOWTO ===&lt;br /&gt;
&lt;br /&gt;
The following HOWTO provides only a simplified overview of the cluster usage. It is strongly recommended to read some further documentation ([[Submitting_CPU_Jobs|CPU]] or [[Submitting_GPU_Jobs|GPU]]) before running serious experiments.&lt;br /&gt;
More serious experiments tend to take more resources. In order to avoid unexpected failures, please make sure your [[Quotas|quota]] is not exceeded.&lt;br /&gt;
&lt;br /&gt;
Suppose we want to run some computations described by a script called &amp;lt;code&amp;gt;job_script.sh&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 echo &amp;quot;This is just a test.&amp;quot;&lt;br /&gt;
 echo &amp;quot;printing parameter1: $1&amp;quot;&lt;br /&gt;
 echo &amp;quot;printing parameter2: $2&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We need to ''submit'' the job to the grid, which is done by logging in to the submit host &amp;lt;code&amp;gt;aic.ufal.mff.cuni.cz&amp;lt;/code&amp;gt; and issuing the command:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;qsub -cwd -j y job_script.sh Hello World&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will enqueue our ''job'' to the default ''queue'' which is &amp;lt;code&amp;gt;cpu.q@*&amp;lt;/code&amp;gt;. The scheduler decides which particular machine in the specified queue has ''resources'' needed to run the job. Typically we will see a message which tells us the ID of our job (82 in this example):&lt;br /&gt;
&lt;br /&gt;
 Your job 82 (&amp;quot;job_script.sh&amp;quot;) has been submitted&lt;br /&gt;
&lt;br /&gt;
The basic options used in this example are:&lt;br /&gt;
* &amp;lt;code&amp;gt;-cwd&amp;lt;/code&amp;gt; - the script is executed in the current directory (the default is your &amp;lt;code&amp;gt;$HOME&amp;lt;/code&amp;gt;)&lt;br /&gt;
* &amp;lt;code&amp;gt;-j y&amp;lt;/code&amp;gt; - ''stdout'' and ''stderr'' outputs are merged and redirected to a file (&amp;lt;code&amp;gt;job_script.sh.o82&amp;lt;/code&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
We have specified two parameters &amp;lt;code&amp;gt;Hello&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;World&amp;lt;/code&amp;gt;. The output of the script will be located in your &amp;lt;code&amp;gt;$HOME&amp;lt;/code&amp;gt; directory after the script is executed. It will be merged with ''stderr'' and it should look like this:&lt;br /&gt;
&lt;br /&gt;
 AIC:ubuntu 18.04: SGE 8.1.9 configured...                                                                                              &lt;br /&gt;
 This is just a test.&lt;br /&gt;
 printing parameter1: Hello&lt;br /&gt;
 printing parameter2: World&lt;br /&gt;
 ======= EPILOG: Tue Jun 4 12:41:07 CEST 2019&lt;br /&gt;
 == Limits:   &lt;br /&gt;
 == Usage:    cpu=00:00:00, mem=0.00000 GB s, io=0.00000 GB, vmem=N/A, maxvmem=N/A&lt;br /&gt;
 == Duration: 00:00:00 (0 s)&lt;br /&gt;
 == Server name: cpu-node13&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
		
	</entry>
	<entry>
		<id>https://aic.ufal.mff.cuni.cz/index.php?title=Main_Page&amp;diff=9</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://aic.ufal.mff.cuni.cz/index.php?title=Main_Page&amp;diff=9"/>
		<updated>2019-06-04T14:02:58Z</updated>

		<summary type="html">&lt;p&gt;Admin: /* Access */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Welcome to AIC ==&lt;br /&gt;
&lt;br /&gt;
AIC (Artificial Intelligence Cluster) is a computational grid with '''96 CPU cores''', a total of '''1536 GB RAM''' and '''16 Nvidia GTX 1080 GPUs'''. It was built on top of [https://arc.liv.ac.uk/trac/SGE SGE] scheduling system. MFF students of Bc. and Mgr. degrees can use it to run their experiments and learn the proper ways of grid computing in the process.&lt;br /&gt;
&lt;br /&gt;
=== Access ===&lt;br /&gt;
AIC is dedicated to UFAL students, who will get an account if it is requested by an authorized lecturer.&lt;br /&gt;
&lt;br /&gt;
=== Basic HOWTO ===&lt;br /&gt;
&lt;br /&gt;
The following HOWTO provides only a simplified overview of the cluster usage. It is strongly recommended to read some further documentation before running serious experiments.&lt;br /&gt;
&lt;br /&gt;
Suppose we want to run some computations described by a script called &amp;lt;code&amp;gt;job_script.sh&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 echo &amp;quot;This is just a test.&amp;quot;&lt;br /&gt;
 echo &amp;quot;printing parameter1: $1&amp;quot;&lt;br /&gt;
 echo &amp;quot;printing parameter2: $2&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We need to ''submit'' the job to the grid, which is done by logging in to the submit host &amp;lt;code&amp;gt;aic.ufal.mff.cuni.cz&amp;lt;/code&amp;gt; and issuing the command &amp;lt;code&amp;gt;qsub -cwd -j y job_script.sh Hello World&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
This will enqueue our ''job'' to the default ''queue'' which is &amp;lt;code&amp;gt;cpu.q@*&amp;lt;/code&amp;gt;. The scheduler decides which particular machine in the specified queue has ''resources'' needed to run the job. Typically we will see a message which tells us the ID of our job (82 in this example):&lt;br /&gt;
&lt;br /&gt;
 Your job 82 (&amp;quot;job_script.sh&amp;quot;) has been submitted&lt;br /&gt;
&lt;br /&gt;
The basic options used in this example are:&lt;br /&gt;
* &amp;lt;code&amp;gt;-cwd&amp;lt;/code&amp;gt; - the script is executed in the current directory (the default is your &amp;lt;code&amp;gt;$HOME&amp;lt;/code&amp;gt;)&lt;br /&gt;
* &amp;lt;code&amp;gt;-j y&amp;lt;/code&amp;gt; - ''stdout'' and ''stderr'' outputs are merged and redirected to a file (&amp;lt;code&amp;gt;job_script.sh.o82&amp;lt;/code&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
We have specified two parameters &amp;lt;code&amp;gt;Hello&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;World&amp;lt;/code&amp;gt;. The output of the script will be located in your &amp;lt;code&amp;gt;$HOME&amp;lt;/code&amp;gt; directory after the script is executed. It will be merged with ''stderr'' and it should look like this:&lt;br /&gt;
&lt;br /&gt;
 AIC:ubuntu 18.04: SGE 8.1.9 configured...                                                                                              &lt;br /&gt;
 This is just a test.&lt;br /&gt;
 printing parameter1: Hello&lt;br /&gt;
 printing parameter2: World&lt;br /&gt;
 ======= EPILOG: Tue Jun 4 12:41:07 CEST 2019&lt;br /&gt;
 == Limits:   &lt;br /&gt;
 == Usage:    cpu=00:00:00, mem=0.00000 GB s, io=0.00000 GB, vmem=N/A, maxvmem=N/A&lt;br /&gt;
 == Duration: 00:00:00 (0 s)&lt;br /&gt;
 == Server name: cpu-node13&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
		
	</entry>
	<entry>
		<id>https://aic.ufal.mff.cuni.cz/index.php?title=MediaWiki:Sidebar&amp;diff=8</id>
		<title>MediaWiki:Sidebar</title>
		<link rel="alternate" type="text/html" href="https://aic.ufal.mff.cuni.cz/index.php?title=MediaWiki:Sidebar&amp;diff=8"/>
		<updated>2019-06-04T13:52:35Z</updated>

		<summary type="html">&lt;p&gt;Admin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
* navigation&lt;br /&gt;
** mainpage|mainpage-description&lt;br /&gt;
** documentation|Documentation&lt;br /&gt;
* MediaWiki&lt;br /&gt;
** recentchanges-url|recentchanges&lt;br /&gt;
** randompage-url|randompage&lt;br /&gt;
** helppage|help-mediawiki&lt;br /&gt;
* SEARCH&lt;br /&gt;
* TOOLBOX&lt;br /&gt;
* LANGUAGES&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
		
	</entry>
	<entry>
		<id>https://aic.ufal.mff.cuni.cz/index.php?title=Submitting_CPU_Jobs&amp;diff=7</id>
		<title>Submitting CPU Jobs</title>
		<link rel="alternate" type="text/html" href="https://aic.ufal.mff.cuni.cz/index.php?title=Submitting_CPU_Jobs&amp;diff=7"/>
		<updated>2019-06-04T13:51:09Z</updated>

		<summary type="html">&lt;p&gt;Admin: Created page with &amp;quot;== Documentation ==&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Documentation ==&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
		
	</entry>
	<entry>
		<id>https://aic.ufal.mff.cuni.cz/index.php?title=MediaWiki:Sidebar&amp;diff=6</id>
		<title>MediaWiki:Sidebar</title>
		<link rel="alternate" type="text/html" href="https://aic.ufal.mff.cuni.cz/index.php?title=MediaWiki:Sidebar&amp;diff=6"/>
		<updated>2019-06-04T13:50:26Z</updated>

		<summary type="html">&lt;p&gt;Admin: test adding sidebar item&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
* navigation&lt;br /&gt;
** mainpage|mainpage-description&lt;br /&gt;
** documentation|doc-description&lt;br /&gt;
** recentchanges-url|recentchanges&lt;br /&gt;
** randompage-url|randompage&lt;br /&gt;
** helppage|help-mediawiki&lt;br /&gt;
* SEARCH&lt;br /&gt;
* TOOLBOX&lt;br /&gt;
* LANGUAGES&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
		
	</entry>
	<entry>
		<id>https://aic.ufal.mff.cuni.cz/index.php?title=File:Logo_ufal_110.png&amp;diff=2</id>
		<title>File:Logo ufal 110.png</title>
		<link rel="alternate" type="text/html" href="https://aic.ufal.mff.cuni.cz/index.php?title=File:Logo_ufal_110.png&amp;diff=2"/>
		<updated>2019-05-30T12:24:33Z</updated>

		<summary type="html">&lt;p&gt;Admin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
		
	</entry>
</feed>