<div style='text-align: center;'>CZ.02.2.69/0.0/0.0/17_044/0008562</div>
<div style='text-align: center;'>Podpora rozvoje studijního prostředí na Univerzitě Karlově - VRR</div>
[[File:OP_VVV_logo.jpg|frameless|center|upright=2.5]]

== Welcome to AIC ==

AIC (Artificial Intelligence Cluster) is a computational grid with enough CPU and GPU capacity for research in the field of [https://en.wikipedia.org/wiki/Deep_learning deep learning]. It is built on top of the [https://slurm.schedmd.com/ SLURM] scheduling system. MFF Bc. and Mgr. students can use it to run their experiments and learn the proper ways of grid computing in the process.

=== Access ===

AIC is dedicated to UFAL students, who are given an account when requested by an authorized lecturer.
 
To change your password, use the password manager at https://aic.ufal.mff.cuni.cz/pw-manager

There is a limit on the resources that one user in the group '''students''' can have allocated at any given time. By default, this is a maximum of 4 CPUs and 1 GPU.
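
A quick way to check what you currently have allocated is to list your own jobs; the commands below are standard SLURM, nothing AIC-specific is assumed:

 # list my currently running and pending jobs, including the number of allocated CPUs
 squeue -u $USER -o "%.10i %.9P %.15j %.2t %.10M %.4C %R"
 # detailed view of a single job (the job ID is only an example)
 scontrol show job 12345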
=== JupyterLab ===

AIC also provides a JupyterLab portal running on top of your AIC account and HOME directory. It is available at https://aic.ufal.mff.cuni.cz/jlab . Pre-installed extensions: R, IPython, RStudio (community), Slurm Queue Manager.

=== Connecting to the Cluster (directly) ===
Use SSH to connect to the cluster:
  ssh LOGIN@aic.ufal.mff.cuni.cz
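
Optionally, an entry in your local <code>~/.ssh/config</code> lets you connect with just <code>ssh aic</code> (LOGIN stands for your cluster username):

 Host aic
     HostName aic.ufal.mff.cuni.cz
     User LOGIN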
  
 
=== Basic HOWTO ===

The following HOWTO provides only a simplified overview of cluster usage. It is strongly recommended to read further documentation ([[Submitting_CPU_Jobs|CPU]] or [[Submitting_GPU_Jobs|GPU]]) before running serious experiments. Serious experiments also tend to need more resources; to avoid unexpected failures, please make sure your [[Quotas|quota]] is not exceeded.

'''Rule 0: NEVER RUN JOBS DIRECTLY ON THE aic.ufal.mff.cuni.cz HEADNODE. Use <code>srun</code> to get a shell on a computational node!'''
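
For example, an interactive shell on a compute node can be requested roughly as follows (a minimal sketch using the default <code>cpu</code> partition; adjust the resources to your needs):

 srun -p cpu --mem=4G --pty bash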
  
 
Suppose we want to run some computations described by a script called <code>job_script.sh</code>:
  #!/bin/bash
  #SBATCH -J helloWorld          # name of job
  #SBATCH -p cpu                 # name of partition or queue (default is cpu)
  #SBATCH -o helloWorld.out      # name of output file for this submission script
  #SBATCH -e helloWorld.err      # name of error file for this submission script
  # run my job (some executable)
  sleep 5
  echo "Hello I am running on cluster!"
  
We need to ''submit'' the job to the cluster, which is done by logging in to the submit host <code>aic.ufal.mff.cuni.cz</code> and issuing the command:<br>
<code>sbatch job_script.sh</code>

This will enqueue our ''job'' to the default ''partition'' (or ''queue''), which is <code>cpu</code>. The scheduler decides which particular machine in the specified partition has the ''resources'' needed to run the job. Typically we will see a message telling us the ID of our job (3 in this example):

 Submitted batch job 3
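
Once the job has finished, its standard output ends up in the file given by <code>-o</code> (and errors in the <code>-e</code> file). With the example script above, it should contain the echoed line:

 cat helloWorld.out
 # expected output: Hello I am running on cluster!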

The options used in this example are specified inside the script using the ''#SBATCH'' directive. Any option can be specified either in the script or as a command-line parameter (see ''man sbatch'' for details).
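
Equivalently, the same job could be submitted with the options given on the command line instead of inside the script; a sketch mirroring the example above:

 sbatch -J helloWorld -p cpu -o helloWorld.out -e helloWorld.err job_script.sh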
  
We can pass custom arguments to the script as environment variables, specified '''before''' the name of the script:

 sbatch --export=ARG1='firstArg',ARG2='secondArg' job_script.sh

These can be accessed in the job script as <code>$ARG1</code> and <code>$ARG2</code>.
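
Inside the job script they can be used like any other environment variables; a hypothetical snippet that could be appended to the example script above:

 echo "first argument:  $ARG1"
 echo "second argument: $ARG2"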