|
|
|
This page gives a general overview of the different computing clusters. See [Cluster/Queue system](Cluster/Queue-system) for a discussion of how to use the queue system efficiently.
|
|
|
|
|
|
|
|
# Local
|
|
|
|
|
|
|
|
## Puck and Oberon (CPU)
|
|
|
|
|
|
|
|
* [Official instructions to use local clusters](http://users.jyu.fi/%7Eveapaja/FGCI-instructions.JYU.txt)
|
|
|
|
* Connect using ssh: `ssh puck.it.jyu.fi` or `ssh oberon.it.jyu.fi`
|
|
|
|
* If you need access, contact [Vesa Apaja](https://www.jyu.fi/fi/henkilot/vesa-apaja)
|
|
|
|
* No disc quota, but check that you are not using up all the disc space yourself!
|
|
|
|
* All data files should be saved in `/n/work/username` (replace `username` with your own username)
|
|
|
|
* * Check how much disc space your files take: `du -hs /n/work/username`
|
|
|
|
* * Check how much disc space is available: `df -h /n/work/username`
|
|
|
|
* * The same discs are visible on both puck and oberon
|
|
|
|
* * There is also a "scratch" disc space that everyone can access: `/n/scratch/`. It is not backed up, but it is suitable for temporary data; just create your own folder there.
|
|
|
|
* Puck has 24 CPUs per computing node; Oberon has 40. Keep this in mind if your code uses OpenMP parallelization, where the maximum number of CPUs is the number of CPUs per node. Preferably allocate either a whole node or a small part of one, but do not allocate e.g. 35/40 CPUs: it is then difficult for the system to fill the remaining CPUs with other jobs.
|
|
|
|
* * When using OpenMP parallelization, remember to force the correct number of threads: `OMP_NUM_THREADS=10 ./program_for_which_I_have_reserved_10_CPUs`
|
|
|
|
* Always check that your job actually uses as many CPUs as you reserved; one way to do this is sketched right below.
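A minimal sketch of one way to do this check once your job is running in the queue system (described below); it assumes you can log in to the compute nodes, and the node name is a placeholder:

```bash
# See which node your job is running on (NODELIST column)
squeue -u $USER

# Log in to that compute node; replace with the actual node name from squeue
ssh node-name-from-squeue

# Watch your processes: a job that reserved 10 CPUs should show roughly 1000% CPU
top -u $USER
```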
|
|
|
|
|
|
|
|
Jobs are run through a queue system: you need a _submission script_ that specifies the resources your job needs and tells the system how to run your program. In the simplest case, the submission script can be as simple as
|
|
|
|
```bash
#!/bin/bash
#SBATCH -n 1
#SBATCH -t 0-01:00:0
./program > outputfile
```
|
|
|
|
This reserves one CPU for your code and allows it to run for one hour. If the script above is saved in the file `submit.sh`, you can submit the job to the queue by running
|
|
|
|
```bash
sbatch submit.sh
```
|
|
|
|
The results (everything the code prints to standard output) are saved in the file `outputfile`. For more details, see [Queue-system](Cluster/Queue-system).
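For an OpenMP job on puck or oberon, here is a sketch of a submission script that reserves 10 CPUs on a single node and passes the reserved count to the program; `./program` is a placeholder as above:

```bash
#!/bin/bash
#SBATCH -N 1                  # all CPUs on one node, as OpenMP requires
#SBATCH --cpus-per-task=10    # reserve 10 CPUs for the single task
#SBATCH -t 0-01:00:00         # run for at most one hour

# Force OpenMP to use exactly the reserved CPUs
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
./program > outputfile
```

Submission works exactly as above: `sbatch submit.sh`.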
|
|
|
|
|
|
|
|
## GPU
|
|
|
|
|
|
|
|
* GPU machines are available from the university IT services; ask Dana for details.
|
|
|
|
|
|
|
|
# CSC
|
|
|
|
|
|
|
|
Our CSC project manager is [Harri Niemi](https://www.jyu.fi/fi/henkilot/harri-niemi). To get access, first create a CSC account at my.csc.fi (log in via Haka with your JYU account), then let Harri know your CSC username.
|
|
|
|
|
|
|
|
CSC has two clusters, Puhti and Mahti. Use Puhti for smaller jobs and Mahti only if you can really take advantage of a large number of threads. If unsure, use Puhti.
|
|
|
|
|
|
|
|
Example submission script (1-CPU job); note that on the CSC machines you need to specify the account and the partition:
|
|
|
|
|
|
|
|
```bash
#!/bin/bash
#SBATCH -n 1
#SBATCH -t 0-01:00:0
#SBATCH -p small
#SBATCH --account=lappi
./program
```
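After submitting, the job can be followed with the usual Slurm commands; a short sketch, where the job ID is a placeholder:

```bash
sbatch submit.sh      # submit the job; prints the job ID
squeue -u $USER       # list your queued and running jobs
scancel 1234567       # cancel a job (example job ID)
seff 1234567          # CPU and memory efficiency summary once the job has finished
```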
|
|
|
|
|
|
|
|
Resources:
|
|
|
|
|
|
|
|
- [Puhti and Mahti partitions](https://docs.csc.fi/computing/running/batch-job-partitions/)
|
|
|
|
- [Puhti and Mahti storage](https://docs.csc.fi/computing/disk/): use _/scratch/lappi_ for data and _/projappl/lappi_ for code. Note that scratch is cleaned automatically: files older than 180 days are removed (a sketch for spotting files close to this limit follows below).
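Since scratch is cleaned automatically, it is worth checking occasionally which files are approaching the limit. A minimal sketch, assuming your data sits in a personal subdirectory of _/scratch/lappi_ (adjust the path to your own layout):

```bash
# List files under the project scratch that have not been modified in over 170 days,
# i.e. files approaching the 180-day automatic cleanup; the path is an assumption.
find /scratch/lappi/$USER -type f -mtime +170
```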
|
|
|
|
|
|
|
|
In addition to the CPU quota, we have a limited amount of disc space, and the number of files is also limited. To see the current usage and limits, run
|
|
|
|
|
|
|
|
```bash
csc-workspaces
```
|
|
|
|
|
|
|
|
Use the following command to count the number of files in each subdirectory of the current directory:
|
|
|
|
|
|
|
|
```bash
find . -maxdepth 1 -type d -print0 | xargs -0 -I {} sh -c 'echo $(find "{}" | wc -l) "{}"' | sort -n
```
|
|
|
|
|
|
|
|
Do not run this too often: it creates a lot of disc I/O activity, which the CSC admins presumably do not appreciate. Use it only when you need to figure out whether you are hitting our file-count quota.
|
|
\ No newline at end of file |