This page discusses the different computing clusters in general. See [Cluster/Queue system](Cluster/Queue-system) for a discussion of how to use the queue system efficiently.

# See also
* [Cluster/Data transfer](Cluster/Data%20transfer): moving data effectively from clusters to the local machine and back
* [Cluster/Ssh keys](Cluster/Ssh%20keys): log in without typing your password
# Local

## Puck and Oberon (CPU)
* If you need access, contact [Vesa Apaja](https://www.jyu.fi/fi/henkilot/vesa-apaja)
* No quota, but check that you do not use all the disk space yourself!
* All data files should be saved in `/n/work/username`
* Check how much disk space your files take: `du -hs /n/work/username`
* Check how much disk space is available: `df -h /n/work/username`
* The same disks are visible on both puck and oberon
* There is also a "scratch" disk area that everyone can access: `/n/scratch/`. It is not backed up, but is suitable for temporary data. Just create your own folder there.
* Puck has 24 CPUs per compute node, Oberon has 40. Keep that in mind if you use a code with OpenMP parallelization, where the maximum number of CPUs is the number of CPUs per node. Preferably allocate a whole node or a small part of one, but don't allocate e.g. 35/40 CPUs: it is then difficult for the system to fill the remaining CPUs with other jobs.
* When using OpenMP parallelization, remember to force the correct number of CPUs: `OMP_NUM_THREADS=10 ./program_for_which_I_have_reserved_10_CPUs`
* Always remember to check that you use as many CPUs as you reserve
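The disk checks above can be collected into a small housekeeping routine. A sketch, run here against `$HOME` and a temporary directory so it works anywhere; on puck/oberon substitute the paths named on this page, `/n/work/$USER` and `/n/scratch/$USER`:

```shell
# Routine disk housekeeping. Stand-in paths are used so the sketch runs
# anywhere; the real cluster paths are noted in the comments.
WORK="$HOME"                    # on the cluster: /n/work/$USER
SCRATCH="$(mktemp -d)/scratch"  # on the cluster: /n/scratch/$USER

mkdir -p "$SCRATCH"   # create your own scratch folder (not backed up!)
du -hs "$WORK"        # how much space your files take
df -h "$WORK"         # how much space is left on that disk
```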
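One way to check the last point: `ps` can report how many threads a process is actually running (the `nlwp` column), which should match the number of CPUs you reserved. A sketch, using this shell's own PID just so it is runnable; on the cluster, use your program's PID (e.g. from `top` or `pgrep`):

```shell
# nlwp = number of light-weight processes (threads) the process has.
# $$ is this shell's own PID; substitute your running program's PID.
ps -o pid,nlwp,comm -p $$
```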
The queue system needs a _submission script_ that specifies the resources your job needs and tells the system how to run your program. In the simplest case, the submission script can simply be
```bash
#!/bin/bash
#SBATCH -n 1
#SBATCH -t 0-01:00:00
./program > outputfile
```
This reserves one CPU for your code, and the code can run for one hour. If the script above is saved in the file `submit.sh`, you can submit the job to the queue by running
```bash
sbatch submit.sh
```
Results (everything the code prints) are saved in the file `outputfile`. For more details, see [Queue-system](Cluster/Queue-system).
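Combining this with the OpenMP advice above, a submission script for a 10-thread OpenMP job might look like the sketch below. Here `-c` asks Slurm for CPUs per task, and Slurm exports the granted count as `SLURM_CPUS_PER_TASK`; `./program` is a placeholder for your own executable:

```bash
#!/bin/bash
#SBATCH -n 1           # one task...
#SBATCH -c 10          # ...with 10 CPUs reserved for it
#SBATCH -t 0-01:00:00  # one hour
# Use exactly as many threads as were reserved:
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
./program > outputfile
```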
## GPU