## Puck and Oberon (CPU)
* [Official instructions for using the local clusters](http://users.jyu.fi/%7Eveapaja/FGCI-instructions.JYU.txt)
* Connect using SSH: `ssh puck.it.jyu.fi` or `ssh oberon.it.jyu.fi`
* If you need access, contact [Vesa Apaja](https://www.jyu.fi/fi/henkilot/vesa-apaja)
* No quota, but make sure you do not use up all the disk space yourself!
* All data files should be saved in `/n/work0i/username`, where _i_ is 0,1,2,3,... and differs between users.
* Check how much disk space your files take: `du -hs /n/work01/username`
* Check how much disk space is available: `df -h /n/work01/username`
  * The same disks are visible on both puck and oberon
* Puck has 24 CPUs per computing node, Oberon has 40. Keep this in mind if your code uses OpenMP parallelization (where the maximum number of CPUs is the number of CPUs per node).
  * When using OpenMP parallelization, remember to force the correct number of CPUs: `OMP_NUM_THREADS=10 ./program_for_which_I_have_reserved_10_CPUs`
* Always check that you are actually using as many CPUs as you reserved.
Queue system: you need a _submission script_ that specifies the resources your job needs and tells the system how to run your program. In the simplest case, the submission script can simply be
```bash