* Connect using ssh: `ssh puck.it.jyu.fi` or `ssh oberon.it.jyu.fi`
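For convenience you can add host aliases so that plain `ssh puck` or `ssh oberon` works; a sketch of an `~/.ssh/config` entry, where `myuser` is a placeholder for your own university username:

```
Host puck oberon
    HostName %h.it.jyu.fi
    User myuser
```

The `%h` token expands to the matched host alias, so one entry covers both machines.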
* If you need access, contact [Vesa Apaja](https://www.jyu.fi/fi/henkilot/vesa-apaja)
* There is no quota, but check that you do not use all the disc space yourself!
* All data files should be saved in `/n/work/username`
    * Check how much disc space your files take: `du -hs /n/work/username`
    * Check how much disc space is available: `df -h /n/work/username`
    * The same discs are visible on both puck and oberon.
    * There is also a "scratch" disc space that everyone can access: `/n/scratch/`. It is not backed up, but it is suitable for temporary data; just create your own folder there.
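The two space checks above can be combined into a small helper; a sketch in POSIX shell, where the directory defaults to the current one so the snippet runs anywhere. On the cluster, pass `/n/work/username` (with your own username) as the argument.

```shell
#!/bin/sh
# Report how much space a directory's files take and how much is left
# on the disc that holds it. Defaults to the current directory; on
# puck/oberon pass your own work directory instead.
dir="${1:-.}"
used=$(du -hs "$dir" 2>/dev/null | cut -f1)      # total size of your files
avail=$(df -h "$dir" | awk 'NR==2 {print $4}')   # free space on the disc
echo "$dir: using $used, $avail available"
```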
* Puck has 24 CPUs per computing node; Oberon has 40. Keep this in mind if your code uses OpenMP parallelization, where the maximum number of CPUs is the number of CPUs per node. Preferably allocate a whole node, or a small part of one. Do not allocate e.g. 35 of 40 CPUs: it is then difficult for the system to fill the remaining CPUs with other jobs.
    * When using OpenMP parallelization, remember to force the correct number of CPUs: `OMP_NUM_THREADS=10 ./program_for_which_I_have_reserved_10_CPUs`
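Exporting the variable once per job avoids forgetting it on the command line. A sketch, where `RESERVED=10` and `./my_program` are placeholders for your own reservation size and binary:

```shell
#!/bin/sh
# Pin the OpenMP thread count to what was reserved, instead of letting
# the runtime default to every CPU on the node.
RESERVED=10                         # placeholder: CPUs you reserved
export OMP_NUM_THREADS="$RESERVED"
# Sanity check: warn if the request exceeds the CPUs on this node.
if [ "$OMP_NUM_THREADS" -gt "$(nproc)" ]; then
    echo "warning: $OMP_NUM_THREADS threads requested, node has $(nproc) CPUs" >&2
fi
echo "OMP_NUM_THREADS=$OMP_NUM_THREADS"
# ./my_program                      # placeholder: the OpenMP binary itself
```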
* Always remember to check that you actually use as many CPUs as you have reserved.
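On Linux the per-process thread count is visible in `/proc/<pid>/status`, and `top` or `ps -o pid,nlwp,pcpu,comm -p <pid>` show live CPU use. A sketch that inspects the current shell (`$$`) as a stand-in; on the cluster, substitute your job's PID, e.g. from `pgrep -u "$USER" my_program` (`my_program` is a placeholder):

```shell
#!/bin/sh
# Check how many threads a process is actually running. $$ (this shell)
# is a stand-in PID; replace it with your job's PID on the cluster.
pid=$$
threads=$(awk '/^Threads:/ {print $2}' "/proc/$pid/status")
echo "process $pid is running $threads thread(s)"
```

If the thread count stays well below the number of CPUs you reserved, shrink the reservation or fix the program's thread settings.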