To use a cloud batch system you login to the "interactive nodes".
For the NeCTAR tier 3 cloud the nodes are //cxin01//, //cxin02//, //cxin03// and //cxin04//.
You login this way:
<code>
ssh -Y <user_name>@cxin01.cloud.coepp.org.au
ssh -Y <user_name>@cxin02.cloud.coepp.org.au
ssh -Y <user_name>@cxin03.cloud.coepp.org.au
ssh -Y <user_name>@cxin04.cloud.coepp.org.au
</code>
The above nodes are at Melbourne.
  
The interactive nodes are used to submit jobs to the cloud and also for interactive use.
  
When your job runs in the cloud it does __not__ have access to your home directory.
A directory **/data/<user_name>** is available to you on both the interactive nodes and all the batch worker nodes. You are now recommended to use CephFS at **/coepp/cephfs** instead.
You must place your executable files and any input data required under your **/data/<user_name>** or **/coepp/cephfs** directory before
submitting a batch job.  Similarly, any output files written by your batch job will be under your **/data/<user_name>** or **/coepp/cephfs** directory.
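For example, before submitting you could stage an executable and its input from your home directory. This is a minimal sketch: the //fib// executable matches the example used later on this page, while //input.dat// and the per-user subdirectory layout under **/coepp/cephfs** are illustrative assumptions, so adjust the paths to your own setup.
<code>
# stage files onto /data, which the batch worker nodes can see
mkdir -p /data/<user_name>/fib_job
cp ~/fib ~/input.dat /data/<user_name>/fib_job/

# or stage them onto CephFS instead (per-user subdirectory assumed)
mkdir -p /coepp/cephfs/<user_name>/fib_job
cp ~/fib ~/input.dat /coepp/cephfs/<user_name>/fib_job/
</code>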
  
====== Software Management ======
There are two batch queues in the current system: **short** and **long**.  Each queue has limits
on CPU time and wall clock time (walltime).
The default walltime limits for the batch queues are:
^  queue      ^  walltime                 ^
|  short      |  maximum 1 hour           |
|  long       |  default maximum 7 days   |
If your batch jobs exceed the walltime limit they will be terminated.
  
You can specify which queue to run on when you submit your job:
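For example (a minimal sketch, assuming the PBS-style //qsub// command implied by the **#PBS** directives used later on this page, with //fib.pbs// as an illustrative batch file name):
<code>
# submit to the long queue; without -q the job lands on the short queue
qsub -q long /data/<user_name>/fib.pbs
</code>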
That didn't matter for our little //fib// example as the required CPU time is very short.
But for your longer running jobs you must consider your required times since if
you don't specify a queue, you run on the **short** queue and get a maximum of one hour of walltime.
If your job requires more than one hour of walltime you should submit to the **long** queue (shown above).
  
If you need more than five hours of walltime (for instance), there are ways to request extended limits using
**batch parameters**.  Suppose we had a very inefficient method to compute
Fibonacci 30 that is expected to run for ten CPU hours.  We would need a batch file like this:
<code>
#PBS -l walltime=10:00:00
/data/smith/fib 30
</code>
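If that batch file were saved as, say, **/data/smith/fib.pbs** (an illustrative name), it would then be submitted to the **long** queue in the usual way, again assuming the PBS-style //qsub// command:
<code>
qsub -q long /data/smith/fib.pbs
</code>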