
01 March 2013

349. SGE: removed node while jobs were queued

The Problem
There's a cluster (running ROCKS with Sun Grid Engine) which I manage remotely and which I did not set up. Instead it was the IT people at that uni who first configured it. For some reason they named the nodes
compute-0-0.local
compute-0-1.local
compute-0-2.local
compute-0-3.local
compute-0-6.local
compute-0-7.local

Recently a few extra disks were added to the system, so all jobs were suspended. However, while installing the disks, the local IT peep decided to change the node names without consulting us. Now the nodes were called

compute-0-0.local
compute-0-1.local
compute-0-2.local
compute-0-3.local
compute-0-4.local
compute-0-5.local

instead. Suddenly there were two node queues with jobs in them, but no corresponding nodes. Trying to delete the jobs in those queues only led to:

all.q@compute-0-5.local        BIP   0/8/8          9.12     lx26-amd64    
   5142 0.55500 submit__v3 me         r     02/27/2013 15:02:11     8        
---------------------------------------------------------------------------------
all.q@compute-0-6.local        BIP   0/8/8          -NA-     lx26-amd64    auo
   5074 0.55500 submit__nb me         dr    02/02/2013 21:53:59     8      

The Solution
It wasn't immediately obvious how to fix this (the dr state means the jobs had been marked for deletion, but with the execution host gone there was nothing left to actually reap them), but it turned out to be simple:
qconf -cq all.q@compute-0-6.local

That clears and deletes the queue. That's all.
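
For future reference, this is roughly the sequence I'd go through next time a host vanishes with jobs still attached. The job id and host name below are just the ones from the qstat output above, and the forced qdel should only be needed if the stuck jobs refuse to go away by themselves:

qstat -f                            # queue instances on missing hosts show up with -NA- load
qdel -f 5074                        # force-delete the job stuck in the dr state
qconf -cq all.q@compute-0-6.local   # clean out the orphaned queue instance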

21 May 2012

158. Setting up ecce with qsub at An Australian University computational cluster

EDIT: this works for G09 on that particular cluster. Come back in a week or two for a more general solution (end of May 2012/beginning of June 2012).

I don't feel comfortable revealing where I work. But imagine that you end up working at an Australian university in, say, Melbourne. I do recognise that I'm giving enough information here to make it possible to identify who I am (and there are many reasons not to want to be identifiable -- partly because students can be mean and petty, and partly because I suffer from the delusion that IT rules apply to Other People, and not me -- and I have described ways of doing things you're not supposed to be doing on this blog).

Anyway.

My old write-ups of ecce are pretty bad, if not outright inaccurate. I'll presume that in spite of that you've managed to set up ECCE well enough to run jobs on the nodes of your local cluster.

Now it's time for the next level -- submitting jobs to a remote site using SGE/qsub.

So far I've only tried this out with G09 -- they are currently looking at setting up nwchem on the university cluster, and I'm not sure yet what the best way of handling the "#$ -pe g03_smp2 2" switch will be for nwchem.

--START HERE --

EVERYTHING I DESCRIBE IS DONE ON YOUR DESKTOP, NOT ON THE REMOTE SYSTEM. Sorry for shouting, but don't go a-messing with the remote computational cluster -- we only want to teach ecce how to submit jobs remotely. The remote cluster should be unaffected.

1. Creating the Machine
To set up a site with a queue manager, start
ecce -admin

Do something along the lines of what's shown in the figure above.

If you're not sure whether your qsub belongs to PBS or SGE, type qstat -help and look at the first line returned, e.g. SGE 6.2u2_1.
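
On an SGE cluster the check comes back looking something like this (the version string will of course be whatever your cluster happens to run):

$ qstat -help | head -1
SGE 6.2u2_1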

2. Configure the site
Now, edit your ecce-6.3/apps/siteconfig/CONFIG.msgln4 (local nodes go into ~/.ECCE but remote SITES go in apps/siteconfig -- and that's what we're working with here).

   NWChem: /usr/local/bin/NWCHEM
   Gaussian-03: /usr/local/bin/G09
   perlPath: /usr/bin/perl
   qmgrPath: /usr/bin/qsub
 
   SGE {
   #$ -S /bin/csh
   #$ -cwd
   #$ -l h_rt=$wallTime
   #$ -l h_vmem=4G
   #$ -j y
   #$ -pe g03_smp2 2

   module load gaussian/g09
    }
A word of advice -- open the file in vim (and save using :wq!) or do a chmod +w on it first, since it will be set to read-only by default.
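
In other words, something like this first (assuming ecce-6.3 sits in your home directory):

chmod +w ~/ecce-6.3/apps/siteconfig/CONFIG.msgln4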


3. Queue limits
The same goes for the next file, which controls various job limits, ecce-6.3/apps/siteconfig/msgln4.Q:
# Queue details for msgln4
Queues:    squ8

squ8|minProcessors:       2
squ8|maxProcessors:       6
squ8|runLimit:       4320
squ8|memLimit:       4000
squ8|scratchLimit:       0
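
If you don't know what to put for the queue name or the "#$ -pe" line, the read-only qconf listing commands will show you what's actually defined on the remote cluster -- this assumes you can ssh in and that SGE's qconf is on your path there; it only lists things and changes nothing:

qconf -sql    # lists the cluster queues, e.g. squ8
qconf -spl    # lists the parallel environments, e.g. g03_smp2
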
4. Connect
In the ecce launcher-mathingy click on Machine Browser, and Set Up Remote Access for the remote cluster. Basically, type in your user name and password.

Click on machine status to make sure that it's connecting.

5. Test it out!
If all is well, you should be good to go.