More flexible Job listing/killing for Slurm #173

Open · ja-thomas wants to merge 2 commits into mllg:master from ja-thomas:master

Conversation

ja-thomas
Contributor

We frequently change clusters/partitions on our HPC, and setting the clusters in makeClusterFunctionsSlurm() is not really practical (multiple users link to the same template/config file).

So we handle the clusters/partitions via resources. To make listing/killing of jobs possible, we need to set the squeue arguments in the functions accordingly.

This is not really nice, but I can't think of another solution. As far as I know, you can't change the clusters argument of makeClusterFunctionsSlurm() after or while creating the registry.
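
A minimal sketch of the idea (the resource field names res$clusters and res$partition are illustrative here, not necessarily the exact ones used in this PR):

  # Sketch: derive the squeue arguments from per-job resources instead of a
  # fixed `clusters` value set in the constructor.
  getSqueueArgs = function(res) {
    args = c("-h", "-o %i", "-u $USER")
    if (!is.null(res$clusters))
      args = c(args, sprintf("--clusters=%s", res$clusters))
    if (!is.null(res$partition))
      args = c(args, sprintf("--partition=%s", res$partition))
    args
  }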

@coveralls

Coverage Status

Coverage remained the same at 93.744% when pulling eda37ea on ja-thomas:master into 5c008ee on mllg:master.

@mllg
Owner

mllg commented Feb 5, 2018

Can you call squeue without specifying --clusters? Can there be duplicated job.ids if you query multiple clusters?

@ja-thomas
Contributor Author

No, squeue without the clusters argument will always return no jobs (the partition argument is optional and only needed if a specific partition is actually used, i.e. there is a default partition but no default cluster).

There are no duplicated job ids as far as I know. I can double-check, but until now the job.id was always a unique identifier across all clusters.

@mllg
Owner

mllg commented Feb 5, 2018

How about this approach:

  • You specify a comma-separated list of clusters in the constructor.
  • For submitJobs you select one of the clusters via a resource.
  • The job listing functions iterate over all available clusters and return the union of all job ids (sketched below). Job ids are later matched against the job ids in the database, so it is okay if the cluster functions return a superset here. But duplicated ids would lead to inconsistencies.
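
A minimal sketch of that iteration, assuming a character vector clusters from the constructor and batchtools' runOSCommand helper (the function name here is made up):

  listJobsAllClusters = function(reg, clusters) {
    ids = lapply(clusters, function(cl) {
      args = c("-h", "-o %i", "-u $USER", sprintf("--clusters=%s", cl))
      runOSCommand("squeue", args)$output
    })
    unique(unlist(ids))  # union over all clusters; duplicated ids would break this
  }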

@mllg
Owner

mllg commented Feb 5, 2018

Ah wait, this does not work for killJobs() 😞

@berndbischl
Collaborator

@mllg why don't you simply expose the "args" from listJobs in the constructor, with your settings as the default?

And then users can override this flexibly? Isn't that the normal trick? And it changes nothing for anybody else or for the internal code?

@berndbischl
Collaborator

this here:

  listJobsQueued = function(reg) {
    args = c("-h", "-o %i", "-u $USER", "-t PD", sprintf("--clusters=%s", clusters))

just expose this as args.listjobsqueued (or whatever), with the current vector as the default?
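
Something like this, as a sketch (the signature is illustrative, not the actual batchtools API):

  makeClusterFunctionsSlurm = function(template = "slurm", clusters = NULL,
      args.listjobsqueued = c("-h", "-o %i", "-u $USER", "-t PD",
        if (!is.null(clusters)) sprintf("--clusters=%s", clusters))) {
    listJobsQueued = function(reg) {
      # the user-supplied vector replaces the hard-coded one
      system2("squeue", args.listjobsqueued, stdout = TRUE)
    }
    # ... remaining cluster functions as before
  }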

@mllg
Owner

mllg commented Mar 15, 2018

#179

Can you please comment on whether

  1. this is now flexible enough for you guys,
  2. we still need the clusters argument,
  3. this PR is now obsolete.

@ja-thomas
Contributor Author

I don't think this helps.
The problem is that the args are evaluated (at least when we make them optional with sprintf) when the cluster functions are created, not when they are actually called.
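
For illustration (variable and function names made up), the difference between baking the value in at creation time and reading it at call time:

  clusters = "clusterA"
  args = sprintf("--clusters=%s", clusters)      # fixed when the functions are created

  listJobsEager = function(reg) args             # always queries "clusterA"
  listJobsLazy = function(reg, res) sprintf("--clusters=%s", res$clusters)  # reads per-job resources at call time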

I hope we don't need the clusters argument anymore if we get that to work.

I think I'll take this rather ugly fix here and keep it as clusterFunctionsSlurmLRZ or whatever in the config repository for our project on the LRZ, since all cluster users link against my batchtools.conf file anyway...

The perfect solution for us would be for clusters + partitions to be resources that can be set on the job level (which is already possible, I think), and to have the listing/killing calls take the values from there.

@mllg
Owner

mllg commented Mar 16, 2018

#180 ?

@mllg
Owner

mllg commented Mar 16, 2018

I could do the same thing for partitions, but I really don't know what I'm doing. 😕

@lcomm

lcomm commented Mar 21, 2018

This does not solve the original cluster/partition issue, but @berndbischl's suggestion of exposing the arguments would solve a problem I am encountering where all of my SLURM jobs show up as expired until done. My computing cluster has its own version of squeue (see here), but it only recognizes --noheader and not -h as assumed by the listJobs functions. Allowing users to tweak the listJobsQueued and listJobsRunning args would make it easier to use with nonstandard SLURM configurations.
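
With the arguments exposed, this would be a one-line override, e.g. (parameter name taken from the sketch above, not the final batchtools API):

  cf = makeClusterFunctionsSlurm(
    args.listjobsqueued = c("--noheader", "-o %i", "-u $USER", "-t PD"))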

@mllg
Owner

mllg commented Mar 22, 2018

@lcomm The arguments will be exposed in the next version of batchtools; I'm just waiting for some feedback on #180 first.

@ja-thomas @berndbischl

@mllg
Owner

mllg commented Mar 22, 2018

OK, it looks like --noheader is now supported by rc-squeue. I've changed the Slurm cluster functions to always use the long command-line arguments nevertheless.
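
For reference, the long forms of the options used by the listing functions look roughly like this (squeue accepts --noheader, --format, --user, and --states as long variants of -h, -o, -u, and -t):

  clusters = "clusterA"  # example value
  args = c("--noheader", "--format=%i", "--user=$USER", "--states=PD",
    sprintf("--clusters=%s", clusters))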
