
Slave-only spot instances #82

Open
nchammas opened this issue Feb 14, 2016 · 7 comments · May be fixed by #335
Comments

@nchammas
Owner

Flintrock supports launching clusters using spot instances. However, when doing so, all the cluster instances are launched as spot instances.

Generally, though, you don't want the master to be a spot instance. You want it to stick around even if various slaves are dropped so that your job can still chug along (though at a slower pace) and so that you can re-add slaves to the cluster at a later time.

We should change our support for launching clusters on spot instances such that only the slaves are spot instances.

This work (kinda) depends on #16.

@engrean
Contributor

engrean commented Feb 26, 2016

@nchammas would this solve the problem where, when launching a spot instance cluster, one of the nodes gets terminated and the launch command just hangs forever? Or should I submit a new issue for that case?

@nchammas
Owner Author

@engrean - Have you seen this issue since d5b086c made it in? Flintrock should now immediately error out as soon as a spot request fails.

You'll need to be running on master to get this fix.

If you're still seeing the issue, then yes, please submit a new issue so we can look into it. I'd be curious to see exactly where the launch hangs, for example.

@sylvinus

Would love to have this feature. It would be great to be able to specify a different instance type for the master as well.

@ktdrv

ktdrv commented Aug 1, 2017

Would it make sense to separate the add-slaves logic from launch? In other words, the workflow would be to create a master instance only first and then add all the slaves of any type you want to it with a second call.

@DBCerigo

DBCerigo commented Oct 4, 2017

Seconding the "would love this feature", and even more so, being able to specify different master and slave instance types.

@mblackgeo
Contributor

@nchammas is there any progress on this?

@nchammas
Owner Author

@mblack20 - Nope. The current workaround, I believe, is to launch a regular cluster with no slaves, and then add spot slaves separately using add-slaves.
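The workaround above might look roughly like the following. This is a sketch, not a confirmed recipe: whether `launch` accepts `--num-slaves 0`, and whether `add-slaves` accepts `--ec2-spot-price`, depends on your Flintrock version, and the cluster name, instance type, region, and bid price here are placeholder values. Check `flintrock launch --help` and `flintrock add-slaves --help` for what your version supports.

```shell
# Step 1: launch the cluster with only an on-demand master, no slaves.
# (Assumes this Flintrock version allows --num-slaves 0.)
flintrock launch my-cluster \
    --num-slaves 0 \
    --ec2-instance-type m5.xlarge \
    --ec2-region us-east-1

# Step 2: add the slaves separately as spot instances.
# (Assumes add-slaves supports --ec2-spot-price in this version.)
flintrock add-slaves my-cluster \
    --num-slaves 4 \
    --ec2-spot-price 0.10
```

Because the master is launched on demand in step 1, it survives spot terminations among the slaves, and step 2 can be re-run later to replace any slaves that were reclaimed.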

@luhhujbb luhhujbb linked a pull request Feb 23, 2021 that will close this issue