
HPA scale-in: is pod resource usage taken into account to prioritize pods to be deleted? #124821

Open
remiville opened this issue May 11, 2024 · 4 comments
Labels
needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. sig/autoscaling Categorizes an issue or PR as relevant to SIG Autoscaling.

Comments

@remiville

Hello,

I know that under node resource pressure, the kubelet prioritizes which pods to evict; for example, under node memory pressure, pods using more memory than they requested are evicted first.

But for HPA scale-in, I am struggling to find whether and how pods are ranked to decide which ones get deleted.

While looking at the PodDeletionCost feature PR, I found this prioritization comment in the code: https://github.com/ahg-g/kubernetes/blob/2d88d2d993b36c231fdc7537212685544469c517/pkg/controller/controller_utils.go#L785

Does this mean that no resource usage is taken into account when the controller selects pods to be deleted during a scale-in event?

For example, if a scale-in occurs because CPU usage is lower than the threshold set in the HPA, the controller will select pods to be deleted without prioritizing them by their current CPU usage, right?
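
For context, my reading of that comment is that the ranking only looks at pod state (scheduled or not, ready or not, how long the pod has been ready, restart count, age) rather than live metrics. Here is a minimal sketch of that kind of ordering, using hypothetical simplified types rather than the actual controller code:

```go
// Sketch only: approximates the kind of state-based ranking described in the
// linked comment; it is NOT the real controller code and ignores several
// criteria (e.g. pod-deletion-cost, spreading across nodes, pod phase).
package main

import (
	"fmt"
	"sort"
	"time"
)

// podInfo is a hypothetical, simplified stand-in for corev1.Pod.
type podInfo struct {
	name         string
	assigned     bool      // scheduled onto a node
	ready        bool
	readySince   time.Time // when the pod became ready
	restartCount int
	created      time.Time
}

// lessForDeletion returns true if pod a should be deleted before pod b.
// Note that nothing here looks at CPU or memory usage.
func lessForDeletion(a, b podInfo, now time.Time) bool {
	if a.assigned != b.assigned {
		return !a.assigned // unscheduled pods go first
	}
	if a.ready != b.ready {
		return !a.ready // not-ready pods go before ready ones
	}
	if a.ready && b.ready && !a.readySince.Equal(b.readySince) {
		return now.Sub(a.readySince) < now.Sub(b.readySince) // ready for less time goes first
	}
	if a.restartCount != b.restartCount {
		return a.restartCount > b.restartCount // more restarts goes first
	}
	return a.created.After(b.created) // newer pod goes first
}

func main() {
	now := time.Now()
	pods := []podInfo{
		{name: "old-ready", assigned: true, ready: true, readySince: now.Add(-2 * time.Hour), created: now.Add(-3 * time.Hour)},
		{name: "young-ready", assigned: true, ready: true, readySince: now.Add(-5 * time.Minute), created: now.Add(-10 * time.Minute)},
		{name: "not-ready", assigned: true, ready: false, created: now.Add(-time.Hour)},
	}
	sort.Slice(pods, func(i, j int) bool { return lessForDeletion(pods[i], pods[j], now) })
	for _, p := range pods {
		fmt.Println(p.name) // deletion order: not-ready, young-ready, old-ready
	}
}
```

If that reading is correct, two ready pods with very different CPU usage would be ranked only by things like how long they have been ready, their restart counts and their age, which is exactly what I would like to have confirmed.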

@k8s-ci-robot k8s-ci-robot added needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels May 11, 2024
@k8s-ci-robot
Contributor

This issue is currently awaiting triage.

If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@HirazawaUi
Contributor

/sig autoscaling

@k8s-ci-robot k8s-ci-robot added sig/autoscaling Categorizes an issue or PR as relevant to SIG Autoscaling. and removed needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels May 12, 2024
@ishaankalra

Can I give it a try?

@HirazawaUi
Contributor

> Can I give it a try?

Need to wait for confirmation and triage from the sig autoscaling team.
