scheduler_perf: Allow each test case to specify its timeout #124827

Closed
utam0k opened this issue May 12, 2024 · 7 comments
Labels
kind/feature: Categorizes issue or PR as related to a new feature.
needs-triage: Indicates an issue or PR lacks a `triage/foo` label and requires one.
sig/scheduling: Categorizes an issue or PR as relevant to SIG Scheduling.

Comments

utam0k (Member) commented May 12, 2024

/kind feature
/sig scheduling

what

Allow each test case to specify its own timeout. Currently, the timeout is hard-coded to 30 minutes:

// 30 minutes should be plenty enough even for the 5000-node tests.
timeout := 30 * time.Minute

why

I admit that 30 minutes is enough for the default scheduler with the in-tree scheduler plugins, but scheduler_perf is also useful for other schedulers built with out-of-tree plugins. In that case, more than 30 minutes might be required to complete a test case.
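
To make the proposal concrete, here is a minimal sketch of what a per-test-case override could look like, keeping the current 30-minute value as the default. This is not the actual scheduler_perf test case definition; the testCase fields and the timeoutFor helper below are illustrative assumptions.

package benchmark // assumed package name, for illustration only

import "time"

// testCase is a simplified stand-in for a scheduler_perf test case;
// only the fields relevant to this sketch are shown.
type testCase struct {
	Name string
	// TimeoutSeconds, if set, overrides the global timeout for this
	// test case. Zero means "use the default".
	TimeoutSeconds int
}

// timeoutFor returns the timeout to apply to a single test case,
// falling back to the current hard-coded 30-minute default.
func timeoutFor(tc testCase) time.Duration {
	if tc.TimeoutSeconds > 0 {
		return time.Duration(tc.TimeoutSeconds) * time.Second
	}
	// 30 minutes should be plenty enough even for the 5000-node tests.
	return 30 * time.Minute
}

The same mechanism would also allow a shorter timeout for test cases that are expected to finish quickly.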

k8s-ci-robot added the kind/feature, sig/scheduling, and needs-triage labels on May 12, 2024
k8s-ci-robot (Contributor) commented:
This issue is currently awaiting triage.

If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

utam0k (Member, Author) commented May 12, 2024

@kubernetes/sig-scheduling-approvers I'd be happy to contribute to this issue if it is acceptable.

kerthcet (Member) commented:
Use case comes first; do you have a real use case here?

sanposhiho (Member) commented May 13, 2024

Agree with @kerthcet. If you want to define a test case that takes more than 30 minutes to finish, that feels like something undesirable.

Additionally, even if there actually is such a use case (say, one that needs ~1 hour), couldn't we just extend the global timeout to 1 hour? I don't see any need for a per-test timeout.

utam0k (Member, Author) commented May 14, 2024

Use case comes first; do you have a real use case here?

That point is very important. Let me share my experience. Unfortunately, I know of a scheduler with fairly poor throughput, roughly an order of magnitude slower. A co-scheduling plugin, for example, is the kind of thing that can significantly reduce throughput. Still, we would like to verify throughput and check for deadlocks with a large number of nodes (possibly as an endurance test), and scheduler_perf is currently the appropriate tool for that. However, such a run can take more than 30 minutes, and if preemption is also involved, that becomes even more likely.

Additionally, even if there actually is such a use case (say, one that needs ~1 hour), couldn't we just extend the global timeout to 1 hour? I don't see any need for a per-test timeout.

Certainly, that could work. However, test cases with the fast label, for example, are expected to fail quickly when something goes wrong; for those, waiting an hour for a deadlock or something similar seems a bit wasteful.

@kerthcet @sanposhiho Given your extensive experience, I would appreciate hearing your thoughts on this. 🙏

sanposhiho (Member) commented May 15, 2024

Even if the scheduler is 10 times slower (= 30 pods/s throughput), it can theoretically handle 54,000 scheduling cycles within 30 minutes.
Did your test actually fail because of the 30-minute timeout? For example, could your test case have contained a mistake that left some Pods unschedulable forever?
Hitting the timeout sounds too slow (as long as you used an appropriately sized test case), even if your test involves preemption.
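
For reference, the arithmetic behind the 54,000 figure (the ~300 pods/s baseline is implied by "10 times slower", not stated explicitly in the thread):

30 pods/s * 1,800 s (30 min) = 54,000 scheduling cycles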

utam0k (Member, Author) commented May 15, 2024

Did your test actually fail because of the 30-minute timeout?

No, it didn't. It's fair to reopen this if I actually run into that situation. Thanks for your input.

utam0k closed this as completed May 15, 2024