[Flaking Test][sig-storage] should block a second pod from using an in-use ReadWriteOncePod volume on the same node #124784
Labels
kind/flake
Categorizes issue or PR as related to a flaky test.
sig/storage
Categorizes an issue or PR as relevant to SIG Storage.
triage/accepted
Indicates an issue or PR is ready to be actively worked on.
Comments
eddiezane added the kind/flake label on May 10, 2024
k8s-ci-robot added the sig/storage and needs-triage labels on May 10, 2024
eddiezane changed the title from "[Flaking Test][sig-storage]" to "[Flaking Test][sig-storage] should block a second pod from using an in-use ReadWriteOncePod volume on the same node" on May 10, 2024
FYI: I used a GitHub Action to run some tests against the job. The failure reason is:
OutOfpods: Node didn't have enough resource: pods, requested: 1, used: 110, capacity: 110
/triage accepted
k8s-ci-robot added the triage/accepted label and removed the needs-triage label on May 15, 2024
Which jobs are flaking?
Which tests are flaking?
Kubernetes e2e suite: [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] read-write-once-pod [MinimumKubeletVersion:1.27] should block a second pod from using an in-use ReadWriteOncePod volume on the same node
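The test exercises the ReadWriteOncePod access mode, which restricts a volume to a single pod cluster-wide (enforced by the kubelet since v1.27, per the `MinimumKubeletVersion:1.27` tag in the test name). A minimal sketch of the kind of claim involved; the names and storage class here are illustrative, not taken from the test code:

```yaml
# Hypothetical PVC using the ReadWriteOncePod access mode.
# A second pod referencing this claim on the same node should be
# rejected by the kubelet while the first pod is still using it.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-rwop-claim        # illustrative name
spec:
  accessModes:
    - ReadWriteOncePod
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-hostpath-sc   # assumed CSI hostpath class
```

When two pods reference such a claim, the second pod is expected to stay unschedulable (or fail admission) with a "volume in use" condition, which is the behavior this e2e test asserts.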
Since when has it been flaking?
Unclear. It appears to have been flaking for a while, but I haven't been able to find an existing issue for it.
https://storage.googleapis.com/k8s-triage/index.html?date=2024-05-09&test=should%20block%20a%20second%20pod%20from%20using%20an%20in-use%20ReadWriteOncePod
Testgrid link
https://testgrid.k8s.io/presubmits-kubernetes-blocking#pull-kubernetes-e2e-kind
Reason for failure (if possible)
Anything else we need to know?
https://prow.k8s.io/view/gs/kubernetes-jenkins/pr-logs/pull/124683/pull-kubernetes-e2e-kind/1788664943130710016
Relevant SIG(s)
/sig storage