vsphere-csi-node-xxxxx are in CrashLoopBackOff #2519
Comments
Could you take a look at why? If I got it right, this pod needs to be up first so that the daemonset pods can succeed. Did you use the default templates provided by CAPV, or did you manually deploy CSI?
I posted the sample output above. I used the default template and followed the instructions from the quick-start page to generate the cluster yaml file.
So something prevents the vsphere-csi-controller from getting scheduled. There may be taints, or something else causing this. You need to figure out why that is, and then the daemonset pods should also become ready.
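A quick way to look for the scheduling blocker is a sketch like the following (it assumes the default `vmware-system-csi` namespace; adjust if CSI was deployed elsewhere):

```shell
# Show where (or whether) the controller pods are scheduled
kubectl -n vmware-system-csi get pods -o wide

# A Pending pod's Events section usually names the scheduling blocker
kubectl -n vmware-system-csi describe deployment vsphere-csi-controller

# List the taints on every node to spot ones the controller does not tolerate
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'
```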
Can you get the events from that namespace?
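The events can be pulled with something like this (again assuming the default `vmware-system-csi` namespace):

```shell
# Sort events by timestamp so the most recent failures appear last
kubectl -n vmware-system-csi get events --sort-by=.metadata.creationTimestamp
```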
The csi-node-driver, which is installed by the tigera-operator, conflicts with vsphere-csi-node. I couldn't disable the installation of csi-node-driver, so I use
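One way to confirm that the two node drivers are colliding (the daemonset names here are the defaults installed by the tigera-operator and the vSphere CSI manifests) is to list the CSI-related daemonsets and the drivers they register:

```shell
# Both daemonsets run a node-driver-registrar sidecar on every node
kubectl get daemonsets -A | grep -i csi

# The registered CSI drivers; a clash shows up as registration errors in the kubelet logs
kubectl get csidrivers
kubectl get csinodes -o yaml
```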
It would be interesting to figure out, together with https://github.com/kubernetes/cloud-provider-vsphere, where the gaps are so that both can run at the same time. (For CSI we simply consume the above.)
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/kind bug
What steps did you take and what happened:
I ran `kubectl apply` with the generated cluster yaml. I can see the VMs being created on my vSphere account. What I see on the provisioned cluster:
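For reference, the quick-start flow described here is roughly the following sketch (the cluster name and machine counts are placeholders, not values from the report):

```shell
# Generate the workload cluster manifest from the default CAPV template
clusterctl generate cluster my-cluster \
  --infrastructure vsphere \
  --kubernetes-version v1.27.3 \
  --control-plane-machine-count 1 \
  --worker-machine-count 3 > cluster.yaml

# Apply it to the management cluster
kubectl apply -f cluster.yaml
```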
What did you expect to happen:
I expected to see a cluster with all pods running.
Anything else you would like to add:
Below is some of the `kubectl` output for reference. Here are some of the env variables I have:
All bootstrap pods are running without errors.
Here are the pods on the vSphere cluster that was provisioned using CAPI
Here is a sample `kubectl describe` for vsphere-csi-node-xxxx:
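The describe output referenced above can be reproduced with (pod name and namespace are placeholders; substitute the real pod name from `kubectl get pods -A`):

```shell
kubectl -n vmware-system-csi describe pod vsphere-csi-node-xxxxx
```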
Environment:
- Kubernetes version (use `kubectl version`): 1.27.3
- OS (e.g. from `/etc/os-release`): Ubuntu 22.04 OVA image that vSphere recommends (with no changes to the OVA).