v3.1.1 ks-console CrashLoopBackOff very often #6070
Comments
The liveness settings are:
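The actual probe settings were not captured in this thread. As a hypothetical sketch only, a TCP liveness probe against ks-console's port 8000 (the port that appears in the failure message in this issue) might look like this, with illustrative timing values:

```yaml
# Hypothetical example; the actual values in the affected deployment were not posted.
livenessProbe:
  tcpSocket:
    port: 8000           # port from the "dial tcp ...:8000" failure message
  initialDelaySeconds: 15
  timeoutSeconds: 1      # a short timeout makes "connection timed out" failures more likely
  periodSeconds: 10
  failureThreshold: 3    # pod is restarted after 3 consecutive failures
```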
Can you provide the output of the following commands on the node running ks-console?
kubelet.log.tar.gz
I have uploaded the kubelet log.
It looks like
```
kubectl version
docker version
kubectl get sc
```
You can refer to this link: kubernetes/kubernetes#74135. In our experience, this may be resolved by upgrading Docker and the kubelet, but the most complete solution is to stop using NFS in Kubernetes.
ks-console-646dd6dcb4-j4ts4 in kubesphere-system is the pod that restarts often. I removed the liveness probe, and Kubernetes replaced the pod automatically; the new pod is ks-console-79fc8668d7-fgxx2.
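Rather than deleting the liveness probe outright, the probe could also be relaxed so that slow dials do not trigger restarts. A sketch with a longer timeout and a higher failure threshold (the values here are illustrative, not taken from the thread):

```yaml
# Illustrative values only; tune to your environment.
livenessProbe:
  tcpSocket:
    port: 8000
  timeoutSeconds: 5      # tolerate slow connections instead of failing quickly
  periodSeconds: 20
  failureThreshold: 5    # restart only after ~100s of consecutive failures
```

This keeps the self-healing behavior of the probe while making it far less sensitive to transient node or network slowness.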
Do you have any suggestions for the versions of Docker and the kubelet?
Kubernetes v1.23.10
Describe the Bug
Recently I found that the ks-console pod has restarted many times. I used "kubectl logs" to view the pod log, but it only contains logs from after the restart, so I can't find the reason for the restart.
I used "kubectl describe pod" and got this message: "Liveness probe failed: dial tcp 172.19.0.157:8000: connect: connection timed out"
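Since `kubectl logs` shows only the current container instance, the pre-crash logs can usually be recovered with the `--previous` flag. For example, using the pod and namespace names from this thread (these commands need access to the affected cluster):

```
# Logs from the container instance that ran before the last restart
kubectl logs ks-console-646dd6dcb4-j4ts4 -n kubesphere-system --previous

# Restart count, last state, and recent probe-failure events for the pod
kubectl describe pod ks-console-646dd6dcb4-j4ts4 -n kubesphere-system
```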
Versions Used
KubeSphere: v3.1.1
Kubernetes: v1.18
Environment
How many nodes and their hardware configuration:
OS: CentOS 7.6
master: 1 node 4cpu/8g
worker: 1 node 4cpu/8g
How To Reproduce
It happens frequently without an obvious trigger, so I don't know how to reproduce it.
Expected behavior
I want the pod not to restart.