[0.7.2] Failing to assign VIPs to services with externalTrafficPolicy: Local and servicesElection: true #790
Upon creating the service, the logs for kube-vip-cloud-controller read
while all kube-vip pods in the daemonset only log the one line:
Could this issue be related to #666?
Does the service have endpoints?
The service does have endpoints, but the status field is completely empty. In fact, since all IPs are private, I can show you the whole thing as kubectl reports it:

```yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kube-vip.io/loadbalancerIPs: 192.168.62.65
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app.kubernetes.io/component":"controller","app.kubernetes.io/instance":"ingress-nginx","app.kubernetes.io/name":"ingress-nginx"},"name":"ingress-nginx-controller","namespace":"ingress-nginx"},"spec":{"externalTrafficPolicy":"Local","ports":[{"appProtocol":"http","name":"http","port":80,"protocol":"TCP","targetPort":"http"},{"appProtocol":"https","name":"https","port":443,"protocol":"TCP","targetPort":"https"}],"selector":{"app.kubernetes.io/component":"controller","app.kubernetes.io/instance":"ingress-nginx","app.kubernetes.io/name":"ingress-nginx"},"type":"LoadBalancer"}}
  creationTimestamp: "2024-03-15T13:21:10Z"
  finalizers:
  - service.kubernetes.io/load-balancer-cleanup
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    implementation: kube-vip
  name: ingress-nginx-controller
  namespace: ingress-nginx
  resourceVersion: "4456635"
  uid: 80dcf0aa-dc8d-4203-9149-4d53ed153cba
spec:
  allocateLoadBalancerNodePorts: true
  clusterIP: 10.98.171.239
  clusterIPs:
  - 10.98.171.239
  externalTrafficPolicy: Local
  healthCheckNodePort: 31335
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  loadBalancerIP: 192.168.62.65
  ports:
  - appProtocol: http
    name: http
    nodePort: 31603
    port: 80
    protocol: TCP
    targetPort: http
  - appProtocol: https
    name: https
    nodePort: 31239
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer: {}
```

And here is `kubectl describe` on the same Service:

```
Name: ingress-nginx-controller
Namespace: ingress-nginx
Labels: app.kubernetes.io/component=controller
app.kubernetes.io/instance=ingress-nginx
app.kubernetes.io/name=ingress-nginx
implementation=kube-vip
Annotations: kube-vip.io/loadbalancerIPs: 192.168.62.65
Selector: app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
Type: LoadBalancer
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.98.171.239
IPs: 10.98.171.239
IP: 192.168.62.65
Port: http 80/TCP
TargetPort: http/TCP
NodePort: http 31603/TCP
Endpoints: 172.20.74.39:80
Port: https 443/TCP
TargetPort: https/TCP
NodePort: https 31239/TCP
Endpoints: 172.20.74.39:443
Session Affinity: None
External Traffic Policy: Local
HealthCheck NodePort: 31335
Events: <none>
```
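For comparison, once kube-vip does claim the address, the status stanza is populated rather than empty; the standard LoadBalancer status shape (shown here with the same IP as the annotation above) would be:

```yaml
status:
  loadBalancer:
    ingress:
    - ip: 192.168.62.65
```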
Same issue here. This is my first time evaluating kube-vip, so I have never used a previous version before. I think I am seeing the same issue, but it doesn't seem to have anything to do with … In the rare cases where assigning a VIP to a service does work (and …).

This is my Service definition:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: traefik-vip
  annotations:
    kube-vip.io/loadbalancerIPs: '172.28.180.134'
spec:
  type: LoadBalancer
  loadBalancerClass: kube-vip.io/kube-vip-class
  selector:
    app.kubernetes.io/name: traefik
  ports:
  - port: 443
    targetPort: 8443
  externalTrafficPolicy: Local
```

There is no CCM involved. I especially set … I am testing this with K3s v1.29.2+k3s1, and kube-vip is running on all 12 nodes. The manifest has been generated by this: …
Yes, even many of them, because we're deploying the service with multiple replicas:
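For reference, `servicesElection` ends up as environment variables on the kube-vip container in the daemonset manifest. A minimal sketch of the relevant env block, assuming ARP mode; the interface name and values are illustrative, not taken from either reporter's manifest:

```yaml
env:
- name: vip_arp        # announce VIPs via ARP
  value: "true"
- name: vip_interface  # interface the VIP is bound to (assumption: eth0)
  value: eth0
- name: svc_enable     # watch Services of type LoadBalancer
  value: "true"
- name: svc_election   # run a separate leader election per Service
  value: "true"
```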
Describe the bug
I'm trying to provide LoadBalancer services on a bare metal cluster, using kube-vip in ARP mode with servicesElection=true and the kube-vip-cloud-controller providing IPs.
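For context, the kube-vip-cloud-provider allocates addresses from a ConfigMap named `kubevip` in the `kube-system` namespace; a minimal sketch, with an illustrative address range:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kubevip
  namespace: kube-system
data:
  range-global: 192.168.62.64-192.168.62.80  # pool the controller allocates from (illustrative)
```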
When I create a service with `externalTrafficPolicy: Local`, the `kube-vip.io/loadbalancerIPs` annotation and `loadBalancerIP` field are set, but the external IP remains pending forever and the VIP never gets assigned to a Node.

If I create the service with `externalTrafficPolicy: Cluster`, everything works as expected, and provided the VIP happens to end up on the same node as a Pod backing the Service, I can even edit it to Local later and it will still work. However, if that Pod terminates and its replacement is scheduled on a different node, the VIP remains on the previous Node, leaving the Service unable to reach its endpoints.
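(For context: with `externalTrafficPolicy: Local`, kube-proxy answers the service's `healthCheckNodePort` (31335 in the dump above) only on nodes that hold a ready endpoint, so a VIP left on a node without local Pods effectively blackholes traffic.)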
To Reproduce
Steps to reproduce the behavior:

1. Run `kubectl apply -f` to install the kube-vip manifest provided below
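For illustration, a Service of the kind that triggers the symptom; name, selector, and ports are placeholders, the essential parts being the type and the traffic policy:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-lb              # placeholder name
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local  # the setting under which the VIP stays pending
  selector:
    app: example                # placeholder selector
  ports:
  - port: 80
    targetPort: 8080
```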
Expected behavior
When I create a new Service with `externalTrafficPolicy: Local`, its VIP is assigned to a Node that has Pods matching its selector; and if an existing Service with `externalTrafficPolicy: Local` is left with no Pods on its Node, its VIP is reassigned to a different Node that does have Pods.
Environment:
Kube-vip.yaml:
This was generated by …
Additional context
This is a new cluster created with kubeadm.