[Question / Bug] kube-vip keeps assigning used IP to LoadBalancer #845

Open
gogo199432 opened this issue May 10, 2024 · 3 comments

@gogo199432

As the title says: I have set up kube-vip in my k3s cluster as both the control plane and a generic load balancer. An already existing Service was given the IP 192.168.1.40 by kube-vip. Now whenever I create a new Service of type LoadBalancer, it is assigned this same IP. I can manually edit the Service and change the assigned IP, which "fixes" the issue, but it would be nice if it worked correctly from the start.
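For reference, the manual workaround looks roughly like this: a sketch (the Service name, ports, and chosen address are placeholders) that pins a specific VIP via the `kube-vip.io/loadbalancerIPs` annotation, which appears in the describe output below, so the allocator does not hand out an address that is already in use:

```yaml
# Sketch only: "my-service" and 192.168.1.41 are placeholders; pick an
# address that is inside the kube-vip pool but not yet assigned.
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    kube-vip.io/loadbalancerIPs: "192.168.1.41"
spec:
  type: LoadBalancer
  selector:
    app: my-service
  ports:
    - port: 80
      targetPort: 8080
```

Services without this annotation fall back to automatic allocation from the configured range, which is where the duplicate assignment shows up.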

@Cellebyte
Collaborator

Cellebyte commented May 10, 2024

Can you provide both Services as YAMLs?
That would make it easier to pin down your exact problem. The deployment YAML of your kube-vip DaemonSet would also help.

@gogo199432
Author

Name:                     homepage
Namespace:                homepage
Labels:                   app.kubernetes.io/instance=homepage
                          app.kubernetes.io/managed-by=Helm
                          app.kubernetes.io/name=homepage
                          app.kubernetes.io/service=homepage
                          app.kubernetes.io/version=v0.6.10
                          helm.sh/chart=homepage-1.2.3
                          implementation=kube-vip
Annotations:              kube-vip.io/loadbalancerIPs: 192.168.1.40
                          kube-vip.io/vipHost: nixk3s-node962-master
Selector:                 app.kubernetes.io/instance=homepage,app.kubernetes.io/name=homepage
Type:                     LoadBalancer
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.43.220.60
IPs:                      10.43.220.60
IP:                       192.168.1.40
LoadBalancer Ingress:     192.168.1.40
Port:                     http  3000/TCP
TargetPort:               http/TCP
NodePort:                 http  32060/TCP
Endpoints:                10.42.0.31:3000
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

---

Name:                     traefik
Namespace:                kube-system
Labels:                   app.kubernetes.io/instance=traefik
                          app.kubernetes.io/managed-by=Helm
                          app.kubernetes.io/name=traefik
                          helm.sh/chart=traefik-28.0.0
Annotations:              kube-vip.io/loadbalancerIPs: 192.168.1.40
                          kube-vip.io/vipHost: nixk3s-node962-master
                          metallb.universe.tf/ip-allocated-from-pool: first-pool
Selector:                 app.kubernetes.io/instance=traefik-kube-system,app.kubernetes.io/name=traefik
Type:                     LoadBalancer
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.43.3.134
IPs:                      10.43.3.134
IP:                       192.168.1.40
LoadBalancer Ingress:     192.168.1.40
Port:                     web  80/TCP
TargetPort:               web/TCP
NodePort:                 web  32739/TCP
Endpoints:                10.42.4.32:8000
Port:                     websecure  443/TCP
TargetPort:               websecure/TCP
NodePort:                 websecure  32522/TCP
Endpoints:                10.42.4.32:8443
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

---

Name:                   kube-vip-cloud-provider
Namespace:              kube-system
CreationTimestamp:      Mon, 06 May 2024 12:09:07 +0200
Labels:                 objectset.rio.cattle.io/hash=9061fae9ac7f16521700c33212df725b45d0d780
Annotations:            deployment.kubernetes.io/revision: 1
                        objectset.rio.cattle.io/applied:
                          H4sIAAAAAAAA/6yUTW/bPAzHv8oDAs/NTu1kaVYDPQRr94IVRbGiuxQ5MBLtaJElgZLdBIW/+yDnBSm2NMXQo2Tzz7/IH/kM6NRPYq+sgQLQOX/W5pDAUhkJBVyR03ZdkwmQQE0BJQ...
                        objectset.rio.cattle.io/id:
                        objectset.rio.cattle.io/owner-gvk: k3s.cattle.io/v1, Kind=Addon
                        objectset.rio.cattle.io/owner-name: kube-vip-cloud-controller
                        objectset.rio.cattle.io/owner-namespace: kube-system
Selector:               app=kube-vip,component=kube-vip-cloud-provider
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:           app=kube-vip
                    component=kube-vip-cloud-provider
  Service Account:  kube-vip-cloud-controller
  Containers:
   kube-vip-cloud-provider:
    Image:      ghcr.io/kube-vip/kube-vip-cloud-provider:v0.0.9
    Port:       <none>
    Host Port:  <none>
    Command:
      /kube-vip-cloud-provider
      --leader-elect-resource-name=kube-vip-cloud-controller
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Progressing    True    NewReplicaSetAvailable
  Available      True    MinimumReplicasAvailable
OldReplicaSets:  <none>
NewReplicaSet:   kube-vip-cloud-provider-556c5d4f6c (1/1 replicas created)
Events:          <none>

---

Name:         kubevip
Namespace:    kube-system
Labels:       objectset.rio.cattle.io/hash=ec42be2fc5fee9c14d8cdc89281bd45933caa43e
Annotations:  objectset.rio.cattle.io/applied:
                H4sIAAAAAAAA/4yPT0vEMBDFv4rMuY2mf7QNeBCP4tX7JJl2Y9uZ0sSKLPvdJYdFQVY9Pt7Mj/c7Aq7hhbYYhMHArqEAjwnBHGFDHqkcZ7E4gwHdV0rfdkqr5qb8CnctnAqYAnsw8C...
              objectset.rio.cattle.io/id:
              objectset.rio.cattle.io/owner-gvk: k3s.cattle.io/v1, Kind=Addon
              objectset.rio.cattle.io/owner-name: kubevip
              objectset.rio.cattle.io/owner-namespace: kube-system

Data
====
range-global:
----
192.168.1.40-192.168.1.75

BinaryData
====

Events:  <none>

Kube-vip was originally deployed using this Ansible repo: https://github.com/techno-tim/k3s-ansible
(since then I have switched my nodes from Debian to NixOS, so I won't be using this playbook anymore)
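The `kubevip` ConfigMap shown above corresponds to roughly this manifest (reconstructed from the `kubectl describe` output; the objectset.rio.cattle.io annotations are omitted):

```yaml
# Reconstructed sketch of the kube-vip-cloud-provider pool config described above.
apiVersion: v1
kind: ConfigMap
metadata:
  name: kubevip
  namespace: kube-system
data:
  # Global address pool the cloud provider allocates LoadBalancer IPs from.
  range-global: 192.168.1.40-192.168.1.75
```

With a working allocator, each new LoadBalancer Service should receive a distinct free address from this range rather than reusing 192.168.1.40, the first address in the pool.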

@matthewei

I have also hit this issue. Can anyone help us?
A second question: after I delete the LoadBalancer Service, I can still ping its IP.
