K8s master nodes are never added to standard load balancer backend pool #5849
Comments
@amsuggs37 are you actually using […]? The latter will not work, as the node providerID and instance-type are set by the cloud provider that is in use when the node joins, and cannot be changed later. I suspect this is why you still see no nodes in the pool even after changing the […]. Check the output of […].
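For reference, the providerID and instance-type mentioned above can be inspected with standard kubectl queries; a minimal sketch (the node name is an example):

```sh
# Show the providerID recorded on each node; this is set by the in-use cloud provider
# when the node joins and cannot be changed afterwards.
kubectl get nodes -o custom-columns=NAME:.metadata.name,PROVIDER_ID:.spec.providerID

# Show the instance-type label on one node (node name is an example).
kubectl get node master-1 -o jsonpath='{.metadata.labels.node\.kubernetes\.io/instance-type}{"\n"}'
```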
Hi @brandond, thanks for commenting. I am fairly certain I am using the […]. That config is in the initial issue description, but I do set […]. My […]. From what I understand, the helm deployment of the […]. Anyway, I checked my nodes as you suggested… I cannot find any documentation on what determines these values as valid, but I can continue to dig. Do you know off the top of your head what determines them as valid? Let me know your thoughts, and thanks again for the ideas!
That all sounds correct then!
What happened:
When using the out-of-tree cloud provider azure as documented, I have been unable to get the cloud-controller-manager to add master nodes to the backend pool of a standard load balancer.
I followed the notes about `excludeMasterFromStandardLB` by setting it to "false" and removed the `node-role.kubernetes.io/master` label from all the master nodes. Even then, the master nodes are not added to the load balancer backend pool.
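For illustration, a minimal sketch of the cloud config fields this refers to (not the exact file from this cluster; credentials and most other fields are omitted, and the `cloud` and `vmType` values are assumptions):

```json
{
  "cloud": "AzurePublicCloud",
  "vmType": "vmss",
  "loadBalancerSku": "standard",
  "excludeMasterFromStandardLB": false,
  "useInstanceMetadata": true
}
```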
What you expected to happen:
Configuring the cloud provider to not `excludeMasterFromStandardLB` and also removing the `node-role.kubernetes.io/master` label from all nodes should allow the cloud-controller-manager to add the master nodes to the standard load balancer backend pool.

How to reproduce it (as minimally and precisely as possible):
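Roughly, based on the description in this report (image, names, and ports are illustrative):

```sh
# On a cluster that has only control-plane/etcd nodes and the out-of-tree Azure cloud provider:
# 1. Set excludeMasterFromStandardLB to "false" in the cloud config and restart the
#    cloud-controller-manager so it picks up the change.
# 2. Remove the master role label from every node.
kubectl label nodes --all node-role.kubernetes.io/master-

# 3. Expose any workload through a LoadBalancer-type service.
kubectl create deployment http-echo --image=hashicorp/http-echo -- /http-echo -text=ok
kubectl expose deployment http-echo --type=LoadBalancer --port=80 --target-port=5678

# 4. Inspect the standard load balancer in Azure: its backend pool stays empty.
```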
Anything else we need to know?:
The same cloud provider configuration works as expected when the cluster has 1 or more worker nodes (not control-plane/etcd). Nodes with no node-role label are successfully added to the load balancer backend pool.
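A quick way to confirm that no node still carries the role label (the node name is an example):

```sh
# The ROLES column is derived from node-role.kubernetes.io/* labels; masters with the
# label removed should show <none> here.
kubectl get nodes -o wide

# Double-check the full label set on a given node.
kubectl get node master-1 --show-labels
```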
The following logs are present in the cloud-controller-manager deployment for a LoadBalancer-type service called "http-echo". They indicate the load balancer being deleted from the VMSS for seemingly no reason.
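For reference, the relevant lines can be pulled from the controller with something like the following; the namespace and deployment name are assumptions and depend on how the chart was installed:

```sh
# Tail the cloud-controller-manager logs and filter for the affected service.
kubectl -n kube-system logs deployment/cloud-controller-manager --since=1h | grep -i http-echo
```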
Environment:
- Kubernetes version (use `kubectl version`): 1.24.12 (rke2 version: v1.24.12+rke2r1)
- Cloud Provider:
- K8s bootstrap args:
- OS (e.g: `cat /etc/os-release`): rhel 8.9
- Kernel (e.g. `uname -a`): 4.18.0-513.18.1.el8_9.x86_64