
K8s master nodes are never added to standard load balancer backend pool #5849

Open
amsuggs37 opened this issue Apr 4, 2024 · 3 comments
Labels: kind/bug
What happened:

When using the out-of-tree cloud-provider-azure as documented, I have been unable to get the cloud-controller-manager to add master nodes to the backend pool of a standard load balancer.
I followed the notes about excludeMasterFromStandardLB by setting it to "false", and I removed the node-role.kubernetes.io/master label from all the master nodes. Even then, the master nodes are not added to the load balancer backend pool.
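
For reference, the label removal looked roughly like this (the node name is a placeholder):

kubectl label node <master-node-name> node-role.kubernetes.io/master-
kubectl get nodes --show-labels | grep node-role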

What you expected to happen:

Setting excludeMasterFromStandardLB to false and removing the node-role.kubernetes.io/master label from all nodes should allow the cloud-controller-manager to add the master nodes to the standard load balancer backend pool.

How to reproduce it (as minimally and precisely as possible):

  1. Deploy a 3-node Kubernetes cluster in Azure. I'm using RKE2 with a VMSS to do so. Each node has the control-plane and etcd node roles.
  2. Deploy cloud-provider-azure (configuration below).
  3. Deploy a workload to the cluster and expose it with a LoadBalancer type service (a minimal sketch follows this list).
  4. The service gets an ExternalIP, but checking the load balancer in the Azure portal reveals that there are no nodes in the "kubernetes" backend pool.
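
For step 3, any workload exposed through a LoadBalancer type service reproduces this; the nginx image below is just an arbitrary example:

kubectl create deployment http-echo --image=nginx
kubectl expose deployment http-echo --port=80 --type=LoadBalancer
kubectl get svc http-echo -w   # wait for an EXTERNAL-IP to be assigned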

Anything else we need to know?:

The same cloud provider configuration works as expected when the cluster has 1 or more worker nodes (not control-plane/etcd). Nodes with no node-role label are successfully added to the load balancer backend pool.

The following logs are present in the cloud-controller-manager deployment for a LoadBalancer type service called "http-echo". They indicate the backend pool being detached from the VMSS for seemingly no reason.

I0403 17:59:34.268581       1 azure_loadbalancer_backendpool.go:583] bi.ReconcileBackendPools for service (default/http-echo) and vmSet (): ensuring the LB is decoupled from the VMSet
I0403 17:59:34.268600       1 azure_vmss.go:1485] ensureBackendPoolDeletedFromVMSS: vmSetName (), backendPoolID (/subscriptions/<my_subscription>/resourceGroups/<my_resourcegroup>/providers/Microsoft.Network/loadBalancers/kubernetes/backendAddressPools/kubernetes)
I0403 17:59:34.268609       1 azure_vmss.go:1505] ensureBackendPoolDeletedFromVmss: vmss (<my_vmss>)
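
The empty backend pool can also be confirmed from the CLI rather than the portal. The resource names below match the configuration in the Environment section; since loadBalancerBackendPoolConfigurationType is nodeIp, I'd expect the node entries under loadBalancerBackendAddresses (that query path is my assumption):

az network lb address-pool show \
  --resource-group <my_resourcegroup> \
  --lb-name kubernetes \
  --name kubernetes \
  --query loadBalancerBackendAddresses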

Environment:

  • Kubernetes version (use kubectl version): 1.24.12 (rke2 version: v1.24.12+rke2r1)
  • Cloud provider or hardware configuration:
    Cloud Provider:
{
  "cloud": "AzureUSGovernmentCloud",
  "cloudProviderBackoff": false,
  "excludeMasterFromStandardLB": false,
  "loadBalancerBackendPoolConfigurationType": "nodeIp",
  "loadBalancerSku": "standard",
  "location": "usgovvirginia",
  "resourceGroup": "<my_resourcegroup>",
  "securityGroupName": "k8s-nsg",
  "securityGroupResourceGroup": "<my_resourcegroup>",
  "subnetName": "<my_subnet>",
  "subscriptionId": "<my_subscription>",
  "tenantId": "<my_tenant>",
  "useInstanceMetadata": true,
  "useManagedIdentityExtension": true,
  "vmType": "vmss",
  "vnetName": "<my_vnet>",
  "vnetResourceGroup": "<my_resourcegroup>"
}

K8s bootstrap args:

kube-controller-manager-arg:
  - "cloud-provider=external"
  - "use-service-account-credentials=true"
  - "tls-min-version=VersionTLS12"
  - "tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE
_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"
  - "feature-gates=CSIMigrationAWS=false"
  - "use-service-account-credentials=true"
kube-scheduler-arg:
  - "tls-min-version=VersionTLS12"
  - "tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE
_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"
kube-apiserver-arg:    
  - "tls-min-version=VersionTLS12"
  - "tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE
_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"
  - "enable-admission-plugins=NodeRestriction,PodSecurityPolicy,MutatingAdmissionWebhook,ValidatingAdmissionWebhook"
  - "request-timeout=300s"
  - "feature-gates=CSIMigrationAWS=false"
kubelet-arg: 
  - "cloud-provider=external"  
  - "azure-container-registry-config=/etc/kubernetes/cloud-config/azure.json"
  - "streaming-connection-idle-timeout=5m"
  - "feature-gates=CSIMigrationAWS=false"
  - "protect-kernel-defaults=true"
  • OS (e.g: cat /etc/os-release): rhel 8.9
  • Kernel (e.g. uname -a): 4.18.0-513.18.1.el8_9.x86_64
  • Install tools:
  • Network plugin and version (if this is a network-related bug):
  • Others:

brandond commented Apr 18, 2024

@amsuggs37 are you actually using cloud-provider-azure when deploying the nodes, or are you initially provisioning the cluster with the RKE2 embedded cloud provider, and then attempting to switch later?

The latter will not work, as the node providerID and instance-type are set by the cloud provider that is in use when the node joins, and cannot be changed later. I suspect this is why you still see no nodes in the pool even after changing the excludeMasterFromStandardLB setting: the nodes do not have the correct fields to be managed by the Azure cloud provider.

Check the output of kubectl get node -o yaml and confirm that the node.kubernetes.io/instance-type label and providerID are set correctly on all your nodes.
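
For example, something along these lines (the node name is a placeholder; both fields should be populated on every node):

kubectl get node <node-name> -o jsonpath='{.spec.providerID}{"\n"}'
kubectl describe node <node-name> | grep -iE 'providerid|instance-type'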

amsuggs37 (Author) commented

Hi @brandond, thanks for commenting.

I am fairly certain I am using cloud-provider-azure when deploying the nodes.
I followed the documentation and set the appropriate flags in the /etc/rancher/rke2/config.yaml file accordingly.

That config is in the initial issue description; I set cloud-provider=external for the kubelet and kube-controller-manager, and also set disable-cloud-controller: true.

My cloud-provider-config.json is also in the initial description.

From what I understand, the Helm deployment of cloud-provider-azure should have all the correct default values.

Anyway, I checked my nodes as you suggested...
The node.kubernetes.io/instance-type label is set to: Standard_DS2_v2 for all three of my nodes.
The providerID is set to: azure:///subscriptions/<my_subscription>/resourceGroups/<my_resourcegroup>/providers/Microsoft.Compute/virtualMachineScaleSets/<my_vmss>/virtualMachines/X where "X" is the number 0-2 depending on which of the three nodes you are looking at.

I cannot find any documentation on what makes these values valid, but I can continue to dig. Do you know off the top of your head what determines their validity?
Also, I would expect that if what you are describing were the issue, then I would not be able to add worker nodes to the load balancer either. If I create a cluster with a worker node pool, those nodes are successfully added to the load balancer backend pool when I create LoadBalancer type services.

Let me know your thoughts, and thanks again for the ideas!

brandond commented

That all sounds correct then!
