Improve Service/LoadBalancer reconciliation performance #5909
Comments
@desek can you open this very same issue also at https://github.com/Azure/AKS/issues? The AKS Product Group monitors that repo and might consider your issue for their roadmap. Thanks!
As recently as February, @feiskyer stated this limit is still needed: #249 (comment). I will ask for a re-evaluation. Thanks for the issue, @desek!
Thanks for the feedback. This couldn't be supported with the current LoadBalancer SKU, as lots of resources are shared, but it is under the plan with container-native LoadBalancer (which is still WIP). For the reconciling latency, have you tried NodeIP-based SLB (e.g. set
Yes, we're using
I've added it here: Azure/AKS#4281
What would you like to be added:
I'd like for a Service of type=LoadBalancer (a Service with a Public IP) to reconcile faster.
The current implementation reconciles only one Service at a time, and `--concurrent-service-syncs` only allows `1` as a value. This forces the reconcile loop, which processes all Services, to process them sequentially, one Service at a time.
In a cluster with 500+ Services, processing each Service takes 5-10 seconds, so a full reconciliation loop takes roughly an hour (500 × ~7.5 s ≈ 62 minutes). A Service created just after the current reconciliation loop started can therefore take at least twice as long to reconcile (~2 hours).
I'm assuming Services are processed sequentially one-by-one due to the nature of Azure Load Balancers.
So the suggestion is to improve Service/LoadBalancer reconciliation performance in either (or both) of the following ways:
Why is this needed:
Duplicate issue in the AKS repo: Azure/AKS#4281