ServiceL2Status follow-up: create resources in main namespace with speaker pod as OwnerRef #2311
Comments
cc @lwabish :)
Because of metallb#2311 we are going to move the status instances to the MetalLB namespace. This might require a change in the permissions too (from cluster to namespaced), so the new version of MetalLB might not be able to delete the "legacy" instances of the CRD, because they belong to namespaces the new MetalLB might not have permissions on. Because of this, we hide the feature behind a flag, effectively disabling it until the issue is fixed. Signed-off-by: Federico Paolinelli <[email protected]>
We must be able to skip those tests until metallb#2311 is fixed. Signed-off-by: Federico Paolinelli <[email protected]>
Indeed.
If we can set an owner reference from the speaker pod itself to the status instance, then the gc will happen within kubernetes.
This is not possible. It'd mean setting a webhook on the path for all the pods (because webhooks are not namespaced, IIRC), which is not acceptable. Also, we learned to stay away from webhooks as much as possible (see #1597).
I agree that this feels more natural. However, having the status of what MetalLB does inside MetalLB's namespace doesn't sound like too much of a stretch, especially if it solves the problem of the point before.
Filtering pods from webhooks should not be a problem with the help of label selectors or namespace selectors. But I do agree that webhooks are tricky sometimes. I'd love to follow your advice and continue work on this improvement, but maybe I will start a few days later, if that's acceptable.
@lwabish just checking if you are still interested in helping with this. There is obviously no rush!
Yes, I'd love to keep working on this.
I'll implement this, maybe this weekend.
Thanks a lot!
Is it the weekend yet? :)
The PR was filed, as you can see above :)
Is your feature request related to a problem?
#2158 is merged, but there's a corner case we don't handle: if a speaker is deleted permanently (either the node is gone, or the user doesn't want it to act as a speaker), the L2 statuses it created would be left dangling.
Describe the solution you'd like
Since cross-namespace owner refs are not allowed, we think a reasonable approach would be to create the statuses in the same namespace as the speakers. When we do that, we can no longer use
servicename-node
as the resource's name, because multiple services with the same name can arrive from different namespaces. So the idea is to have the statuses created with a generated name such as
GenerateName: <node-name>-
in the same namespace as the speaker, with the speaker pod being an ownerRef of the resource. Once a speaker is restarted / gone, k8s will handle the deletion for us.
Additional context
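A minimal sketch of what such a resource could look like, assuming the ServiceL2Status CRD lives in the metallb.io API group; the namespace, node, pod name, and UID below are illustrative assumptions, not the final implementation:

```yaml
# Hypothetical ServiceL2Status as created by a speaker. The generateName
# avoids collisions between same-named services from different namespaces,
# and the ownerReference lets Kubernetes garbage-collect the status once
# the speaker pod is gone.
apiVersion: metallb.io/v1beta1
kind: ServiceL2Status
metadata:
  generateName: worker-0-          # GenerateName: <node-name>-
  namespace: metallb-system        # same namespace as the speaker pod
  ownerReferences:
    - apiVersion: v1
      kind: Pod
      name: speaker-abcde          # the speaker pod on this node (example name)
      uid: 00000000-0000-0000-0000-000000000000  # must match the pod's actual UID
```

Because the owner (the speaker pod) and the status now share a namespace, the owner reference is legal, and no custom cleanup logic is needed on speaker deletion.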
No response