Automatically re-label nodes with openebs.io/nodeid during upgrade #456
Comments
I would actually request making the label configurable; I don't see any reason to specify
Zone by itself wouldn't be good enough unless, of course, you limited yourself to a single ZFS member in each zone. A zone label would match many VMs and make it impossible to schedule on the one containing the local PV. The key used by the plugin is already configurable, but whatever the key is, its value needs to uniquely identify each node. Hostname will not work because once this value is set, volumes are bound to it; when replacing nodes, the hostname changes. That means you would have to recreate all of the custom resources (PV, PVC, ZFSVolume, ZFSNode, etc.) to reference the correct ZFS node.
@jnels124 would you like to contribute this feature?
Describe the problem/challenge you have
Once issue #450 is resolved, it would be nice if a GKE upgrade could be performed in a hands-off fashion. To do that, we need to be able to identify nodes going down as part of an upgrade and harvest their labels, then identify nodes coming up as part of the upgrade and attach the labels from the old node to the new one.
Describe the solution you'd like
A potential solution:

Add an event handler to the node informer and use `gke-current-operation: operation_type: UPGRADE_NODES` to identify nodes being added/removed as part of an upgrade:

```go
cs.k8sNodeInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
    AddFunc:    onNodeAdd,
    UpdateFunc: onUpdateNode,
    DeleteFunc: onDeleteNode,
})
```
In `onNodeAdd`, we can save a reference to this node in a cache to be operated on later. Then in `onDeleteNode`, we can watch for the node coming down, get its labels, and attach them to the cached node from `onNodeAdd`. This would then allow Kubernetes to automatically schedule the pods on the new nodes once the relabeling takes effect.

Since multiple nodes can be upgraded at the same time, we will need a way to identify which cached node should be updated by a node being removed. We may just be able to use the zone here, since all that really matters for successfully importing and mounting the ZFS pool is that the nodes are in the same zone. Only a single node pool should be upgraded at once, and since all resources within a node pool are expected to be identical, this should work.
Anything else you would like to add:
I would like some feedback from the maintainers on implementing the solution described above. Is this functionality you would accept into the repo? Do you have any additional input or guidance you would like to provide? I have no problem taking a stab at the implementation, but I wanted to discuss it here first.
Environment:
- Kubernetes version (use `kubectl version`):
- OS (e.g. from `/etc/os-release`):