
Add annotation to the resource template to prevent removal of managed resources #4788

Open · wants to merge 1 commit into base: master

Conversation

CharlesQQ
Contributor

@CharlesQQ CharlesQQ commented Apr 1, 2024

What type of PR is this?

What this PR does / why we need it:
Extend the API field orphaningDeletion to prevent removal of managed resources

Which issue(s) this PR fixes:
Fixes #4709

Special notes for your reviewer:

Does this PR introduce a user-facing change?:


@karmada-bot karmada-bot added the size/M Denotes a PR that changes 30-99 lines, ignoring generated files. label Apr 1, 2024
@CharlesQQ CharlesQQ changed the title fix(controllers): Extend the API field orphaningDeletion to prevent r… Extend the API field orphaningDeletion to prevent removal managed resources Apr 1, 2024
@CharlesQQ CharlesQQ force-pushed the prevent-removal-managed-resources branch from c0d0ad6 to a874076 Compare April 1, 2024 10:39
@karmada-bot karmada-bot added size/L Denotes a PR that changes 100-499 lines, ignoring generated files. and removed size/M Denotes a PR that changes 30-99 lines, ignoring generated files. labels Apr 1, 2024
@codecov-commenter

codecov-commenter commented Apr 1, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 53.34%. Comparing base (d676996) to head (2e44070).
Report is 6 commits behind head on master.


Additional details and impacted files
@@            Coverage Diff             @@
##           master    #4788      +/-   ##
==========================================
+ Coverage   53.31%   53.34%   +0.02%     
==========================================
  Files         252      253       +1     
  Lines       20539    20565      +26     
==========================================
+ Hits        10951    10971      +20     
- Misses       8862     8868       +6     
  Partials      726      726              
Flag Coverage Δ
unittests 53.34% <100.00%> (+0.02%) ⬆️

Flags with carried forward coverage won't be shown.


@CharlesQQ CharlesQQ force-pushed the prevent-removal-managed-resources branch from a874076 to c5994f0 Compare April 2, 2024 02:43
@karmada-bot karmada-bot added the do-not-merge/contains-merge-commits Indicates a PR which contains merge commits. label Apr 2, 2024
@CharlesQQ CharlesQQ force-pushed the prevent-removal-managed-resources branch from 7884969 to c5994f0 Compare April 2, 2024 03:16
@karmada-bot karmada-bot removed the do-not-merge/contains-merge-commits Indicates a PR which contains merge commits. label Apr 2, 2024
@CharlesQQ CharlesQQ force-pushed the prevent-removal-managed-resources branch 3 times, most recently from 7c42d71 to 97504f8 Compare April 3, 2024 08:22
@CharlesQQ CharlesQQ force-pushed the prevent-removal-managed-resources branch 4 times, most recently from 6411574 to a302c57 Compare April 19, 2024 10:16
@karmada-bot karmada-bot added size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. and removed size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Apr 19, 2024
@CharlesQQ CharlesQQ force-pushed the prevent-removal-managed-resources branch from a302c57 to 0a5d11d Compare April 19, 2024 10:46
@karmada-bot karmada-bot added size/L Denotes a PR that changes 100-499 lines, ignoring generated files. and removed size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. labels Apr 19, 2024
@CharlesQQ CharlesQQ force-pushed the prevent-removal-managed-resources branch 5 times, most recently from 9c6f601 to fb5afd1 Compare April 22, 2024 03:03

// OrphaningDeletionIncomplete means that the work object and the resource will be kept.
// But if the work object is deleted manually, clean up the labels/annotations from the resource on member clusters.
OrphaningDeletionIncomplete OrphaningDeletion = "Incomplete"
Member

Is there a usage scenario for this type?

Contributor Author

Is there a usage scenario for this type?

Refer to #3004

Member

After reading the description and discussion in this issue, it seems that the work is not required to be retained. In terms of use, what can be done to retain work?

Member

We can first just implement Complete and Never, and discuss the other options later once we have more specific requirements.

Member

+1

Contributor Author

We can first just implement Complete and Never, and discuss the other options later once we have more specific requirements.

All right!

if err := helper.DeleteWorks(c.Client, req.Namespace, req.Name, binding.Labels[workv1alpha2.ResourceBindingPermanentIDLabel]); err != nil {
	klog.Errorf("Failed to delete works related to %s/%s: %v", binding.GetNamespace(), binding.GetName(), err)
	return controllerruntime.Result{}, err
}
if binding.Spec.OrphaningDeletion == policyv1alpha1.OrphaningDeletionNever || binding.Spec.OrphaningDeletion == policyv1alpha1.OrphaningDeletionComplete {
Member

I'd suggest using a switch here, as it will make the logic clearer. It will also make it easier for us to add comments for each case.

Contributor Author

I'd suggest using a switch here, as it will make the logic clearer. It will also make it easier for us to add comments for each case.

There is only one case now, so it's not necessary.
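For reference, the switch form suggested above might look roughly like this standalone sketch. The type and constant names mirror the PR's diff; everything else is a stand-in, not Karmada's actual controller code:

```go
package main

import "fmt"

// OrphaningDeletionPolicyType mirrors the enum discussed in this PR.
type OrphaningDeletionPolicyType string

const (
	// OrphaningDeletionNever: never orphan; works and the member-cluster
	// resources they manage are removed as usual.
	OrphaningDeletionNever OrphaningDeletionPolicyType = "Never"
	// OrphaningDeletionComplete: delete the work objects but keep the
	// member-cluster resources, with Karmada's labels cleaned up.
	OrphaningDeletionComplete OrphaningDeletionPolicyType = "Complete"
)

// describe shows the suggested switch: one case per policy, with room
// for a comment on each.
func describe(p OrphaningDeletionPolicyType) string {
	switch p {
	case OrphaningDeletionNever:
		return "cascade: delete works and member-cluster resources"
	case OrphaningDeletionComplete:
		return "orphan: delete works, keep member-cluster resources"
	default:
		return "unknown policy"
	}
}

func main() {
	fmt.Println(describe(OrphaningDeletionNever))
	fmt.Println(describe(OrphaningDeletionComplete))
}
```

The switch also makes adding a future policy (such as the Incomplete value discussed earlier) a one-case change.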

@CharlesQQ CharlesQQ force-pushed the prevent-removal-managed-resources branch from 4f30d77 to ed29102 Compare April 26, 2024 07:08
// +kubebuilder:default="Never"
// +kubebuilder:validation:Enum=Never;Complete
// +optional
OrphaningDeletionPolicy OrphaningDeletionPolicyType `json:"orphaningDeletionPolicy,omitempty"`
Member

I believe we only need to add this field in PP. The work does not require this field.

In addition, I don't think the current implementation can fully handle all situations.

Here's a new idea:

When users set orphaningDeletionPolicy of PP as Complete.

The specific implementation should be adding an annotation on the workloadTemplate of the control plane, similar to orphaningDeletionPolicy.karmada.io: "Complete"

Then when deleting workloadTemplate in execution_controller, check if workloadTemplate has annotation orphaningDeletionPolicy.karmada.io. If it is Complete, do not delete it.

The benefits are:

  1. The personnel creating the workloadTemplate can also control whether this application needs to be retained in member clusters, rather than only those who create PP.
  2. Resources like namespace that do not have PP can also be deleted on the control plane without being deleted from member clusters.
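The proposed annotation check could be sketched like this. The annotation key is the one proposed above; the helper and its surroundings are illustrative assumptions, not Karmada's actual execution_controller code:

```go
package main

import "fmt"

// orphaningDeletionPolicyAnnotation is the key proposed in this
// discussion for marking a resource template on the control plane.
const orphaningDeletionPolicyAnnotation = "orphaningDeletionPolicy.karmada.io"

// shouldOrphan reports whether deletion of the member-cluster resource
// should be skipped (only Karmada's labels/annotations cleaned up)
// instead of cascading the delete.
func shouldOrphan(annotations map[string]string) bool {
	return annotations[orphaningDeletionPolicyAnnotation] == "Complete"
}

func main() {
	tmpl := map[string]string{orphaningDeletionPolicyAnnotation: "Complete"}
	fmt.Println(shouldOrphan(tmpl))                // true
	fmt.Println(shouldOrphan(map[string]string{})) // false
}
```

Under this proposal the check would run in the deletion path (e.g. tryDeleteWorkload), so anyone who can edit the resource template controls orphaning, not only the PP author.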

Contributor Author

Then when deleting workloadTemplate in execution_controller, check if workloadTemplate has annotation orphaningDeletionPolicy.karmada.io. If it is Complete, do not delete it.

The execution controller would have to watch all origin resources such as resourceTemplate, namespace, and so on, which makes it too complicated; I don't think that's a good idea.
I think for namespace we can add an annotation like orphaningDeletionPolicy.karmada.io: "Complete"; for resources such as FederatedResourceQuota that create work objects directly, we can add the field orphaningDeletionPolicy: Complete if necessary.

Member

The execution controller would have to watch all origin resources such as resourceTemplate, namespace, and so on

There's no need, this is existing logic.

Member

This is no different from directly adding annotations to the resource template. In addition, users can bypass the PP and control the annotation value on the resource template directly.

Member

This is no different from directly adding annotations to the resource template. In addition, users can bypass the PP and control the annotation value on the resource template directly.

Yes, I think that's reasonable.

Member

I believe we only need to add this field in PP. The work does not require this field.

+1, but I prefer to decouple this from PP/CPP, because I'm afraid it would be very unfriendly to administrators if the system also has the PropagationPolicyPreemption feature turned on. In addition, I feel that propagation policies are becoming more and more complex, and almost every new feature has to consider adding APIs to them.

Member

but I prefer to decouple this from PP/CPP, because I'm afraid it would be very unfriendly to administrators if the system also has the PropagationPolicyPreemption feature turned on.

Can you give an example of this?

Member

[image: orphaning-delete]

If it is decoupled from PP/CPP, the administrator doesn't need to care about it.

Member

Thanks, I get your meaning. What do you think, @CharlesQQ @chaunceyjiang @RainbowMango?

Member

but I prefer to decouple this from PP/CPP, because I'm afraid it would be very unfriendly to administrators if the system also has the PropagationPolicyPreemption feature turned on.

I do not quite understand the unfriendliness here. I think preemption is clear: after a PropagationPolicy (with higher priority) successfully preempts a resource, the resource should be propagated as per that PropagationPolicy.

@XiShanYongYe-Chang
Copy link
Member

Hi @CharlesQQ @chaunceyjiang Is there any blockage in this PR now? Can we move forward?


// OrphaningDeletionComplete means that the resource in member clusters becomes completely independent of Karmada.
// Not only clean up the labels/annotations from the resource on member clusters, but also delete the work object in the control plane.
OrphaningDeletionComplete OrphaningDeletionPolicyType = "Complete"
Member

Complete sounds a bit confusing. Is it possible to call it CompleteDelete?


ArgoCD uses the term "cascade"

Member

It seems that using "cascade" would be better, as it can reduce the learning cost for users.

+1

It seems that using "cascade" would be better, as it can reduce the learning cost for users.

@chaunceyjiang
Member

Hi @CharlesQQ @chaunceyjiang Is there any blockage in this PR now? Can we move forward?

#4788 (comment) Regarding this plan, it seems that there is no conclusion yet?

@XiShanYongYe-Chang
Member

Let's invite @RainbowMango and @whitewindmills give some opinion.
/cc @RainbowMango @whitewindmills

if err := helper.DeleteWorks(c.Client, req.Namespace, req.Name, binding.Labels[workv1alpha2.ResourceBindingPermanentIDLabel]); err != nil {
	klog.Errorf("Failed to delete works related to %s/%s: %v", binding.GetNamespace(), binding.GetName(), err)
	return controllerruntime.Result{}, err
}
if binding.Spec.OrphaningDeletionPolicy == policyv1alpha1.OrphaningDeletionNever || binding.Spec.OrphaningDeletionPolicy == policyv1alpha1.OrphaningDeletionComplete {
Member

I feel it's unnecessary to add such a statement.

@RainbowMango
Member

/assign

@CharlesQQ
Contributor Author

Regarding this plan, it seems that there is no conclusion yet?

Yes. To sum up, it seems inappropriate to put the field in PP/CPP.

@chaunceyjiang
Member

@XiShanYongYe-Chang @RainbowMango @CharlesQQ @whitewindmills
Do we have a consensus on the current proposal?

@XiShanYongYe-Chang
Member

XiShanYongYe-Chang commented May 15, 2024

We may not have reached a unanimous conclusion yet. Work suspension might also learn from this discussion; both are processing policies for resource templates and can be decoupled from PP.

There should be three options at this point,

  • Solution 1: Define a resource deletion policy in the PP.
    • disadvantage:
      • Coupled with the PP, the PP becomes larger and larger with the evolution.
  • Solution 2: Add an annotation to the resource template to describe the deletion policy of the target resource.
    • disadvantage:
      • Poor API evolution.
  • Solution 3: A new CRD is defined to describe a deletion policy of one or more resources.
    • disadvantage:
      • Increased learning costs and CRs

You can publish it, or we can have a meeting to discuss and decide.

@XiShanYongYe-Chang
Member

@XiShanYongYe-Chang @RainbowMango @CharlesQQ @whitewindmills Do we have a consensus on the current proposal?

Can we set up a time to discuss this plan? When would be a good time for you, or maybe next Tuesday at the community meeting?

@buji-code

Any update on this one?

@XiShanYongYe-Chang
Member

@buji-code Yeah, after discussion at the community meeting, we decided to try out solutions 2 and 3. This work will start in the next release, early next month.

@XiShanYongYe-Chang
Member

Hi guys @CharlesQQ @whitewindmills @chaunceyjiang @RainbowMango @buji-code

Can we proceed with solution 2 (add an annotation to the resource template to describe the deletion policy of the target resource) first, and then consider designing and adding a CRD to describe the deletion policy as the scenarios develop? (Of course, annotations are not excluded as the ideal end state.)

@CharlesQQ CharlesQQ force-pushed the prevent-removal-managed-resources branch from ed29102 to ac35a37 Compare May 29, 2024 07:09
@karmada-bot
Collaborator

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please ask for approval from rainbowmango. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@CharlesQQ CharlesQQ changed the title Extend the API field orphaningDeletion to prevent removal managed resources Add annotation to the the resource template to prevent removal managed resources May 29, 2024
@CharlesQQ CharlesQQ force-pushed the prevent-removal-managed-resources branch from ac35a37 to ea6e382 Compare May 29, 2024 07:17
…vent removal managed resources

Signed-off-by: chang.qiangqiang <[email protected]>
@CharlesQQ CharlesQQ force-pushed the prevent-removal-managed-resources branch from ea6e382 to 2e44070 Compare May 29, 2024 07:37
@@ -152,11 +152,21 @@ func (c *Controller) tryDeleteWorkload(clusterName string, work *workv1alpha1.Wo
klog.Errorf("Failed to unmarshal workload, error is: %v", err)
return err
}

Member

I think we can write an e2e to test this PR.

  1. Check whether the resources in the member cluster have cleared the unique label of karmada.
  2. Detect whether the resource still exists in the member cluster after the control plane deletes it.
  3. The resources tested should include namespace resources.
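The suggested checks could be reduced to a standalone simulation like this. The object model and label key are hypothetical stand-ins, not Karmada's e2e framework or its real label keys:

```go
package main

import "fmt"

// memberObject stands in for a resource living in a member cluster.
type memberObject struct {
	labels map[string]string
	exists bool
}

// managedLabel is a hypothetical stand-in for Karmada's unique
// managed-resource label.
const managedLabel = "example.karmada.io/managed"

// orphaningDelete simulates deleting the resource template on the
// control plane under the orphaning policy: the member-cluster object
// survives, with the Karmada label cleared.
func orphaningDelete(obj *memberObject) {
	delete(obj.labels, managedLabel)
}

// verify performs the two checks suggested above.
func verify(obj *memberObject) (labelCleared, stillExists bool) {
	_, has := obj.labels[managedLabel]
	return !has, obj.exists
}

func main() {
	// Using a namespace-like object, per check 3.
	ns := &memberObject{labels: map[string]string{managedLabel: "true"}, exists: true}
	orphaningDelete(ns)
	cleared, exists := verify(ns)
	fmt.Println(cleared, exists) // true true
}
```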

Labels
kind/feature Categorizes issue or PR as related to a new feature. size/L Denotes a PR that changes 100-499 lines, ignoring generated files.
Development

Successfully merging this pull request may close these issues.

Does karmada can prevent removal of these managed resources
8 participants