
kubectl apply: prune should ignore no-namespaced resource if -n is not empty #110905

Open
pacoxu opened this issue Jul 1, 2022 · 28 comments
Labels
  • good first issue: Denotes an issue ready for a new contributor, according to the "help wanted" guidelines.
  • help wanted: Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines.
  • kind/bug: Categorizes issue or PR as related to a bug.
  • sig/cli: Categorizes an issue or PR as relevant to SIG CLI.
  • triage/accepted: Indicates an issue or PR is ready to be actively worked on.

Comments

@pacoxu
Member

pacoxu commented Jul 1, 2022

Run kubectl apply with the YAML below, then run the prune command quoted further down.

apiVersion: v1
kind: Namespace
metadata:
  name: sandbox
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    environment: sandbox
    provider: kustomize
  labels:
    app: nginx
  name: nginx
  namespace: sandbox
spec:
  ports:
  - port: 80
  selector:
    app: nginx
  type: NodePort

Seems I have the same problem. Save this to a file named a.yaml:

apiVersion: v1
kind: Service
metadata:
  annotations:
    environment: sandbox
    provider: kustomize
  labels:
    app: nginx
  name: nginx
  namespace: sandbox
spec:
  ports:
  - port: 80
  selector:
    app: nginx
  type: NodePort

Then run kubectl apply --prune --dry-run=client --all -n sandbox -f a.yaml; the result:

service/nginx created (dry run)
namespace/cattle-system pruned (dry run)

I have no idea why it wants to prune namespace/cattle-system.

Originally posted by @wd in #66430 (comment)

@k8s-ci-robot k8s-ci-robot added needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Jul 1, 2022
@pacoxu
Member Author

pacoxu commented Jul 1, 2022

I consider this a bug in kubectl: without a --prune-allowlist, kubectl's default prune list includes non-namespaced kinds such as Namespace, so a previously-applied namespace like cattle-system becomes a prune candidate even when -n is specified.

/kind bug
/sig cli

@k8s-ci-robot k8s-ci-robot added kind/bug Categorizes issue or PR as related to a bug. sig/cli Categorizes an issue or PR as relevant to SIG CLI. and removed needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Jul 1, 2022
@harry1064

Hi @pacoxu

I was looking at the test cases for prune inside apply_test.go; it seems they never reach the pruning logic:
https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/kubectl/pkg/cmd/apply/apply_test.go#L584-L632
because we are not setting up the fakeDynamicClient to handle list requests for the resource under test.

Should we fix the test cases as well? I have tried setting up the fakeDynamicClient and registering reactors to respond to list and delete requests; with some data alterations I was able to get the prune logic to execute.
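
For illustration, a minimal sketch of that reactor setup (the helper name and exact wiring are hypothetical, not the real apply_test.go fixture):

package apply_test

import (
    "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
    "k8s.io/apimachinery/pkg/runtime"
    dynamicfake "k8s.io/client-go/dynamic/fake"
    clienttesting "k8s.io/client-go/testing"
)

// newFakeDynamicClientForPrune (hypothetical helper) builds a fake dynamic
// client that answers the list requests the prune logic issues.
func newFakeDynamicClientForPrune() *dynamicfake.FakeDynamicClient {
    client := dynamicfake.NewSimpleDynamicClient(runtime.NewScheme())
    client.PrependReactor("list", "services", func(action clienttesting.Action) (bool, runtime.Object, error) {
        list := &unstructured.UnstructuredList{}
        list.SetAPIVersion("v1")
        list.SetKind("ServiceList")
        // Any objects appended to list.Items become prune candidates.
        return true, list, nil
    })
    // Delete requests could be captured the same way to assert what was pruned.
    return client
}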

@pacoxu
Member Author

pacoxu commented Jul 6, 2022

@harry1064 We can add some in a separate PR.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 4, 2022
@pacoxu
Member Author

pacoxu commented Oct 8, 2022

/remove-lifecycle stale
There have been some discussions in the PR and in SIG meetings. I will update the PR ASAP.

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 8, 2022
@KnVerey
Contributor

KnVerey commented Nov 9, 2022

/triage accepted
The deprecation warning on the current behaviour made it into 1.26 yesterday, so we'll be able to change it in 1.28 or later.

@k8s-ci-robot k8s-ci-robot added triage/accepted Indicates an issue or PR is ready to be actively worked on. and removed needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Nov 9, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 7, 2023
@m-Bilal
Contributor

m-Bilal commented Feb 19, 2023

May I take this up?

@manav014
Contributor

/assign

@m-Bilal
Contributor

m-Bilal commented Feb 22, 2023

Hi @manav014, are you working on this? (I was intending to work on this issue, hence the comment.)

@manav014
Contributor

Hey @m-Bilal, if you are working on this, you can take it forward and assign yourself. I will check out some other issues.

@m-Bilal
Contributor

m-Bilal commented Feb 22, 2023

@manav014 sorry, I should have been clearer. I haven't started yet. So if you have started, that's fine, I can look into another one. If not, then I'm happy to take this up 😄

@manav014
Contributor

@m-Bilal I've started on it. I will share what I am planning along with my other research so we can collaborate on this; you can help me with optimizations and feedback, if that sounds good to you.

@pacoxu
Member Author

pacoxu commented Feb 23, 2023

#110907 was merged in v1.26:

  • kubectl apply: warn that kubectl will ignore non-namespaced resources (PV & Namespace) in a future release if a namespace is specified and no prune allowlist is specified

if len(pruneResources) == 0 {
    pruneResources = defaultNamespacedPruneResources
    // TODO in kubectl v1.29, add back non-namespaced resource only if namespace is not specified
    pruneResources = append(pruneResources, defaultNonNamespacedPruneResources...)
    if namespaceSpecified {
        klog.Warning("Deprecated: kubectl apply will no longer prune non-namespaced resources by default when used with the --namespace flag in a future release. To preserve the current behaviour, list the resources you want to target explicitly in the --prune-allowlist flag.")
    }
}

@vaibhav2107
Member

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 29, 2023
@pacoxu
Member Author

pacoxu commented Jul 24, 2023

This can be done in the v1.29 release cycle.

The TODO below can be fixed:

// TODO in kubectl v1.29, add back non-namespaced resource only if namespace is not specified
pruneResources = append(pruneResources, defaultNonNamespacedPruneResources...)
if namespaceSpecified {
    klog.Warning("Deprecated: kubectl apply will no longer prune non-namespaced resources by default when used with the --namespace flag in a future release. To preserve the current behaviour, list the resources you want to target explicitly in the --prune-allowlist flag.")
}

@pacoxu
Member Author

pacoxu commented Jul 24, 2023

/help
/good-first-issue

@k8s-ci-robot
Contributor

@pacoxu:
This request has been marked as suitable for new contributors.

Guidelines

Please ensure that the issue body includes answers to the following questions:

  • Why are we solving this issue?
  • To address this issue, are there any code changes? If there are code changes, what needs to be done in the code and what places can the assignee treat as reference points?
  • Does this issue have zero to low barrier of entry?
  • How can the assignee reach out to you for help?

For more details on the requirements of such an issue, please see here and ensure that they are met.

If this request no longer meets these requirements, the label can be removed
by commenting with the /remove-good-first-issue command.

In response to this:

/help
/good-first-issue

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added good first issue Denotes an issue ready for a new contributor, according to the "help wanted" guidelines. help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. labels Jul 24, 2023
@vaibhav2107
Member

I'd like to work on this issue
/assign

@Affan-7

Affan-7 commented Jul 30, 2023

(Quoting the repro steps from the issue description above.)

Hi @pacoxu, I can't reproduce this.

@Affan-7

Affan-7 commented Jul 30, 2023

The TODO can be fixed.

Can you please elaborate on that? What is the current behaviour of prune, and what is the expected behaviour?

@Affan-7

Affan-7 commented Jul 30, 2023

I would love to work on this issue. (This is my first issue in this repository)
/assign

@pacoxu
Member Author

pacoxu commented Jul 30, 2023

The TODO can be fixed.

Can you please elaborate on that? What is the current behaviour of prune, and what is the expected behaviour?

I edited the description. Can you try again?

@Affan-7

Affan-7 commented Jul 31, 2023

I edited the description. Can you try again?

Still can't reproduce.

[Screenshot from 2023-07-31 12-11-00 attached]

@pacoxu
Member Author

pacoxu commented Jul 31, 2023

I mean that the test namespace has to be created by apply. If the test namespace was not created by apply, it will never be handled by apply --prune. In this case, you should put the test namespace YAML into a.yaml alongside the service YAML.

  1. The first apply creates the namespace and the service.
  2. Then remove the namespace YAML from a.yaml.
  3. The second apply with --prune -n test prunes the namespace (see the transcript below).
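
A minimal transcript of that sequence, using the sandbox namespace from the manifests above (output lines are illustrative, assuming the namespace was created by the first apply):

kubectl apply -f a.yaml    # a.yaml still contains both the Namespace and the Service
# remove the Namespace manifest from a.yaml, then:
kubectl apply --prune --dry-run=client --all -n sandbox -f a.yaml
service/nginx configured (dry run)
namespace/sandbox pruned (dry run)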

@pacoxu
Member Author

pacoxu commented Apr 2, 2024

See #119687 (comment). There is a bug in the previous implementation from my PR #110907.

@arushdesp

Hi all,
Did anyone manage to solve it, or is help still needed? I tried to reproduce it on k8s v1.27 because I wanted to try solving it, and I had the following result.
After creating the namespace and pod, my next run with prune gave this error message:

kubectl apply --prune --dry-run=client --all -n dev -f nginx.yaml
namespace/dev configured (dry run)
pod/nginxns configured (dry run)
W0428 01:54:39.696986 4304 prune.go:71] Deprecated: kubectl apply will no longer prune non-namespaced resources by default when used with the --namespace flag in a future release. To preserve the current behaviour, list the resources you want to target explicitly in the --prune-allowlist flag.

@pacoxu
Member Author

pacoxu commented May 14, 2024

After creating the namespace and pod, my next run with prune gave this error message:

This is a warning message, not an error message.
