
CRDs aren't inside the crds folder #282

Closed
cwrau opened this issue Oct 16, 2023 · 16 comments
Labels
  • kind/bug: Categorizes issue or PR as related to a bug.
  • lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.
  • needs-triage: Indicates an issue or PR lacks a `triage/foo` label and requires one.

Comments

@cwrau commented Oct 16, 2023

Anything else you would like to add:
This goes against Helm best practices. I understand that for non-GitOps workflows, having the CRDs inside the templates folder (with an if!) is necessary, but they should still also be inside the crds folder for normal operation: no post-* hooks needed for creating CRs, and GitOps can handle CRD upgrades.
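
A minimal sketch of what "inside the crds folder" would look like, using a hypothetical file name and a trimmed-down schema rather than the chart's actual contents:

```yaml
# crds/coreproviders.yaml (illustrative only; the real CRD carries a full schema)
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: coreproviders.operator.cluster.x-k8s.io
spec:
  group: operator.cluster.x-k8s.io
  names:
    kind: CoreProvider
    plural: coreproviders
  scope: Namespaced
  versions:
    - name: v1alpha2
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true
```

Helm applies anything in crds/ as-is on install and never touches it on upgrade, which is exactly the gap that GitOps tooling such as Flux can fill (see the discussion below).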

Environment:

  • Cluster-api-operator version: N/A
  • Cluster-api version: N/A
  • Minikube/KIND version: N/A
  • Kubernetes version: (use kubectl version): N/A
  • OS (e.g. from /etc/os-release): N/A
  • Helm Chart version: 0.6.0

/kind bug

@k8s-ci-robot added the kind/bug and needs-triage labels on Oct 16, 2023
@k8s-ci-robot (Contributor)

This issue is currently awaiting triage.

If CAPI Operator contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@alexander-demicev (Contributor)

Hi, thanks for opening the issue. Unfortunately, it's not possible to upgrade CRDs if they are located in the CRD folder https://helm.sh/docs/chart_best_practices/custom_resource_definitions/#some-caveats-and-explanations

@cwrau (Author) commented Oct 16, 2023

> Hi, thanks for opening the issue. Unfortunately, it's not possible to upgrade CRDs if they are located in the CRD folder helm.sh/docs/chart_best_practices/custom_resource_definitions/#some-caveats-and-explanations

Yes and no: for manual workflows like helm upgrade it's not easily possible (although if you're already doing manual steps, you might as well just apply the CRDs from the crds folder 🤷).

But with GitOps it's easily possible, see https://fluxcd.io/flux/components/helm/helmreleases/#crds
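
For reference, the Flux side of this is a one-line policy on the HelmRelease. A sketch, with placeholder names, version, and source reference:

```yaml
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: capi-operator
  namespace: capi-operator-system
spec:
  interval: 10m
  chart:
    spec:
      chart: cluster-api-operator
      version: "0.6.0"              # placeholder version
      sourceRef:
        kind: HelmRepository
        name: capi-operator         # placeholder HelmRepository name
  install:
    crds: CreateReplace   # apply CRDs from the chart's crds/ folder on install
  upgrade:
    crds: CreateReplace   # and replace them on upgrade, which plain Helm skips
```

The policy covers CRDs shipped in the chart's crds/ folder, which is exactly what this issue is asking for.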

@alexander-demicev (Contributor)

Ideally, we want to support helm upgrade for upgrading all manifests of the operator. Would it help if we split into 2 different charts: one for CRDs and a second one for all the rest?

@alexander-demicev (Contributor)

@damdo @Fedosin @furkatgofurov7 ^^

@cwrau (Author) commented Oct 16, 2023

> Ideally, we want to support helm upgrade for upgrading all manifests of the operator.

I understand, but doing it this way breaks other things, even inside the chart itself, as shown in #283.

Currently, you can't really use this chart in a GitOps way because of this CRD handling.
And even the manual way is broken, although I guess that could have another cause.

If the CRDs were inside the crds folder, even if only additionally, this would work.
(Only once cert-manager fixes cert-manager/cert-manager#6179, admittedly, but this is a two-part problem after all.)

> Would it help if we split into 2 different charts: one for CRDs and a second one for all the rest?

Not really; the problem is explicitly that the CRDs aren't in the crds folder. Extracting them into another chart wouldn't help, and I'd just open an issue for that chart 😅

EDIT: maybe this is a miscommunication, but it's okay to have the CRDs in another chart, as long as they are in that chart's crds folder; kube-prometheus-stack, for example, does it this way.

@alexander-demicev (Contributor)

We also have a use case where we might template CRDs: some users would like to avoid using cert-manager and prefer a custom solution, which would require removing/replacing the cert-manager annotation on the CRDs.
At this point, I'm not sure how to support all options without breaking others :(

@cwrau (Author) commented Oct 16, 2023

> We also have a use case where we might template CRDs

What are you templating? 🤔

> some users would like to avoid using cert-manager and prefer a custom solution, that will require removing/replacing cert-manager annotation from CRDs

Exactly this isn't even possible, as the cert-manager CRDs are always created.

Annotations on CRDs? Are you sure you don't mean on CRs?

@alexander-demicev (Contributor)

Yes, we set the CA injector annotation on the CRD itself: https://cert-manager.io/docs/concepts/ca-injector/
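
For context, the pattern being described looks roughly like the sketch below; the namespace, Certificate, and Service names are placeholders rather than the chart's actual values. cert-manager's cainjector watches for the annotation and writes the CA bundle into the conversion webhook's clientConfig:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: coreproviders.operator.cluster.x-k8s.io
  annotations:
    # cainjector copies the CA from the referenced Certificate into the
    # conversion webhook's clientConfig.caBundle below.
    cert-manager.io/inject-ca-from: capi-operator-system/capi-operator-serving-cert
spec:
  group: operator.cluster.x-k8s.io
  names:
    kind: CoreProvider
    plural: coreproviders
  scope: Namespaced
  conversion:
    strategy: Webhook
    webhook:
      conversionReviewVersions: ["v1"]
      clientConfig:
        service:
          name: capi-operator-webhook-service   # placeholder Service name
          namespace: capi-operator-system
          path: /convert
  versions:
    - name: v1alpha2
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true
```

Since Helm does not run the template engine on files in crds/, an annotation like this could not be made configurable there, which is the trade-off being weighed in this thread.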

@cwrau (Author) commented Oct 16, 2023

> Yes, we set the CA injector annotation on the CRD itself: cert-manager.io/docs/concepts/ca-injector

For that special use case I would, from your perspective, just decide on a name and hardcode it.
If end users don't want to use the included cert-manager and supply their own, they just have to conform to that name.
And if they want to handle things without cert-manager at all, then that's simply an unsupported configuration and they have to figure things out for themselves 😅
K8s is just not flexible in that regard 🤷

IMHO one should mainly support the best ways (which is of course subjective) plus the most-wanted and easy-to-achieve alternatives.
And in my opinion, easy handling of CRs, following best practices, and going for GitOps are the preferred way.
Also, having the templated CRDs behind an if is low-hanging fruit for those poor souls that don't already use GitOps 😉

@fgrelaud commented Dec 6, 2023

Hi all,
Flux user here too. I can confirm that with the CRDs in templates and not in the crds folder, the capi-operator itself installs, but the Helm hooks fail to create the providers (defined in values.yaml: core/bootstrap/controlPlane/infrastructure).

{"level":"error","ts":"2023-12-06T10:37:14.003Z","msg":"Reconciler error","controller":"helmrelease","controllerGroup":"helm.toolkit.fluxcd.io","controllerKind":"HelmRelease","HelmRelease":{"name":"capi-operator","namespace":"capi-operator-system"},"namespace":"capi-operator-system","name":"capi-operator","reconcileID":"8d7c96e8-16b3-4b3f-95a6-7d81c26665f5","error":"Helm install failed: failed post-install: unable to build kubernetes object for deleting hook cluster-api-operator/templates/core.yaml: resource mapping not found for name: \"cluster-api\" namespace: \"capi-system\" from \"\": no matches for kind \"CoreProvider\" in version \"operator.cluster.x-k8s.io/v1alpha2\"\nensure CRDs are installed first"}

As cwrau says, Flux can create/replace CRDs: https://fluxcd.io/flux/components/helm/helmreleases/#crds

So I wonder about the benefit of using the capi operator rather than clusterctl?

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Mar 5, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Apr 4, 2024
@simonostendorf

What is the current status of this issue?

@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot (Contributor)

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot closed this as not planned (won't fix, can't repro, duplicate, stale) on May 18, 2024