ArgoCD 2.11 - Loop of PATCH calls to Application objects #18151

Open
diranged opened this issue May 9, 2024 · 2 comments

diranged commented May 9, 2024

Is it possible that #18061 has suddenly changed the behavior of the Application Controller so that it isn't comparing things the same way? We are seeing a massive (relatively speaking) jump in the PATCH requests/second that the ArgoCD app controller is making across the ~10 clusters we upgraded:

[image: graph showing the jump in PATCH requests/second after the upgrade]

I suspect the issue has to do with how the code is comparing the valueFiles lists from before and after: some of our valueFiles lists contained duplicate values, and when we run kubectl get applications -w -o json | kubectl grep -w, we see diffs like these:

        revision: "794fcbae2671366663e107cd50079c2e96894591"
        source:
          helm:
            ignoreMissingValueFiles: true
            valueFiles:
              - "values.yaml"
              - "values.staging-xx.yaml"
              - "values.staging-xx.yaml"
              - "values/staging/values.yaml"
              - "values/staging/staging-xx/values.yaml"
              - "values/staging/staging-xx/types/values.otherfile.yaml"
              - "values/staging/staging-xx/values.staging-xx.yaml"
-             - 
+             - "values.staging-xx.yaml"
            values: "
cluster:
  accountId: ".."

Originally posted by @diranged in #18061 (comment)
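
For illustration, here's a minimal Go sketch of the failure mode we suspect (a guess, not the actual controller code; the dedupe helper and the lists are hypothetical): if the controller normalizes the desired valueFiles list, e.g. by dropping duplicates, before comparing it against the stored spec, the two never compare equal, so every reconcile sees a diff and issues another PATCH.

```go
package main

import (
	"fmt"
	"reflect"
)

// dedupe drops repeated entries while preserving order. It is a stand-in
// for whatever normalization we suspect the 2.11 comparison applies to the
// desired valueFiles list -- it is not the actual controller code.
func dedupe(in []string) []string {
	seen := make(map[string]bool, len(in))
	out := make([]string, 0, len(in))
	for _, v := range in {
		if !seen[v] {
			seen[v] = true
			out = append(out, v)
		}
	}
	return out
}

func main() {
	// The stored spec contains a duplicate entry, as in our Applications.
	stored := []string{
		"values.yaml",
		"values.staging-xx.yaml",
		"values.staging-xx.yaml",
	}

	// If the desired state is normalized before comparison, it never matches
	// the stored spec, so each reconcile produces a PATCH, and the next
	// reconcile sees the very same mismatch again.
	desired := dedupe(stored)
	fmt.Println(reflect.DeepEqual(stored, desired)) // false, every time
}
```

If that's roughly what's happening, it would explain why the PATCH rate jumped immediately on every upgraded cluster and stays elevated.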


diranged commented May 9, 2024

We're reverting to 2.10.9... but this is happening on every cluster we have, across the board. We caught it in staging, so ~10 clusters or so.

diranged commented

We tried the upgrade again just for fun (no, I actually forgot about the issue and clicked the upgrade button... 🤦), and it's reliably reproducible: it happens again right away. Any suggestions on how to troubleshoot it?
