Unable to pull secrets when vault-configurer is down #1603
Describe the bug:
When vault-configurer is down, other pods in the cluster cannot pull secrets.
Here is the log from the Vault pod while other pods try to pull secrets and the configurer is down:
login unauthorized due to: lookup failed: service account unauthorized; this could mean it has been deleted or recreated with a new token
This happens because vault-configurer configures k8s_auth with a token_reviewer_jwt of its own (if we don't set it explicitly), which prevents us from using the local Vault pod token as the reviewer JWT, as described here: https://github.com/banzaicloud/bank-vaults/blob/52cf23528eb30c90c6612d94c046697dcb3f06aa/internal/vault/auth_methods.go#L217-L222
So when the configurer is down, Kubernetes deletes the token that was used to configure the k8s_auth method; when Vault's Kubernetes auth then tries to use that token against the API, it reports that the token has expired.
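For illustration, a minimal Vault CR auth section that hits this code path might look like the following (a sketch only; the role names and policies are hypothetical). The key point is that the config block with kubernetes_host and token_reviewer_jwt is omitted entirely, so the configurer defaults token_reviewer_jwt to its own service account token:

```yaml
# Hypothetical sketch of a bank-vaults Vault CR fragment that triggers the bug:
# no `config:` section under the kubernetes auth entry, so the configurer
# injects its own service-account token as token_reviewer_jwt.
externalConfig:
  auth:
    - type: kubernetes
      roles:
        - name: default
          bound_service_account_names: ["default"]
          bound_service_account_namespaces: ["default"]
          policies: ["allow_secrets"]
```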
The only way to overcome this is to set the kubernetes_host key explicitly and avoid using the default config, as you can see here: https://github.com/banzaicloud/bank-vaults/blob/52cf23528eb30c90c6612d94c046697dcb3f06aa/internal/vault/auth_methods.go#L64-L66
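The workaround can be sketched as follows (hedged; the host value shown is the standard in-cluster API server address, and the surrounding CR structure is assumed from the bank-vaults externalConfig format):

```yaml
# Workaround sketch: setting kubernetes_host explicitly skips the defaulting
# branch, so the configurer no longer injects its own service-account token
# as token_reviewer_jwt, and Vault falls back to the local pod token.
externalConfig:
  auth:
    - type: kubernetes
      config:
        kubernetes_host: https://kubernetes.default.svc
```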
Expected behaviour:
Pods should be able to pull secrets when the configurer is down as well. We should not need to set kubernetes_host when running Vault in the same cluster.
Steps to reproduce the bug:
1) Omit kubernetes_host and token_reviewer_jwt from your Kubernetes auth config.
2) Sync vault-configurer.
3) Scale vault-configurer replicas to 0.
4) Try to pull secrets with k8s_auth.
Environment details:
/kind bug