
Failure when trying to run kubetest2 --up with gcp in non-legacy mode #692

Open
jbtk opened this issue May 14, 2024 · 1 comment
Labels
needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one.

Comments

jbtk commented May 14, 2024

Hi! I have previously been starting a cluster in legacy mode like this:

kubetest2 gce -v 2 --repo-root ~/src/k8s.io/kubernetes --gcp-project <project-name> --legacy-mode --build --up --env=ENABLE_CUSTOM_METRICS=true --env=KUBE_ENABLE_CLUSTER_AUTOSCALER=true --env=KUBE_AUTOSCALER_MIN_NODES=3 --env=KUBE_AUTOSCALER_MAX_NODES=6 --env=KUBE_AUTOSCALER_ENABLE_SCALE_DOWN=true --env=KUBE_ADMISSION_CONTROL=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,Priority --env=ENABLE_POD_PRIORITY=true

but when I try to do a similar thing with this provider:
kubetest2 gce -v 2 --repo-root ~/src/k8s.io/cloud-provider-gcp --gcp-project <project-name> --build --up --env=ENABLE_CUSTOM_METRICS=true --env=KUBE_ENABLE_CLUSTER_AUTOSCALER=true --env=KUBE_AUTOSCALER_MIN_NODES=3 --env=KUBE_AUTOSCALER_MAX_NODES=6 --env=KUBE_AUTOSCALER_ENABLE_SCALE_DOWN=true --env=KUBE_ADMISSION_CONTROL=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,Priority --env=ENABLE_POD_PRIORITY=true --override-logs-dir

I am getting an error:
/src/k8s.io/cloud-provider-gcp/cluster/../cluster/../cluster/gce/util.sh: line 256: /src/k8s.io/cloud-provider-gcp/bazel-bin/release/kubernetes-server-linux-amd64.tar.gz.sha512: Permission denied

I see that kubetest2 has a test that exercises this path and it passes, but I am not sure why my environment behaves differently. I don't know whether this is a wider issue or just a problem with my setup.
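
I have not confirmed the cause yet; one assumption I want to rule out is that the redirection at cluster/gce/util.sh line 256 is trying to (re)write the .sha512 checksum file inside Bazel's output tree, which Bazel typically leaves read-only. A quick way to check whether the target is writable (paths below match my checkout; adjust as needed):

# Assumption: util.sh regenerates the checksum with a redirection like
# `sha512sum ... > kubernetes-server-linux-amd64.tar.gz.sha512`, which would
# fail with "Permission denied" if the file or its directory is read-only.
ls -l ~/src/k8s.io/cloud-provider-gcp/bazel-bin/release/kubernetes-server-linux-amd64.tar.gz.sha512
ls -ld ~/src/k8s.io/cloud-provider-gcp/bazel-bin/release/

If those show read-only modes (e.g. r-xr-xr-x), that would at least explain the "Permission denied" from the redirection, though it would not explain why the kubetest2 test does not hit it.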

@k8s-ci-robot added the needs-triage label May 14, 2024
@k8s-ci-robot
Contributor

This issue is currently awaiting triage.

If the repository maintainers determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
