
[gcp/gke] Question about GKE Workload Identity Support #322

Open

julien-sugg opened this issue May 4, 2021 · 7 comments
Labels
kind/enhancement (New feature or request), kind/help wanted (Extra attention is needed)

Comments

julien-sugg (Contributor) commented May 4, 2021

Greetings,

I am trying to set up GKE's Workload Identity to avoid having to configure API keys within the chart. However, it doesn't seem to be supported yet.

In a few words, Workload Identity lets you configure a Google Service Account with specific IAM role bindings and bind it to a Kubernetes Service Account, so that no API keys have to be managed from within the cluster.

Note that this is the recommended way to consume Google Services/APIs.

For more information, see the GKE Workload Identity documentation.
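
As a quick sanity check that Workload Identity is wired up inside the pod, one can ask Application Default Credentials for a token without any key file. Below is a minimal Go sketch; the monitoring OAuth scope is my assumption about what IMC would need for the uptime check API:

package main

import (
	"context"
	"fmt"
	"log"

	"golang.org/x/oauth2/google"
)

func main() {
	ctx := context.Background()
	// Scope is an assumption: IMC talks to the Stackdriver uptime check API,
	// which the monitoring scope covers.
	creds, err := google.FindDefaultCredentials(ctx, "https://www.googleapis.com/auth/monitoring")
	if err != nil {
		log.Fatalf("no default credentials: %v", err)
	}
	tok, err := creds.TokenSource.Token()
	if err != nil {
		log.Fatalf("token fetch failed: %v", err)
	}
	// On a Workload Identity-enabled pod, this token comes from the GKE
	// metadata server, impersonating the bound Google service account.
	fmt.Println("got token via ADC, expires:", tok.Expiry)
}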

In order to proceed with the IngressMonitorController chart, I tried the naive approach of "unsetting" the apiKey key in the config.yaml's providers section. However, doing so, I end up with the following stack trace:

time="2021-05-04T10:34:02Z" level=info msg="Failed to determine Environment, will try kubernetes"
time="2021-05-04T10:34:02Z" level=info msg="Operator Version: 0.0.1"
time="2021-05-04T10:34:02Z" level=info msg="Go Version: go1.15.2"
time="2021-05-04T10:34:02Z" level=info msg="Go OS/Arch: linux/amd64"
time="2021-05-04T10:34:02Z" level=info msg="Version of operator-sdk: v0.19.0"
time="2021-05-04T10:34:02Z" level=info msg="Watching Namespace: monitoring"
I0504 10:34:03.533485       1 request.go:621] Throttling request took 1.039452829s, request: GET:https://10.67.0.1:443/apis/admissionregistration.k8s.io/v1beta1?timeout=32s
time="2021-05-04T10:34:05Z" level=info msg="Loading YAML Configuration from secret"
time="2021-05-04T10:34:05Z" level=info msg="Registering Components."
2021/05/04 10:34:05 Error Seting Up Monitor Service:  unexpected end of JSON input
time="2021-05-04T10:34:05Z" level=info msg="Configuration added for gcloud"
time="2021-05-04T10:34:09Z" level=info msg="Could not create ServiceMonitor objecterrorservicemonitors.monitoring.coreos.com \"ingressmonitorcontroller-metrics\" already exists"
time="2021-05-04T10:34:09Z" level=info msg="Starting the Cmd."
time="2021-05-04T10:34:09Z" level=info msg="Reconciling EndpointMonitor"
time="2021-05-04T10:34:09Z" level=error msg="Failed to parse MonitorNameTemplate, using default template `{{.Name}}-{{.Namespace}}`"
E0504 10:34:09.940694       1 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
goroutine 981 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic(0x17141a0, 0x25849e0)
        /go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:74 +0xa6
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
        /go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:48 +0x89
panic(0x17141a0, 0x25849e0)
        /usr/local/go/src/runtime/panic.go:969 +0x175
cloud.google.com/go/monitoring/apiv3.(*UptimeCheckClient).ListUptimeCheckConfigs(0x0, 0x1b39800, 0xc0000a0000, 0xc0006a1ae0, 0x0, 0x0, 0x0, 0xc0008af9e8)
        /go/pkg/mod/cloud.google.com/[email protected]/monitoring/apiv3/uptime_check_client.go:147 +0x185
github.com/stakater/IngressMonitorController/pkg/monitors/gcloud.(*MonitorService).GetByName(0xc00069dd40, 0xc000511fa0, 0x1b, 0x1bed49fb4, 0x25a9d20, 0xffffffa31d4de380)
        /workdir/pkg/monitors/gcloud/gcloud-monitor.go:46 +0xe5
github.com/stakater/IngressMonitorController/pkg/monitors.(*MonitorServiceProxy).GetByName(...)
        /workdir/pkg/monitors/monitor-proxy.go:82
github.com/stakater/IngressMonitorController/pkg/controller/endpointmonitor.findMonitorByName(0xc0004c82b8, 0x6, 0x1b484e0, 0xc00069dd40, 0xc000511fa0, 0x1b, 0x25a9d20)
        /workdir/pkg/controller/endpointmonitor/endpointmonitor_controller.go:141 +0x45
github.com/stakater/IngressMonitorController/pkg/controller/endpointmonitor.(*ReconcileEndpointMonitor).Reconcile(0xc00069ddd0, 0xc0004ca1d0, 0xa, 0xc0004ca1b0, 0x10, 0x1bec9ce9e, 0xc00059c000, 0xc00053bef8, 0xc00053bef0)
        /workdir/pkg/controller/endpointmonitor/endpointmonitor_controller.go:121 +0x425
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc0000e1dd0, 0x1780300, 0xc0006b1d80, 0x0)
        /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:233 +0x166
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc0000e1dd0, 0x203000)
        /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:209 +0xb0
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker(0xc0000e1dd0)
        /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:188 +0x2b
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0007238b0)
        /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155 +0x5f
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0007238b0, 0x1afe6e0, 0xc0006a4ae0, 0x1, 0xc0005a2180)
        /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156 +0xad
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0007238b0, 0x3b9aca00, 0x0, 0x1, 0xc0005a2180)
        /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133 +0x98
k8s.io/apimachinery/pkg/util/wait.Until(0xc0007238b0, 0x3b9aca00, 0xc0005a2180)
        /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90 +0x4d
created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1
        /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:170 +0x3fa
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
        panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x20 pc=0x13aa005]

goroutine 981 [running]:
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
        /go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:55 +0x10c
panic(0x17141a0, 0x25849e0)
        /usr/local/go/src/runtime/panic.go:969 +0x175
cloud.google.com/go/monitoring/apiv3.(*UptimeCheckClient).ListUptimeCheckConfigs(0x0, 0x1b39800, 0xc0000a0000, 0xc0006a1ae0, 0x0, 0x0, 0x0, 0xc0008af9e8)
        /go/pkg/mod/cloud.google.com/[email protected]/monitoring/apiv3/uptime_check_client.go:147 +0x185
github.com/stakater/IngressMonitorController/pkg/monitors/gcloud.(*MonitorService).GetByName(0xc00069dd40, 0xc000511fa0, 0x1b, 0x1bed49fb4, 0x25a9d20, 0xffffffa31d4de380)
        /workdir/pkg/monitors/gcloud/gcloud-monitor.go:46 +0xe5
github.com/stakater/IngressMonitorController/pkg/monitors.(*MonitorServiceProxy).GetByName(...)
        /workdir/pkg/monitors/monitor-proxy.go:82
github.com/stakater/IngressMonitorController/pkg/controller/endpointmonitor.findMonitorByName(0xc0004c82b8, 0x6, 0x1b484e0, 0xc00069dd40, 0xc000511fa0, 0x1b, 0x25a9d20)
        /workdir/pkg/controller/endpointmonitor/endpointmonitor_controller.go:141 +0x45
github.com/stakater/IngressMonitorController/pkg/controller/endpointmonitor.(*ReconcileEndpointMonitor).Reconcile(0xc00069ddd0, 0xc0004ca1d0, 0xa, 0xc0004ca1b0, 0x10, 0x1bec9ce9e, 0xc00059c000, 0xc00053bef8, 0xc00053bef0)
        /workdir/pkg/controller/endpointmonitor/endpointmonitor_controller.go:121 +0x425
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc0000e1dd0, 0x1780300, 0xc0006b1d80, 0x0)
        /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:233 +0x166
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc0000e1dd0, 0x203000)
        /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:209 +0xb0
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker(0xc0000e1dd0)
        /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:188 +0x2b
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0007238b0)
        /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155 +0x5f
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0007238b0, 0x1afe6e0, 0xc0006a4ae0, 0x1, 0xc0005a2180)
        /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156 +0xad
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0007238b0, 0x3b9aca00, 0x0, 0x1, 0xc0005a2180)
        /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133 +0x98
k8s.io/apimachinery/pkg/util/wait.Until(0xc0007238b0, 0x3b9aca00, 0xc0005a2180)
        /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90 +0x4d
created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1
        /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:170 +0x3fa
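
Reading the first frame of the panic, ListUptimeCheckConfigs appears to be called on a nil *UptimeCheckClient (receiver 0x0). That would match the earlier "Error Seting Up Monitor Service: unexpected end of JSON input" line: without an apiKey, client setup seems to fail, and the nil client is later dereferenced during reconciliation. Here is a hedged sketch of one possible fallback (newUptimeClient is a hypothetical helper, not the project's actual code): when no key is configured, let the client use Application Default Credentials, which Workload Identity supplies.

package gcloud

import (
	"context"

	monitoring "cloud.google.com/go/monitoring/apiv3"
	"google.golang.org/api/option"
)

// newUptimeClient is a hypothetical helper: use the API key when one is
// configured, otherwise fall back to Application Default Credentials
// (Workload Identity, GCE metadata, or GOOGLE_APPLICATION_CREDENTIALS).
func newUptimeClient(ctx context.Context, apiKey string) (*monitoring.UptimeCheckClient, error) {
	if apiKey != "" {
		return monitoring.NewUptimeCheckClient(ctx, option.WithAPIKey(apiKey))
	}
	// Propagating the error instead of ignoring it would also avoid the
	// nil-pointer panic shown in the trace above.
	return monitoring.NewUptimeCheckClient(ctx)
}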

Can you please confirm whether Workload Identity is supported by the current version?

Current setup:

  • helm v3.4.1
  • Kubernetes 1.19.9-gke.1400 (rapid channel)
  • ingressmonitorcontroller Helm Chart v2.0.15
  • values.yaml:
watchNamespaces: ""

nameOverride: "imc"

deployment:
  annotations:
    secret.reloader.stakater.com/reload: "imc"
  replicas: 1
  operatorName: {{ .Values.operator_name | quote }}
  logLevel: {{ .Values.log_level | quote }}
  logFormat: {{ .Values.log_format | quote }}

rbac:
  create: true
  serviceAccount:
    # Unlike the other charts, the service account must be created here
    # to support GKE Workload Identity, as IMC must use the Stackdriver uptime check API
    create: true
    name: {{ .Values.service_account_name | quote }}
    annotations:
      # Bind this Kubernetes service account to the Google service account that has the `roles/iam.workloadIdentityUser` IAM role binding
      iam.gke.io/gcp-service-account: {{ .Values.google_service_account_email | quote }}

secret:
  data:
    config.yaml: |-
      providers:
      # @see https://github.com/stakater/IngressMonitorController/blob/master/examples/configs/test-config-gcloud.yaml
        - name: gcloud
          gcloudConfig:
            projectId: {{ .Values.google_project_id | quote }}
          # apiKey is not needed as we leverage Workload Identity
          #apiKey:

      enableMonitorDeletion: {{ .Values.enable_monitor_deletion }}
      monitorNameTemplate: "{{`{{.Namespace}}`}}-{{`{{.IngressName}}`}}"
      resyncPeriod: 0
      creationDelay: 0
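
For reference, the backtick escaping in monitorNameTemplate is Helm's way of emitting literal braces, so IMC should receive {{.Namespace}}-{{.IngressName}} verbatim; judging by the "Failed to parse MonitorNameTemplate" log line above, it is then parsed as a Go text/template. A minimal sketch of that assumption (the struct fields are just the names used in the template; they are not confirmed against IMC's code):

package main

import (
	"bytes"
	"fmt"
	"text/template"
)

func main() {
	// Assumption: IMC executes the template against an object exposing
	// fields such as Name, Namespace, and IngressName.
	tmpl, err := template.New("monitorName").Parse("{{.Namespace}}-{{.IngressName}}")
	if err != nil {
		panic(err)
	}
	var buf bytes.Buffer
	data := struct{ Namespace, IngressName string }{"monitoring", "my-app"}
	if err := tmpl.Execute(&buf, data); err != nil {
		panic(err)
	}
	fmt.Println(buf.String()) // monitoring-my-app
}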

Please let me know if I missed any crucial information, as I am a newcomer to this project.

Edit: note that when I provide the apiKey, it works like a charm.

@nicolas-g

Do you have any updates on this? It would be great to support Workload Identity, as it is the most secure authentication method.

@karl-johan-grahn added the kind/enhancement and kind/help wanted labels on Mar 15, 2023
@Logrythmik

Another upvote.

@github-actions

This issue is stale because it has been open for 60 days with no activity.

github-actions bot added the stale label on Jun 11, 2023
@nicolas-g

Any updates?

github-actions bot removed the stale label on Jun 17, 2023
@github-actions

This issue is stale because it has been open for 60 days with no activity.

@github-actions

This issue is stale because it has been open for 60 days with no activity.

github-actions bot added the stale label on Oct 30, 2023
@nicolas-g

Please don't close it.

github-actions bot removed the stale label on Oct 31, 2023