
Changing nginx.ingress.kubernetes.io/auth-tls-match-cn value is ignored #10915

Open
martinbfrey opened this issue Jan 25, 2024 · 11 comments · May be fixed by #11173
Labels
help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. kind/bug Categorizes issue or PR as related to a bug. priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. triage/accepted Indicates an issue or PR is ready to be actively worked on. triage/needs-information Indicates an issue needs more information in order to work on it.

Comments

@martinbfrey

What happened:

We run an ingress with client certificate check.

annotations:
    nginx.ingress.kubernetes.io/auth-tls-secret: rcm-planner-backend-int/mud-ca-cert
    nginx.ingress.kubernetes.io/auth-tls-verify-client: 'on'
    nginx.ingress.kubernetes.io/auth-tls-verify-depth: '1'
    nginx.ingress.kubernetes.io/auth-tls-match-cn: 'CN=api-haproxy-rcm-planner-int'

Clients with a certificate matching the CN can access the ingress; clients with another CN or no certificate can't access - as expected.
If we change the value of nginx.ingress.kubernetes.io/auth-tls-match-cn, clients whose CN no longer matches can still access, while clients with the new, matching CN are denied. It looks like the ingress is ignoring changes to the nginx.ingress.kubernetes.io/auth-tls-match-cn value. After a controller restart, the ingress works as expected.
The changed annotations look like:

annotations:
    nginx.ingress.kubernetes.io/auth-tls-secret: rcm-planner-backend-int/mud-ca-cert
    nginx.ingress.kubernetes.io/auth-tls-verify-client: 'on'
    nginx.ingress.kubernetes.io/auth-tls-verify-depth: '1'
    nginx.ingress.kubernetes.io/auth-tls-match-cn: 'CN=NOMATCHapi-haproxy-rcm-planner-int'

What you expected to happen:

Changes of nginx.ingress.kubernetes.io/auth-tls-match-cn are used by the ingress without controller restart.

NGINX Ingress controller version (exec into the pod and run nginx-ingress-controller --version.):

-------------------------------------------------------------------------------
NGINX Ingress controller
  Release:       v1.9.5
  Build:         f503c4bb5fa7d857ad29e94970eb550c2bc00b7c
  Repository:    https://github.com/kubernetes/ingress-nginx
  nginx version: nginx/1.21.6

-------------------------------------------------------------------------------

Kubernetes version (use kubectl version):

Client Version: v1.28.6
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.28.6

Environment:

  • Cloud provider or hardware configuration: kubeadm based vanilla Kubernetes on x86_64 virtual machines, using MetalLB loadbalancer
  • OS (e.g. from /etc/os-release):
NAME="Oracle Linux Server"
VERSION="8.9"
ID="ol"
ID_LIKE="fedora"
VARIANT="Server"
VARIANT_ID="server"
VERSION_ID="8.9"
PLATFORM_ID="platform:el8"
PRETTY_NAME="Oracle Linux Server 8.9"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:oracle:linux:8:9:server"
HOME_URL="https://linux.oracle.com/"
BUG_REPORT_URL="https://github.com/oracle/oracle-linux"

ORACLE_BUGZILLA_PRODUCT="Oracle Linux 8"
ORACLE_BUGZILLA_PRODUCT_VERSION=8.9
ORACLE_SUPPORT_PRODUCT="Oracle Linux"
ORACLE_SUPPORT_PRODUCT_VERSION=8.9
  • Kernel (e.g. uname -a):
Linux kint-m01 4.18.0-513.9.1.el8_9.x86_64 #1 SMP Thu Nov 30 15:31:16 PST 2023 x86_64 x86_64 x86_64 GNU/Linux
  • Install tools:
    • kubeadm
  • Basic cluster related info:
    • kubectl version
Client Version: v1.28.6
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.28.6
  • kubectl get nodes -o wide
NAME       STATUS   ROLES           AGE    VERSION   INTERNAL-IP      EXTERNAL-IP      OS-IMAGE                  KERNEL-VERSION                CONTAINER-RUNTIME
kint-e01   Ready    <none>          630d   v1.28.6   10.162.107.158   10.162.107.158   Oracle Linux Server 8.9   4.18.0-513.9.1.el8_9.x86_64   cri-o://1.28.3
kint-e02   Ready    <none>          42d    v1.28.6   10.162.107.58    10.162.107.58    Oracle Linux Server 8.9   4.18.0-513.9.1.el8_9.x86_64   cri-o://1.28.3
kint-i01   Ready    <none>          687d   v1.28.6   172.17.114.212   172.17.114.212   Oracle Linux Server 8.9   4.18.0-513.9.1.el8_9.x86_64   cri-o://1.28.3
kint-i02   Ready    <none>          687d   v1.28.6   172.17.114.213   172.17.114.213   Oracle Linux Server 8.9   4.18.0-513.9.1.el8_9.x86_64   cri-o://1.28.3
kint-m01   Ready    control-plane   687d   v1.28.6   172.17.114.209   172.17.114.209   Oracle Linux Server 8.9   4.18.0-513.9.1.el8_9.x86_64   cri-o://1.28.3
kint-m02   Ready    control-plane   687d   v1.28.6   172.17.114.210   172.17.114.210   Oracle Linux Server 8.9   4.18.0-513.9.1.el8_9.x86_64   cri-o://1.28.3
kint-m03   Ready    control-plane   687d   v1.28.6   172.17.114.211   172.17.114.211   Oracle Linux Server 8.9   4.18.0-513.9.1.el8_9.x86_64   cri-o://1.28.3
kint-s01   Ready    <none>          687d   v1.28.6   172.17.114.214   172.17.114.214   Oracle Linux Server 8.9   4.18.0-513.9.1.el8_9.x86_64   cri-o://1.28.3
kint-w01   Ready    <none>          687d   v1.28.6   172.17.114.216   172.17.114.216   Oracle Linux Server 8.9   4.18.0-513.9.1.el8_9.x86_64   cri-o://1.28.3
kint-w02   Ready    <none>          687d   v1.28.6   172.17.114.217   172.17.114.217   Oracle Linux Server 8.9   4.18.0-513.9.1.el8_9.x86_64   cri-o://1.28.3
kint-w03   Ready    <none>          687d   v1.28.6   172.17.114.218   172.17.114.218   Oracle Linux Server 8.9   4.18.0-513.9.1.el8_9.x86_64   cri-o://1.28.3
kint-w04   Ready    <none>          687d   v1.28.6   172.17.114.219   172.17.114.219   Oracle Linux Server 8.9   4.18.0-513.9.1.el8_9.x86_64   cri-o://1.28.3
kint-w05   Ready    <none>          86d    v1.28.6   172.17.114.122   172.17.114.122   Oracle Linux Server 8.9   4.18.0-513.9.1.el8_9.x86_64   cri-o://1.28.3
  • How was the ingress-nginx-controller installed:
    • If helm was used then please show output of helm ls -A | grep -i ingress
ingress-nginx                   zkezone-nginx   5               2024-01-19 13:04:59.692826584 +0100 CET deployed        ingress-nginx-4.9.0             1.9.5
  • If helm was used then please show output of helm -n <ingresscontrollernamespace> get values <helmreleasename>
USER-SUPPLIED VALUES:
controller:
  admissionWebhooks:
    patch:
      tolerations:
      - effect: NoSchedule
        key: win.sbb.ch/external-worker
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
            - kint-e01
  allowSnippetAnnotations: true
  config:
    ssl-ciphers: ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA256:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA
    ssl-protocols: TLSv1.2 TLSv1.3
    worker-processes: 4
  ingressClass: zkezone
  ingressClassByName: true
  ingressClassResource:
    controllerValue: k8s.io/ingress-zkezone
    default: false
    name: zkezone
  metrics:
    enabled: true
  podLabels:
    prometheus_monitoring: metricport
  replicaCount: 1
  service:
    externalIPs:
    - 10.162.107.158
    type: ClusterIP
  tolerations:
  - effect: NoSchedule
    key: win.sbb.ch/external-worker
  updateStrategy:
    rollingUpdate:
      maxSurge: 50%
      maxUnavailable: 50%
    type: RollingUpdate
  watchIngressWithoutClass: false
fullnameOverride: zkezone
  • if you have more than one instance of the ingress-nginx-controller installed in the same cluster, please provide details for all the instances
    All ingresses have the same version and are installed with helm
    • Default Ingress:
USER-SUPPLIED VALUES:
controller:
  admissionWebhooks:
    patch:
      tolerations:
      - effect: NoSchedule
        key: win.sbb.ch/infrastructure-worker
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: win.sbb.ch/infrastructure-worker
            operator: Exists
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app.kubernetes.io/instance
            operator: In
            values:
            - ingress-nginx
          - key: app.kubernetes.io/component
            operator: In
            values:
            - controller
        topologyKey: kubernetes.io/hostname
  allowSnippetAnnotations: true
  config:
    ssl-ciphers: ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA256:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA
    ssl-protocols: TLSv1.2 TLSv1.3
    worker-processes: 3
  ingressClass: nginx
  ingressClassByName: true
  ingressClassResource:
    controllerValue: k8s.io/ingress-nginx
    default: true
    name: nginx
  metrics:
    enabled: true
  podLabels:
    prometheus_monitoring: metricport
  replicaCount: 2
  service:
    loadBalancerIP: 172.17.114.245
    type: LoadBalancer
  tolerations:
  - effect: NoSchedule
    key: win.sbb.ch/infrastructure-worker
  updateStrategy:
    rollingUpdate:
      maxSurge: 50%
      maxUnavailable: 50%
    type: RollingUpdate
  watchIngressWithoutClass: true
    • zke-zon2 Ingress:
USER-SUPPLIED VALUES:
controller:
  admissionWebhooks:
    patch:
      tolerations:
      - effect: NoSchedule
        key: win.sbb.ch/external-worker
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
            - kint-e02
  allowSnippetAnnotations: true
  config:
    ssl-ciphers: ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA256:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA
    ssl-protocols: TLSv1.2 TLSv1.3
    worker-processes: 4
  ingressClass: zkezon2
  ingressClassByName: true
  ingressClassResource:
    controllerValue: k8s.io/ingress-zkezon2
    default: false
    name: zkezon2
  metrics:
    enabled: true
  podLabels:
    prometheus_monitoring: metricport
  replicaCount: 1
  service:
    externalIPs:
    - 10.162.107.58
    type: ClusterIP
  tolerations:
  - effect: NoSchedule
    key: win.sbb.ch/external-worker
  updateStrategy:
    rollingUpdate:
      maxSurge: 50%
      maxUnavailable: 50%
    type: RollingUpdate
  watchIngressWithoutClass: false
fullnameOverride: zkezon2
  • Current State of the controller:
    • kubectl describe ingressclasses
Name:         nginx
Labels:       app.kubernetes.io/component=controller
              app.kubernetes.io/instance=ingress-nginx
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=ingress-nginx
              app.kubernetes.io/part-of=ingress-nginx
              app.kubernetes.io/version=1.9.5
              helm.sh/chart=ingress-nginx-4.9.0
Annotations:  ingressclass.kubernetes.io/is-default-class: true
              meta.helm.sh/release-name: ingress-nginx
              meta.helm.sh/release-namespace: ingress-nginx
Controller:   k8s.io/ingress-nginx
Events:       <none>


Name:         zkezon2
Labels:       app.kubernetes.io/component=controller
              app.kubernetes.io/instance=ingress-nginx
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=ingress-nginx
              app.kubernetes.io/part-of=ingress-nginx
              app.kubernetes.io/version=1.9.5
              helm.sh/chart=ingress-nginx-4.9.0
Annotations:  meta.helm.sh/release-name: ingress-nginx
              meta.helm.sh/release-namespace: zkezone2-nginx
Controller:   k8s.io/ingress-zkezon2
Events:       <none>


Name:         zkezone
Labels:       app.kubernetes.io/component=controller
              app.kubernetes.io/instance=ingress-nginx
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=ingress-nginx
              app.kubernetes.io/part-of=ingress-nginx
              app.kubernetes.io/version=1.9.5
              helm.sh/chart=ingress-nginx-4.9.0
Annotations:  meta.helm.sh/release-name: ingress-nginx
              meta.helm.sh/release-namespace: zkezone-nginx
Controller:   k8s.io/ingress-zkezone
Events:       <none>
  • kubectl -n <ingresscontrollernamespace> get all -A -o wide
  • kubectl -n <ingresscontrollernamespace> describe po <ingresscontrollerpodname>
  • kubectl -n <ingresscontrollernamespace> describe svc <ingresscontrollerservicename>
  • Current state of ingress object, if applicable:
    • kubectl -n <appnamespace> get all,ing -o wide
NAME                                      READY   STATUS    RESTARTS   AGE   IP              NODE       NOMINATED NODE   READINESS GATES
pod/zkezone-controller-857d75cdb5-mhkt6   1/1     Running   0          39m   172.17.230.62   kint-e01   <none>           <none>

NAME                                   TYPE        CLUSTER-IP      EXTERNAL-IP      PORT(S)          AGE    SELECTOR
service/zkezone-controller             ClusterIP   172.17.46.57    10.162.107.158   80/TCP,443/TCP   311d   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
service/zkezone-controller-admission   ClusterIP   172.17.46.162   <none>           443/TCP          311d   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
service/zkezone-controller-metrics     ClusterIP   172.17.47.59    <none>           10254/TCP        311d   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx

NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE    CONTAINERS   IMAGES                                                                                                                    SELECTOR
deployment.apps/zkezone-controller   1/1     1            1           311d   controller   registry.k8s.io/ingress-nginx/controller:v1.9.5@sha256:b3aba22b1da80e7acfc52b115cae1d4c687172cbf2b742d5b502419c25ff340e   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx

NAME                                            DESIRED   CURRENT   READY   AGE     CONTAINERS   IMAGES                                                                                                                    SELECTOR
replicaset.apps/zkezone-controller-5d4b6b89d6   0         0         0       311d    controller   k8s.gcr.io/ingress-nginx/controller:v1.1.0@sha256:f766669fdcf3dc26347ed273a55e754b427eb4411ee075a53f30718b4499076a        app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=5d4b6b89d6
replicaset.apps/zkezone-controller-64446b4f46   0         0         0       311d    controller   k8s.gcr.io/ingress-nginx/controller:v1.1.0@sha256:f766669fdcf3dc26347ed273a55e754b427eb4411ee075a53f30718b4499076a        app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=64446b4f46
replicaset.apps/zkezone-controller-7f9c8d4f5d   0         0         0       42d     controller   registry.k8s.io/ingress-nginx/controller:v1.5.1@sha256:4ba73c697770664c1e00e9f968de14e08f606ff961c76e5d7033a4a9c593c629   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=7f9c8d4f5d
replicaset.apps/zkezone-controller-84449496db   0         0         0       6d20h   controller   registry.k8s.io/ingress-nginx/controller:v1.6.4@sha256:15be4666c53052484dd2992efacf2f50ea77a78ae8aa21ccd91af6baaa7ea22f   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=84449496db
replicaset.apps/zkezone-controller-857d75cdb5   1         1         1       5d23h   controller   registry.k8s.io/ingress-nginx/controller:v1.9.5@sha256:b3aba22b1da80e7acfc52b115cae1d4c687172cbf2b742d5b502419c25ff340e   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=857d75cdb5
  • kubectl -n <appnamespace> describe ing <ingressname>
Name:             rcm-planner-backend-1
Labels:           app=rcm-planner-backend
                  win.sbb.ch/argo-appname=rcm-planner-backend-test
Namespace:        rcm-planner-backend-test
Address:          172.17.46.57
Ingress Class:    zkezone
Default backend:  <default>
TLS:
  api-tls-1 terminates api-rcm-planner-test-1.mud.sbb.ch
Rules:
  Host                               Path  Backends
  ----                               ----  --------
  api-rcm-planner-test-1.mud.sbb.ch  
                                     /health   rcm-planner-backend-health:8081 (172.17.235.193:8081)
                                     /         rcm-planner-backend:8080 (172.17.235.193:8080)
Annotations:                         nginx.ingress.kubernetes.io/auth-tls-match-cn: CN=api-haproxy-rcm-planner-test
                                     nginx.ingress.kubernetes.io/auth-tls-secret: rcm-planner-backend-test/mud-ca-cert
                                     nginx.ingress.kubernetes.io/auth-tls-verify-client: on
                                     nginx.ingress.kubernetes.io/auth-tls-verify-depth: 1
Events:
  Type    Reason  Age                From                      Message
  ----    ------  ----               ----                      -------
  Normal  Sync    47m (x8 over 5d)   nginx-ingress-controller  Scheduled for sync
  Normal  Sync    42m (x2 over 44m)  nginx-ingress-controller  Scheduled for sync
  Normal  Sync    41m                nginx-ingress-controller  Scheduled for sync
  • If applicable, then your complete and exact curl/grpcurl command (redacted if required) and the response to the curl/grpcurl command with the -v flag
    Clients get a 403 HTTP code

  • Others:

    • Any other related information:
      When applying the change of the nginx.ingress.kubernetes.io/auth-tls-match-cn value, we observe the following controller log. The log covers a section where we changed the value from an invalid CN to the valid one. The clients still get a 403 response even after the reload. After restarting the controller, we see only 200 responses.
2024-01-25T11:46:33.545829457+01:00 10.162.107.158 - - [25/Jan/2024:10:46:33 +0000] "GET /health/ HTTP/1.0" 200 49 "-" "-" 85 0.003 [rcm-planner-backend-test-rcm-planner-backend-health-8081] [] 172.17.235.193:8081 60 0.003 200 5202192cf926110ba82ba54d7ec2140c
2024-01-25T11:46:35.272618809+01:00 10.162.107.158 - - [25/Jan/2024:10:46:35 +0000] "GET /health/ HTTP/1.0" 200 49 "-" "-" 85 0.002 [rcm-planner-backend-test-rcm-planner-backend-health-8081] [] 172.17.235.193:8081 60 0.002 200 bcd8dc4e532280f5a647ceaa52ff2413
2024-01-25T11:46:35.370921478+01:00 10.162.107.158 - - [25/Jan/2024:10:46:35 +0000] "GET /health/ HTTP/1.0" 403 31 "-" "-" 84 0.000 [-] [] - - - - 5e26a2ff61e6eb9be2cc16e9d420af80
2024-01-25T11:46:35.471442805+01:00 10.162.107.158 - - [25/Jan/2024:10:46:35 +0000] "GET /health/ HTTP/1.0" 403 31 "-" "-" 84 0.000 [-] [] - - - - 539b356ccfdcaeec3c70e3e1acbf54aa
2024-01-25T11:46:35.562090216+01:00 10.162.107.158 - - [25/Jan/2024:10:46:35 +0000] "GET /health/ HTTP/1.0" 200 49 "-" "-" 85 0.002 [rcm-planner-backend-test-rcm-planner-backend-health-8081] [] 172.17.235.193:8081 60 0.001 200 bd5698cd0b87794e932cd3ca547a2402
2024-01-25T11:46:37.061495284+01:00 I0125 10:46:37.061411       2 admission.go:149] processed ingress via admission controller {testedIngressLength:3 testedIngressTime:0.048s renderingIngressLength:3 renderingIngressTime:0.001s admissionTime:43.7kBs testedConfigurationSize:0.049}
2024-01-25T11:46:37.061495284+01:00 I0125 10:46:37.061440       2 main.go:107] "successfully validated configuration, accepting" ingress="rcm-planner-backend-int/rcm-planner-backend-1"
2024-01-25T11:46:37.067445757+01:00 I0125 10:46:37.067363       2 event.go:298] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"rcm-planner-backend-int", Name:"rcm-planner-backend-1", UID:"d40ac8ba-727e-4f6e-a8d3-1f810433a0e6", APIVersion:"networking.k8s.io/v1", ResourceVersion:"333519916", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
2024-01-25T11:46:37.289921287+01:00 10.162.107.158 - - [25/Jan/2024:10:46:37 +0000] "GET /health/ HTTP/1.0" 200 49 "-" "-" 85 0.002 [rcm-planner-backend-test-rcm-planner-backend-health-8081] [] 172.17.235.193:8081 60 0.001 200 98b18a4c01e2b4f55a6b933abf41bdb7
2024-01-25T11:46:37.385177321+01:00 10.162.107.158 - - [25/Jan/2024:10:46:37 +0000] "GET /health/ HTTP/1.0" 403 31 "-" "-" 84 0.000 [-] [] - - - - 0f1aee2886f9979c7aa3486f21856311
2024-01-25T11:46:37.486010730+01:00 10.162.107.158 - - [25/Jan/2024:10:46:37 +0000] "GET /health/ HTTP/1.0" 403 31 "-" "-" 84 0.000 [-] [] - - - - 2c38b854643beeb78415af20c7ca6f72
2024-01-25T11:46:37.579083317+01:00 10.162.107.158 - - [25/Jan/2024:10:46:37 +0000] "GET /health/ HTTP/1.0" 200 49 "-" "-" 85 0.001 [rcm-planner-backend-test-rcm-planner-backend-health-8081] [] 172.17.235.193:8081 60 0.002 200 9165a32925096b35074add6584ace962
2024-01-25T11:46:39.306439195+01:00 10.162.107.158 - - [25/Jan/2024:10:46:39 +0000] "GET /health/ HTTP/1.0" 200 49 "-" "-" 85 0.002 [rcm-planner-backend-test-rcm-planner-backend-health-8081] [] 172.17.235.193:8081 60 0.002 200 71222835bb2ca7276a26da0fc9917f63
2024-01-25T11:46:39.402701009+01:00 10.162.107.158 - - [25/Jan/2024:10:46:39 +0000] "GET /health/ HTTP/1.0" 403 31 "-" "-" 84 0.000 [-] [] - - - - 93fa490adbca2a14b346efa3050532f3
2024-01-25T11:46:39.502599651+01:00 10.162.107.158 - - [25/Jan/2024:10:46:39 +0000] "GET /health/ HTTP/1.0" 403 31 "-" "-" 84 0.000 [-] [] - - - - ead272c7895f7f12b35294588ac4aded
2024-01-25T11:46:39.598526523+01:00 10.162.107.158 - - [25/Jan/2024:10:46:39 +0000] "GET /health/ HTTP/1.0" 200 49 "-" "-" 85 0.002 [rcm-planner-backend-test-rcm-planner-backend-health-8081] [] 172.17.235.193:8081 60 0.002 200 25c83424ef1178d99ab1019ad0bedcbd
2024-01-25T11:46:41.323488518+01:00 10.162.107.158 - - [25/Jan/2024:10:46:41 +0000] "GET /health/ HTTP/1.0" 200 49 "-" "-" 85 0.002 [rcm-planner-backend-test-rcm-planner-backend-health-8081] [] 172.17.235.193:8081 60 0.002 200 cf105869e968dabe93cd9a3531852bec

How to reproduce this issue:

  • Install an ingress
  • Activate the client certificate check using the annotations as described
  • Use a client with valid CN
  • Change the value of nginx.ingress.kubernetes.io/auth-tls-match-cn to something different than the valid CN
  • Check if the client can still access the ingress.

Anything else we need to know:
No.

@martinbfrey martinbfrey added the kind/bug Categorizes issue or PR as related to a bug. label Jan 25, 2024
@k8s-ci-robot k8s-ci-robot added needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. needs-priority labels Jan 25, 2024
@longwuyuan
Contributor

  • This issue should ideally have the triage/accepted label, but it's not easy for everyone to reproduce due to the certs involved
  • If you add a step-by-step procedure that includes client and server cert generation or usage (say, using Let's Encrypt certs manually created with certbot and already available to the reader), it will help reproduce the issue and complete triage
  • The critical information required here is twofold. First, the diff of the nginx.conf inside the controller pod after you change the value of the CN annotation. Second, a clear copy/paste comparison of the controller logs, both pre and post change.
  • If the relevant server block(s) and/or location blocks in the nginx.conf inside the controller pod do see the change, then it will be some stale cache

@longwuyuan
Contributor

/triage needs-information

@k8s-ci-robot k8s-ci-robot added the triage/needs-information Indicates an issue needs more information in order to work on it. label Jan 27, 2024
@martinbfrey
Author

How to reproduce

Create minikube cluster

  • install / create minikube cluster

  • start cluster with minikube start

Install nginx ingress in minikube

  • minikube addons enable ingress

  • verify that pods are running with minikube kubectl -- get pods -n ingress-nginx

As of this writing, minikube installs ingress-nginx 1.9.4. In my production cluster we are using 1.9.5; the behaviour is the same, however.

Install sample app and create ingress

  • Deployment: minikube kubectl -- create deployment web --image=gcr.io/google-samples/hello-app:1.0

  • Create a file service.yml with the following contents:

    apiVersion: v1
    kind: Service
    metadata:
      name: web
    spec:
      ports:
        - port: 8080
          protocol: TCP
      selector:
        app: web
    

    and apply it with minikube kubectl -- apply -f service.yml

  • Create a CA with openssl genrsa -des3 -out ca.key 2048

  • Create a root certificate with openssl req -x509 -new -nodes -key ca.key -sha256 -days 1825 -out ca.pem

  • Create a secret with the ca.pem in it: minikube kubectl -- create secret generic ca --from-file=ca.crt=./ca.pem

  • Create a key for the server certificate: openssl genrsa -out server.key 2048

  • Create a CSR for the server certificate: openssl req -new -key server.key -out server.csr. Answer the question for the common name with hello-world.info.

  • Create an extension file server.ext with the contents:

    authorityKeyIdentifier=keyid,issuer
    basicConstraints=CA:FALSE
    keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
    subjectAltName = @alt_names
    
    [alt_names]
    DNS.1 = hello-world.info
    
  • Sign the CSR with openssl x509 -req -in server.csr -CA ca.pem -CAkey ca.key -CAcreateserial -out server.pem -days 825 -sha256 -extfile server.ext

  • Create the server secret with minikube kubectl -- create secret tls server --cert=./server.pem --key=./server.key

  • Create the client certificate key with openssl genrsa -out client.key 4096

  • Generate the client CSR with openssl req -new -key client.key -out client.csr -sha256 -subj '/CN=testclient'.

  • Create an extension file client.ext with the contents:

    [client]
    basicConstraints = CA:FALSE
    nsCertType = client, email
    nsComment = "Local Test Client Certificate"
    subjectKeyIdentifier = hash
    authorityKeyIdentifier = keyid,issuer
    keyUsage = critical, nonRepudiation, digitalSignature, keyEncipherment
    extendedKeyUsage = clientAuth, emailProtection
    
  • Sign the CSR with openssl x509 -req -in client.csr -CA ca.pem -CAkey ca.key -CAcreateserial -out client.pem -days 825 -sha256 -extfile client.ext -extensions client

  • Ingress:
    File ingress.yml:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: example-ingress
      annotations:
        nginx.ingress.kubernetes.io/rewrite-target: /$1
        nginx.ingress.kubernetes.io/auth-tls-match-cn: CN=testclient
        nginx.ingress.kubernetes.io/auth-tls-secret: default/ca
        nginx.ingress.kubernetes.io/auth-tls-verify-client: 'on'
        nginx.ingress.kubernetes.io/auth-tls-verify-depth: '1'
    spec:
      tls:
        - hosts:
            - hello-world.info
          secretName: server
      rules:
        - host: hello-world.info
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: web
                    port:
                      number: 8080
    
  • Test with curl --resolve "hello-world.info:443:$( minikube ip )" --cacert ca.pem --cert client.pem --key client.key -i https://hello-world.info

  • Create a second client CSR with: openssl req -new -key client.key -out falseclient.csr -sha256 -subj '/CN=falseclient'.

  • And sign it too: openssl x509 -req -in falseclient.csr -CA ca.pem -CAkey ca.key -CAcreateserial -out falseclient.pem -days 825 -sha256 -extfile client.ext -extensions client

  • Test with curl --resolve "hello-world.info:443:$( minikube ip )" --cacert ca.pem --cert falseclient.pem --key client.key -i https://hello-world.info and check for client certificate unauthorized.

  • Edit the ingress and change the value of the annotation nginx.ingress.kubernetes.io/auth-tls-match-cn to CN=falseclient.

  • Test again with the two curl commands. Expectation: falseclient.pem now works and client.pem fails. However, this is not the case.

  • Restart the ingress controller by deleting its pod

  • Retry with the two curl commands. Now client.pem fails and falseclient.pem succeeds

@martinbfrey
Author

There is no diff in nginx.conf before and after changing the value of nginx.ingress.kubernetes.io/auth-tls-match-cn. In fact, after changing the value the nginx.conf still contains the block:

        ## start server hello-world.info
        server {
                server_name hello-world.info ;

                listen 80  ;
                listen 443  ssl http2 ;

                set $proxy_upstream_name "-";

                if ( $ssl_client_s_dn !~ CN=testclient ) {
                        return 403 "client certificate unauthorized";
                }

The logs before changing the value and after changing the value, including a request with curl, are attached.

Before changing the value:
nginxlog-1.txt

After changing the value:
nginxlog-2.txt

And here is the resulting configuration (after changing the value; please note that the CN value is still testclient and not falseclient):
nginx-2.txt

@longwuyuan
Contributor

@martinbfrey this is fantastic information

/tiage accepted
/priority important-longterm

Since you posted that the changed CN is not reflected in the nginx.conf until a restart of the pod, I suspect the same thing would happen if a vanilla, non-Kubernetes nginx reverse proxy were in place.

However, this means a deep-dive discussion has to occur with an nginx expert and a developer on this project, ideally with your involvement. Our community meeting schedule can be seen here: https://github.com/kubernetes/community/tree/master/sig-network#meetings

I request you join a meeting to make some progress on this.

cc @rikatz @tao12345666333 @cpanato @strongjz @Gacko

@k8s-ci-robot k8s-ci-robot added priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. and removed needs-priority labels Feb 6, 2024
@longwuyuan
Contributor

/triage accepted

@k8s-ci-robot k8s-ci-robot added triage/accepted Indicates an issue or PR is ready to be actively worked on. and removed needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Feb 6, 2024
@longwuyuan
Contributor

/help

@k8s-ci-robot
Contributor

@longwuyuan:
This request has been marked as needing help from a contributor.

Guidelines

Please ensure that the issue body includes answers to the following questions:

  • Why are we solving this issue?
  • To address this issue, are there any code changes? If there are code changes, what needs to be done in the code and what places can the assignee treat as reference points?
  • Does this issue have zero to low barrier of entry?
  • How can the assignee reach out to you for help?

For more details on the requirements of such an issue, please see here and ensure that they are met.

If this request no longer meets these requirements, the label can be removed
by commenting with the /remove-help command.

In response to this:

/help

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added the help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. label Feb 6, 2024
@longwuyuan
Contributor

/assign

@martinbfrey
Author

martinbfrey commented Feb 6, 2024

I think the Equal check of the authtls annotation is missing a comparison for MatchCN.
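If that diagnosis is right, the failure mode can be sketched in a few lines of Go. The struct and Equal method below are illustrative stand-ins (field names are modeled on the annotation keys, not copied from the ingress-nginx source): a hand-written Equal that skips one field makes change detection report two differing configurations as equal, so the controller sees "no change" and never regenerates nginx.conf.

```go
package main

import "fmt"

// AuthTLSConfig is an illustrative stand-in for the parsed auth-tls
// annotation config; field names mirror the annotation keys.
type AuthTLSConfig struct {
	Secret       string
	VerifyClient string
	VerifyDepth  int
	MatchCN      string
}

// Equal mimics a hand-written comparison that forgot one field:
// MatchCN is never compared, so changes to it go unnoticed.
func (c *AuthTLSConfig) Equal(o *AuthTLSConfig) bool {
	if c == o {
		return true
	}
	if c == nil || o == nil {
		return false
	}
	return c.Secret == o.Secret &&
		c.VerifyClient == o.VerifyClient &&
		c.VerifyDepth == o.VerifyDepth
	// BUG: missing `&& c.MatchCN == o.MatchCN`
}

func main() {
	old := &AuthTLSConfig{Secret: "ns/ca", VerifyClient: "on", VerifyDepth: 1, MatchCN: "CN=testclient"}
	upd := &AuthTLSConfig{Secret: "ns/ca", VerifyClient: "on", VerifyDepth: 1, MatchCN: "CN=falseclient"}

	// The buggy Equal reports the configs as identical, so a controller
	// relying on it would skip regenerating and reloading nginx.conf.
	fmt.Println(old.Equal(upd)) // prints "true" despite the differing MatchCN
}
```

If this is indeed the cause, the fix would amount to adding the missing MatchCN comparison so that changing the annotation invalidates the cached configuration and triggers a reload.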

@Gacko
Member

Gacko commented Mar 2, 2024

/assign
