
CIS-1.6-k3s benchmark does not match K3s documentation, and attempting to remediate causes K3s to fail to start #1501

Open
elvinasp opened this issue Sep 25, 2023 · 0 comments


Overview

The cfg/cis-1.6-k3s benchmark has some failing tests that either do not match the K3s CIS hardening reference or check entirely incorrect fields.

Examples:

  • A check that does not match its description; trying to make the check pass causes K3s to fail.

[FAIL] 1.2.19 Ensure that the --insecure-port argument is set to 0 (Automated)
The Rancher K3s documentation instead states:
1.2.19 Ensure that the --audit-log-path argument is set (Automated)
If an attempt is made to satisfy cfg/cis-1.6-k3s, i.e. the option is added to the /etc/rancher/k3s/config.yaml file:

protect-kernel-defaults: true
kube-apiserver-arg:
  - 'request-timeout=60s'
  - 'service-account-lookup=true'
  - 'insecure-port=0'
kubelet-arg:...

K3s fails to start with:
Sep 25 06:36:49 meta-cx8gz k3s[10499]: Error: unknown flag: --insecure-port
Sep 25 06:36:49 meta-cx8gz k3s[10499]: time="2023-09-25T06:36:49Z" level=fatal msg="apiserver exited: unknown flag: --insecure-port"
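This failure is expected: the --insecure-port flag was removed from kube-apiserver in Kubernetes 1.24, which is the version running here, so the benchmark's remediation can never be applied. A config.yaml that follows the documented K3s hardening guide while dropping the unsupported flag might look like the sketch below (only the options already shown above are kept; the request-timeout value is the one from the hardening guide):

```yaml
# /etc/rancher/k3s/config.yaml -- sketch, without the removed insecure-port flag
protect-kernel-defaults: true
kube-apiserver-arg:
  - 'request-timeout=300s'        # value recommended by the K3s hardening guide
  - 'service-account-lookup=true'
```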

  • A check that does not find the correct setting, although it is present.

The check number (1.2.26) also does not match the K3s documentation (1.2.25), which makes cross-referencing hard. In addition, the check's value of 60 is smaller than the 300s in the K3s documentation (https://docs.k3s.io/security/hardening-guide#configuration-for-kubernetes-components). The check uses the lte operator, which I doubt works against the string '60s', yet K3s requires the unit "s" to be specified, otherwise it fails to start.

[FAIL] 1.2.26 Ensure that the --request-timeout argument is set as appropriate (Automated)
Sep 25 06:09:37 meta-cx8gz k3s[5426]: time="2023-09-25T06:09:37Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=60s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 
--service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
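Since the log above shows --request-timeout=60s is in fact set, a check definition that asserts the flag is present (rather than numerically comparing the unit-suffixed string with lte) would avoid this false FAIL. A sketch in kube-bench's cfg YAML format follows; the audit command and remediation text are my assumptions, not the current benchmark content:

```yaml
# Sketch of a request-timeout check that tolerates the "s" unit suffix.
- id: 1.2.25   # number aligned with the K3s documentation
  text: "Ensure that the --request-timeout argument is set as appropriate (Automated)"
  audit: "journalctl -u k3s | grep 'Running kube-apiserver' | tail -n1"
  tests:
    test_items:
      - flag: "--request-timeout"
        set: true
  remediation: |
    Add 'request-timeout=300s' to kube-apiserver-arg in /etc/rancher/k3s/config.yaml.
  scored: true
```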

How did you run kube-bench?

kube-bench run --benchmark=cis-1.6-k3s --noremediations --include-test-output

What happened?

kube-bench fails to validate a correctly set configuration parameter (--request-timeout=60s) and requires a non-existent parameter (insecure-port) to be set, which causes K3s to fail to start. The CIS-1.6-k3s benchmark check numbers do not match the actual K3s documentation. See the examples above.

What did you expect to happen:

The kube-bench K3s benchmark should properly validate the K3s configuration.

Environment

kube-bench version
0.6.17

k3s --version
k3s version v1.24.13+k3s1 (3f79b289)
go version go1.19.8

Running processes

meta-cx8gz:~ # ps -eaf | grep kube
viadmin 941 31638 0 Sep22 ? 00:00:49 /bin/alertmanager --config.file=/etc/alertmanager/config_out/alertmanager.env.yaml --storage.path=/alertmanager --data.retention=120h --cluster.listen-address= --web.listen-address=:9093 --web.external-url=http://kube-prometheus-stack-alertmanager.monitoring:9093 --web.route-prefix=/ --cluster.peer=alertmanager-kube-prometheus-stack-alertmanager-0.alertmanager-operated:9094 --cluster.reconnect-timeout=5m --web.config.file=/etc/alertmanager/web_config/web-config.yaml
nobody 1001 673 0 Sep22 ? 00:00:12 /bin/node_exporter --path.procfs=/host/proc --path.sysfs=/host/sys --path.rootfs=/host/root --path.udev.data=/host/root/run/udev/data --web.listen-address=[0.0.0.0]:9100 --collector.filesystem.mount-points-exclude=^/(dev|proc|sys|var/lib/docker/.+|var/lib/kubelet/.+)($|/) --collector.filesystem.fs-types-exclude=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
viadmin 1410 1256 0 Sep22 ? 00:17:38 /metrics-server --cert-dir=/tmp --secure-port=10250 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s
nobody 1438 1350 0 Sep22 ? 00:01:49 /bin/operator --kubelet-service=kube-system/kube-prometheus-stack-kubelet --localhost=127.0.0.1 --prometheus-config-reloader=quay.io/prometheus-operator/prometheus-config-reloader:v0.65.1 --config-reloader-cpu-request=200m --config-reloader-cpu-limit=200m --config-reloader-memory-request=50Mi --config-reloader-memory-limit=50Mi --thanos-default-base-image=quay.io/thanos/thanos:v0.31.0 --secret-field-selector=type!=kubernetes.io/dockercfg,type!=kubernetes.io/service-account-token,type!=helm.sh/release.v1 --web.enable-tls=true --web.cert-file=/cert/cert --web.key-file=/cert/key --web.listen-address=:10250 --web.tls-min-version=VersionTLS13
nobody 3033 2659 0 Sep22 ? 00:11:18 /kube-state-metrics --port=8080 --telemetry-port=8081 --port=8080 --resources=certificatesigningrequests,configmaps,cronjobs,daemonsets,deployments,endpoints,horizontalpodautoscalers,ingresses,jobs,leases,limitranges,mutatingwebhookconfigurations,namespaces,networkpolicies,nodes,persistentvolumeclaims,persistentvolumes,poddisruptionbudgets,pods,replicasets,replicationcontrollers,resourcequotas,secrets,services,statefulsets,storageclasses,validatingwebhookconfigurations,volumeattachments
root 3916 5580 0 06:48 pts/0 00:00:00 grep --color=auto kube
viadmin 17367 29658 0 06:41 ? 00:00:01 /app/cmd/cainjector/cainjector --v=2 --leader-election-namespace=kube-system
root 17743 26595 0 06:41 ? 00:00:01 /kube-vip manager
viadmin 18100 31437 0 06:41 ? 00:00:00 /app/cmd/controller/controller --v=2 --cluster-resource-namespace=cert-manager --leader-election-namespace=kube-system --acme-http01-solver-image=quay.io/jetstack/cert-manager-acmesolver:v1.11.1 --max-concurrent-challenges=60
65532 21412 21302 9 Sep22 ? 06:42:05 traefik traefik --entrypoints.metrics.address=:9100/tcp --entrypoints.ssh.address=:8022/tcp --entrypoints.traefik.address=:9000/tcp --entrypoints.web.address=:8000/tcp --entrypoints.websecure.address=:8443/tcp --api.dashboard=true --ping=true --metrics.prometheus=true --metrics.prometheus.entrypoint=metrics --providers.kubernetescrd --providers.kubernetesingress --providers.kubernetesingress.ingressendpoint.publishedservice=kube-system/traefik --entrypoints.web.http.redirections.entryPoint.to=:443 --entrypoints.web.http.redirections.entryPoint.scheme=https --entrypoints.websecure.http.tls=true --entrypoints.websecure.http.tls.certResolver=stepca --certificatesResolvers.stepca.acme.caserver=https://step-certificates.step-ca.svc.cluster.local/acme/acme/directory --certificatesResolvers.stepca.acme.email=admin --certificatesResolvers.stepca.acme.storage=/data/acme.json --certificatesResolvers.stepca.acme.tlsChallenge=true --certificatesResolvers.stepca.acme.certificatesduration=24
root 28097 27651 0 Sep22 ? 00:00:09 /csi-node-driver-registrar --v=2 --csi-address=/csi/csi.sock --kubelet-registration-path=/var/lib/kubelet/plugins/driver.longhorn.io/csi.sock

Configuration files

No specific config file used.

Anything else you would like to add:

These may not be the only problems, as I have not yet tried to fix the remaining FAIL items, so this issue might be updated.

...
