
[Bug] AL2023 self-managed nodegroups should support kubeletExtraConfig #7751

Closed
TiberiuGC opened this issue May 8, 2024 · 5 comments · Fixed by #7758

Comments

@TiberiuGC
Collaborator

Currently, there is a validation in place that rejects the use case described in the title, e.g.

Error: could not create cluster provider from options: kubeletExtraConfig is not supported for AmazonLinux2023 nodegroups (path=nodeGroups[0].kubeletExtraConfig)

For self-managed AL2023 nodes, we need to build the kubelet config file and pass it to the nodeadm process via nodeConfig.kubelet.flags, i.e. --config.
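
For reference, the kind of config that currently trips this validation looks roughly like the below (cluster metadata is illustrative; the clusterDNS value matches the example further down):

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: al2023-cluster
  region: us-west-2
nodeGroups:
  - name: ng-1
    amiFamily: AmazonLinux2023
    kubeletExtraConfig:
      clusterDNS:
      - 10.100.0.10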

@punkwalker
Contributor

punkwalker commented May 8, 2024

@TiberiuGC
nodeadm does not allow passing just the kubelet config via a file.
It has to be the entire NodeConfig, and it has to be passed to nodeadm via --config-source.
However, doing so would require executing nodeadm via user data.

EKS optimized AMIs ship systemd unit files for nodeadm (/etc/systemd/system/nodeadm-config.service & /etc/systemd/system/nodeadm-run.service), and these use the instance user data as the source of the NodeConfig.

Therefore, we should be building the NodeConfig in the user data.

@TiberiuGC
Collaborator Author

TiberiuGC commented May 8, 2024

@punkwalker thanks for the feedback!

Based on these references, https://awslabs.github.io/amazon-eks-ami/nodeadm/doc/api/#kubeletoptions and https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/, I was assuming we could pass something along the lines of the below in the user data 👇🏻

MIME-Version: 1.0
Content-Type: multipart/mixed; boundary=//

--//
Content-Type: application/node.eks.aws

apiVersion: node.eks.aws/v1alpha1
kind: NodeConfig
spec:
  cluster:
    ...
  kubelet:
    config:
      clusterDNS:
      - 10.100.0.10
    flags:
    ...
    - --config=config.yaml

--//--

What do you think about this? Will this not work?

@punkwalker
Contributor

punkwalker commented May 8, 2024

@TiberiuGC
I see, I think this should work. I will try to test it.
I just walked through the nodeadm code, though, and I think using --config in nodeConfig.spec.kubelet.flags may not work.

Nodeadm uses the values of nodeConfig.spec.kubelet.flags similarly to KUBELET_EXTRA_ARGS in the AL2 bootstrap.sh. The flags from the config are concatenated and stored as the environment variable "NODEADM_KUBELET_ARGS" in a file that is referenced by the kubelet unit file. Ref

And nodeadm also sets --config itself, which later gets added into NODEADM_KUBELET_ARGS. Ref

Setting the --config flag again might either overwrite the value of --config set by nodeadm, or it may break the kubelet due to a duplicate flag (not sure about the latter); a rough illustration of the resulting arguments is sketched below.
Even if the value set by nodeadm is overwritten by the eksctl-provided --config flag, we would have to write the entire kubelet config.json into that file, since the file --config points to is where the kubelet picks up its base/default config.
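
Roughly, the concatenated env line would end up looking like this (purely illustrative; the exact path nodeadm writes and the flag ordering are assumptions on my part):

NODEADM_KUBELET_ARGS="--config=config.yaml --config=/etc/kubernetes/kubelet/config.json"

i.e. the kubelet would be started with two --config flags.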

So, IMO we should use nodeConfig.spec.kubelet.config via the user data for adding the kubeletExtraConfig values, e.g. something along the lines of the sketch below.
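
(Sketch only; this just moves the kubeletExtraConfig values under spec.kubelet.config instead of passing a --config flag, reusing the clusterDNS value from the earlier example.)

apiVersion: node.eks.aws/v1alpha1
kind: NodeConfig
spec:
  cluster:
    ...
  kubelet:
    config:
      clusterDNS:
      - 10.100.0.10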
What do you think?

@TiberiuGC
Collaborator Author

@punkwalker thanks for investigating this option. Lmk if you'd like to work on the fix you've suggested, if your time allows.

@punkwalker
Contributor

Sure @TiberiuGC, I will work on it.

cPu1 self-assigned this May 13, 2024