-
Hi 👋 Can you try it with this?

```yaml
- name: "vpc-cni"
  version: "v1.12.1-eksbuild.2"
  conflictResolution: "overwrite"
```
-
I'll try, but I had the […] It looks like the VPC CNI is expecting an […] From the CAPA logs it looks like some […]
-
I found that CAPA actually creates three […] The […]
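(The three per-AZ objects referred to here are presumably `ENIConfig` resources, which VPC CNI custom networking uses to pick a subnet per availability zone. As an illustrative sketch only, not CAPA's exact output: an ENIConfig is named to match the node label selected by `ENI_CONFIG_LABEL_DEF`, and the subnet/security-group IDs below are placeholders.)

```yaml
# Hypothetical per-AZ ENIConfig for VPC CNI custom networking.
# The metadata.name must match the node's value for the label named
# in ENI_CONFIG_LABEL_DEF (here: the availability zone).
apiVersion: crd.k8s.amazonaws.com/v1alpha1
kind: ENIConfig
metadata:
  name: eu-central-1a                  # matches failure-domain.beta.kubernetes.io/zone
spec:
  subnet: subnet-0123456789abcdef0     # placeholder: subnet from the secondary CIDR
  securityGroups:
    - sg-0123456789abcdef0             # placeholder
```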
-
Ok, I think I figured it out. I use three nodegroups (one per AZ) to enable using persistent volumes (and autoscaling eventually). So CAPA (correctly) builds one […] I assume that if you create a cluster with a nodegroup spanning several AZs an […] What was missing to make it work in my case was:

```yaml
apiVersion: controlplane.cluster.x-k8s.io/v1beta2
kind: AWSManagedControlPlane
# ...
spec:
  vpcCni:
    # ...
    env:
      - name: AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG
        value: "true"
      - name: ENI_CONFIG_LABEL_DEF
        value: failure-domain.beta.kubernetes.io/zone
```

My full working configuration looks like this now (I think […]):

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
kind: AWSManagedCluster
metadata:
  name: infra
  namespace: aws-infrastructure
spec: {}
---
apiVersion: controlplane.cluster.x-k8s.io/v1beta2
kind: AWSManagedControlPlane
metadata:
  name: infra-control-plane
  namespace: aws-infrastructure
spec:
  eksClusterName: infrastructure
  # network:
  #   cni:
  #     cniIngressRules: []
  sshKeyName: default
  secondaryCidrBlock: 100.64.0.0/16
  vpcCni:
    disable: false
    env:
      - name: AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG
        value: "true"
      - name: ENABLE_PREFIX_DELEGATION
        value: "true"
      - name: MINIMUM_IP_TARGET
        value: "32"
      - name: "WARM_IP_TARGET"
        value: "6"
      - name: ENI_CONFIG_LABEL_DEF
        value: failure-domain.beta.kubernetes.io/zone
  addons:
    - name: vpc-cni
      version: v1.12.1-eksbuild.2
      conflictResolution: "overwrite"
    - name: kube-proxy
      version: v1.23.15-eksbuild.1
      conflictResolution: "overwrite"
    - name: coredns
      version: v1.8.7-eksbuild.3
      conflictResolution: "overwrite"
  region: eu-central-1
  version: v1.23.0
  associateOIDCProvider: true
  endpointAccess:
    public: true
    private: true
```
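(An aside on the `ENABLE_PREFIX_DELEGATION` setting in the config above: with prefix delegation each secondary IP slot on an ENI carries a /28 prefix of 16 addresses instead of a single address, which is what raises the pod-IP capacity per node. A rough sketch of the standard VPC CNI max-pods arithmetic, `max pods = ENIs × (IPs per ENI − 1) + 2`, using t3.medium's 3 ENIs / 6 IPs per ENI as the example; note the kubelet's own max-pods limit is a separate, usually lower, cap.)

```python
def max_pods(enis: int, ips_per_eni: int, prefix_delegation: bool = False) -> int:
    """Approximate pod-IP capacity per node for the AWS VPC CNI.

    Each ENI reserves its primary IP for itself, leaving (ips_per_eni - 1)
    slots. With prefix delegation each slot holds a /28 prefix (16 IPs).
    The +2 accounts for host-networking pods (aws-node, kube-proxy).
    """
    slots = enis * (ips_per_eni - 1)
    ips = slots * 16 if prefix_delegation else slots
    return ips + 2


# t3.medium: 3 ENIs, 6 IPv4 addresses per ENI
print(max_pods(3, 6))        # 17 without prefix delegation
print(max_pods(3, 6, True))  # 242 raw capacity; kubelet typically caps lower
```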
-
Ah interesting. Thanks. I think this maybe warrants a documentation update. :)
-
Do we have an issue in place for this?
-
Hi, I am currently trying to deploy an EKS cluster with VPC-CNI (CAPA v2.0.2) and the coredns and kube-proxy addons, using the eks-managedmachinepool-vpccni template:

```shell
clusterctl generate cluster infra --flavor eks-managedmachinepool-vpccni --kubernetes-version v1.23.0 --worker-machine-count=3 --infrastructure aws > eks-infra.yaml
```

The coredns addon fails to deploy because of a failure to assign an IP address:

```
network: add cmd: failed to assign an IP address to container
```

Can anybody give me a hint about what I am missing in my configuration?