feat: Add support for EKS 1.26 and correct check documentation hyperlink
bryantbiggs committed Apr 12, 2023
1 parent 8711b73 commit bcb5908
Showing 10 changed files with 97 additions and 112 deletions.
32 changes: 16 additions & 16 deletions Cargo.lock

Some generated files are not rendered by default.

4 changes: 2 additions & 2 deletions eksup/src/version.rs
@@ -6,15 +6,15 @@ use seq_macro::seq;
use serde::{Deserialize, Serialize};

/// Latest support version
- pub const LATEST: &str = "1.25";
+ pub const LATEST: &str = "1.26";

#[derive(Debug, Serialize, Deserialize)]
pub struct Versions {
pub current: String,
pub target: String,
}

- seq!(N in 20..=24 {
+ seq!(N in 20..=26 {
/// Kubernetes version(s) supported
#[derive(Clone, Copy, Debug, Serialize, Deserialize)]
pub enum KubernetesVersion {
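For context on the range change above, `seq!` stamps out one `KubernetesVersion` variant per supported minor release, so widening the range to `20..=26` is what actually adds the new versions. A minimal sketch of the pattern (the variant naming is an assumption; the real enum body is collapsed in this view):

```rust
use seq_macro::seq;
use serde::{Deserialize, Serialize};

seq!(N in 20..=26 {
    /// Kubernetes version(s) supported
    #[derive(Clone, Copy, Debug, Serialize, Deserialize)]
    pub enum KubernetesVersion {
        #(
            // Expands to V20, V21, ..., V26: one variant per 1.N release.
            V~N,
        )*
    }
});
```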
9 changes: 8 additions & 1 deletion eksup/templates/data.yaml
@@ -23,5 +23,12 @@
deprecation_url: https://kubernetes.io/docs/reference/using-api/deprecation-guide/#v1-26

'1.27':
-   release_url: TBD
+   release_url: https://kubernetes.io/blog/2023/04/11/kubernetes-v1-27-release/
+   deprecation_url: https://kubernetes.io/docs/reference/using-api/deprecation-guide/#v1-27

+ '1.28':
+   release_url: TBD

+ '1.29':
+   release_url: TBD
+   deprecation_url: https://kubernetes.io/docs/reference/using-api/deprecation-guide/#v1-29
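The template data above is a map keyed by minor version. A hedged sketch of the Rust shape it plausibly deserializes into (field optionality is inferred from the `TBD` entries; these are not necessarily eksup's actual types):

```rust
use std::collections::BTreeMap;
use serde::{Deserialize, Serialize};

/// Release metadata for one Kubernetes minor version.
#[derive(Debug, Serialize, Deserialize)]
pub struct VersionMeta {
    /// Either a release announcement URL or the literal "TBD" placeholder.
    pub release_url: String,
    /// Absent until the deprecation guide for that version is published.
    pub deprecation_url: Option<String>,
}

/// Keys are version strings such as "1.26" or "1.27".
pub type VersionData = BTreeMap<String, VersionMeta>;
```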
4 changes: 2 additions & 2 deletions eksup/templates/eks-managed-nodegroup.md
@@ -22,7 +22,7 @@ The default update strategy for EKS managed nodegroups is a surge, rolling update

</details>

- #### Check [[EKS003]](https://clowdhaus.github.io/eksup/checks/#eks003)
+ #### Check [[EKS003]](https://clowdhaus.github.io/eksup/info/checks/#eks003)
{{ eks_managed_nodegroup_health }}

2. Ensure the EKS managed nodegroup(s) do not have any pending updates and that they are using the latest version of their respective launch templates. If a nodegroup is not on the latest launch template version, update it first to avoid accidentally introducing additional, unintended changes during the upgrade.
@@ -36,7 +36,7 @@ The default update strategy for EKS managed nodegroups is a surge, rolling update

</details>

- Check [[EKS006]](https://clowdhaus.github.io/eksup/checks/#eks006)
+ Check [[EKS006]](https://clowdhaus.github.io/eksup/info/checks/#eks006)
{{ eks_managed_nodegroup_update }}
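To make the EKS006 check concrete, here is a hedged Rust sketch of comparing a managed nodegroup's launch template version against the latest available version, using `aws-sdk-eks` and `aws-sdk-ec2` (an illustration only, not eksup's actual implementation; `anyhow` is assumed for error plumbing):

```rust
use anyhow::{anyhow, Result};
use aws_sdk_ec2::Client as Ec2Client;
use aws_sdk_eks::Client as EksClient;

/// True when the nodegroup already points at the latest version of its
/// custom launch template (nodegroups without one have nothing to compare).
async fn uses_latest_launch_template(
    eks: &EksClient,
    ec2: &Ec2Client,
    cluster: &str,
    nodegroup: &str,
) -> Result<bool> {
    let ng = eks
        .describe_nodegroup()
        .cluster_name(cluster)
        .nodegroup_name(nodegroup)
        .send()
        .await?
        .nodegroup
        .ok_or_else(|| anyhow!("nodegroup not found"))?;

    // No custom launch template means there is no version skew to report.
    let Some(lt) = ng.launch_template else {
        return Ok(true);
    };
    let (Some(id), Some(version)) = (lt.id, lt.version) else {
        return Ok(true);
    };

    let latest = ec2
        .describe_launch_templates()
        .launch_template_ids(id)
        .send()
        .await?
        .launch_templates
        .unwrap_or_default()
        .into_iter()
        .next()
        .and_then(|t| t.latest_version_number)
        .unwrap_or_default();

    Ok(version == latest.to_string())
}
```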

##### Upgrade
30 changes: 15 additions & 15 deletions eksup/templates/playbook.md
@@ -79,7 +79,7 @@
```
</details>

- #### Check [[K8S001]](https://clowdhaus.github.io/eksup/checks/#k8s001)
+ #### Check [[K8S001]](https://clowdhaus.github.io/eksup/info/checks/#k8s001)
{{ version_skew }}

3. Verify that there are at least 5 free IPs in the VPC subnets used by the control plane. Amazon EKS creates new elastic network interfaces (ENIs) in any of the subnets specified for the control plane. If there are not enough available IPs, then the upgrade will fail (your control plane will stay on the prior version).
@@ -96,7 +96,7 @@

</details>

- #### Check [[EKS001]](https://clowdhaus.github.io/eksup/checks/#eks001)
+ #### Check [[EKS001]](https://clowdhaus.github.io/eksup/info/checks/#eks001)
{{ control_plane_ips }}
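As an illustration of this check, a hedged Rust sketch that sums the free IPs across the control plane subnets with `aws-sdk-ec2` (a sketch under stated assumptions, not eksup's actual code):

```rust
use aws_sdk_ec2::Client;

/// EKS needs at least five free IPs across the subnets passed to the
/// control plane; otherwise the upgrade fails and the cluster stays on
/// the prior version.
async fn control_plane_has_free_ips(
    client: &Client,
    subnet_ids: Vec<String>,
) -> Result<bool, aws_sdk_ec2::Error> {
    let resp = client
        .describe_subnets()
        .set_subnet_ids(Some(subnet_ids))
        .send()
        .await?;

    let available: i32 = resp
        .subnets
        .unwrap_or_default()
        .iter()
        .filter_map(|s| s.available_ip_address_count)
        .sum();

    Ok(available >= 5)
}
```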

4. Ensure the cluster is free of any health issues as reported by Amazon EKS. If there are any issues, they must be resolved before upgrading the cluster. Note: in some cases, resolution may require creating a new cluster. For example, if the cluster's primary security group was deleted, the only remediation at this time is to create a new cluster and migrate workloads to it (treated as a blue/green cluster upgrade).
@@ -111,7 +111,7 @@

</details>

- #### Check [[EKS002]](https://clowdhaus.github.io/eksup/checks/#eks002)
+ #### Check [[EKS002]](https://clowdhaus.github.io/eksup/info/checks/#eks002)
{{ cluster_health }}

5. Ensure the EKS addons in use are using a version that is supported by the intended target Kubernetes version. If an addon is not compatible with the intended target Kubernetes version, upgrade the addon to a version that is compatible before upgrading the cluster.
@@ -137,7 +137,7 @@

</details>

- #### Check [[EKS005]](https://clowdhaus.github.io/eksup/checks/#eks005)
+ #### Check [[EKS005]](https://clowdhaus.github.io/eksup/info/checks/#eks005)
{{ addon_version_compatibility }}

6. Check the Kubernetes API versions currently in use and ensure any versions that are removed in the next Kubernetes release are updated prior to upgrading the cluster. Several open source tools can help you identify deprecated API versions in your Kubernetes manifests. The following projects support scanning both your cluster and your manifest files to identify deprecated and/or removed API versions:
@@ -173,31 +173,31 @@ When upgrading the control plane, Amazon EKS performs standard infrastructure and

🚧 TODO - fill in analysis results

- #### Check [[K8S002]](https://clowdhaus.github.io/eksup/checks/#k8s002)
+ #### Check [[K8S002]](https://clowdhaus.github.io/eksup/info/checks/#k8s002)
{{ min_replicas }}

- #### Check [[K8S003]](https://clowdhaus.github.io/eksup/checks/#k8s003)
+ #### Check [[K8S003]](https://clowdhaus.github.io/eksup/info/checks/#k8s003)
{{ min_ready_seconds }}

- #### Check [[K8S004]](https://clowdhaus.github.io/eksup/checks/#k8s004)
+ #### Check [[K8S004]](https://clowdhaus.github.io/eksup/info/checks/#k8s004)
🚧 TODO

- #### Check [[K8S005]](https://clowdhaus.github.io/eksup/checks/#k8s005)
+ #### Check [[K8S005]](https://clowdhaus.github.io/eksup/info/checks/#k8s005)
{{ pod_topology_distribution }}

- #### Check [[K8S006]](https://clowdhaus.github.io/eksup/checks/#k8s006)
+ #### Check [[K8S006]](https://clowdhaus.github.io/eksup/info/checks/#k8s006)
{{ readiness_probe }}

- #### Check [[K8S007]](https://clowdhaus.github.io/eksup/checks/#k8s007)
+ #### Check [[K8S007]](https://clowdhaus.github.io/eksup/info/checks/#k8s007)
{{ termination_grace_period }}

- #### Check [[K8S008]](https://clowdhaus.github.io/eksup/checks/#k8s008)
+ #### Check [[K8S008]](https://clowdhaus.github.io/eksup/info/checks/#k8s008)
{{ docker_socket }}

- #### Check [[K8S009]](https://clowdhaus.github.io/eksup/checks/#k8s009)
+ #### Check [[K8S009]](https://clowdhaus.github.io/eksup/info/checks/#k8s009)
{{ pod_security_policy }}

- #### Check [[K8S011]](https://clowdhaus.github.io/eksup/checks/#k8s011)
+ #### Check [[K8S011]](https://clowdhaus.github.io/eksup/info/checks/#k8s011)
{{ kube_proxy_version_skew }}

2. Inspect [AWS service quotas](https://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html) before upgrading. Accounts that are multi-tenant or that already have a number of resources provisioned may be at risk of hitting service quota limits, which will cause the cluster upgrade to fail or otherwise impede the upgrade process.
@@ -223,7 +223,7 @@

</details>

- #### Check [[AWS002]](https://clowdhaus.github.io/eksup/checks/#aws002)
+ #### Check [[AWS002]](https://clowdhaus.github.io/eksup/info/checks/#aws002)
{{ pod_ips }}
{{/if}}

@@ -253,7 +253,7 @@ When upgrading the control plane, Amazon EKS performs standard infrastructure and

</details>

- #### Check [[EKS004]](https://clowdhaus.github.io/eksup/checks/#eks004)
+ #### Check [[EKS004]](https://clowdhaus.github.io/eksup/info/checks/#eks004)
{{ addon_health }}

### Addon Upgrade
2 changes: 1 addition & 1 deletion eksup/templates/self-managed-nodegroup.md
@@ -19,7 +19,7 @@ A starting point for the instance refresh configuration is to use a value of 70%

</details>

- Check [[EKS007]](https://clowdhaus.github.io/eksup/checks/#eks007)
+ Check [[EKS007]](https://clowdhaus.github.io/eksup/info/checks/#eks007)
{{ self_managed_nodegroup_update }}
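As a concrete starting point for that 70% guidance, a hedged sketch that kicks off an instance refresh with `aws-sdk-autoscaling` (parameter names follow the AWS Auto Scaling API; the warmup value is an assumption to tune to how long your nodes take to become `Ready`):

```rust
use aws_sdk_autoscaling::types::RefreshPreferences;
use aws_sdk_autoscaling::Client;

/// Rolls the nodegroup's instances while keeping at least 70% of the
/// capacity in service, per the starting point suggested above.
async fn refresh_self_managed_nodegroup(
    client: &Client,
    asg_name: &str,
) -> Result<(), aws_sdk_autoscaling::Error> {
    client
        .start_instance_refresh()
        .auto_scaling_group_name(asg_name)
        .preferences(
            RefreshPreferences::builder()
                .min_healthy_percentage(70)
                // Assumed warmup (seconds) before a new node counts as healthy.
                .instance_warmup(300)
                .build(),
        )
        .send()
        .await?;
    Ok(())
}
```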

##### Upgrade
13 changes: 4 additions & 9 deletions examples/eks-managed/main.tf
@@ -37,7 +37,7 @@ locals {

module "eks" {
source = "terraform-aws-modules/eks/aws"
- version = "~> 19.5"
+ version = "~> 19.12"

cluster_name = local.name
cluster_version = "1.${local.minor_version}"
@@ -81,7 +81,7 @@ module "eks" {

module "vpc" {
source = "terraform-aws-modules/vpc/aws"
- version = "~> 3.0"
+ version = "~> 4.0"

name = local.name
cidr = local.vpc_cidr
@@ -99,13 +99,8 @@ module "vpc" {
private_subnet_ipv6_prefixes = [3, 4, 5]
intra_subnet_ipv6_prefixes = [6, 7, 8]

- enable_nat_gateway   = true
- single_nat_gateway   = true
- enable_dns_hostnames = true
-
- enable_flow_log                      = true
- create_flow_log_cloudwatch_iam_role  = true
- create_flow_log_cloudwatch_log_group = true
+ enable_nat_gateway = true
+ single_nat_gateway = true

public_subnet_tags = {
"kubernetes.io/role/elb" = 1
30 changes: 11 additions & 19 deletions examples/fargate-profile/main.tf
@@ -37,7 +37,7 @@ locals {

module "eks" {
source = "terraform-aws-modules/eks/aws"
- version = "~> 19.5"
+ version = "~> 19.12"

cluster_name = local.name
cluster_version = "1.${local.minor_version}"
@@ -57,17 +57,14 @@ module "eks" {
subnet_ids = module.vpc.private_subnets
control_plane_subnet_ids = module.vpc.intra_subnets

- fargate_profiles = merge(
-   { for i in range(3) :
-     "kube-system-${element(split("-", local.azs[i]), 2)}" => {
-       selectors = [
-         { namespace = "kube-system" }
-       ]
-       # We want to create a profile per AZ for high availability
-       subnet_ids = [element(module.vpc.private_subnets, i)]
-     }
-   },
- )
+ fargate_profiles = {
+   kube_system = {
+     name = "kube-system"
+     selectors = [
+       { namespace = "kube-system" }
+     ]
+   }
+ }

tags = local.tags
}
@@ -88,13 +85,8 @@
public_subnets = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 8, k + 48)]
intra_subnets = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 8, k + 52)]

- enable_nat_gateway   = true
- single_nat_gateway   = true
- enable_dns_hostnames = true
-
- enable_flow_log                      = true
- create_flow_log_cloudwatch_iam_role  = true
- create_flow_log_cloudwatch_log_group = true
+ enable_nat_gateway = true
+ single_nat_gateway = true

public_subnet_tags = {
"kubernetes.io/role/elb" = 1
