Merge pull request #13 from terraform-aws-modules/NextDeveloperTeam-add-worker-groups

Flexible number of worker autoscaling groups now able to be created by module consumers.
brandonjbjelland committed Jun 11, 2018
2 parents 23f1c37 + c8997a5 commit bc80724
Showing 18 changed files with 341 additions and 349 deletions.
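The change described in the commit message can be sketched from the consumer side. A hypothetical module invocation is shown below — the values are illustrative, and only the `instance_type` and `additional_userdata` keys are taken from this diff; any group omitting a key falls back to the module's `workers_group_defaults`:

```hcl
# Hypothetical consumer of the module after this change: two worker
# groups with differing specs, expressed in Terraform 0.11 syntax.
module "eks" {
  source       = "terraform-aws-modules/eks/aws"
  cluster_name = "my-test-cluster"
  subnets      = ["subnet-abcde012", "subnet-bcde012a"]
  vpc_id       = "vpc-abcde012"

  worker_groups = "${list(
    map("instance_type", "t2.small",
        "additional_userdata", "echo foo bar"),
    map("instance_type", "m4.large")
  )}"
}
```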
2 changes: 2 additions & 0 deletions .gitignore
@@ -9,3 +9,5 @@ Gemfile.lock
terraform.tfstate.d/
kubeconfig
config-map-aws-auth.yaml
eks-admin-cluster-role-binding.yaml
eks-admin-service-account.yaml
9 changes: 8 additions & 1 deletion .travis.yml
@@ -1,16 +1,21 @@
language: ruby
sudo: required
dist: trusty

services:
- docker

rvm:
- 2.4.2

before_install:
- echo "before_install"

install:
- echo "install"
- gem install bundler --no-rdoc --no-ri
- bundle install

before_script:
- echo 'before_script'
- export AWS_REGION='us-east-1'
@@ -22,12 +27,13 @@ before_script:
- unzip terraform.zip ; rm -f terraform.zip; chmod +x terraform
- mkdir -p ${HOME}/bin ; export PATH=${PATH}:${HOME}/bin; mv terraform ${HOME}/bin/
- terraform -v

script:
- echo 'script'
- terraform init
- terraform fmt -check=true
- terraform validate -var "region=${AWS_REGION}" -var "vpc_id=vpc-123456" -var "subnets=[\"subnet-12345a\"]" -var "workers_ami_id=ami-123456" -var "cluster_ingress_cidrs=[]" -var "cluster_name=test_cluster"
- docker run --rm -v $(pwd):/app/ --workdir=/app/ -t wata727/tflint --error-with-issues
# - docker run --rm -v $(pwd):/app/ --workdir=/app/ -t wata727/tflint --error-with-issues
- cd examples/eks_test_fixture
- terraform init
- terraform fmt -check=true
@@ -40,6 +46,7 @@ script:
# script: ci/deploy.sh
# on:
# branch: master

notifications:
email:
recipients:
24 changes: 19 additions & 5 deletions CHANGELOG.md
@@ -5,22 +5,36 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](http://keepachangelog.com/) and this
project adheres to [Semantic Versioning](http://semver.org/).

## [v1.0.0](https://github.com/terraform-aws-modules/terraform-aws-eks/compare/v0.2.0...v1.0.0) - 2018-06-11

### Added

- security group id can be provided for either/both of the cluster and the workers. If not provided, security groups will be created with sufficient rules to allow cluster-worker communication. - kudos to @tanmng on the idea ⭐
- outputs of security group ids and worker ASG arns added for working with these resources outside the module.

### Changed

- Worker build out refactored to allow multiple autoscaling groups each having differing specs. If none are given, a single ASG is created with a set of sane defaults - big thanks to @kppullin 🥨

## [v0.2.0](https://github.com/terraform-aws-modules/terraform-aws-eks/compare/v0.1.1...v0.2.0) - 2018-06-08

### Added

- ability to specify extra userdata code to execute following kubelet services start.
- EBS optimization used whenever possible for the given instance type.
- When `configure_kubectl_session` is set to true the current shell will be configured to talk to the kubernetes cluster using config files output from the module.

### Changed

- files rendered from dedicated templates to separate out raw code and config from `hcl`
- `workers_ami_id` is now made optional. If not specified, the module will source the latest AWS supported EKS AMI instead.
- added ability to specify extra userdata code to execute after the second to configure and start kube services.
- When `configure_kubectl_session` is set to true the current shell will be configured to talk to the kubernetes cluster using config files output from the module.
- EBS optimization used whenever possible for the given instance type.

## [v0.1.1](https://github.com/terraform-aws-modules/terraform-aws-eks/compare/v0.1.0...v0.1.1) - 2018-06-07

### Changed

- pre-commit hooks fixed and working.
- made progress on CI, advancing the build to the final `kitchen test` stage before failing.
- Pre-commit hooks fixed and working.
- Made progress on CI, advancing the build to the final `kitchen test` stage before failing.

## [v0.1.0] - 2018-06-07

53 changes: 26 additions & 27 deletions README.md
@@ -4,17 +4,17 @@ A terraform module to create a managed Kubernetes cluster on AWS EKS. Available
through the [Terraform registry](https://registry.terraform.io/modules/terraform-aws-modules/eks/aws).
Inspired by and adapted from [this doc](https://www.terraform.io/docs/providers/aws/guides/eks-getting-started.html)
and its [source code](https://github.com/terraform-providers/terraform-provider-aws/tree/master/examples/eks-getting-started).
Instructions on [this post](https://aws.amazon.com/blogs/aws/amazon-eks-now-generally-available/)
can help guide you through connecting to the cluster via `kubectl`.
Read the [AWS docs on EKS to get connected to the k8s dashboard](https://docs.aws.amazon.com/eks/latest/userguide/dashboard-tutorial.html).

| Branch | Build status |
| ------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| master | [![build Status](https://travis-ci.org/terraform-aws-modules/terraform-aws-eks.svg?branch=master)](https://travis-ci.org/terraform-aws-modules/terraform-aws-eks) |

## Assumptions

* You want to create a set of resources around an EKS cluster: namely an autoscaling group of workers and a security group for them.
* You've created a Virtual Private Cloud (VPC) and subnets where you intend to put this EKS.
* You want to create an EKS cluster and an autoscaling group of workers for the cluster.
* You want these resources to exist within security groups that allow communication and coordination. These can be user provided or created within the module.
* You've created a Virtual Private Cloud (VPC) and subnets where you intend to put the EKS resources.

## Usage example

@@ -28,7 +28,6 @@ module "eks" {
subnets = ["subnet-abcde012", "subnet-bcde012a"]
tags = "${map("Environment", "test")}"
vpc_id = "vpc-abcde012"
cluster_ingress_cidrs = ["24.18.23.91/32"]
}
```

@@ -52,8 +51,10 @@ This module has been packaged with [awspec](https://github.com/k1LoW/awspec) tes
3. Ensure your AWS environment is configured (i.e. credentials and region) for test.
4. Test using `bundle exec kitchen test` from the root of the repo.

For now, connectivity to the kubernetes cluster is not tested but will be in the future.
To test your kubectl connection manually, see the [eks_test_fixture README](https://github.com/terraform-aws-modules/terraform-aws-eks/tree/master/examples/eks_test_fixture/README.md).
For now, connectivity to the kubernetes cluster is not tested but will be in the
future. If `configure_kubectl_session` is set `true`, once the test fixture has
converged, you can query the test cluster from that terminal session with
`kubectl get nodes --watch --kubeconfig kubeconfig`.

## Doc generation

@@ -93,30 +94,28 @@ MIT Licensed. See [LICENSE](https://github.com/terraform-aws-modules/terraform-a

| Name | Description | Type | Default | Required |
|------|-------------|:----:|:-----:|:-----:|
| additional_userdata | Extra lines of userdata (bash) which are appended to the default userdata code. | string | `` | no |
| cluster_ingress_cidrs | The CIDRs from which we can execute kubectl commands. | list | - | yes |
| cluster_name | Name of the EKS cluster which is also used as a prefix in names of related resources. | string | - | yes |
| cluster_version | Kubernetes version to use for the cluster. | string | `1.10` | no |
| cluster_name | Name of the EKS cluster. Also used as a prefix in names of related resources. | string | - | yes |
| cluster_security_group_id | If provided, the EKS cluster will be attached to this security group. If not given, a security group will be created with necessary ingress/egress to work with the workers and provide API access to your current IP/32. | string | `` | no |
| cluster_version | Kubernetes version to use for the EKS cluster. | string | `1.10` | no |
| config_output_path | Determines where config files are placed if using configure_kubectl_session and you want config files to land outside the current working directory. | string | `./` | no |
| configure_kubectl_session | Configure the current session's kubectl to use the instantiated cluster. | string | `false` | no |
| ebs_optimized_workers | If left at default of true, will use ebs optimization if available on the given instance type. | string | `true` | no |
| subnets | A list of subnets to associate with the cluster's underlying instances. | list | - | yes |
| configure_kubectl_session | Configure the current session's kubectl to use the instantiated EKS cluster. | string | `true` | no |
| subnets | A list of subnets to place the EKS cluster and workers within. | list | - | yes |
| tags | A map of tags to add to all resources. | string | `<map>` | no |
| vpc_id | VPC id where the cluster and other resources will be deployed. | string | - | yes |
| workers_ami_id | AMI ID for the eks workers. If none is provided, Terraform will search for the latest version of their EKS optimized worker AMI. | string | `` | no |
| workers_asg_desired_capacity | Desired worker capacity in the autoscaling group. | string | `1` | no |
| workers_asg_max_size | Maximum worker capacity in the autoscaling group. | string | `3` | no |
| workers_asg_min_size | Minimum worker capacity in the autoscaling group. | string | `1` | no |
| workers_instance_type | Size of the workers instances. | string | `m4.large` | no |
| vpc_id | VPC where the cluster and workers will be deployed. | string | - | yes |
| worker_groups | A list of maps defining worker group configurations. See workers_group_defaults for valid keys. | list | `<list>` | no |
| worker_security_group_id | If provided, all workers will be attached to this security group. If not given, a security group will be created with necessary ingress/egress to work with the EKS cluster. | string | `` | no |
| workers_group_defaults | Default values for target groups as defined by the list of maps. | map | `<map>` | no |

## Outputs

| Name | Description |
|------|-------------|
| cluster_certificate_authority_data | Nested attribute containing certificate-authority-data for your cluster. Tis is the base64 encoded certificate data required to communicate with your cluster. |
| cluster_endpoint | The endpoint for your Kubernetes API server. |
| cluster_id | The name/id of the cluster. |
| cluster_security_group_ids | description |
| cluster_version | The Kubernetes server version for the cluster. |
| config_map_aws_auth | A kubernetes configuration to authenticate to this cluster. |
| kubeconfig | kubectl config file contents for this cluster. |
| cluster_certificate_authority_data | Nested attribute containing certificate-authority-data for your cluster. This is the base64 encoded certificate data required to communicate with your cluster. |
| cluster_endpoint | The endpoint for your EKS Kubernetes API. |
| cluster_id | The name/id of the EKS cluster. |
| cluster_security_group_id | Security group ID attached to the EKS cluster. |
| cluster_version | The Kubernetes server version for the EKS cluster. |
| config_map_aws_auth | A kubernetes configuration to authenticate to this EKS cluster. |
| kubeconfig | kubectl config file contents for this EKS cluster. |
| worker_security_group_id | Security group ID attached to the EKS workers. |
| workers_asg_arns | IDs of the autoscaling groups containing workers. |
18 changes: 11 additions & 7 deletions cluster.tf
@@ -4,7 +4,7 @@ resource "aws_eks_cluster" "this" {
version = "${var.cluster_version}"

vpc_config {
security_group_ids = ["${aws_security_group.cluster.id}"]
security_group_ids = ["${local.cluster_security_group_id}"]
subnet_ids = ["${var.subnets}"]
}

@@ -16,39 +16,43 @@ resource "aws_eks_cluster" "this" {

resource "aws_security_group" "cluster" {
name_prefix = "${var.cluster_name}"
description = "Cluster communication with workers nodes"
description = "EKS cluster security group."
vpc_id = "${var.vpc_id}"
tags = "${merge(var.tags, map("Name", "${var.cluster_name}-eks_cluster_sg"))}"
count = "${var.cluster_security_group_id == "" ? 1 : 0}"
}

resource "aws_security_group_rule" "cluster_egress_internet" {
description = "Allow cluster egress to the Internet."
description = "Allow cluster egress access to the Internet."
protocol = "-1"
security_group_id = "${aws_security_group.cluster.id}"
cidr_blocks = ["0.0.0.0/0"]
from_port = 0
to_port = 0
type = "egress"
count = "${var.cluster_security_group_id == "" ? 1 : 0}"
}

resource "aws_security_group_rule" "cluster_https_worker_ingress" {
description = "Allow pods to communicate with the cluster API Server."
description = "Allow pods to communicate with the EKS cluster API."
protocol = "tcp"
security_group_id = "${aws_security_group.cluster.id}"
source_security_group_id = "${aws_security_group.workers.id}"
source_security_group_id = "${local.worker_security_group_id}"
from_port = 443
to_port = 443
type = "ingress"
count = "${var.cluster_security_group_id == "" ? 1 : 0}"
}

resource "aws_security_group_rule" "cluster_https_cidr_ingress" {
cidr_blocks = ["${var.cluster_ingress_cidrs}"]
description = "Allow communication with the cluster API Server."
cidr_blocks = ["${local.workstation_external_cidr}"]
description = "Allow kubectl communication with the EKS cluster API."
protocol = "tcp"
security_group_id = "${aws_security_group.cluster.id}"
from_port = 443
to_port = 443
type = "ingress"
count = "${var.cluster_security_group_id == "" ? 1 : 0}"
}

resource "aws_iam_role" "cluster" {
48 changes: 24 additions & 24 deletions data.tf
@@ -1,13 +1,7 @@
data "aws_region" "current" {}

data "aws_ami" "eks_worker" {
filter {
name = "name"
values = ["eks-worker-*"]
}

most_recent = true
owners = ["602401143452"] # Amazon
data "http" "workstation_external_ip" {
url = "http://icanhazip.com"
}

data "aws_iam_policy_document" "workers_assume_role_policy" {
@@ -25,6 +19,16 @@ data "aws_iam_policy_document" "workers_assume_role_policy" {
}
}

data "aws_ami" "eks_worker" {
filter {
name = "name"
values = ["eks-worker-*"]
}

most_recent = true
owners = ["602401143452"] # Amazon
}

data "aws_iam_policy_document" "cluster_assume_role_policy" {
statement {
sid = "EKSClusterAssumeRole"
@@ -40,19 +44,6 @@ data "aws_iam_policy_document" "cluster_assume_role_policy" {
}
}

data template_file userdata {
template = "${file("${path.module}/templates/userdata.sh.tpl")}"

vars {
region = "${data.aws_region.current.name}"
max_pod_count = "${lookup(local.max_pod_per_node, var.workers_instance_type)}"
cluster_name = "${var.cluster_name}"
endpoint = "${aws_eks_cluster.this.endpoint}"
cluster_auth_base64 = "${aws_eks_cluster.this.certificate_authority.0.data}"
additional_userdata = "${var.additional_userdata}"
}
}

data template_file kubeconfig {
template = "${file("${path.module}/templates/kubeconfig.tpl")}"

Expand All @@ -72,7 +63,16 @@ data template_file config_map_aws_auth {
}
}

module "ebs_optimized" {
source = "./modules/tf_util_ebs_optimized"
instance_type = "${var.workers_instance_type}"
data template_file userdata {
template = "${file("${path.module}/templates/userdata.sh.tpl")}"
count = "${length(var.worker_groups)}"

vars {
region = "${data.aws_region.current.name}"
cluster_name = "${var.cluster_name}"
endpoint = "${aws_eks_cluster.this.endpoint}"
cluster_auth_base64 = "${aws_eks_cluster.this.certificate_authority.0.data}"
max_pod_count = "${lookup(local.max_pod_per_node, lookup(var.worker_groups[count.index], "instance_type", lookup(var.workers_group_defaults, "instance_type")))}"
additional_userdata = "${lookup(var.worker_groups[count.index], "additional_userdata",lookup(var.workers_group_defaults, "additional_userdata"))}"
}
}
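The nested `lookup()` calls in the userdata template above resolve a per-group value while falling back to the module-wide default when the group omits the key. A minimal standalone illustration of the idiom, with hypothetical variable values, in the same Terraform 0.11 syntax:

```hcl
# Defaults applied to every worker group unless overridden.
variable "workers_group_defaults" {
  type = "map"

  default = {
    instance_type       = "m4.large"
    additional_userdata = ""
  }
}

# One group that overrides only instance_type.
variable "worker_groups" {
  type = "list"

  default = [
    {
      instance_type = "t2.small"
    },
  ]
}

# Resolves to "t2.small": the group's own key wins over the default.
output "group_0_instance_type" {
  value = "${lookup(var.worker_groups[0], "instance_type", lookup(var.workers_group_defaults, "instance_type"))}"
}
```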
31 changes: 13 additions & 18 deletions examples/eks_test_fixture/main.tf
@@ -11,18 +11,16 @@ provider "random" {
version = "= 1.3.1"
}

provider "http" {}
provider "local" {}

data "aws_availability_zones" "available" {}

data "http" "workstation_external_ip" {
url = "http://icanhazip.com"
}

locals {
workstation_external_cidr = "${chomp(data.http.workstation_external_ip.body)}/32"
cluster_name = "test-eks-${random_string.suffix.result}"
cluster_name = "test-eks-${random_string.suffix.result}"

worker_groups = "${list(
map("instance_type","t2.small",
"additional_userdata","echo foo bar"
),
)}"

tags = "${map("Environment", "test",
"GithubRepo", "terraform-aws-eks",
Expand Down Expand Up @@ -50,13 +48,10 @@ module "vpc" {
}

module "eks" {
source = "../.."
cluster_name = "${local.cluster_name}"
subnets = "${module.vpc.public_subnets}"
tags = "${local.tags}"
vpc_id = "${module.vpc.vpc_id}"
cluster_ingress_cidrs = ["${local.workstation_external_cidr}"]
workers_instance_type = "t2.small"
additional_userdata = "echo hello world"
configure_kubectl_session = true
source = "../.."
cluster_name = "${local.cluster_name}"
subnets = "${module.vpc.public_subnets}"
tags = "${local.tags}"
vpc_id = "${module.vpc.vpc_id}"
worker_groups = "${local.worker_groups}"
}
4 changes: 2 additions & 2 deletions examples/eks_test_fixture/outputs.tf
@@ -3,9 +3,9 @@ output "cluster_endpoint" {
value = "${module.eks.cluster_endpoint}"
}

output "cluster_security_group_ids" {
output "cluster_security_group_id" {
description = "Security group ids attached to the cluster control plane."
value = "${module.eks.cluster_security_group_ids}"
value = "${module.eks.cluster_security_group_id}"
}

output "kubectl_config" {
24 changes: 24 additions & 0 deletions kubectl.tf
@@ -0,0 +1,24 @@
resource "local_file" "kubeconfig" {
content = "${data.template_file.kubeconfig.rendered}"
filename = "${var.config_output_path}/kubeconfig"
count = "${var.configure_kubectl_session ? 1 : 0}"
}

resource "local_file" "config_map_aws_auth" {
content = "${data.template_file.config_map_aws_auth.rendered}"
filename = "${var.config_output_path}/config-map-aws-auth.yaml"
count = "${var.configure_kubectl_session ? 1 : 0}"
}

resource "null_resource" "configure_kubectl" {
provisioner "local-exec" {
command = "kubectl apply -f ${var.config_output_path}/config-map-aws-auth.yaml --kubeconfig ${var.config_output_path}/kubeconfig"
}

triggers {
config_map_rendered = "${data.template_file.config_map_aws_auth.rendered}"
kubeconfig_rendered = "${data.template_file.kubeconfig.rendered}"
}

count = "${var.configure_kubectl_session ? 1 : 0}"
}
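The `count = "${var.configure_kubectl_session ? 1 : 0}"` ternary used throughout this new file is the Terraform 0.11 idiom for conditionally creating a resource, since HCL at this version has no conditional blocks. A minimal sketch of the pattern on its own (hypothetical resource and filename):

```hcl
# Toggle that controls whether the resource exists at all.
variable "configure_kubectl_session" {
  description = "Toggle creation of local config files."
  default     = true
}

# count = 1 creates the resource; count = 0 skips it entirely.
resource "local_file" "example" {
  content  = "hello"
  filename = "./example.txt"
  count    = "${var.configure_kubectl_session ? 1 : 0}"
}
```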
