# cloud-platform-environments

## Intro

This repository is where Kubernetes namespaces are managed across all of the clusters. Kubernetes namespaces and resources are defined in the `namespaces` directory of this repository, under the corresponding cluster name.

## Functionality

For each defined cluster, the pipeline will:

  1. Create every namespace defined in the `namespaces/<cluster>` directory (see the sketch after this list). If a namespace already exists on the cluster, it is skipped.
  2. Delete any namespaces that exist on the cluster but are not defined in the repository.
  3. Create any Kubernetes resources defined under `namespaces/<cluster>/<namespace>`.
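A namespace is defined as an ordinary Kubernetes resource file in its directory. A minimal sketch; the namespace name is illustrative:

```yaml
# Minimal namespace definition; the pipeline applies this to the cluster.
apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace   # illustrative; typically matches the directory name
```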

## Namespaces

The `namespaces/` directory contains subdirectories named after the existing clusters, and inside each of those, a subdirectory for every namespace you want to create on that cluster. Each namespace directory holds the Kubernetes resource files, in standard Kubernetes format, that you want created. These are created automatically by the AWS CodePipeline after a push to the repository's master branch.
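For example, a hypothetical layout for a single namespace might look like this; the cluster name, namespace name, and individual file names are illustrative rather than a required convention:

```
namespaces/
└── <cluster-name>/
    └── my-namespace/
        ├── namespace.yaml   # the Namespace object itself
        ├── rbac.yaml        # any other Kubernetes resources for the namespace
        └── terraform/       # AWS resources (see the next section)
            └── main.tf
```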

## AWS resources

In a similar fashion to namespaces, you can create AWS resources in your desired namespace. The file structure for this is `namespaces/<cluster>/<namespace>/terraform/`; Terraform files placed in that directory trigger the pipeline to create the corresponding AWS resources. Several Terraform modules exist (for example, ECR credentials and S3 buckets) and should be used to create these resources, as in the example Terraform file below.

## Changes within namespaces

Changes within the `namespaces` directory are managed by the build-environments Concourse job configured here. GitHub triggers the build process via a webhook. The build itself runs the `whichNamespace.sh` script, which checks the changes in the last commit; if it detects any within a namespace folder, it executes `namespace.py` with the appropriate cluster parameter(s).

## Example terraform file

module "my_S3_bucket" {
  source = "github.com/ministryofjustice/cloud-platform-terraform-s3-bucket?ref=1.0"

  team_name = "my-team"
  bucket_id = "my-bucket"
}

resource "kubernetes_secret" "my_S3_bucket_creeds" {
  metadata {
    name = "my-S3-bucket-creeds"
  }

  data {
    access_key_id     = "${module.my_s3_bucket.access_key_id}"
    Secret_access_key = "${module.my_s3_bucket.secret_access_key}"
    Bucket_name       = "${module.my_s3_bucket.bucket_name}"
  }
}
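Once the pipeline has applied the file above, the bucket credentials are available to applications in the namespace as an ordinary Kubernetes secret. A minimal sketch of a pod consuming them, assuming the secret shown above; the pod name, container name, and image are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app                 # illustrative
spec:
  containers:
    - name: app
      image: my-app:latest     # illustrative
      env:
        # Map the secret's keys onto the environment variables the AWS SDK expects.
        - name: AWS_ACCESS_KEY_ID
          valueFrom:
            secretKeyRef:
              name: my-s3-bucket-creds
              key: access_key_id
        - name: AWS_SECRET_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              name: my-s3-bucket-creds
              key: secret_access_key
        - name: BUCKET_NAME
          valueFrom:
            secretKeyRef:
              name: my-s3-bucket-creds
              key: bucket_name
```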

## concourse-ci/status Check

There are occasions where the Terraform plan produced by the pipeline will return output like the following, which appears to remove policy configuration. However, examination of the plan reveals that the policy is re-added by Terraform:

```
      ~ policy = jsonencode(
            {
              - Statement = [
                  - {
                      - Action   = "dynamodb:*"
                      - Effect   = "Allow"
                      - Resource = "arn:aws:dynamodb:eu-west-2:1234567898:table/cp-abcdefghij123"
                      - Sid      = ""
                    },
                ]
              - Version   = "2012-10-17"
            }
        ) -> (known after apply)
        user   = "cp-dynamo-cp-abcdefghij123"
```