
port_mappings are always empty #151

Open
allexivan opened this issue Jan 12, 2024 · 15 comments

@allexivan

allexivan commented Jan 12, 2024

Description

When I try to create a new container definition, port_mappings is always empty.

  • [x] ✋ I have searched the open/closed issues and my issue is not listed.

Versions

  • Module version [Required]:
    5.7.3

  • Terraform version:
    Terraform v1.6.6
    on darwin_arm64

  • Provider version(s):

  • provider registry.terraform.io/hashicorp/aws v5.31.0
  • provider registry.terraform.io/hashicorp/random v3.6.0

Reproduction Code [Required]

Steps to reproduce the behavior:

module "ecs_container_definition" {
  source  = "terraform-aws-modules/ecs/aws//modules/container-definition"
  version = "~> 5.7.3"

  for_each = var.ecs_services

  name   = each.value["ecs_task_container_name"]
  cpu    = each.value["ecs_task_cpu"]
  memory = each.value["ecs_task_memory"]

  environment = each.value["environment"]
  secrets     = each.value["secrets"]
  essential   = true
  image       = "ar-public-maintenance"

  port_mappings = [
    {
      name          = each.value["ecs_task_container_name"]
      containerPort = each.value["ecs_task_container_port"]
      hostPort      = each.value["ecs_task_host_port"]
      protocol      = each.value["ecs_task_protocol"]
    }
  ]

  readonly_root_filesystem = each.value["ecs_task_readonly_root_filesystem"]

  enable_cloudwatch_logging = true
  log_configuration = {
    cloud_watch_log_group_name = "${var.prefix}-${var.environment}-${each.key}-task-log-group"
  }

  memory_reservation = each.value["ecs_task_memory"]

  tags = {
    Environment = var.environment
    Terraform   = "true"
  }
}

output "ecs_container_definitions" {
  description = "Container definitions for each ECS service"
  value       = { for svc_name, def in module.ecs_container_definition : svc_name => def.container_definition }
}

Expected behavior

In AWS task definition:


{
    "taskDefinitionArn": "arn here",
    "containerDefinitions": [
        {
            "name": "demo",
            "image": "ar-public-maintenance",
            "cpu": 128,
            "memory": 256,
            "memoryReservation": 256,
            "portMappings": [
                {
                    "name": "demo",
                    "containerPort": 80,
                    "hostPort": 80,
                    "protocol": "tcp"
                }
            ],

Actual behavior

In AWS task definition:

{
    "taskDefinitionArn": "arn here",
    "containerDefinitions": [
        {
            "name": "demo",
            "image": "ar-public-maintenance",
            "cpu": 128,
            "memory": 256,
            "portMappings": [],
}

Terminal Output Screenshot(s)

Changes to Outputs:
  + ecs_container_definitions                 = {
      + app-backend = {
          + cpu                    = 128
          + environment            = [
              + {
                  + name  = "APP_ENV"
                  + value = "STAGING"
                },
              + {
                  + name  = "METRICS_PORT"
                  + value = "9091"
                },
              + {
                  + name  = "BULL_BOARD_PORT"
                  + value = "7071"
                },
              + {
                  + name  = "NODE_ENV"
                  + value = "production"
                },
            ]
          + essential              = true
          + image                  = "ar-public-maintenance"
          + interactive            = false
          + linuxParameters        = {
              + initProcessEnabled = false
            }
          + logConfiguration       = {
              + cloud_watch_log_group_name = "backend-task-log-group"
              + logDriver                  = "awslogs"
              + options                    = {
                  + awslogs-group         = "/aws/ecs//demo"
                  + awslogs-region        = "us-west-2"
                  + awslogs-stream-prefix = "ecs"
                }
            }
          + memory                 = 256
          + memoryReservation      = 256
          + mountPoints            = []
          + name                   = "demo"
          + portMappings           = [
              + {
                  + containerPort = 80
                  + hostPort      = 80
                  + name          = "demo"
                  + protocol      = "tcp"
                },
            ]
          + privileged             = false
          + pseudoTerminal         = false
          + readonlyRootFilesystem = false
          + secrets                = [
              + {
                  + name      = "AR_CORE_API_URL"
                  + valueFrom = ""
                },
            ]
          + startTimeout           = 30
          + stopTimeout            = 120
          + volumesFrom            = []
        }
    }

Additional context

Similar issue as this one:

#122

@allexivan
Author

Does anyone know if there is a solution for this? Or do I need to rewrite it using plain Terraform resources instead of the module? I see others had similar issues, but I am not sure anyone found a fix.

@bryantbiggs
Member

If you can provide a reproduction, we can take a look and help figure out what's going on. However, the code provided above is not deployable.

@allexivan
Author

allexivan commented Jan 20, 2024

OK. Here is a simplified version taken from #147 (comment):

locals {
  name = "test-ecs-module"
  tags = {
    Env     = "test"
    Project = "ecs-module"
  }
}

module "cluster" {
  source = "terraform-aws-modules/ecs/aws//modules/cluster"

  cluster_name = local.name

  fargate_capacity_providers = {
    FARGATE = {
      default_capacity_provider_strategy = {
        weight = 100
      }
    }
  }
  tags = local.tags
}


module "nginx" {
  source                   = "terraform-aws-modules/ecs/aws//modules/container-definition"
  version                  = "5.7.3"
  name                     = local.name
  service                  = local.name
  essential                = true
  readonly_root_filesystem = false
  image                    = "public.ecr.aws/nginx/nginx:1.25.3"
  mount_points = [
    {
      containerPath = "/conf/"
      sourceVolume  = "conf"
      readOnly      = true
    }
  ]
  port_mappings = [
    {
      containerPort = 80
      hostPort      = 80
      protocol      = "tcp"
    }
  ]
  enable_cloudwatch_logging   = false
  create_cloudwatch_log_group = false
}

output "nginx_container_definition" {
  description = "The container definition for the nginx module"
  value       = module.nginx.container_definition
}

module "service" {
  source      = "terraform-aws-modules/ecs/aws//modules/service"
  version     = "5.7.3"
  name        = local.name
  cluster_arn = module.cluster.arn

  cpu           = 256
  memory        = 512
  desired_count = 1
  launch_type   = "FARGATE"

  create_task_exec_iam_role = true
  create_tasks_iam_role     = true

  create_security_group = true
  security_group_rules = [
    {
      description = "Allow egress"
      type        = "egress"
      protocol    = "all"
      from_port   = 0
      to_port     = 65535
      cidr_blocks = ["0.0.0.0/0"]
    }
  ]
  subnet_ids       = module.vpc["main"].private_subnets
  network_mode     = "awsvpc"
  assign_public_ip = false

  container_definitions = {
    (local.name) = module.nginx.container_definition
  }

  volume = [
    {
      name : "conf"
    }
  ]

  enable_autoscaling             = false
  ignore_task_definition_changes = false
  tags                           = local.tags
  propagate_tags                 = "TASK_DEFINITION"
}

The output of nginx_container_definition:

Changes to Outputs:
  + nginx_container_definition                = {
      + environment            = []
      + essential              = true
      + image                  = "public.ecr.aws/nginx/nginx:1.25.3"
      + interactive            = false
      + linuxParameters        = {
          + initProcessEnabled = false
        }
      + logConfiguration       = {
          + logDriver = "awslogs"
          + options   = {
              + awslogs-group         = ""
              + awslogs-region        = "us-west-2"
              + awslogs-stream-prefix = "ecs"
            }
        }
      + mountPoints            = [
          + {
              + containerPath = "/conf/"
              + readOnly      = true
              + sourceVolume  = "conf"
            },
        ]
      + name                   = "test-ecs-module"
      + portMappings           = [
          + {
              + containerPort = 80
              + hostPort      = 80
              + protocol      = "tcp"
            },
        ]
      + privileged             = false
      + pseudoTerminal         = false
      + readonlyRootFilesystem = false
      + startTimeout           = 30
      + stopTimeout            = 120
      + volumesFrom            = []
    }

The actual JSON in AWS ECS Tasks:

{
    "taskDefinitionArn": "arn:aws:ecs:us-west-2:xxxxx:task-definition/test-ecs-module:1",
    "containerDefinitions": [
        {
            "name": "test-ecs-module",
            "image": "public.ecr.aws/nginx/nginx:1.25.3",
            "cpu": 0,
            "portMappings": [],
            "essential": true,
            "environment": [],
            "mountPoints": [],
            "volumesFrom": [],
            "linuxParameters": {
                "initProcessEnabled": false
            },
            "startTimeout": 30,
            "stopTimeout": 120,
            "user": "0",
            "privileged": false,
            "readonlyRootFilesystem": true,
            "interactive": false,
            "pseudoTerminal": false,
            "logConfiguration": {
                "logDriver": "awslogs",
                "options": {
                    "awslogs-group": "/aws/ecs/test-ecs-module/test-ecs-module",
                    "awslogs-region": "us-west-2",
                    "awslogs-stream-prefix": "ecs"
                }
            }
        }
    ],

As you can see, portMappings and mountPoints are both empty.

@bryantbiggs
Member

ah this is a duplicate of #147 - what is the motivation for having the container definition defined on its own, outside the service?

@allexivan
Author

allexivan commented Jan 20, 2024

Because I have many containers running in the same service, as well as custom dynamic env vars and secrets, which are not supported inside the service module (because of for_each).

@bryantbiggs
Member

custom dynamic env and secrets, which are not supported inside the service (because of for_each).

What do you mean "not supported"?

https://github.com/aws-ia/ecs-blueprints/blob/313d458c87708d4678ab4ef572f9da860045381c/terraform/fargate-examples/backstage/main.tf#L44-L54

@allexivan
Author

allexivan commented Jan 20, 2024

Yes, that works, but I need to merge static and dynamic env vars and secrets from a map.

Something like this:

locals {
  additional_environment = {
    "APP_ENV"                   = var.App_Env,
    "NODE_ENV"                  = var.Node_Env,
    "CONFIG_REDIS__HOST"        = try(aws_elasticache_cluster.elastic_cache_cluster["cluster1-redis"].cache_nodes[0].address, "")
    "CONFIG_POSTGRES__USER"     = var.aurora_postgresql_v2_master_username,
    "CONFIG_POSTGRES__PASSWORD" = var.aurora_postgresql_v2_master_pwd,
    "CONFIG_POSTGRES__HOST"     = try(module.aurora_postgresql_v2.cluster_endpoint, ""),
  }

  environment_variables = [
    for key, value in local.additional_environment : {
      name  = key
      value = value != "" ? value : null
    }
  ]
}

module "ecs_container_definition" {
  source  = "terraform-aws-modules/ecs/aws//modules/container-definition"
  version = "~> 5.7.3"

  for_each = var.ecs_services

  name   = each.value["ecs_task_container_name"]
  cpu    = each.value["ecs_task_cpu"]
  memory = each.value["ecs_task_memory"]

  environment = concat([
    for item in each.value["environment"] : {
      name  = item.name
      value = item.value
    }
  ], local.environment_variables)

This works with the container-definition module, but it does not work under the service module. I get:

│ Error: Invalid for_each argument
│
│   on .terraform/modules/ecs_service/modules/service/main.tf line 525, in module "container_definition":
│  525:   for_each = { for k, v in var.container_definitions : k => v if local.create_task_definition && try(v.create, true) }
│     ├────────────────
│     │ local.create_task_definition is true
│     │ var.container_definitions will be known only after apply
│
│ The "for_each" map includes keys derived from resource attributes that cannot be determined until apply, and so Terraform cannot determine the full set of keys that will identify the instances of this resource. When working with unknown values in for_each, it's better to define the map keys statically in your configuration and place apply-time results only in the map values.

Maybe I am missing something, or there is another solution. I cannot hardcode env vars and secrets because I have hundreds of services and containers, each with different settings.
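For context, the rule behind this error is the one Terraform states: the keys of a for_each map must be known at plan time, while apply-time values may only appear in the map values. A minimal, hypothetical sketch of that shape (the "app"/"worker" keys and aws_db_instance.example are made up for illustration):

```hcl
locals {
  # Keys "app" and "worker" are string literals, so Terraform can enumerate
  # them during plan even though some values are unknown until apply.
  services = {
    app = {
      environment = [
        # An apply-time attribute is fine here: it is a map *value*, not a key.
        { name = "DB_HOST", value = aws_db_instance.example.address }
      ]
    }
    worker = {
      environment = []
    }
  }
}
```

As this thread shows, though, passing unknown values into the service module's container_definitions can still leave its internal for_each filter unresolvable at plan time, which is the behavior tracked in #147.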

@bryantbiggs
Member

bryantbiggs commented Jan 20, 2024

And what if you do this instead:

locals {
  additional_environment = {
    "APP_ENV"                   = var.App_Env,
    "NODE_ENV"                  = var.Node_Env,
    "CONFIG_REDIS__HOST"        = lookup(aws_elasticache_cluster.elastic_cache_cluster["cluster1-redis"].cache_nodes[0], "address", null)
    "CONFIG_POSTGRES__USER"     = var.aurora_postgresql_v2_master_username,
    "CONFIG_POSTGRES__PASSWORD" = var.aurora_postgresql_v2_master_pwd,
    "CONFIG_POSTGRES__HOST"     = lookup(module.aurora_postgresql_v2, "cluster_endpoint", null),
  }
}

module "ecs_container_definition" {
  source  = "terraform-aws-modules/ecs/aws//modules/container-definition"
  version = "~> 5.7.3"

  for_each = var.ecs_services

  name   = each.value["ecs_task_container_name"]
  cpu    = each.value["ecs_task_cpu"]
  memory = each.value["ecs_task_memory"]

  environment = concat([
      for key, value in local.additional_environment : {
        name  = key
        value = value
      }
    ],
    local.environment_variables
  )

@allexivan
Author

Yeah, that did the trick. Thanks!
But I am still wondering why the container-definition module does not work properly.

@allexivan
Author

Here is my code for whoever has the same issue:

locals {
  additional_environment = {
    "ACM"        = lookup(module.acm_virginia, "acm_certificate_arn", null)
  }

  environment_variables = [
    for key, value in local.additional_environment : {
      name  = key
      value = value != "" ? value : null
    }
  ]
}

module "ecs_service" {

  source  = "terraform-aws-modules/ecs/aws//modules/service"
  version = "~> 5.7.3"

  for_each = var.ecs_services

  name        = "${var.prefix}-${var.environment}-${each.value["ecs_service_name"]}-service"
  cluster_arn = module.ecs_cluster["cluster1"].arn

 ## Container definitions
  container_definitions = jsondecode(jsonencode({
    (each.value["ecs_task_container_name"]) = {
      cpu    = each.value["ecs_task_cpu"]
      memory = each.value["ecs_task_memory"]
      
      environment = concat([
        for item in each.value["environment"] :
        {
          name  = item.name
          value = item.value
        }
        ], [
        for key, value in local.additional_environment :
        {
          name  = key
          value = value
        }
      ])

      # environment = each.value["environment"]
      secrets   = each.value["secrets"]
      essential = true
      image     = "public-maintenance"

      port_mappings = [
        {
          name          = each.value["ecs_task_container_name"]
          containerPort = each.value["ecs_task_container_port"]
          hostPort      = each.value["ecs_task_host_port"]
          protocol      = each.value["ecs_task_protocol"]
        }
      ]
    }
  }))

@allexivan
Author

And what if you do this instead:

locals {
  additional_environment = {
    "APP_ENV"                   = var.App_Env,
    "NODE_ENV"                  = var.Node_Env,
    "CONFIG_REDIS__HOST"        = lookup(aws_elasticache_cluster.elastic_cache_cluster["cluster1-redis"].cache_nodes[0], "address", null)
    "CONFIG_POSTGRES__USER"     = var.aurora_postgresql_v2_master_username,
    "CONFIG_POSTGRES__PASSWORD" = var.aurora_postgresql_v2_master_pwd,
    "CONFIG_POSTGRES__HOST"     = lookup(module.aurora_postgresql_v2, "cluster_endpoint", null),
  }
}

module "ecs_container_definition" {
  source  = "terraform-aws-modules/ecs/aws//modules/container-definition"
  version = "~> 5.7.3"

  for_each = var.ecs_services

  name   = each.value["ecs_task_container_name"]
  cpu    = each.value["ecs_task_cpu"]
  memory = each.value["ecs_task_memory"]

  environment = concat([
      for key, value in local.additional_environment : {
        name  = key
        value = value
      }
    ],
    local.environment_variables
  )

Actually, it seems this works only when the resources already exist. If I plan from scratch, I still hit the same issue:

│ Error: Invalid for_each argument
│
│   on .terraform/modules/ecs_service/modules/service/main.tf line 525, in module "container_definition":
│  525:   for_each = { for k, v in var.container_definitions : k => v if local.create_task_definition && try(v.create, true) }
│     ├────────────────
│     │ local.create_task_definition is true
│     │ var.container_definitions will be known only after apply
│
│ The "for_each" map includes keys derived from resource attributes that cannot be determined until apply, and so Terraform cannot determine the full set of keys that will identify the instances of this resource.

@allexivan
Author

locals {
  additional_environment = {
    "APP_ENV"                   = var.App_Env,
    "NODE_ENV"                  = var.Node_Env,
    "CONFIG_REDIS__HOST"        = try(lookup(aws_elasticache_cluster.elastic_cache_cluster["cluster1-redis"].cache_nodes[0], "address"), "")
    "CONFIG_POSTGRES__USER"     = var.aurora_postgresql_v2_master_username,
    "CONFIG_POSTGRES__PASSWORD" = var.aurora_postgresql_v2_master_pwd,
    "CONFIG_POSTGRES__HOST"     = try(lookup(module.aurora_postgresql_v2, "cluster_endpoint"), "")
    "BIDGEMMER_DATABASE_URL"    = "VALUE3"
  }

  environment_variables = [
    for key, value in local.additional_environment : {
      name  = key
      value = value != "" ? value : null
    }
  ]
}

This also fails. I also added a depends_on for the module.
I will have to work around it and hardcode some variables.

@allexivan
Author

allexivan commented Jan 21, 2024

This doesn't work either:

https://github.com/aws-ia/ecs-blueprints/blob/313d458c87708d4678ab4ef572f9da860045381c/terraform/fargate-examples/backstage/main.tf#L44-L54

My code:

module "ecs_service" {

  source  = "terraform-aws-modules/ecs/aws//modules/service"
  version = "~> 5.7.3"

  for_each = var.ecs_services

  name        = "${var.prefix}-${var.environment}-${each.value["ecs_service_name"]}-service"
  cluster_arn = module.ecs_cluster["cluster1"].arn
  
  ## Container definitions
  container_definitions = jsondecode(jsonencode({
    (each.value["ecs_task_container_name"]) = {
      cpu    = each.value["ecs_task_cpu"]
      memory = each.value["ecs_task_memory"]
      environment = [
        { name = "APP_ENV", value = var.App_Env },
        { name = "NODE_ENV", value = var.Node_Env },
        { name = "CONFIG_REDIS__HOST", value = try(lookup(aws_elasticache_cluster.elastic_cache_cluster["cluster1-redis"].cache_nodes[0], "address"), "") },
        { name = "CONFIG_POSTGRES__USER", value = var.aurora_postgresql_v2_master_username },
        { name = "CONFIG_POSTGRES__PASSWORD", value = var.aurora_postgresql_v2_master_pwd },
        { name = "CONFIG_POSTGRES__HOST", value = try(lookup(module.aurora_postgresql_v2, "cluster_endpoint"), "") },
      ]

│ Error: Invalid for_each argument
│
│   on .terraform/modules/ecs_service/modules/service/main.tf line 525, in module "container_definition":
│  525:   for_each = { for k, v in var.container_definitions : k => v if local.create_task_definition && try(v.create, true) }
│     ├────────────────
│     │ local.create_task_definition is true
│     │ var.container_definitions will be known only after apply
│
│ The "for_each" map includes keys derived from resource attributes that cannot be determined until apply, and so Terraform cannot determine the full set of keys that will identify the instances of this resource.

@omi-jobs

We also hit an error on this line when creating an ECS cluster with a service and task definition; we have multiple containers inside the task definition. This is the error we get when running the terraform apply command:

Error: Invalid for_each argument
│ 
│   on .terraform/modules/container.ecs/modules/service/main.tf line 525, in module "container_definition":
│  525:   for_each = { for k, v in var.container_definitions : k => v if local.create_task_definition && try(v.create, true) }
│     ├────────────────
│     │ local.create_task_definition is true
│     │ var.container_definitions will be known only after apply
│ 
│ The "for_each" map includes keys derived from resource attributes that cannot be determined until apply, and so Terraform cannot determine the full set of keys that will identify the instances of this resource.
│ 
│ When working with unknown values in for_each, it's better to define the map keys statically in your configuration and place apply-time results only in the map values.
│ 
│ Alternatively, you could use the -target planning option to first apply only the resources that the for_each value depends on, and then apply a second time to fully converge.

Please let us know what the solution would be. @bryantbiggs

The workaround we tried: first create the ECS cluster and service with the task definition commented out, then uncomment the task definition and run terraform apply again.
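The -target option the error message suggests amounts to the same thing as a two-phase apply. A sketch, using resource addresses that appeared earlier in this thread (adjust to your own configuration):

```shell
# Phase 1: create only the dependencies whose attributes feed the
# container definitions, so their values are known on the next plan.
terraform apply -target='module.aurora_postgresql_v2' \
                -target='aws_elasticache_cluster.elastic_cache_cluster["cluster1-redis"]'

# Phase 2: full apply now that the referenced attributes exist in state.
terraform apply
```

Note that Terraform's own documentation treats -target as an escape hatch for exceptional situations, not a routine workflow.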

@bryantbiggs
Member

The actual issue is #147, which we do not have a fix for at this time.
