
Extension does not work with Terraform workspaces that use prefix configuration #21

Open
lkolchin opened this issue Jul 7, 2022 · 12 comments
Labels
bug Something isn't working

Comments

lkolchin commented Jul 7, 2022

Hello,

I'm working with multiple workspaces (with either s3, local or tf cloud backends).
Now I've enabled infracost extension and in vscode I'm getting an error (in the vscode editor):

Could not run the infracost cmd in the /Users/USERNAME/repos/healthscope/Some-Infrastructure directory.
If this is a multi-project workspace please try opening just a single project. 
If this problem continues please open an issue here: https://github.com/infracost/vscode-infracost.

It does work on the cli when I specify the workspace though:

$ infracost breakdown --path . --terraform-workspace dev
Evaluating Terraform directory at .
  ✔ Downloading Terraform modules
  ✔ Evaluating Terraform directory
  ✔ Retrieving cloud prices to calculate costs

Project: Some-Infrastructure/some-aws-shakeout-ec2/app1
Workspace: dev

 Name                                                   Monthly Qty  Unit              Monthly Cost

 aws_alb.application_load_balancer
 ├─ Application load balancer                                   730  hours                   $18.40
 └─ Load balancer capacity units                     Monthly cost depends on usage: $5.84 per LCU

 aws_autoscaling_group.asg
 └─ aws_launch_template.apptemplate
    ├─ Instance usage (Linux/UNIX, on-demand, )                 730  hours                    $0.00
    └─ EC2 detailed monitoring                                    7  metrics                  $2.10

 aws_cloudwatch_metric_alarm.all_alarms["cpu_high"]
 └─ High resolution                                               1  alarm metrics            $0.30

 aws_cloudwatch_metric_alarm.all_alarms["cpu_low"]
 └─ High resolution                                               1  alarm metrics            $0.30

 OVERALL TOTAL                                                                               $21.10
──────────────────────────────────
28 cloud resources were detected:
∙ 4 were estimated, 2 of which include usage-based costs, see https://infracost.io/usage-file
∙ 22 were free, rerun with --show-skipped to see details
∙ 2 are not supported yet, rerun with --show-skipped to see details

I currently have a 'dev' workspace:

$ terraform workspace list
* dev

Why can't you do something like this in the extension to figure out the active workspace?

terraform_workspace="$(command terraform workspace show 2>/dev/null)"

Any way to fix it?
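As an aside: Terraform also records the currently selected CLI workspace in the `.terraform/environment` file inside the working directory, so a tool could read that file directly instead of shelling out to `terraform workspace show`. A minimal sketch of that idea (the `detect_workspace` helper and the demo paths are made up for illustration, not part of the extension):

```shell
#!/bin/sh
# Hypothetical helper: detect the active Terraform CLI workspace without
# invoking terraform. Terraform stores the selected workspace name in
# .terraform/environment; if the file is absent, the workspace is "default".
detect_workspace() {
  dir="${1:-.}"
  if [ -f "$dir/.terraform/environment" ]; then
    cat "$dir/.terraform/environment"
  else
    echo "default"
  fi
}

# Demo with a throwaway project directory
mkdir -p /tmp/demo-tf-project/.terraform
printf 'dev' > /tmp/demo-tf-project/.terraform/environment
detect_workspace /tmp/demo-tf-project   # prints: dev
```

This avoids spawning a `terraform` process entirely, which matters if the extension runs on every file save.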

hugorut added the "bug" label Jul 8, 2022
hugorut (Collaborator) commented Jul 8, 2022

Hey @lkolchin, thanks for raising this issue. I've got a couple of questions; I have a sneaking suspicion that the extension might not actually be failing because of the workspace:

  1. If you run infracost breakdown --path . in the Some-Infrastructure/some-aws-shakeout-ec2/app1 directory, do you get valid output?
  2. Could you open the Some-Infrastructure/some-aws-shakeout-ec2/app1 directory in VSCode and, with the extension enabled, check the "Infracost Debug" output? It can be found by going to View > Output and then selecting "Infracost Debug" from the dropdown.

(screenshot: Infracost Debug output panel)

Once you've located that output and you've reproduced the

Could not run the infracost cmd in the /Users/USERNAME/repos/healthscope/Some-Infrastructure directory.
If this is a multi-project workspace please try opening just a single project.
If this problem continues please open an issue here: https://github.com/infracost/vscode-infracost.

error, there should be some further error information in the logs; I'd be grateful if you could post it here. Many thanks

lkolchin (Author) commented Jul 8, 2022

Thanks for your reply,

Without specifying the workspace it fails (in the app1 directory):

$ infracost breakdown --path .
Evaluating Terraform directory at .
  ✖ Downloading Terraform remote variables
$ infracost breakdown --path . --terraform-workspace dev
Evaluating Terraform directory at .
  ✔ Downloading Terraform modules
  ✔ Evaluating Terraform directory
  ✔ Retrieving cloud prices to calculate costs

Project: HSO-Infrastructure/hso-aws-shakeout-ec2/app1
Workspace: dev

 Name                                                   Monthly Qty  Unit              Monthly Cost

 aws_alb.application_load_balancer
 ├─ Application load balancer                                   730  hours                   $18.40
 └─ Load balancer capacity units                     Monthly cost depends on usage: $5.84 per LCU

 aws_autoscaling_group.asg
 └─ aws_launch_template.apptemplate
    ├─ Instance usage (Linux/UNIX, on-demand, )                 730  hours                    $0.00
    └─ EC2 detailed monitoring                                    7  metrics                  $2.10

 aws_cloudwatch_metric_alarm.all_alarms["cpu_high"]
 └─ High resolution                                               1  alarm metrics            $0.30

 aws_cloudwatch_metric_alarm.all_alarms["cpu_low"]
 └─ High resolution                                               1  alarm metrics            $0.30

 OVERALL TOTAL                                                                               $21.10
──────────────────────────────────
28 cloud resources were detected:
∙ 4 were estimated, 2 of which include usage-based costs, see https://infracost.io/usage-file
∙ 22 were free, rerun with --show-skipped to see details
∙ 2 are not supported yet, rerun with --show-skipped to see details

And I'm seeing exactly the same issue in the "Infracost Debug" output:

Error: --terraform-workspace is not specified. Unable to detect organization or workspace.

I am using TF Cloud as a backend, which looks something like this:

terraform {
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "Some-Infrastructure"

    workspaces {
      prefix = "some-aws-shakeout-ec2-app1-"
    }
  }
}

I'm using a pretty neat setup where my workspaces are essentially the same across the 'local', 's3', and 'TF Cloud' backends, which allows for great code mobility.
If I'm in the dev CLI workspace, the TF Cloud workspace's name would be some-aws-shakeout-ec2-app1-dev.

If you can't replicate this error using the above info I could create a dummy repo for you to try.

hugorut (Collaborator) commented Jul 9, 2022

Thanks, @lkolchin, for the follow-up information; ah yes, this is a use case we run into quite a lot. Right now, as you're aware, the extension doesn't support this, as it's somewhat naive in the way it runs the underlying Infracost command. I'll look at supporting this use case ASAP, probably with the option to configure the workspace through the UI.

In the meantime, to get the Infracost extension actually working with your setup you could do something like so:

terraform {
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "Some-Infrastructure"

    workspaces {
      # prefix = "some-aws-shakeout-ec2-app1-"
      name = "some-aws-shakeout-ec2-app1-dev"
    }
  }
}

in your Terraform project. I know this isn't great, but it's a workaround until we get this implemented.

hugorut changed the title from "vscode extension and workspaces - doesn't work" to "Extension does not work with Terraform workspaces that use prefix configuration" Jul 9, 2022
lkolchin (Author) commented Jul 9, 2022

Thanks @hugorut ,

Unfortunately, using a specific workspace in the backend config (via "name=workspacename") is not an option, as those workspaces are set somewhat dynamically in our CI/CD. There may be multiple CLI workspaces (dev/test/prod) that work off the same repo and correspond to different Terraform Cloud workspaces (some-aws-shakeout-ec2-app1-dev/some-aws-shakeout-ec2-app1-test/some-aws-shakeout-ec2-app1-prod).
When Terraform runs, a snippet in the TF code auto-detects the workspace currently in use and lets us load variables from a vars module according to that workspace (dev/test/prod).

Here is a little snippet that shows how we do it (it might give you some ideas when you work on your fix):

# IMPORTANT: a variable named remote_workspace_env is required for each Terraform Cloud workspace.
# For example, for the tf-aws-sg-ebs-example-dev workspace you'd need to set remote_workspace_env=dev,
# either as:
# TF_VAR_remote_workspace_env=dev (environment variable)
# or
# remote_workspace_env=dev (Terraform variable)

# https://github.com/hashicorp/terraform/issues/22802
# https://support.hashicorp.com/hc/en-us/articles/360022827893-Terraform-workspace-value-is-always-default
variable "remote_workspace_env" {
  default     = "default"
  description = <<EOF
This is the workspace indicated via the environment or some other mechanism. This is useful in
a remote execution environment where the workspace name will not necessarily be the same as the active
workspace of your local environment, such as when using remote execution in Terraform Cloud.
  EOF
}

# https://github.com/hashicorp/terraform/issues/22802
# https://support.hashicorp.com/hc/en-us/articles/360022827893-Terraform-workspace-value-is-always-default
# With Terraform 1.1.0 or later, HashiCorp finally fixed a long-standing issue (see issue 22802 above) where the TF Cloud terraform.workspace interpolation always showed as 'default'.
# Hence the following locals block caters for any version of Terraform.
locals {
  # if the workspace env (workspace name) is "default" (such as any run in Terraform Cloud with terraform < 1.1.0), then 
  # var.remote_workspace_env is the chosen value otherwise it's terraform.workspace's env suffix (for example for tf-aws-sg-ebs-example-dev the value of $element(...) would be 'dev')
  workspace_env = terraform.workspace == "default" ? var.remote_workspace_env : "${element(split("-", terraform.workspace), length(split("-", terraform.workspace)) - 1)}"
}

#-----------------------------------------------------------------------------------------------------------------------
# Fetch variables based on Environment
#-----------------------------------------------------------------------------------------------------------------------
module "vars" {
  source        = "./modules/vars"
  workspace_env = local.workspace_env
}
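The suffix extraction in that locals block (take everything after the last hyphen of the workspace name, falling back to a separately supplied value when the workspace reports as "default") can be sketched in plain shell. The `workspace_env` function name is made up; `REMOTE_WORKSPACE_ENV` stands in for `var.remote_workspace_env`:

```shell
#!/bin/sh
# Illustrative re-implementation of the locals block's workspace_env logic.
workspace_env() {
  ws="$1"
  if [ "$ws" = "default" ]; then
    # Remote runs on older Terraform always report "default", so fall back
    # to the externally supplied environment name (or "default" if unset).
    echo "${REMOTE_WORKSPACE_ENV:-default}"
  else
    # Equivalent of element(split("-", ws), length(split("-", ws)) - 1):
    # keep the last hyphen-separated segment.
    echo "${ws##*-}"
  fi
}

workspace_env "tf-aws-sg-ebs-example-dev"   # prints: dev
workspace_env "default"   # prints REMOTE_WORKSPACE_ENV's value, or "default" if unset
```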

Happy to test it again when you come up with a fix :)

hugorut (Collaborator) commented Jul 9, 2022

Unfortunately using specific workspace in the backend config (using "name=workspacename") is not an option as we have those workspaces set in a somewhat dynamic way in our CI/CD

Hi @lkolchin, I wasn't suggesting committing these changes, just setting the workspace as something static whilst you work on the project locally.

Thanks for the snippet/example 👍

lkolchin (Author) commented

Hi @hugorut ,

Just tested your suggestion locally.
Changed backend to:

terraform {
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "some-Infrastructure"

    workspaces {
      # prefix = "some-aws-shakeout-ec2-app1-"
      name = "some-aws-shakeout-ec2-app1-dev"
    }
  }
}

It didn't help; VS Code didn't show anything.

debug: initializing workspace
debug: running Infracost in project: /Users/lkolchin/repos/some/some-Infrastructure/submit/shakeout/some-aws-shakeout-ec2/app1
debug: running Infracost cmd INFRACOST_CLI_PLATFORM=vscode infracost breakdown --path /Users/lkolchin/repos/some/some-Infrastructure/submit/shakeout/some-aws-shakeout-ec2/app1 --format json --log-level info
debug: found project [object Object]
debug: adding file: /Users/lkolchin/repos/some/some-Infrastructure/submit/shakeout/some-aws-shakeout-ec2/app1/main.tf to project: /Users/lkolchin/repos/some/some-Infrastructure/submit/shakeout/some-aws-shakeout-ec2/app1
debug: adding file: /Users/lkolchin/repos/some/some-Infrastructure/submit/shakeout/some-aws-shakeout-ec2/app1/main.tf to project: /Users/lkolchin/repos/some/some-Infrastructure/submit/shakeout/some-aws-shakeout-ec2/app1
debug: adding file: /Users/lkolchin/repos/some/some-Infrastructure/submit/shakeout/some-aws-shakeout-ec2/app1/main.tf to project: /Users/lkolchin/repos/some/some-Infrastructure/submit/shakeout/some-aws-shakeout-ec2/app1
debug: adding file: /Users/lkolchin/repos/some/some-Infrastructure/submit/shakeout/some-aws-shakeout-ec2/app1/main.tf to project: /Users/lkolchin/repos/some/some-Infrastructure/submit/shakeout/some-aws-shakeout-ec2/app1

(screenshot attached)

hugorut (Collaborator) commented Jul 10, 2022

Hmm @lkolchin, this seems odd. I'm working on some fixes this weekend which should give better debug support, so once those are live could I ask you to run it again and see what's going on? Many thanks for providing these useful comments 🙏

lkolchin (Author) commented

Sure, no worries I'll give it a go.
Just let me know when there is a new version :)

lkolchin (Author) commented

Unfortunately this is still broken in the new 0.10.9 version when testing in VS Code: msg="could not load vars from Terraform Cloud: --terraform-workspace is not specified. Unable to detect organization or workspace."

On top of that, the new CLI version is not in Homebrew, and when you install it manually (in my case on Apple M1 silicon) you get a warning that forces you to add a security exception manually:

(screenshot attached)

hugorut (Collaborator) commented Aug 12, 2022

Hi @lkolchin,

Hmm, seems like we'll need a way for users to provide this workspace in the editor. A workaround for the moment is to set a global env variable called TERRAFORM_WORKSPACE; this should be picked up by the VS Code shell that we run Infracost within.

Regarding the version, I'm seeing infracost 0.10.9 in brew. When I run brew upgrade infracost I get the following:

(screenshot: brew upgrade output)

What is your machine showing?

The warning from Apple appears if you have manually downloaded the release through Chrome. I don't recommend doing this; instead, use our install script, which downloads the latest release from GitHub:

# Downloads the CLI based on your OS/arch and puts it in /usr/local/bin
curl -fsSL https://raw.githubusercontent.com/infracost/infracost/master/scripts/install.sh | sh

lkolchin (Author) commented

Thanks @hugorut ,

I think there is a problem with 0.10.9 for M1 (Apple silicon, arm64 arch).
It works for you because you are on the amd64 arch:

$ brew install infracost
Warning: infracost 0.10.6 is already installed and up-to-date.
To reinstall 0.10.6, run:
  brew reinstall infracost
$ brew install [email protected]
Warning: No available formula with the name "[email protected]". Did you mean infracost?
==> Searching for similarly named formulae...
This similarly named formula was found:
infracost ✔
To install it, run:
  brew install infracost ✔
==> Searching for a previously deleted formula (in the last month)...
Error: No previously deleted formula found.
==> Searching taps on GitHub...
Error: No formulae found in taps.

Do you need to upload a version for arm64?

Anyway, I've tried your suggestion with:

$ export TERRAFORM_WORKSPACE=dev && code .

And it still doesn't show the costs in VS Code:

(screenshot attached)

hugorut (Collaborator) commented Aug 15, 2022

@lkolchin I'm running on M1; this isn't the problem. I think it's the fact that we don't provide versioned formulae for brew; we just distribute a single version which is updated with each release. Hence if you run brew upgrade infracost it should update your CLI.

Ok, thanks for the update on the env var. It looks like we'll have to modify the underlying process. This is on the roadmap, but I don't have an estimate for when we'll get to it.
