I tried to reuse the AWS resources after the first terraform apply by running the apply command again with -var='locat_plan_filename="script1.py"', but the apply failed with the logs below.
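For reference, this is the full invocation (reconstructed from the flag above; it assumes the default workspace and the module's working directory):

```sh
terraform apply -var='locat_plan_filename="script1.py"'
```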
Logs
```text
⚡ terraform plan
Acquiring state lock. This may take a few moments...
module.locust.tls_private_key.loadtest: Refreshing state... [id=a5b616973aaf0f278d108eb3d874fed573c0ca87]
module.locust.aws_key_pair.loadtest: Refreshing state... [id=locuster-dist-loadtest-keypair]
module.locust.data.aws_subnet.current: Reading...
module.locust.data.aws_ami.amazon_linux_2: Reading...
module.locust.aws_iam_role.loadtest: Refreshing state... [id=locuster-dist-loadtest-role]
module.locust.null_resource.key_pair_exporter: Refreshing state... [id=7139382739481599406]
module.locust.data.aws_subnet.current: Read complete after 1s [id=subnet-****]
module.locust.data.aws_vpc.current: Reading...
module.locust.data.aws_vpc.current: Read complete after 0s [id=vpc-****]
module.locust.aws_security_group.loadtest: Refreshing state... [id=sg-0cc7e185d0b6035e6]
module.locust.aws_iam_instance_profile.loadtest: Refreshing state... [id=locuster-dist-loadtest-profile]
module.locust.data.aws_ami.amazon_linux_2: Read complete after 3s [id=ami-0d08ef957f0e4722b]
module.locust.aws_instance.nodes[3]: Refreshing state... [id=i-0dcf1bab9c079d17d]
module.locust.aws_instance.nodes[2]: Refreshing state... [id=i-0e9a8ccbe930343a8]
module.locust.aws_instance.nodes[0]: Refreshing state... [id=i-0cc320e0c6964a3d4]
module.locust.aws_instance.nodes[1]: Refreshing state... [id=i-0500f93dc3e480a58]
module.locust.aws_instance.leader: Refreshing state... [id=i-01069a4aa417686e4]
module.locust.null_resource.spliter_execute_command: Refreshing state... [id=4082036881688020582]
module.locust.null_resource.setup_leader: Refreshing state... [id=6433090945558305086]
module.locust.null_resource.setup_nodes[2]: Refreshing state... [id=7349787731921181745]
module.locust.null_resource.setup_nodes[0]: Refreshing state... [id=4438619351419084083]
module.locust.null_resource.setup_nodes[1]: Refreshing state... [id=87202048137344834]
module.locust.null_resource.setup_nodes[3]: Refreshing state... [id=462707723291235854]
module.locust.null_resource.executor[0]: Refreshing state... [id=1416692590300092713]
╷
│ Error: Cycle: module.locust.null_resource.setup_nodes[2] (destroy), module.locust.null_resource.setup_nodes[0] (destroy), module.locust.null_resource.setup_nodes[3] (destroy), module.locust.null_resource.executor[0] (destroy), module.locust.null_resource.setup_nodes[1] (destroy)
│
│
╵
Releasing state lock. This may take a few moments...
```
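To see exactly which edges form the loop, one option (my suggestion, not something the module documents) is to render the dependency graph with cycles highlighted:

```sh
# Renders the graph with any cycles drawn in a distinct color;
# requires Graphviz's `dot` to be installed.
terraform graph -draw-cycles | dot -Tsvg > graph.svg
```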
Proposal
Change the dependency between null_resource.executor and null_resource.setup_nodes.
Since Locust requires the master node to launch first, and only then the workers to register with it, IMHO the cycle would be resolved by reversing the dependency, unless null_resource.setup_nodes has a critical reason to run before null_resource.executor.
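As a minimal sketch of what I mean (only the resource names come from the plan output above; the triggers, count, and variable wiring are illustrative placeholders, not the module's actual code):

```hcl
# Hypothetical, simplified versions of the two resources with the edge reversed.
resource "null_resource" "executor" {
  # Starts the Locust master; no longer depends on setup_nodes.
  triggers = {
    plan_filename = var.locat_plan_filename
  }
}

resource "null_resource" "setup_nodes" {
  count = 4 # matches the four worker instances in the plan output

  # Reversed edge: workers are set up only after the master is running,
  # which matches the startup order Locust itself expects.
  depends_on = [null_resource.executor]
}
```

If I read the plan correctly, with the edge pointing this way a replacement of executor no longer has to destroy setup_nodes first while setup_nodes simultaneously waits on executor, which is the destroy-time loop reported above.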