
Provisioning over 50 Instances (F5 Workshop) causes Teardown to Fail. #1944

Open
VDI-Tech-Guy opened this issue Apr 25, 2023 · 3 comments
VDI-Tech-Guy commented Apr 25, 2023

Problem Summary

The code lets you provision 50+ labs, but tearing down 50 or more fails because of a limitation in the AWS Boto library: a single call is capped at 200 objects. In the F5 lab's case, 50 × (2 web servers + 1 F5 BIG-IP + 1 Ansible node) + the control node = 201 objects, so the teardown process fundamentally fails.

I have coded a way out of this issue, which I have tested multiple times (50 students); it deprovisions the lab correctly and automatically. Previously, the only workaround was to go into the AWS Console and manually delete enough objects for the teardown code to proceed, which is not automated.

@heatmiser and I have discussed this and think it is very important for RHDP, and for anyone working with code that provisions over 200 objects, to ensure plenty of headroom (feel free to adjust the number; I set it to delete 100 objects at a time).

workshop/roles/manage_ec2_instances/tasks/teardown.yml

Code added at line 102, before the "Install AWS CLI" task and after "Debug all _workshop_vpc2_nodes":

- name: debug count vpc1
  ansible.builtin.debug:
    msg: "{{ all_workshop_vpc_nodes.instances | length }}"
  when:
    - debug_teardown

# Destroy VPC 1 instances
- name: destroy EC2 last 100 instances
  amazon.aws.ec2_instance:
    region: "{{ ec2_region }}"
    state: absent
    instance_ids: "{{ all_workshop_vpc_nodes.instances[-100:] | map(attribute='instance_id') | list }}"
    wait: true
    wait_timeout: "{{ (student_total * 300) | int }}"
  register: result_ec2_destroy
  when: all_workshop_vpc_nodes.instances | length > 0

# retrieve instances for VPC 1
- name: re-grab vpc node facts for workshop after deletion of last 100
  amazon.aws.ec2_instance_info:
    region: "{{ ec2_region }}"
    filters:
      "vpc-id": "{{ ec2_vpc_id }}"
  register: all_workshop_vpc_nodes
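The delete-the-last-100-then-re-query pattern above is one way to stay under the per-call cap; the underlying idea is simply to split the instance-ID list into batches no larger than the limit and delete each batch in turn. A minimal Python sketch of that batching logic (the `chunked` helper and the batch size of 100 are illustrative, not part of the workshop code):

```python
def chunked(ids, size=100):
    """Split a list of instance IDs into batches of at most `size`
    elements, so each delete call stays under the per-call object cap."""
    return [ids[i:i + size] for i in range(0, len(ids), size)]

# 201 objects (50 students x 4 nodes + 1 control node) -> 3 batches
instance_ids = ["i-%017x" % n for n in range(201)]
batches = chunked(instance_ids)
print([len(b) for b in batches])  # -> [100, 100, 1]
```

Each batch could then be passed to a separate `amazon.aws.ec2_instance` deletion call (or a loop over the task), which generalizes the fix beyond the 201-object F5 case.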

Issue Type

Bug

Extra vars file

---
# region where the nodes will live
ec2_region: us-west-2

# name prefix for all the VMs
ec2_name_prefix: f5-testdrive-test 
#F5-TestDrive-Test

# creates student_total of workbenches for the workshop
student_total: 1 

# Set the right workshop type, like network, rhel or f5 (see above)
workshop_type: f5

# Generate offline token to authenticate the calls to Red Hat's APIs
# Can be accessed at https://access.redhat.com/management/api
offline_token: "..."

# Required for podman authentication to registry.redhat.io
redhat_username: MyRHUser
redhat_password: "s^perSecretP@ss!"

#####OPTIONAL VARIABLES

# add prebuilt false
pre_build: false

# turn DNS on for control nodes, and set to type in valid_dns_type
dns_type: aws

# password for Ansible control node
admin_password: s^perSecretP@ss!

# Sets the Route53 DNS zone to use for Amazon Web Services
workshop_dns_zone: "mydomain.com"

# automatically installs Tower to control node
controllerinstall: true

# SHA value of targeted AAP bundle setup files.
provided_sha_value: 7456b98f2f50e0e1d4c93fb4e375fe8a9174f397a5b1c0950915224f7f020ec4

# default vars for ec2 AMIs (ec2_info) are located in provisioner/roles/manage_ec2_instances/defaults/main/main.yml
# select ec2_info AMI vars can be overwritten via ec2_xtra vars, e.g.:
ec2_xtra:
  f5node:
    owners: 679593333241
    size: t2.large
    os_type: linux
    disk_volume_type: gp3
    disk_space: 82
    disk_iops: 3000
    disk_throughput: 125
    architecture: x86_64
    filter: 'F5 BIGIP-16*PAYG-Best 25Mbps*'
    username: admin

f5_ee: "quay.io/f5_business_development/mmabis-ee-test:latest"

Ansible Playbook Output

N/A at this time, but this is a known issue: Boto caps a single call at 200 objects.

Ansible Version

[ec2-user@ip-10-0-100-29 provisioner]$ ansible --version
ansible [core 2.14.4]
  config file = /git/workshops-main-branch/provisioner/ansible.cfg
  configured module search path = ['/home/ec2-user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /home/ec2-user/.local/lib/python3.9/site-packages/ansible
  ansible collection location = /home/ec2-user/.ansible/collections:/usr/share/ansible/collections
  executable location = /home/ec2-user/.local/bin/ansible
  python version = 3.9.16 (main, Dec  8 2022, 00:00:00) [GCC 11.3.1 20221121 (Red Hat 11.3.1-4)] (/usr/bin/python3)
  jinja version = 3.1.2
  libyaml = True

Ansible Configuration

[ec2-user@ip-10-0-100-29 provisioner]$ ansible-config dump --only-changed
CONFIG_FILE() = /git/workshops-main-branch/provisioner/ansible.cfg
DEFAULT_FORKS(/git/workshops-main-branch/provisioner/ansible.cfg) = 50
DEFAULT_HOST_LIST(/git/workshops-main-branch/provisioner/ansible.cfg) = ['/git/workshops-main-branch/provisioner/hosts']
HOST_KEY_CHECKING(/git/workshops-main-branch/provisioner/ansible.cfg) = False
PERSISTENT_COMMAND_TIMEOUT(/git/workshops-main-branch/provisioner/ansible.cfg) = 60
PERSISTENT_CONNECT_TIMEOUT(/git/workshops-main-branch/provisioner/ansible.cfg) = 60
RETRY_FILES_ENABLED(/git/workshops-main-branch/provisioner/ansible.cfg) = False

Ansible Execution Node

CLI Ansible (Ansible Core)

Operating System

[ec2-user@ip-10-0-100-29 provisioner]$ cat /etc/redhat-release 
CentOS Stream release 9
VDI-Tech-Guy (Author) commented
@heatmiser

heatmiser (Contributor) commented

This issue would be resolved with #1589 ...ultimately up to @IPvSean and team to decide on how to handle.

heatmiser (Contributor) commented

Actually, I probably spoke too quickly... we would need to revisit the code and perhaps make some modifications; that PR was submitted a while back.
