
[🐛 Bug]: chart - autoscaling too many browser nodes #2160

Open
ofirdassa9 opened this issue Mar 4, 2024 · 5 comments
Labels
I-autoscaling-k8s Issue relates to autoscaling in Kubernetes, or the scaler in KEDA

Comments

@ofirdassa9

What happened?

I ran a simple Selenium test that gets a remote driver from the Hub, navigates to facebook.com, and then to google.com.
As long as the test is alive (it doesn't matter whether it is sleeping or actually doing something), more and more Chrome nodes are deployed (I tried Firefox and Edge as well, with the same result), until there are 8, which is the default limit.
I use the KEDA that is installed with the chart, not an existing installation.
This happens in both my EKS and my docker-desktop clusters.
I used port-forward to reach the Hub service from my browser.
The Python script of the test:

import time
from selenium import webdriver

# URL for the remote Chrome WebDriver
remote_url = "http://automation:automation@localhost:4444/wd/hub"  # Replace this with the actual URL of your remote WebDriver

# Setting up the Chrome options
chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument("--no-sandbox")
chrome_options.add_argument("--disable-dev-shm-usage")

# Getting the remote WebDriver
driver = webdriver.Remote(remote_url, options=chrome_options)

# Navigating to Facebook
driver.get("https://www.facebook.com")

# Printing the title of the page
print("Title of the page:", driver.title)

driver.get("https://www.google.com")

# Printing the title of the page
print("Title of the page:", driver.title)
# Keep the session alive for a minute, then close the WebDriver
time.sleep(60)
driver.quit()
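While the test above sleeps, the overscaling can be observed by polling the Grid's /status endpoint and counting the registered nodes. A minimal sketch, assuming the Grid 4 status payload shape (nodes listed under `value.nodes`); the helper names are made up for illustration:

```python
import json
import urllib.request


def count_nodes(status_payload: dict) -> int:
    # Selenium Grid 4's /status endpoint lists registered nodes
    # under value.nodes; each entry corresponds to one browser node.
    return len(status_payload.get("value", {}).get("nodes", []))


def fetch_grid_status(url: str = "http://localhost:4444/status") -> dict:
    # Hypothetical helper: fetch and decode the Grid status JSON.
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)


if __name__ == "__main__":
    status = fetch_grid_status()
    print("registered nodes:", count_nodes(status))
```

Running this in a loop during the 60-second sleep would show the node count climbing toward the limit even though only one session is active.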

my values.yaml:

basicAuth:
  username: "automation"
  password: "automation"

autoscaling:
  enabled: true

Command used to start Selenium Grid with Docker (or Kubernetes)

helm install selenium-grid -n selenium-grid docker-selenium/selenium-grid -f values.yaml --create-namespace

Relevant log output

no relevant output logs

Operating System

EKS, Docker desktop

Docker Selenium version (image tag)

4.18.1-20240224

Selenium Grid chart version (chart version)

0.28.3


github-actions bot commented Mar 4, 2024

@ofirdassa9, thank you for creating this issue. We will troubleshoot it as soon as we can.


Info for maintainers

Triage this issue by using labels.

If information is missing, add a helpful comment and then add the I-issue-template label.

If the issue is a question, add the I-question label.

If the issue is valid but there is no time to troubleshoot it, consider adding the help wanted label.

If the issue requires changes or fixes from an external project (e.g., ChromeDriver, GeckoDriver, MSEdgeDriver, W3C), add the applicable G-* label, and it will provide the correct link and auto-close the issue.

After troubleshooting the issue, please add the R-awaiting answer label.

Thank you!

@VietND96
Member

VietND96 commented Mar 6, 2024

@ofirdassa9, can you read through #2133?
For the right fix, we want help investigating and fixing the Scaler in the upstream KEDA project - https://github.com/kedacore/keda/blob/main/pkg/scalers/selenium_grid_scaler.go

@ofirdassa9
Author

@VietND96 It looks like when setting

autoscaling:
  scaledJobOptions:
    scalingStrategy:
      strategy: default

it behaves as expected. Thank you!

Shouldn't this be the default value for the helm chart?
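For reference, the full values.yaml from the original report with this workaround merged in would look like the sketch below; only the scaledJobOptions block is new relative to the report:

```yaml
basicAuth:
  username: "automation"
  password: "automation"

autoscaling:
  enabled: true
  scaledJobOptions:
    scalingStrategy:
      strategy: default
```

Applying it with the same helm install (or helm upgrade) command as before picks up the changed strategy.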

@andrii-rymar

@ofirdassa9 in my case the default strategy doesn't work well enough. For some reason the scaler doesn't create the expected number of jobs when they are requested. I sometimes see some overscaling with the accurate strategy, but at least new session requests do not stay in the queue for no reason.

@VietND96
Member

@andrii-rymar, you are describing this kind of issue, right? Something like: with the default strategy, given that the queue has 6 incoming requests, 6 Node pods come up. All 6 Node pods are up and running, however only 5 sessions can be created, and the remaining request stays in the queue until it fails with selenium.common.exceptions.SessionNotCreatedException: Message: Could not start a new session. Could not start a new session. Unable to create new session
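Until the scaler is fixed upstream, the stuck-in-queue failure described above can be papered over on the client side with a retry around session creation. A generic sketch; the helper name and parameters are made up for illustration, not part of Selenium's API:

```python
import time


def create_with_retry(factory, retries=3, delay=10.0, exc_types=(Exception,)):
    # Call factory() until it succeeds or retries are exhausted.
    # With Selenium, exc_types would typically be
    # (selenium.common.exceptions.SessionNotCreatedException,).
    last_exc = None
    for attempt in range(1, retries + 1):
        try:
            return factory()
        except exc_types as exc:
            last_exc = exc
            if attempt < retries:
                time.sleep(delay)
    raise last_exc
```

Usage against the script from the report would be something like `driver = create_with_retry(lambda: webdriver.Remote(remote_url, options=chrome_options), exc_types=(SessionNotCreatedException,))`, giving the autoscaler time to bring up a usable node.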

VietND96 added the I-autoscaling-k8s label (Issue relates to autoscaling in Kubernetes, or the scaler in KEDA) and removed the needs-triaging label Mar 21, 2024