
@harshibar when I run get_link.py it only opens the Glassdoor website; it doesn't search for jobs. Please help #14

Open · aisha01malik opened this issue Jan 1, 2021 · 4 comments

Comments

@aisha01malik

@harshibar @harshibar-youtube please help

@eminvergil

The line success = go_to_listings(driver) throws an error.

@Tuhin-thinks

> The line success = go_to_listings(driver) throws an error.

Also, one may notice that the search isn't returning relevant results: it is probably running on location alone, because the position text is never actually typed in.

I fixed the issue by adding some CSS selectors; locating elements by a fixed id or XPath isn't really effective here.

Also, the page navigation only covers 4 pages; you can easily raise that and keep searching for as long as there are results by using the next-page button, as in the sketch below.
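
A minimal sketch of both fixes, assuming Selenium 4; every CSS selector below (the two search-box IDs and the next-button selector) is a hypothetical placeholder, so inspect the live page for the real ones:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import NoSuchElementException

def search_listings(driver, position, location):
    """Type the position as well as the location before submitting the search."""
    wait = WebDriverWait(driver, 10)
    # Hypothetical selector -- inspect the page to find the real one.
    position_box = wait.until(
        EC.element_to_be_clickable((By.CSS_SELECTOR, "input#KeywordSearch"))
    )
    position_box.clear()
    position_box.send_keys(position)

    # Hypothetical selector for the location box.
    location_box = driver.find_element(By.CSS_SELECTOR, "input#LocationSearch")
    location_box.clear()
    location_box.send_keys(location)
    location_box.send_keys(Keys.ENTER)

def scrape_all_pages(driver, scrape_current_page):
    """Follow the next-page button until it no longer exists."""
    while True:
        scrape_current_page(driver)
        try:
            # Hypothetical selector for the next-page button.
            next_button = driver.find_element(By.CSS_SELECTOR, "li.next a")
        except NoSuchElementException:
            break  # no next button left: last results page reached
        next_button.click()
```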

@eminvergil commented Jan 30, 2021

@Tuhin-thinks Yes, I fixed that issue using the full XPath, but after it collects all the URL data it doesn't work: it doesn't fill in the forms. Did you get this bot to work? If so, can I get in contact with you, or can you share your results? I would appreciate it.

@Tuhin-thinks

Yes, the apply-job step will not work as the script stands.
It only works for Lever and Greenhouse postings.

But the approach she's using is to create automation rules for every website, and that's totally unrealistic!
We can't write Selenium rules for all 100+ companies that get scraped! So I'm thinking about a better solution and will come up with one in the future, when I really need this kind of script.
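
A minimal sketch of how the collected links could be pre-filtered down to the two supported ATSs before the apply step runs; the Lever and Greenhouse hostnames (jobs.lever.co, boards.greenhouse.io) are the commonly used ones, but treat them as assumptions and check the actual URLs the scraper collects:

```python
from urllib.parse import urlparse

# Hosts of the two ATSs the apply step actually supports (assumption:
# Lever postings live on jobs.lever.co, Greenhouse on boards.greenhouse.io).
SUPPORTED_ATS_HOSTS = ("jobs.lever.co", "boards.greenhouse.io")

def split_by_ats(urls):
    """Separate links the apply step can handle from everything else."""
    supported, unsupported = [], []
    for url in urls:
        host = urlparse(url).netloc.lower()
        if any(host == h or host.endswith("." + h) for h in SUPPORTED_ATS_HOSTS):
            supported.append(url)
        else:
            unsupported.append(url)
    return supported, unsupported
```

Filtering up front at least makes the failure mode explicit: unsupported links get set aside instead of crashing the form-filling step.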

Also, for starters, get_links.py is insanely slow!
It takes 15-16 minutes to collect 25-30 links (and it just finds them; it doesn't scrape anything from them).
Lots of improvement is needed; I won't open a new issue for this, I'll just wait for the existing ones to get fixed.
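
One common cause of this kind of slowness in Selenium scripts is a fixed time.sleep() after every navigation; whether get_links.py does this is an assumption, but if it does, explicit waits (plus headless mode) let each step continue as soon as the page is actually ready. A minimal sketch:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

options = webdriver.ChromeOptions()
options.add_argument("--headless")  # skip rendering a visible window
driver = webdriver.Chrome(options=options)

# Instead of sleeping a fixed number of seconds after every navigation,
# wait only as long as it actually takes for the listings to appear
# (up to a 10 s cap). "li.jobListing" is a hypothetical selector.
wait = WebDriverWait(driver, 10)
listings = wait.until(
    EC.presence_of_all_elements_located((By.CSS_SELECTOR, "li.jobListing"))
)
```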
