Handle timeout exception from selenium #59
michelts wants to merge 5 commits into clemfromspace:develop
Conversation
This test was still missing.
Can you tell me how I can use your fork with pipenv?
I am using git URLs through requirements.txt (https://stackoverflow.com/questions/16584552/how-to-state-in-requirements-txt-a-direct-github-source), something similar to: `git+git://github.com/michelts/scrapy-selenium.git@prod#egg=scrapy-selenium`. I don't use pipenv, but maybe you can start here: https://stackoverflow.com/questions/50316275/how-to-use-pipenv-to-install-package-from-github ;)
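I haven't tried it myself, but from that answer, the pipenv equivalent of the requirements.txt line should look something like this (untested sketch; note I swapped in an https URL):

```shell
# Untested sketch: install the fork directly from the git branch,
# using the same pip-style VCS URL as in requirements.txt.
pipenv install -e "git+https://github.com/michelts/scrapy-selenium.git@prod#egg=scrapy-selenium"
```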
Thank you so much, that solved the problem. Appreciate it, man.
EDIT 1:
This is unrelated, but can you tell me how to pass multiple expected conditions to wait_until?
It is possible to use several comma-separated CSS rules for the same condition when using, for instance, element_to_be_clickable. You want the page to be loaded, but sometimes the page renders differently from what you expect. This works for me:
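Something along these lines (the spider name, URL and selectors are just placeholders, adapt them to your page):

```python
import scrapy
from scrapy_selenium import SeleniumRequest
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC


class ResultsSpider(scrapy.Spider):
    name = 'results'

    def start_requests(self):
        yield SeleniumRequest(
            url='https://example.com/search',
            wait_time=10,
            # A comma in a CSS selector means "any of these", so the wait
            # succeeds as soon as either element becomes clickable.
            wait_until=EC.element_to_be_clickable(
                (By.CSS_SELECTOR, '#results, #empty-state')
            ),
            callback=self.parse,
        )

    def parse(self, response):
        # By now the page rendered at least one of the two elements.
        yield {'html_length': len(response.text)}
```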
Woah, that fixed the problem I have been trying to solve for the past couple of hours. Once again, thank you, man.
You are welcome ;)
I like this pull request! It operates more in line with my expected/needed behavior, i.e., if you get a timeout error because the HTML element never loaded, proceed to scrape what you can instead of skipping.
Hi @clemfromspace
I implemented the necessary steps to address issue #58. There wasn't any test of the wait_time and wait_until usage, so I added one.
I decided to always ignore the timeout exception and return the content to Scrapy, but I can certainly add a config option to preserve backwards compatibility, if you prefer.
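The gist of the change is to stop letting the explicit wait's TimeoutException propagate. A simplified sketch of the behavior (not the literal diff; `driver` and `request` stand in for the middleware's driver and the incoming SeleniumRequest):

```python
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.support.ui import WebDriverWait


def wait_for_page(driver, request):
    # Simplified sketch of the middleware's wait handling after
    # this change.
    if request.wait_until:
        try:
            WebDriverWait(driver, request.wait_time).until(request.wait_until)
        except TimeoutException:
            # The condition never became true; instead of failing the
            # request, fall through so the HTML rendered so far is
            # still returned to Scrapy.
            pass
```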