Hello,
Does Frontera allow storing the scheduled URLs? For example, with scrapy_redis, if the crawler has run before, the scheduled URLs are stored when the crawler is paused or canceled. This allows the crawler to continue where it left off. If Frontera supports this, how can I implement it?
Thank you
nestor
Please post Frontera-related questions in Frontera's Google group: https://groups.google.com/a/scrapinghub.com/forum/#!forum/frontera, check its main documentation: http://frontera.readthedocs.org/, and/or open a GitHub issue or pull request: https://github.com/scrapinghub/frontera.
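For what it's worth, Frontera's documentation describes persistent backends that keep the frontier (scheduled URLs) in a database, which is the mechanism for resuming a paused crawl. A minimal sketch of a Frontera settings module using the SQLAlchemy backend is below; the backend path and setting names here are taken from the Frontera docs but should be verified against the version you have installed:

```python
# frontera_settings.py -- illustrative sketch, not an official recipe;
# check http://frontera.readthedocs.org/ for the exact names in your version.

# Use a database-backed queue instead of the in-memory default, so that
# scheduled URLs survive a pause, cancel, or crash.
BACKEND = 'frontera.contrib.backends.sqlalchemy.FIFO'

# Store the frontier state in a local SQLite file; any SQLAlchemy
# connection URL (PostgreSQL, MySQL, ...) works here.
SQLALCHEMYBACKEND_ENGINE = 'sqlite:///frontier.db'

# Keep previously stored requests between runs instead of clearing the
# tables on startup, so a restarted crawl continues where it left off.
SQLALCHEMYBACKEND_CLEAR_CONTENT = False
```

With a configuration along these lines, stopping the crawler leaves the pending requests in the database, and starting it again with the same settings picks them up.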