Is it possible to schedule jobs to run sequentially?

Yes. You can combine Scrapy signals with a call to the job-scheduling API (see the Jobs API documentation and Scrapinghub's Python client, python-scrapinghub). The idea is to catch the spider_closed signal and, in its handler, schedule the next job via the API or the client.
Modified on: Wed, 3 Feb, 2021 at 8:34 AM