Job outcome cancelled (stalled) because scrapy_dotpersistence syncing takes over an hour
Posted over 7 years ago by chops
My job outcome is repeatedly "cancelled (stalled)" after the scraping finishes, while the scrapy_dotpersistence addon is storing the .scrapy directory to S3.
I tried deleting the httpcache folder in the console, but the sync still takes over an hour and the job gets cancelled anyway.
How can I solve this issue? Can I "reset" the S3 folder directly?
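For reference: if the bucket were under my control, I assume something like this boto3 sketch could clear the synced cache before the next run (the bucket name and prefix are placeholders, not the real paths Scrapy Cloud uses, and it assumes AWS credentials are already configured in the environment):

    import boto3

    # Batch-delete everything under the synced httpcache prefix.
    s3 = boto3.resource("s3")
    bucket = s3.Bucket("my-dotscrapy-bucket")  # placeholder bucket name
    bucket.objects.filter(Prefix="org/project/.scrapy/httpcache/").delete()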
0 Votes
nestor posted over 7 years ago Admin Best Answer
Jobs will get cancelled if they're not doing anything for an hour. You could emit a log message every hour or so, so that the job doesn't get cancelled.
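A minimal sketch of that idea, assuming a periodic log line is enough to count as activity (the helper name and interval are arbitrary):

    import logging
    import threading

    def start_heartbeat(interval_seconds=600):
        # Logs immediately, then re-arms itself until the process exits;
        # daemon timers never keep the job alive on their own.
        def beat():
            logging.getLogger(__name__).info("heartbeat: S3 sync still running")
            timer = threading.Timer(interval_seconds, beat)
            timer.daemon = True
            timer.start()
        beat()

    # Call once before the long-running sync starts.
    start_heartbeat()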
0 Votes
4 Comments
chops posted over 7 years ago
Is it possible to insert own S3 Credentials for scrapy_dotpersistence?
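Looking at the scrapy-dotpersistence README, the extension appears to read its bucket and credentials from project settings, so something like this in settings.py might do it (all values are placeholders):

    EXTENSIONS = {
        "scrapy_dotpersistence.DotScrapyPersistence": 0,
    }
    DOTSCRAPY_ENABLED = True

    # Point the addon at your own bucket and credentials.
    ADDONS_AWS_ACCESS_KEY_ID = "YOUR_ACCESS_KEY_ID"
    ADDONS_AWS_SECRET_ACCESS_KEY = "YOUR_SECRET_ACCESS_KEY"
    ADDONS_AWS_USERNAME = "your-folder"  # optional prefix inside the bucket
    ADDONS_S3_BUCKET = "your-own-bucket"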
0 Votes
thriveni posted over 7 years ago Admin
Do let us know if you are still facing the issue. I do not see any jobs getting stalled in the account.
0 Votes
chops posted over 7 years ago
0 Votes