Deploy failed because of multiple spiders with Scrapinghub

Posted over 6 years ago by motogod19


When I try to deploy my project to Scrapinghub, I get this error:

Exceeded container timeout 60s


I found a possible solution on GitHub: https://github.com/scrapinghub/shub/issues/273

The problem is that I use CrawlerProcess at module level:


from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

process = CrawlerProcess(get_project_settings())

# CrawlersArray is a list of my spider classes
for spider in CrawlersArray:
    process.crawl(spider)

process.start()

I am not sure how to apply the first solution, so I tried the second one, just like the questioner did.

I fixed the code like this:


from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

if __name__ == '__main__':
    process = CrawlerProcess(get_project_settings())

    for spider in CrawlersArray:
        process.crawl(spider)

    process.start()


Now the project deploys to Scrapinghub successfully, but when I run it, no spider runs at all.


The condition

if __name__ == '__main__':

is never satisfied when the project runs on Scrapinghub, so no spider has run.
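To illustrate what I mean: when a module is imported (as happens when a platform loads a project) rather than executed directly, the code under the guard never runs. This is a minimal, Scrapy-free sketch of that behavior; the file name mymodule.py and the flag ran_as_main are made up for the demonstration.

import importlib.util
import os
import tempfile

# A tiny module that records whether its __main__ block executed.
source = (
    "ran_as_main = False\n"
    "if __name__ == '__main__':\n"
    "    ran_as_main = True\n"
)

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "mymodule.py")
    with open(path, "w") as f:
        f.write(source)

    # Import the module the way a loader would, instead of running
    # it as a script: __name__ is "mymodule", not "__main__".
    spec = importlib.util.spec_from_file_location("mymodule", path)
    mod = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(mod)

    print(mod.ran_as_main)  # False: the guarded block never ran

So if the platform imports the module instead of executing it as a script, everything under the guard is skipped.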


I'm not sure whether this is a technical question or not; I've been stuck on it for a long time.


I hope I can get some help here; any help would be appreciated.


Thanks in advance.


