I think the error is:
Login succeeded
.
.
.
Successfully built f2bb6e18d4db
Entrypoint container is created successfully
>>> Checking python dependencies
No broken requirements found.
>>> Getting spiders list:
>>> Trying to get spiders from shub-image-info command
WARNING: There're some errors on shub-image-info call:
{"message": "", "error": "internal_error"}
I found the solution here: https://github.com/scrapinghub/shub/issues/273
I can deploy the project to Scrapinghub now, but I still hit another problem in my case: when I try the second solution, my process just goes inactive.
from scrapy.crawler import CrawlerProcess
# MySpider1 and MySpider2 are the project's own spider classes

if __name__ == '__main__':
    process = CrawlerProcess()
    process.crawl(MySpider1)
    process.crawl(MySpider2)
    process.start()  # the script will block here until all crawling jobs are finished
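If the "second solution" means the sequential CrawlerRunner pattern from the same Scrapy documentation page (an assumption on my part; the thread doesn't say which solution is meant), it looks roughly like the sketch below, again with MySpider1 and MySpider2 as placeholders. Forgetting reactor.stop() is a classic way for such a script to sit there looking inactive:

from twisted.internet import reactor, defer
from scrapy.crawler import CrawlerRunner
from scrapy.utils.log import configure_logging

configure_logging()
runner = CrawlerRunner()

@defer.inlineCallbacks
def crawl():
    # run the spiders one after another instead of in parallel
    yield runner.crawl(MySpider1)
    yield runner.crawl(MySpider2)
    reactor.stop()  # without this, the reactor keeps running forever

crawl()
reactor.run()  # blocks until crawl() stops the reactor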
I hope anyone who got this solution working can share exactly what they typed.
artursil posted over 6 years ago
Unfortunately I don't know what the problem was. I created a new, empty Scrapy project and deployed it. Then I copied my main project into the empty one piece by piece, and it worked.
My main project has some redundant scripts and spiders; maybe they caused this problem. My advice would be to delete all of the unnecessary scripts.
Regards,
Artur
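One plausible way a stray script can break the build (my speculation; nothing in this thread confirms it) is by doing work at import time: shub-image-info imports the project to enumerate its spiders, and anything slow at module level can trip the 60-second container timeout that shows up in the log further down. A main guard keeps such code from running on import; warmup.py and warm_up_cache are hypothetical names:

# warmup.py -- a hypothetical leftover helper script in the project
import requests  # assumes requests is in requirements.txt

def warm_up_cache():
    # network call: fine when run deliberately, slow or fatal at import time
    return requests.get('https://example.com/data', timeout=10).status_code

if __name__ == '__main__':
    # the guard keeps this from executing when shub-image-info
    # merely imports the module while listing spiders
    print(warm_up_cache())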
thriveni (Admin) posted over 6 years ago
Hello Simen,
I can see that there are successful deploys in your account. It would be great if you could mention how you resolved the issue, to help others who hit similar errors.
Regards,
Thriveni Patil
artursil posted almost 7 years ago
I have a similar problem. Here is my log:
Error: Deploy failed: b'{"status": "error", "message": "Internal error"}'
---> d66bd6bc900f
Removing intermediate container 5161ba153fd3
Step 11/12 : RUN if [ -d "/app/addons_eggs" ]; then rm -f /app/*.dash-addon.egg; fi
---> Running in efbc0db2efa2
---> 5c7e190770fe
Removing intermediate container efbc0db2efa2
Step 12/12 : ENV PATH /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
---> Running in 26838a17677e
---> 108961c0b30e
Removing intermediate container 26838a17677e
Successfully built 108961c0b30e
Step 1/3 : FROM alpine:3.5
---> 6c6084ed97e5
Step 2/3 : ADD kumo-entrypoint /kumo-entrypoint
---> Using cache
---> bdb8d4874ea6
Step 3/3 : RUN chmod +x /kumo-entrypoint
---> Using cache
---> 702ef707423a
Successfully built 702ef707423a
Entrypoint container is created successfully
>>> Checking python dependencies
No broken requirements found.
>>> Getting spiders list:
>>> Trying to get spiders from shub-image-info command
WARNING: There're some errors on shub-image-info call:
Exceeded container timeout 60s
{"message": "shub-image-info exit code: -1", "details": null, "error": "image_in
fo_error"}
{"status": "error", "message": "Internal error"}
I get this when trying to deploy my spider. Any ideas what is wrong?
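The "Exceeded container timeout 60s" line in that log suggests shub-image-info could not enumerate the project's spiders within a minute. You can roughly reproduce that enumeration locally with Scrapy's spider loader (run it from the project root; this is a sketch of the same idea, not the actual shub tooling):

# approximate the spider listing that shub-image-info performs
from scrapy.spiderloader import SpiderLoader
from scrapy.utils.project import get_project_settings

settings = get_project_settings()
loader = SpiderLoader.from_settings(settings)
print(loader.list())  # if this hangs or raises, the problem is in the project, not in Scrapy Cloud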