Problem with shub deploy dependencies - urllib3

Posted over 4 years ago by Equipe GameGratis

Answered
Equipe GameGratis

Hello, I'm quite new to Python and Scrapinghub, but I can usually solve most issues on my own.


However, after a lot of research, I can't find a way to deploy my project to Scrapy Cloud.


I managed to run shub login with my API key and project ID, but after that, when I try to deploy with "shub deploy", a dependency error appears:

  

Packing version 1.0
Deploying to Scrapy Cloud project "439664"
Deploy log last 30 lines:

Deploy log location: C:\Users\Diddy\AppData\Local\Temp\shub_deploy_n361mbb3.log
Error: Deploy failed: b'{"status": "error", "message": "Requirements error"}'
 ---> [Warning] Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap.
 ---> Running in b84bea5266a1
Removing intermediate container b84bea5266a1
 ---> 47cb7e9912f1
Successfully built 47cb7e9912f1
Successfully tagged i.scrapinghub.com/kumo_project/439664:43
Step 1/3 : FROM alpine:3.5
 ---> f80194ae2e0c
Step 2/3 : ADD kumo-entrypoint /kumo-entrypoint
 ---> Using cache
 ---> 3e4f96d9d57f
Step 3/3 : RUN chmod +x /kumo-entrypoint
 ---> Using cache
 ---> 3a4b57ebf69f
Successfully built 3a4b57ebf69f
Successfully tagged kumo-entrypoint:latest
Entrypoint container is created successfully
>>> Checking python dependencies
Collecting pip<20.0,>=9.0.3
  Downloading https://files.pythonhosted.org/packages/00/b6/9cfa56b4081ad13874b0c6f96af8ce16cfbc1cb06bedf8e9164ce5551ec1/pip-19.3.1-py2.py3-none-any.whl (1.4MB)
Installing collected packages: pip
Successfully installed pip-19.3.1
requests 2.21.0 has requirement urllib3<1.25,>=1.21.1, but you have urllib3 1.25.8.
Warning: Pip checks failed, please fix the conflicts.
WARNING: There're some errors when doing pip-check:
DEPRECATION: Python 2.7 will reach the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 won't be maintained after that date. A future version of pip will drop support for Python 2.7.
DEPRECATION: Python 2.7 will reach the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 won't be maintained after that date. A future version of pip will drop support for Python 2.7.
More details about Python 2 support in pip, can be found at https://pip.pypa.io/en/latest/development/release-process/#python-2-support
{"message": "Dependencies check exit code: 1", "details": "Pip checks failed, please fix the conflicts", "error": "requirements_error"}
{"status": "error", "message": "Requirements error"} 

 

I found some similar problems reported, but none involving the urllib3 dependency. I don't really think the problem is urllib3 itself, because I tried installing versions 1.21, 1.24 and 1.25.8. I also noticed that the build installs pip 19.3.1, while my local version stays at 20.0, so I installed 19.3.1 locally as well, and nothing changed.
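For reference, the conflict pip is reporting can be checked locally; here is a quick sketch using the `packaging` library (which pip itself vendors) to test versions against the constraint from the log:

```python
# requests 2.21.0 declares "urllib3<1.25,>=1.21.1" (per the pip-check
# output above); the build apparently ends up with urllib3 1.25.8.
from packaging.requirements import Requirement
from packaging.version import Version

req = Requirement("urllib3<1.25,>=1.21.1")

# The version the build complains about violates the constraint:
print(Version("1.25.8") in req.specifier)  # False

# The version pinned in requirements.txt would satisfy it:
print(Version("1.24.3") in req.specifier)  # True
```

This suggests the 1.25.8 copy is coming from the stack image rather than from my requirements.txt.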


My requirements.txt:

amightygirl.paapi5-python-sdk==1.0.0
gspread==3.2.0
lxml==4.5.0
oauth2client==4.1.3
urllib3==1.24.3
w3lib==1.21.0

I also tried adding the 'stacks' section to scrapinghub.yml:

project: 439664
requirements:
   file: requirements.txt
stacks:
  default: scrapy:2.0-py3

 

But nothing changed. I also tried deleting the 'build' and '..egg-info' folders before deploying, but still nothing.

What I don't understand is that no matter which version of urllib3 I pin, the error message persists. I'm losing hope. Has anyone had a similar problem, or found a workaround, please?


If needed, I can provide more information about the project. Thanks in advance!

0 Votes

nestor

nestor posted over 4 years ago Admin Best Answer

Hi, the stack scrapy:2.0-py3 doesn't exist; it should be just scrapy:2.0. It defaults to Python 3, since Scrapy 2.0+ has dropped support for Python 2.


Because the stack name is invalid, the deploy falls back to your default stack, which is Scrapy 1.6; that stack ships requests 2.21, which is what causes the dependency error.


Simply change your stack to scrapy:2.0 and redeploy to fix the requirements error.
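For reference, the corrected scrapinghub.yml would be the same file you posted with only the stack name changed:

```yaml
project: 439664
requirements:
  file: requirements.txt
stacks:
  default: scrapy:2.0
```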

1 Votes


4 Comments


Equipe GameGratis posted over 4 years ago

OK, thank you so much for your help, Nestor. I now have another problem, but since it occurs inside Scrapy Cloud, I decided to create another topic so it might help someone else. Could you take a look, please? I don't want to take up your time, but I really could not find a solution.

0 Votes

nestor

nestor posted over 4 years ago Admin

That's correct, you'll need a Scrapy Cloud unit subscription to be able to use the periodic jobs feature.

1 Votes


Equipe GameGratis posted over 4 years ago

Hello Nestor! Thank you for your quick response! And sorry for the silly question, haha.


Your solution fixed it. And after a little more struggle, I managed to deploy the project.


So, for periodic jobs, I need to have a paid plan, right? Is the $9 Professional Scrapy Cloud plan able to run these periodic jobs? Again, thank you, really helpful.

0 Votes
