requirements.txt and Node.js

Posted over 5 years ago by eliana


Hello,


I need cfscrape in my project, so I use requirements.txt:


cfscrape==2.0.8

nodejs

But how do I add Node? I get this error:


"Missing Node.js runtime. Node is required and must be in the PATH (check with `node -v`). Your Node binary may be called `nodejs` rather than `node`, in which case you may need to run `apt-get install nodejs-legacy` on some Debian-based systems. (Please read the cfscrape"
EnvironmentError: Missing Node.js runtime. Node is required and must be in the PATH (check with `node -v`). Your Node binary may be called `nodejs` rather than `node`, in which case you may need to run `apt-get install nodejs-legacy` on some Debian-based systems. (Please read the cfscrape README's Dependencies section: https://github.com/Anorov/cloudflare-scrape#dependencies.



thriveni posted over 5 years ago Admin Best Answer

Hello,


As indicated in the error log, the Node.js executable must be somewhere on the PATH of your crawler's runtime environment. This cannot be done using requirements.txt: requirements.txt only specifies which Python modules need to be installed for use by the spider.
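In other words, the requirements file should list only the Python package; the `nodejs` line can be dropped, since the runtime cannot be installed that way. For this case it would simply be:

cfscrape==2.0.8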

For Node.js, you need to provide the runtime environment yourself, which can be done by building a custom Docker image and deploying it to Scrapy Cloud as described in https://support.scrapinghub.com/support/solutions/articles/22000200425-deploying-custom-docker-images-on-scrapy-cloud. Currently this facility is available to paying customers (i.e. customers who have subscribed to at least 1 Scrapy Unit).
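As a rough illustration only (the base image tag and install commands below are assumptions and will vary by stack; follow the article above for the actual deployment steps), such a custom image could look like this:

# Illustrative sketch only: the base image tag and package steps are assumptions;
# see the linked article for the stack you should actually build on.
FROM scrapinghub/scrapinghub-stack-scrapy:1.3-py3

# Install a Node.js runtime so cfscrape can find `node` on the PATH
RUN apt-get update && \
    apt-get install -y nodejs && \
    rm -rf /var/lib/apt/lists/*

# On some Debian-based images the binary is `nodejs` rather than `node`;
# in that case install nodejs-legacy or add a symlink, e.g.:
# RUN ln -s /usr/bin/nodejs /usr/local/bin/node

# Install the Python dependencies (the requirements.txt shown above)
COPY requirements.txt /app/requirements.txt
RUN pip install --no-cache-dir -r /app/requirements.txt

Once the image is built, deploy it following the steps in the article above; the `node -v` check from the error message is a quick way to confirm the runtime is on the PATH inside the image.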



