How are you deploying? With a requirements.txt file or did you make your own Docker image containing this data?
Hi, I have the same problem. The error output is:
Resource punkt not found. Please use the NLTK Downloader to obtain the resource:

>>> import nltk
>>> nltk.download('punkt')

Searched in:
- '/scrapinghub/nltk_data'
- '/usr/share/nltk_data'
- '/usr/local/share/nltk_data'
- '/usr/lib/nltk_data'
- '/usr/local/lib/nltk_data'
- '/usr/local/nltk_data'
- '/usr/local/lib/nltk_data'
I am using requirements.txt
Where should I put this command to install the NLTK module?
I used the NLTK package in my spider's pipeline file. However, the NLTK dependency data is not downloaded on Scrapinghub cloud. In local Python, we just call nltk.download() to fetch it. Is there any way to download the NLTK data on Scrapinghub? I have pasted the processing error below.
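One common workaround is to download the data at spider start-up, since a requirements.txt only installs the nltk package itself, not its corpora. A minimal sketch, assuming a writable directory such as /tmp inside the container; the names `NEEDED`, `ensure_nltk_data`, and `MyPipeline` are illustrative, not part of any API:

```python
import os

# Packages this project needs; 'punkt' is the one missing in the error above.
NEEDED = ["punkt"]

def ensure_nltk_data(packages, download_dir=None):
    """Download each NLTK data package that is not already available."""
    import nltk  # imported lazily so the module loads even before nltk is installed

    if download_dir:
        # Make the directory searchable by nltk.data.find()
        os.makedirs(download_dir, exist_ok=True)
        if download_dir not in nltk.data.path:
            nltk.data.path.insert(0, download_dir)

    for pkg in packages:
        try:
            # find() raises LookupError when the resource is missing
            nltk.data.find(f"tokenizers/{pkg}")
        except LookupError:
            nltk.download(pkg, download_dir=download_dir)

class MyPipeline:
    def open_spider(self, spider):
        # /tmp/nltk_data is an assumed writable path in the cloud container
        ensure_nltk_data(NEEDED, download_dir="/tmp/nltk_data")
```

This keeps the download out of the request/response path: it runs once when the spider opens, and skips the download on subsequent runs if the data is already cached.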