Logan Anderson
I am running a spider to download a bunch of PDFs from a given website and store them in a public S3 bucket. I am using the Files Pipeline and have set:
FILES_STORE = 's3://<bucket name>/'
POLICY='public'
AWS_REGION_NAME='ca-central-1'
It works perfectly when I run locally, but when I run on Scrapinghub's Scrapy Cloud I get:
" File "/usr/local/lib/python2.7/site-packages/botocore/hooks.py", line 211, in _emit
response = handler(**kwargs)
File "/usr/local/lib/python2.7/site-packages/botocore/signers.py", line 90, in handler
return self.sign(operation_name, request)
File "/usr/local/lib/python2.7/site-packages/botocore/signers.py", line 157, in sign
auth.add_auth(request)
File "/usr/local/lib/python2.7/site-packages/botocore/auth.py", line 425, in add_auth
super(S3SigV4Auth, self).add_auth(request)
File "/usr/local/lib/python2.7/site-packages/botocore/auth.py", line 357, in add_auth
raise NoCredentialsError
NoCredentialsError: Unable to locate credentials
thriveni
Hello,
The error occurs because the spider does not get the AWS keys. AWS credentials need to be provided either through settings.py or through the UI, as described in https://support.scrapinghub.com/a/solutions/articles/22000200447.
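For reference, a minimal sketch of what the settings.py approach could look like, assuming the standard Scrapy FilesPipeline and key-based authentication (the bucket name and region are taken from the question; the key values are placeholders, not working credentials):

# settings.py -- minimal sketch; replace the placeholder values with your own
ITEM_PIPELINES = {
    'scrapy.pipelines.files.FilesPipeline': 1,
}

FILES_STORE = 's3://<bucket name>/'
AWS_REGION_NAME = 'ca-central-1'

# Scrapy Cloud containers have no ambient AWS credentials (no instance
# role, no ~/.aws/credentials), which is why botocore raises
# NoCredentialsError there even though the same spider works locally.
AWS_ACCESS_KEY_ID = '<your access key id>'
AWS_SECRET_ACCESS_KEY = '<your secret access key>'

Alternatively, the same AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY values can be entered as project settings in the Scrapy Cloud UI rather than committed to settings.py.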
Duke Arioch
I am now having this problem and the link shows up as a 404.
Duke Arioch
https://support.zyte.com/a/solutions/articles/22000200447 works.