
File pipeline with aws s3 not working

I am running a spider to download a bunch of PDFs from a given website and store them in a public S3 bucket. I am using the files pipeline and have set:


FILES_STORE = 's3://<bucket name>/'
POLICY = 'public'
AWS_REGION_NAME = 'ca-central-1'


It works perfectly when I run locally, but when I run the spider on Scrapinghub's Scrapy Cloud I get:

  File "/usr/local/lib/python2.7/site-packages/botocore/hooks.py", line 211, in _emit
    response = handler(**kwargs)
  File "/usr/local/lib/python2.7/site-packages/botocore/signers.py", line 90, in handler
    return self.sign(operation_name, request)
  File "/usr/local/lib/python2.7/site-packages/botocore/signers.py", line 157, in sign
    auth.add_auth(request)
  File "/usr/local/lib/python2.7/site-packages/botocore/auth.py", line 425, in add_auth
    super(S3SigV4Auth, self).add_auth(request)
  File "/usr/local/lib/python2.7/site-packages/botocore/auth.py", line 357, in add_auth
    raise NoCredentialsError
NoCredentialsError: Unable to locate credentials




Best Answer

Hello,


The error occurs because the spider is not getting the AWS keys. Locally, botocore can fall back to credentials found in your environment (for example ~/.aws/credentials or environment variables), but those are not available on Scrapy Cloud. The AWS credentials need to be provided either through settings.py or through the UI, as described in https://support.scrapinghub.com/a/solutions/articles/22000200447.
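For example, here is a minimal settings.py sketch; the angle-bracket values are placeholders, not real credentials, and the ACL line is optional:

# settings.py -- minimal sketch; replace the placeholder values with your own
# (on Scrapy Cloud the two keys can also be entered in the spider settings UI
# instead of being committed to the project)
AWS_ACCESS_KEY_ID = '<your access key id>'
AWS_SECRET_ACCESS_KEY = '<your secret access key>'
AWS_REGION_NAME = 'ca-central-1'

FILES_STORE = 's3://<bucket name>/'
FILES_STORE_S3_ACL = 'public-read'  # optional: Scrapy's setting for the upload ACL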


Answer


I am now having this problem, and the link shows up as a 404.
