File pipeline with AWS S3 not working

Posted over 5 years ago by Logan Anderson

Answered

I am running a spider to download a bunch of PDFs from a given website and store them in a public S3 bucket. I am using the Files Pipeline and have set


FILES_STORE = 's3://<bucket name>/'
POLICY = 'public'
AWS_REGION_NAME = 'ca-central-1'
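
(A rough sketch of how these settings typically sit next to the pipeline configuration in settings.py; the pipeline priority and the file_urls field below are just the Scrapy defaults, not values taken from this project:)

# settings.py -- minimal sketch around the settings above; placeholder values
ITEM_PIPELINES = {
    'scrapy.pipelines.files.FilesPipeline': 1,   # built-in Files Pipeline
}
FILES_STORE = 's3://<bucket name>/'   # S3 prefix where downloaded files are stored
AWS_REGION_NAME = 'ca-central-1'

# The spider then yields items carrying the URLs to download, e.g.:
#     yield {'file_urls': [response.urljoin(href)]}
# and the pipeline records the download results under the default 'files' field.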


It works perfectly when I run it locally, but I get

"  File "/usr/local/lib/python2.7/site-packages/botocore/hooks.py", line 211, in _emit

    response = handler(**kwargs)

  File "/usr/local/lib/python2.7/site-packages/botocore/signers.py", line 90, in handler

    return self.sign(operation_name, request)

  File "/usr/local/lib/python2.7/site-packages/botocore/signers.py", line 157, in sign

    auth.add_auth(request)

  File "/usr/local/lib/python2.7/site-packages/botocore/auth.py", line 425, in add_auth

    super(S3SigV4Auth, self).add_auth(request)

  File "/usr/local/lib/python2.7/site-packages/botocore/auth.py", line 357, in add_auth

    raise NoCredentialsError

NoCredentialsError: Unable to locate credentials"


when I run it on Scrapinghub's Scrapy Cloud.





thriveni posted over 5 years ago Admin Best Answer

Hello,


The error occurs because the spider does not get the AWS keys. AWS credentials need to be provided either through settings.py or through the UI, as described in https://support.scrapinghub.com/a/solutions/articles/22000200447.
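
For example, adding the credentials to settings.py would look roughly like this (the values below are placeholders only; on Scrapy Cloud the same keys can instead be entered as project settings in the UI so they are not committed to the code):

# settings.py -- placeholder values, not real credentials
AWS_ACCESS_KEY_ID = 'YOUR_ACCESS_KEY_ID'
AWS_SECRET_ACCESS_KEY = 'YOUR_SECRET_ACCESS_KEY'

# The existing storage settings stay as they are:
FILES_STORE = 's3://<bucket name>/'
AWS_REGION_NAME = 'ca-central-1'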



3 Comments



Duke Arioch posted about 1 year ago

I am now having this problem, and the link shows up as a 404.



Duke Arioch posted about 1 year ago

