
Scrapy Pipeline isn't working in scrapy cloud

I have a working pipeline that saves results into a Postgres DB, but when I run it on Scrapy Cloud I get the following errors.

The first:

Traceback (most recent call last):
  File "/usr/local/lib/python2.7/site-packages/twisted/internet/", line 651, in _runCallbacks
    current.result = callback(current.result, *args, **kw)
  File "/app/__main__.egg/my_spider/", line 106, in close_spider
AttributeError: 'MySpiderPipeline' object has no attribute 'cur'

The 'cur' is the cursor of my DB connection. My pipeline is essentially the same as the one in this blog post.
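The AttributeError means `close_spider` ran even though `self.cur` was never assigned, which typically happens when `open_spider` failed part-way (for example, the Postgres driver isn't listed in the project's requirements on Scrapy Cloud, or the connection itself failed). A minimal sketch of a pipeline whose `close_spider` tolerates a failed `open_spider` is below; `sqlite3` stands in for `psycopg2` so the sketch is self-contained, and the table/field names are illustrative, not taken from the original pipeline:

```python
import sqlite3


class MySpiderPipeline:
    """DB pipeline sketch whose close_spider survives a failed open_spider."""

    def open_spider(self, spider):
        try:
            # In the real pipeline this would be psycopg2.connect(...)
            self.conn = sqlite3.connect(":memory:")
            self.cur = self.conn.cursor()
            self.cur.execute("CREATE TABLE items (name TEXT)")
        except Exception:
            # Leave well-defined attributes behind even on failure
            self.conn = None
            self.cur = None

    def process_item(self, item, spider):
        if self.cur is not None:
            self.cur.execute("INSERT INTO items VALUES (?)", (item["name"],))
        return item

    def close_spider(self, spider):
        # getattr guards against AttributeError: ... has no attribute 'cur'
        # when the spider shuts down before open_spider completed.
        cur = getattr(self, "cur", None)
        if cur is not None:
            self.conn.commit()
            cur.close()
            self.conn.close()
```

The guard only silences the secondary error; the underlying cause (why `open_spider` never succeeded on Scrapy Cloud, e.g. a missing dependency in `requirements.txt`) still needs to be fixed.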

The other error appears after the one shown above. One of my item fields is called state, but I am not sure whether the error refers to that field or to something else.


[scrapy.utils.signal] Error caught on signal handler: <bound method ?.spider_closed of <scrapy.extensions.spiderstate.SpiderState object at 0x7f204148a250>>
Traceback (most recent call last):
File "/usr/local/lib/python2.7/site-packages/twisted/internet/", line 150, in maybeDeferred
  result = f(*args, **kw)
File "/usr/local/lib/python2.7/site-packages/pydispatch/", line 55, in robustApply
  return receiver(*arguments, **named)
File "/usr/local/lib/python2.7/site-packages/scrapy/extensions/", line 28, in spider_closed
  pickle.dump(spider.state, f, protocol=2)
AttributeError: 'MySpider' object has no attribute 'state'


Before any errors are shown, the crawler starts and then stops right away:

9:	2020-01-24 02:40:03	INFO	[scrapy.core.engine] Spider opened
10:	2020-01-24 02:40:03	INFO	[scrapy.core.engine] Closing spider (shutdown)
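That "opened then immediately closed (shutdown)" pattern suggests both tracebacks are symptoms of the same early shutdown: the SpiderState extension only attaches `spider.state` once the spider has properly opened, so its `spider_closed` handler then fails too. The same `getattr` guard helps in any close hook of your own; a minimal standalone sketch (`MySpider` here is a plain stand-in, and the `pages_seen` key is a made-up example, not from the original spider):

```python
class MySpider:
    """Stand-in for a scrapy.Spider subclass."""
    name = "my_spider"

    def closed(self, reason):
        # SpiderState only sets .state after the spider actually opens,
        # so guard with getattr instead of touching self.state directly.
        state = getattr(self, "state", {})
        return state.get("pages_seen", 0)
```

Note that Scrapy's own SpiderState extension can't be patched this way; if the spider opens normally, that second error should disappear on its own.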

What do I have to do to get my pipeline working? 

Should I just copy this one instead?



1 Comment


Are you still facing this issue? I see that the recent runs have been successful.

For any queries or issues, please get in touch with the Support team through Dashboard > Help > Contact Support.
