Shouldn't you iterate through the SelectorList and for each item perform the get?
How can/would I include the SelectorList?
I believe response.css('#resultStats::text') will give you back a list. Does the code work if you change it to
def parse(self, response):
    item = {
        'results': response.css('#resultStats::text')[0].get(),
        'url': response.url,
    }
I am still receiving the same error following the amendment:
'results': response.css('#resultStats::text')[0].get(), AttributeError: 'Selector' object has no attribute 'get'
You can use extract() instead of get(), I think. Note that extract() returns a list of strings, so if you expect a single string you still need to take the first element after extraction.
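If get() is unavailable in the server's Scrapy/parsel version, the same callback can also be written with extract_first(), which older releases provide on SelectorList and which returns a default instead of raising when nothing matches. A minimal sketch, assuming the older API:

```python
def parse(self, response):
    # extract_first() predates get() and behaves the same way:
    # it returns the first matching string, or the given default
    # when the selector matches nothing.
    yield {
        'results': response.css('#resultStats::text').extract_first(default=''),
        'url': response.url,
    }
```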
Many thanks, that is working well for c. 75% of the requests. For all others, I am receiving the following error:
IndexError: list index out of range
Do you know what could be the reason for that?
Also, the search only seems to return items for c. half of the links posted.
It keeps sending out requests afterwards but does not add any additional items to the list.
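The IndexError above is most likely because some pages contain no #resultStats element at all, so response.css(...) returns an empty SelectorList and indexing it with [0] fails. A small guard along these lines (a sketch; the SelectorList behaves like a plain list, and first_text is a hypothetical helper name) avoids the crash:

```python
def first_text(selector_list, default=None):
    # An empty SelectorList raises IndexError on selector_list[0],
    # so check for emptiness before indexing.
    if selector_list:
        return selector_list[0].extract()
    return default
```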
Same problem as the OP. I changed to extract() and it works fine.
Maybe there is a problem with the server environment, because the code works fine on my local machine. It took me some time to realize that the problem was in the server and not in my code.
shaping2startups
I am planning to run a Scrapy project, which is working well on my local computer. When uploading it to Scrapinghub, I am however receiving errors for each of the respective requests with the following description:
'results': response.css('#resultStats::text').get(), AttributeError: 'SelectorList' object has no attribute 'get'
The code this error is referring to is shown below:
def parse(self, response):
    item = {
        'results': response.css('#resultStats::text').get(),
        'url': response.url,
    }
What could be the reason for the missing object attribute 'get' in the Scrapinghub environment?