Try to replicate your local environment when deploying the project to Scrapy Cloud: https://support.scrapinghub.com/support/solutions/articles/22000200400-deploying-python-dependencies-for-your-projects-in-scrapy-cloud
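For example, you can pin the TLS-related packages to the versions you have locally in a requirements.txt referenced from your scrapinghub.yml (the version numbers below are only examples; use whatever `pip freeze` shows on your machine):

```
# requirements.txt -- example pins only; copy the exact versions
# from your local `pip freeze` output
Twisted==17.9.0
pyOpenSSL==17.5.0
cryptography==2.1.4
```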
Sorry for the late reply. I'll try that, thank you.
curiouselephant
I'm running a very simple spider that sends two POST requests (code attached).
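Roughly, it does the following (a simplified sketch; the spider name and POST bodies below are placeholders, the real values are in the attachment):

```python
import scrapy


class ProblemsSpider(scrapy.Spider):
    # Simplified reconstruction of the attached spider; the name and
    # POST bodies are placeholders, not the real values.
    name = "problems"

    def start_requests(self):
        url = "https://baca.ii.uj.edu.pl/p12018/testerka_gwt/problems"
        for body in ("placeholder_payload_1", "placeholder_payload_2"):
            yield scrapy.Request(url, method="POST", body=body,
                                 callback=self.parse)

    def parse(self, response):
        self.logger.info("Received %d bytes", len(response.body))
```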
On my home computer it works just fine, but on Scrapy Cloud (scrapinghub) I get the same error for every request sent:
```
[scrapy.core.scraper] Error downloading <POST https://baca.ii.uj.edu.pl/p12018/testerka_gwt/problems>: [<twisted.python.failure.Failure OpenSSL.SSL.Error: [('SSL routines', 'SSL23_GET_SERVER_HELLO', 'tlsv1 alert internal error')]>]
```
After that, the spider stops and no data is collected.
I think it has to do with the site's certificate being outdated, but the docs say Scrapy doesn't verify certificates by default. Could forcing a specific TLS version in settings.py help? Something like:
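```python
# settings.py -- just a guess, I haven't confirmed this fixes the error.
# DOWNLOADER_CLIENT_TLS_METHOD is Scrapy's setting for the client TLS
# version; 'TLSv1.2' is an assumption about what the server expects.
DOWNLOADER_CLIENT_TLS_METHOD = 'TLSv1.2'
```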
Any ideas?
Thanks