
HTTPS does not work with Python requests, but works with curl

I would like to know how to properly use Python requests with the Crawlera proxy for encrypted (HTTPS) sites. I followed this solution: https://support.scrapinghub.com/support/solutions/articles/22000188407-fetching-https-pages-with-crawlera

and I can fetch sites like https://www.google.com with no problem using curl on the command line. I wanted to do the same thing with Python and the requests package, but it does not work: I get a 'SubjectAltNameWarning'.

Minimal example:

import requests

PROXY_HOST = 'proxy.crawlera.com'
PROXY_PORT = '8010'
PROXY_AUTH = '<my key>'

proxies = {
    "https": "https://{}@{}:{}/".format(PROXY_AUTH, PROXY_HOST, PROXY_PORT),
    "http": "http://{}@{}:{}/".format(PROXY_AUTH, PROXY_HOST, PROXY_PORT)
}

res = requests.get(url='https://www.google.com', proxies=proxies, verify='<absolute-path-to-crawlera-ca.crt>')




Is that the entire output: 'SubjectAltNameWarning'?

Answer

Sorry for the delay. It is working fine now; I forgot to put ':' at the end of the key.

Thanks for your interest.
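For reference, a minimal sketch of the corrected setup, assuming '<my key>' and '<absolute-path-to-crawlera-ca.crt>' are the same placeholders as in the question (the API key and the downloaded Crawlera CA certificate). The key is sent as the proxy username with an empty password, which is why the credentials must end with ':' before the '@':

import requests

PROXY_HOST = 'proxy.crawlera.com'
PROXY_PORT = '8010'
API_KEY = '<my key>'  # Crawlera API key (placeholder)

# The key is the username and the password is empty, hence the trailing ':'
proxies = {
    "https": "https://{}:@{}:{}/".format(API_KEY, PROXY_HOST, PROXY_PORT),
    "http": "http://{}:@{}:{}/".format(API_KEY, PROXY_HOST, PROXY_PORT)
}

res = requests.get(
    'https://www.google.com',
    proxies=proxies,
    verify='<absolute-path-to-crawlera-ca.crt>'  # Crawlera CA certificate (placeholder)
)
print(res.status_code)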
