Answered

Starting with requests


import requests

url = "https://google.com"
proxy_host = "proxy.crawlera.com"
proxy_port = "8010"
proxy_auth = "<APIKEY>:" # Make sure to include ':' at the end
proxies = {"https": "https://{}@{}:{}/".format(proxy_auth, proxy_host, proxy_port)}

r = requests.get(url, proxies=proxies, verify=False)

Hello, I am new to Crawlera. The request takes about 2 seconds, and I get a message (even though the response is correct) that it is strongly advised to remove

verify=False

However, if I remove it, I get an error.


Best Answer

It's just a warning, not an error. You can also download the Crawlera CA certificate and use it in your script as verify="path/to/crawlera.crt"

Certificate can be downloaded from here:  https://support.scrapinghub.com/support/solutions/articles/22000188407-fetching-https-pages-with-crawlera 
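Putting the answer together with the original snippet, a minimal sketch of the suggested fix: build the proxy URL as before, then point `verify` at the downloaded CA certificate instead of passing `verify=False`. The certificate path below is a placeholder; substitute the location where you saved the file from the support article.

```python
# Sketch of the suggested fix: verify TLS against the Crawlera CA
# certificate instead of disabling verification entirely.
proxy_host = "proxy.crawlera.com"
proxy_port = "8010"
proxy_auth = "<APIKEY>:"  # trailing ':' separates the API key (username) from the empty password
proxies = {"https": "https://{}@{}:{}/".format(proxy_auth, proxy_host, proxy_port)}

# With the certificate downloaded, the request becomes
# (the path is a placeholder; adjust it to your setup):
#   import requests
#   r = requests.get("https://google.com", proxies=proxies,
#                    verify="path/to/crawlera-ca.crt")
```

Passing a file path to `verify` makes requests validate the proxy's certificate against that CA, so the insecure-request warning disappears without turning off verification.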



Sorry, I meant it takes about 8 seconds.

