Starting with requests

Posted over 5 years ago by Neurascrape

Answered


import requests

url = "https://google.com"
proxy_host = "proxy.crawlera.com"
proxy_port = "8010"
proxy_auth = "<APIKEY>:" # Make sure to include ':' at the end
proxies = {"https": "https://{}@{}:{}/".format(proxy_auth, proxy_host, proxy_port)}

r = requests.get(url, proxies=proxies,
                 verify=False)

Hello, I am new to Crawlera. The request takes about 2 seconds and I get a message strongly advising me to remove

verify=False

(the response itself is correct). However, if I remove it, I get an error.

0 Votes

nestor posted over 5 years ago Admin Best Answer

It's just a warning, not an error. You can also download the Crawlera CA certificate and pass it in your script as verify='path/to/crawlera.crt'.

The certificate can be downloaded from here: https://support.scrapinghub.com/support/solutions/articles/22000188407-fetching-https-pages-with-crawlera
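Putting that together with the original snippet, here is a minimal sketch (not official Crawlera code): the proxies dict is built the same way as in the question, and verify points at wherever you saved the downloaded CA certificate. "<APIKEY>" and the certificate path are placeholders you must fill in yourself.

```python
# Sketch: Crawlera proxy dict plus CA-certificate verification instead of
# verify=False. Placeholders: "<APIKEY>" and "path/to/crawlera.crt".

PROXY_HOST = "proxy.crawlera.com"
PROXY_PORT = "8010"

def crawlera_proxies(api_key):
    """Build a requests-style proxies dict; note the trailing ':' after the key."""
    return {"https": "https://{}:@{}:{}/".format(api_key, PROXY_HOST, PROXY_PORT)}

# import requests
# r = requests.get("https://google.com",
#                  proxies=crawlera_proxies("<APIKEY>"),
#                  verify="path/to/crawlera.crt")  # CA cert instead of verify=False
```

With a valid certificate path, requests verifies the proxy's TLS certificate against it, so the InsecureRequestWarning that verify=False produces goes away.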


0 Votes


2 Comments


Neurascrape posted over 5 years ago

I am sorry, it takes about 8 seconds.

0 Votes

