bad_proxy_auth with curl and python requests

Posted almost 6 years ago by Suresh kumar CH

Answered
Suresh kumar CH

I just copied and pasted the script from https://support.scrapinghub.com/support/solutions/articles/22000203567-using-crawlera-with-python and replaced the API placeholder with my Crawlera API key, but I still get a bad_proxy_auth error:

Request Headers: {'Connection': 'keep-alive', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'User-Agent': 'python-requests/2.10.0'}
Response Time: 2.368468
Response Code: 407
Response Headers: {'Content-Length': '0', 'Proxy-Connection': 'close', 'Proxy-Authenticate': 'Basic realm="Crawlera"', 'Connection': 'close', 'Date': 'Mon, 17 Dec 2018 04:33:53 GMT', 'X-Crawlera-Error': 'bad_proxy_auth'}


thriveni

thriveni posted almost 6 years ago Admin Best Answer

As discussed in the support ticket, the code from https://support.scrapinghub.com/support/solutions/articles/22000203567-using-crawlera-with-python-requests helped, and you are now able to make requests without errors.

Note that a Requests version of at least 2.18 is needed to avoid 407 errors.
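For reference, the pattern from that article can be sketched roughly as follows. This is a minimal sketch, not the article's exact code: the proxy host and port shown are the standard Crawlera endpoint as an assumption, and the API key is a placeholder. The essential part is that the API key is passed as the proxy username with an empty password.

```python
# Minimal sketch of routing python-requests through Crawlera.
# Assumptions: proxy.crawlera.com:8010 as the endpoint, and a
# placeholder API key. Crawlera authenticates via HTTP proxy auth:
# the API key is the proxy username and the password is left empty.
import requests

API_KEY = "<YOUR_CRAWLERA_API_KEY>"  # placeholder, not a real key
proxy_url = "http://{}:@proxy.crawlera.com:8010/".format(API_KEY)
proxies = {"http": proxy_url, "https": proxy_url}

# Older Requests releases mishandled proxy credentials on HTTPS
# (CONNECT) requests, which can surface as a 407 / bad_proxy_auth,
# so check the installed version before debugging the key itself.
major, minor = (int(p) for p in requests.__version__.split(".")[:2])
assert (major, minor) >= (2, 18), "upgrade requests to >= 2.18"

# Example request (commented out here; needs a real API key):
# r = requests.get("https://example.com", proxies=proxies, verify=False)
# print(r.status_code)
```

Note how the User-Agent in the failing request above reports python-requests/2.10.0, which is below the 2.18 threshold mentioned in this answer.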



