deepakchauhan said about 4 years ago

I have been using Crawlera for two months. It was working fine, but now I get this error:

"/home/vocso/.local/lib/python3.6/site-packages/urllib3/connection.py:362: SubjectAltNameWarning: Certificate for www.google.com has no `subjectAltName`, falling back to check for a `commonName` for now. This feature is being removed by major browsers and deprecated by RFC 2818. (See https://github.com/shazow/urllib3/issues/497 for details.)"

Here is my code:

import requests
import certifi
from bs4 import BeautifulSoup

url = "https://www.google.com"  # target from the warning above
proxy_host = "proxy.crawlera.com"
proxy_port = "8010"
proxy_auth = "<key>:"
proxies = {"https": "https://{}@{}:{}/".format(proxy_auth, proxy_host, proxy_port),
           "http": "http://{}@{}:{}/".format(proxy_auth, proxy_host, proxy_port)}
photon_requests_session = requests.sessions.Session()
photon_requests_session.verify = certifi.where()
r = requests.get(url, proxies=proxies, verify="crawlera-ca.crt")
soup = BeautifulSoup(r.text, 'html5lib')
Best Answer
nestor said about 4 years ago

That's just a warning about urllib3 feature support, not an error. The request still goes through the proxy and gets a response, so it can be safely ignored.
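[Editor's note: if the warning is noisy in logs, it can also be silenced explicitly. A minimal sketch, assuming urllib3 1.x, where SubjectAltNameWarning still exists (urllib3 2.x removed it along with the commonName fallback):]

import urllib3

# Suppress only this specific warning category, leaving other
# urllib3 warnings (e.g. InsecureRequestWarning) visible.
urllib3.disable_warnings(urllib3.exceptions.SubjectAltNameWarning)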
deepakchauhan said about 4 years ago

Hi, I am getting the response on localhost but not getting any response on the AWS server. The code is the same.
deepakchauhan said about 4 years ago

proxy_host = "proxy.crawlera.com"
proxy_port = "8010"
proxy_auth = "<key>:"
proxies = {"https": "https://{}@{}:{}/".format(proxy_auth, proxy_host, proxy_port),
           "http": "http://{}@{}:{}/".format(proxy_auth, proxy_host, proxy_port)}
photon_requests_session = requests.sessions.Session()
photon_requests_session.verify = certifi.where()
r = photon_requests_session.get(url, proxies=proxies, verify='crawlera-ca.crt')
print(r.text)

nestor said about 4 years ago

Please add a more verbose response like in the sample: https://support.scrapinghub.com/solution/articles/22000203567-using-crawlera-with-python-requests (e.g. response headers)

deepakchauhan said about 4 years ago

proxy_host = "proxy.crawlera.com"
proxy_port = "8010"
proxy_auth = "<key>:"
proxies = {"https": "https://{}@{}:{}/".format(proxy_auth, proxy_host, proxy_port),
           "http": "http://{}@{}:{}/".format(proxy_auth, proxy_host, proxy_port)}
photon_requests_session = requests.sessions.Session()
photon_requests_session.verify = certifi.where()
r = photon_requests_session.get(url, proxies=proxies, verify='crawlera-ca.crt')
soup = BeautifulSoup(r.text, 'html5lib')
print("""
Requesting [{}]
through proxy [{}]
Request Headers:
{}
Response Time: {}
Response Code: {}
Response Headers:
{}
""".format(url, proxy_host, r.request.headers, r.elapsed.total_seconds(),
           r.status_code, r.headers, r.text))

deepakchauhan said about 4 years ago

It is showing bad proxy auth on the server, but it works perfectly on my local machine.
nestor said about 4 years ago

Are you sure you're using the same script both locally and on AWS?
deepakchauhan said about 4 years ago

Yes sir, I am 100% sure.
nestor said about 4 years ago

Bad Authentication is a client-side error. If the API key being used is the correct one, then the only thing I can think of is that the installed Python requests version might be causing problems.
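[Editor's note: a quick way to act on that suggestion (a sketch, not from the thread) is to compare the HTTP-stack versions on the machine that works against the one that fails:]

import requests
import urllib3

# If these differ between localhost and the AWS server, align them
# (e.g. pip install --upgrade requests) before suspecting the API key.
print("requests:", requests.__version__)
print("urllib3:", urllib3.__version__)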
deepakchauhan said about 4 years ago

Problem solved... the problem was the Python requests version.
deepakchauhan said about 4 years ago

And thank you so much for the help.
nestor said about 4 years ago

No problem.
Godfrey Jean said over 2 years ago

import requests

url = "http://httpbin.org/ip"
proxy_host = "proxy.crawlera.com"
proxy_port = "8010"
proxy_auth = "<APIKEY>:"  # Make sure to include ':' at the end
proxies = {"https": "https://{}@{}:{}/".format(proxy_auth, proxy_host, proxy_port),
           "http": "http://{}@{}:{}/".format(proxy_auth, proxy_host, proxy_port)}

r = requests.get(url, proxies=proxies, verify=False)

Can you please help me with the question above? I am getting a 'bad_proxy_auth' error while trying to test-run this code. I am using requests version 2.19.0.
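[Editor's note: given the resolution earlier in this thread, the first things to rule out are the requests version and what Crawlera itself reports. A hedged diagnostic sketch, assuming Crawlera signals auth failures with a 407 status and an X-Crawlera-Error header, as its documentation describes; the httpbin URL is reused from the post above:]

import requests

url = "http://httpbin.org/ip"
proxy_host = "proxy.crawlera.com"
proxy_port = "8010"
proxy_auth = "<APIKEY>:"  # the API key is the proxy username; the password stays empty
proxies = {"https": "https://{}@{}:{}/".format(proxy_auth, proxy_host, proxy_port),
           "http": "http://{}@{}:{}/".format(proxy_auth, proxy_host, proxy_port)}

r = requests.get(url, proxies=proxies, verify=False)

# A 407 plus an error header points at the credentials or the local
# requests/urllib3 versions, not at the target site.
print("requests version:", requests.__version__)
print("status:", r.status_code)
print("crawlera error:", r.headers.get("X-Crawlera-Error"))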