
Python anti-scraping countermeasures: the User-Agent header

2021-06-23 网页编程网

Sites such as Zhihu use the User-Agent (UA) header as an anti-scraping check: a request sent from a crawler with the default UA is rejected and comes back with an HTTP 400.

The workaround: replace the User-Agent with a browser-like one, and rotate among several different UAs so requests don't all look identical. There are two ways to do this:

1. Maintain your own UA pool

import random
import requests

# Pool of real browser User-Agent strings to choose from
headers = [
    {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10.6; rv:2.0.1) Gecko/20100101 Firefox/4.0.1'},
    {'User-Agent': 'Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0;'},
    {'User-Agent': 'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-us) AppleWebKit/534.50 (KHTML, like Gecko) Version/5.1 Safari/534.50'},
    {'User-Agent': 'Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_8; en-us) AppleWebKit/534.50 (KHTML, like Gecko) Version/5.1 Safari/534.50'},
]

# Pick a random UA for each request ('xxxx' is a placeholder URL)
req = requests.get(url='xxxx', headers=random.choice(headers))
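Rotation does not have to be random: cycling through the pool guarantees every UA gets used evenly. A minimal stdlib-only sketch of that idea (the shortened UA strings and the `next_headers` helper are illustrative, and the actual `requests.get` call is left out):

```python
import itertools

# Illustrative UA pool; in practice use full browser strings as above
ua_pool = [
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64) Firefox/115.0',
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) Safari/605.1.15',
    'Mozilla/5.0 (X11; Linux x86_64) Chrome/120.0.0.0',
]
ua_cycle = itertools.cycle(ua_pool)  # endless round-robin over the pool

def next_headers():
    """Return a headers dict carrying the next UA in the rotation."""
    return {'User-Agent': next(ua_cycle)}

# Three consecutive requests would each carry a different UA:
h1, h2, h3 = next_headers(), next_headers(), next_headers()
```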

2. Use the fake_useragent module

# pip install fake-useragent
import fake_useragent

# Instantiate a UserAgent object
ua = fake_useragent.UserAgent()
# Each access to .random returns a different randomly chosen UA string
print(ua.random)
print(ua.random)
print(ua.random)
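Older versions of fake_useragent download their UA data at startup, so it can fail in offline or restricted environments. A hedged sketch combining both methods, falling back to a hand-written pool when the module is unavailable (the fallback strings are illustrative):

```python
import random

# Fallback pool used if fake_useragent is missing or fails to initialise
_FALLBACK = [
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64) Firefox/115.0',
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) Safari/605.1.15',
]

try:
    from fake_useragent import UserAgent
    _ua = UserAgent()

    def random_ua():
        return _ua.random
except Exception:  # ImportError, or a data-fetch failure in old versions
    def random_ua():
        return random.choice(_FALLBACK)

# Build headers for a request with whichever source is available
headers = {'User-Agent': random_ua()}
```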