Web Scraping Demystified: Scraping Data with Python (the Cookie Part)
Disclaimer: data collected with this code may be used for research and study only. Do not use it for commercial purposes; any commercial dispute arising from such use is the user's own responsibility.
I recently needed historical PM2.5 data for China's provincial capital cities, and it happens that the weather site 天气网 (lishi.tianqi.com) offers historical data queries, so I searched online for related Python scraping code, mainly referencing one blog post. Two points to note:
1. Import urllib.request;
2. The site has added anti-scraping measures. Since we are fetching historical data, the real URL pattern is (city)/(date).html; for example, the link for Beijing's July 2019 data is http://lishi.tianqi.com/beijing/201907.html (these links are easy to generate, as sketched below). But if you open that link directly in a fresh browser, the site returns a block page instead of the data.
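Because the monthly pages follow that fixed (city)/(date).html pattern, the URLs can be generated for any batch of cities and months. A minimal sketch (build_url is an illustrative helper, not from the referenced post):

def build_url(city, year, month):
    # Monthly history page in the (city)/(date).html pattern described above
    return "http://lishi.tianqi.com/{}/{}{:02d}.html".format(city, year, month)

print(build_url("beijing", 2019, 7))  # http://lishi.tianqi.com/beijing/201907.html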
That block page is also why many of the Python crawlers posted online fail to fetch the data. The reason is that you have to visit the homepage first; once you have, the page above returns normally. My guess is that the front end writes a cookie when the homepage is visited. So press F12 and look at the page's cookie data, as shown in the screenshot below (the cookie part is underlined in red), and add that cookie to the headers when requesting the URL: req = urllib.request.Request(url=url, headers=my_headers), where
my_headers = {
    "Host": "lishi.tianqi.com",
    "Connection": "keep-alive",
    "Upgrade-Insecure-Requests": "1",
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36",
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3",
    "Accept-Language": "zh-CN,zh;q=0.8,en;q=0.6",
    "Referer": "http://lishi.tianqi.com/",
    "Cookie": "cityPy=xianqu; cityPy_expire=1565422933; UM_distinctid=16c566dd356244-05e0d9cb0c361-3f385c06-1fa400-16c566dd357642; Hm_lvt_ab6a683aa97a52202eab5b3a9042a8d2=1564818134; CNZZDATA1275796416=927309794-1564814113-%7C1564814113; Hm_lpvt_ab6a683aa97a52202eab5b3a9042a8d2=1564818280"}

The full code is as follows:
import socket
import urllib.error
import urllib.request
from bs4 import BeautifulSoup
# reload(sys); sys.setdefaultencoding('utf8')  # Python 2 leftovers, unnecessary on Python 3
socket.setdefaulttimeout(30.0)  # fail requests that stall for more than 30 seconds
def parseTianqi(url):
    # Request the page with browser-like headers (including the session cookie)
    # and return the HTML decoded from GBK; retry up to 3 times on network errors.
    my_headers = {
        "Host": "lishi.tianqi.com",
        "Connection": "keep-alive",
        "Upgrade-Insecure-Requests": "1",
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36",
        "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3",
        "Accept-Language": "zh-CN,zh;q=0.8,en;q=0.6",
        # No Accept-Encoding header: the server then replies with uncompressed
        # HTML, which can be decoded as GBK directly below.
        "Referer": "http://lishi.tianqi.com/",
        "Cookie": "cityPy=xianqu; cityPy_expire=1565422933; UM_distinctid=16c566dd356244-05e0d9cb0c361-3f385c06-1fa400-16c566dd357642; Hm_lvt_ab6a683aa97a52202eab5b3a9042a8d2=1564818134; CNZZDATA1275796416=927309794-1564814113-%7C1564814113; Hm_lpvt_ab6a683aa97a52202eab5b3a9042a8d2=1564818280"}
    req = urllib.request.Request(url=url, headers=my_headers)
    fails = 0
    while fails < 3:
        try:
            req_data = urllib.request.urlopen(req)
            return req_data.read().decode('gbk')
        except urllib.error.URLError:
            fails += 1
            print('Network problem, retrying:', fails)
    return None
def writeCsv(data, file_name):
    # Parse the statistics block (div.tqtongji2) and write every data row,
    # skipping each table's header row, as comma-separated values.
    soup = BeautifulSoup(data, 'html.parser')
    weather_list = soup.select('div[class="tqtongji2"]')
    with open(file_name, 'w', encoding='utf-8') as file:
        for weather in weather_list:
            ul_list = weather.select('ul')
            for i, ul in enumerate(ul_list):
                li_list = ul.select('li')
                row = ','.join((li.string or '') for li in li_list)
                if i != 0:  # the first ul holds the column headers
                    file.write(row + '\n')
# Fetch Beijing's July 2019 history page and write it out as CSV.
if __name__ == "__main__":
    data = parseTianqi("http://lishi.tianqi.com/beijing/201907.html")
    if data:
        writeCsv(data, "beijing_201907.csv")
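One caveat about the hardcoded Cookie header: it is tied to a session and will expire (note the cityPy_expire value). If the cookie is actually delivered via a Set-Cookie response header when the homepage is visited, rather than written by front-end JavaScript as guessed above, a cookie jar can pick it up automatically. A minimal sketch of that idea under this assumption (fetch_with_session is an illustrative name, not part of the original code):

import http.cookiejar
import urllib.request

def fetch_with_session(url):
    # Assumption: visiting the homepage makes the server send Set-Cookie,
    # which the jar stores and replays on the follow-up request.
    jar = http.cookiejar.CookieJar()
    opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))
    opener.addheaders = [("User-Agent", "Mozilla/5.0 (Windows NT 10.0; Win64; x64)")]
    opener.open("http://lishi.tianqi.com/", timeout=30)  # pick up the cookie first
    return opener.open(url, timeout=30).read().decode('gbk')

If the cookie really is written by client-side JavaScript, this sketch will not capture it, and the hardcoded-header approach above (or a browser-automation tool) remains necessary.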