Python crawler in practice: automatically downloading audio files from a web page
2. BeautifulSoup
A flexible and convenient HTML parsing library. It is efficient, supports multiple parsers, and lets you extract information from a page without having to write regular expressions.
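As a quick illustration of that point, the short sketch below parses a made-up HTML fragment (the fragment itself is an assumption, not from the target site) and pulls out every link with a couple of method calls instead of a fragile regex:

from bs4 import BeautifulSoup

html = '<div><a href="a.mp3">Story 1</a><a href="b.mp3">Story 2</a></div>'
soup = BeautifulSoup(html, 'html.parser')
for tag in soup.find_all('a'):          # every <a> tag in the fragment
    print(tag['href'], tag.get_text())  # -> a.mp3 Story 1 / b.mp3 Story 2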
3. Installation and import:
pip install requests
pip install beautifulsoup4
import requests
from bs4 import BeautifulSoup as bf
II. The target website
The target is a site where each mp3 file has to be downloaded by clicking it manually; with several hundred files to fetch, doing this by hand is impractical.
III. Fetching and parsing the page source
1. Use requests to fetch the source code of the target site
r = requests.get('http://www.goodkejian.com/ertonggushi.htm')
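One caveat worth noting (an addition, not from the original article): Chinese pages are sometimes served in GBK rather than UTF-8, and requests may guess the encoding wrong, leaving r.text garbled. Letting requests re-detect the charset is a cheap safeguard:

r.encoding = r.apparent_encoding  # re-detect the charset in case the default guess is wrong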
All of the download links live inside <a> tags and have a fixed length. Once the amp; is stripped out of a link, it can be used to download directly.
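That stray amp; comes from HTML entity escaping: in raw page source an ampersand is written as &amp;. Rather than cutting it out at fixed offsets, html.unescape from the standard library can normalize the whole link; a minimal sketch, with a made-up href for illustration:

from html import unescape

raw = 'http://www.goodkejian.com/play.asp?id=1&amp;t=mp3'  # hypothetical href (assumption)
print(unescape(raw))  # -> http://www.goodkejian.com/play.asp?id=1&t=mp3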
2. Use BeautifulSoup to parse the page and extract the <a> tags
soup = bf(r.text, 'html.parser')
res = soup.find_all('a')
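As an aside, instead of keying on the exact string length of each tag (which breaks the moment the markup changes), a more robust sketch reads the href attribute directly; the '.mp3' substring used to filter here is an assumption about how the real links look, not something confirmed by the article:

from html import unescape

links = []
for tag in res:
    href = tag.get('href')        # None for <a> tags without an href
    if href and '.mp3' in href:   # assumed marker for the audio links
        links.append(unescape(href))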
IV. Downloading
After the steps above, res is a list of all the candidate <a> tags. To download every mp3 file on the page, loop over res, convert each element to a string, and filter and slice that string into a usable link. Fetching the link with requests then returns the binary content of the mp3 file, which can be written straight to disk in binary mode.
The full code is as follows:
import os

import requests
from bs4 import BeautifulSoup as bf

r = requests.get('http://www.goodkejian.com/ertonggushi.htm')

soup = bf(r.text, 'html.parser')
res = soup.find_all('a')

os.makedirs('./res', exist_ok=True)  # make sure the output directory exists

recorder = 1
# the <a> tags whose string form is exactly 126 characters are the download links we want
for i in res:
    dst = str(i)
    if len(dst) == 126:
        url1 = dst[9:53]
        url2 = dst[57:62]
        url = url1 + url2  # splice out the four characters "amp;" (indices 53-56)
        print(url)
        xjh_request = requests.get(url)
        with open("./res/" + str(recorder) + ".mp3", 'wb') as file:
            file.write(xjh_request.content)  # the with block closes the file automatically
        recorder += 1
        print("ok")
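As a possible refinement beyond the original article, large audio files can be streamed to disk chunk by chunk instead of being held in memory all at once, with an explicit check for HTTP errors; a minimal sketch:

import requests

def download(url, path, chunk_size=8192):
    # Stream the response so a large file is never fully buffered in memory.
    with requests.get(url, stream=True, timeout=30) as resp:
        resp.raise_for_status()  # fail loudly on HTTP errors instead of saving an error page
        with open(path, 'wb') as f:
            for chunk in resp.iter_content(chunk_size=chunk_size):
                f.write(chunk)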
That is the whole idea, and the complete code, for automatically downloading audio files from a web page with a Python crawler. Feel free to plug the code in and try the download yourself as a hands-on exercise.