Python Web Scraping: Processing Natural Language | Day 07
User: 你好我是森林
Date: 2018-04-01
Mark: Web Scraping with Python (《Python网络数据采集》)
Web scraping series articles:
Python Web Scraping: Creating a Crawler
Python Web Scraping: HTML Parsing
Python Web Scraping: Starting to Scrape
Python Web Scraping: Using APIs
Python Web Scraping: Storing Data
Python Web Scraping: Reading Files
Python Web Scraping: Data Cleaning
Summarizing Data
Earlier we looked at how to break text content down into n-grams, that is, phrases that are n words long. At its most basic, this collection can be used to determine the most common words and phrases in a passage of text. It can also be used to pull out the sentences surrounding those most common phrases and stitch them into a plausible-sounding summary of the original text.
As an example, let's analyze the full text of William Henry Harrison's inaugural address (the address is fetched by URL in the code below).
from urllib.request import urlopen
import re
import string
from collections import Counter

def cleanSentence(sentence):
    # Split on spaces, strip punctuation/whitespace from each word,
    # and drop one-letter leftovers other than 'a' and 'i'
    sentence = sentence.split(' ')
    sentence = [word.strip(string.punctuation + string.whitespace) for word in sentence]
    sentence = [word for word in sentence
                if len(word) > 1 or (word.lower() == 'a' or word.lower() == 'i')]
    return sentence

def cleanInput(content):
    # Uppercase everything so "the" and "The" count as the same word
    content = content.upper()
    content = re.sub('\n', ' ', content)
    # Drop any non-ASCII characters
    content = bytes(content, 'UTF-8')
    content = content.decode('ascii', 'ignore')
    # Split into sentences so n-grams never span a sentence boundary
    sentences = content.split('. ')
    return [cleanSentence(sentence) for sentence in sentences]

def getNgramsFromSentence(content, n):
    # Slide a window of n words across the sentence
    output = []
    for i in range(len(content) - n + 1):
        output.append(content[i:i+n])
    return output

def getNgrams(content, n):
    content = cleanInput(content)
    ngrams = Counter()
    for sentence in content:
        newNgrams = [' '.join(ngram) for ngram in getNgramsFromSentence(sentence, n)]
        ngrams.update(newNgrams)
    return ngrams

content = str(
    urlopen('http://pythonscraping.com/files/inaugurationSpeech.txt').read(),
    'utf-8')
ngrams = getNgrams(content, 3)
print(ngrams)
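Since getNgrams returns a Counter, its most_common method lists the top phrases directly. A minimal sketch, continuing from the ngrams object built above:

# `ngrams` is the Counter returned by getNgrams above
for gram, count in ngrams.most_common(3):
    print(gram, count)

Because cleanInput uppercases the text, every reported phrase will be in uppercase.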
The Natural Language Toolkit
The Natural Language Toolkit (NLTK) is just such a Python library: it identifies and tags the parts of speech of the words in English text.
Installation and Setup
NLTK is available from its website (http://www.nltk.org). Installing it is simple, for example with pip:
➜ psysh git:(master) pip install nltk
Collecting nltk
Using cached nltk-3.2.5.tar.gz
Requirement already satisfied: six in /usr/local/lib/python3.6/site-packages (from nltk)
Building wheels for collected packages: nltk
Running setup.py bdist_wheel for nltk ... done
Stored in directory: /Users/demo/Library/Caches/pip/wheels/18/9c/1f/276bc3f421614062468cb1c9d695e6086d0c73d67ea363c501
Successfully built nltk
Installing collected packages: nltk
Successfully installed nltk-3.2.5
You are using pip version 9.0.1, however version 9.0.3 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
Check that it works:
➜ psysh git:(master) python
Python 3.6.4 (default, Mar 1 2018, 18:36:50)
[GCC 4.2.1 Compatible Apple LLVM 9.0.0 (clang-900.0.39.2)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import nltk
>>>
Entering nltk.download() opens the NLTK Downloader.
(Figure: the NLTK Downloader)
Download all of the packages by default; for a beginner this saves the trouble of tracking down missing packages later.
(Figure: installing the packages)
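If you would rather not go through the downloader GUI, individual collections can be fetched from code as well. A minimal sketch that grabs only what the examples below rely on:

import nltk

nltk.download('punkt')  # tokenizer models used by word_tokenize
nltk.download('book')   # the sample texts behind `from nltk.book import *`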
Statistical Analysis with NLTK
Statistical analysis with NLTK usually starts with the Text object, which can be created from a plain Python string as follows:
from nltk import word_tokenize
from nltk import Text

tokens = word_tokenize('Here is some not very interesting text')
text = Text(tokens)
The word_tokenize function can take any Python string as its argument. If you don't have any long strings at hand but still want to try out some of the features, NLTK ships with several books built in, which can be loaded with an import:
from nltk.book import *
Count the distinct words in a text, then compare them against the total number of words: >>> len(set(text6))/len(text6).
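Put together as a runnable snippet (a small sketch; text6 is Monty Python and the Holy Grail, one of the built-in books):

from nltk.book import text6  # Monty Python and the Holy Grail

# Lexical diversity: distinct tokens divided by total tokens
print(len(set(text6)) / len(text6))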
Today's material is short, but it takes a little digesting. Haha.