Problem description
I'd like to know whether there is any way to make CountVectorizer()
ignore words that appear fewer than x times across all documents and that have
fewer than y characters, similar to the bounds and wordLengths arguments of
DocumentTermMatrix in R's tm package.
Example
This corpus:
corpus = [
    'This is the first document.',
    'This document is the second document.',
    'And this is the third one.',
    'Is this the first document?',
]
currently turns into this:
>>> vectorizer = CountVectorizer()
>>> X = vectorizer.fit_transform(corpus)
>>> print(vectorizer.get_feature_names())
['and', 'document', 'first', 'is', 'one', 'second', 'the', 'third', 'this']
>>> print(X.toarray())
[[0 1 1 1 0 0 1 0 1]
[0 2 0 1 0 1 1 0 1]
[1 0 0 1 1 0 1 1 1]
[0 1 1 1 0 0 1 0 1]]
With x and y both set to 2, I would like this instead:
>>> vectorizer = CountVectorizer()
>>> X = vectorizer.fit_transform(corpus)
>>> print(vectorizer.get_feature_names())
['document', 'first', 'the', 'this']
>>> print(X.toarray())
[[1 1 1 1]
[2 0 1 1]
[0 0 1 1]
[1 1 1 1]]
Solution
You may want to:
- set min_df=2 to handle x (with an integer value, min_df is a document count:
  terms that appear in fewer than 2 documents are dropped)
- define token_pattern=r"(?u)\b[a-zA-Z]{3,}\b" to handle y
  (you can try token_pattern=r"(?u)\b[a-zA-Z0-9_]{3,}\b" to also include
  digits and underscores in the token definition)
Demo:
from sklearn.feature_extraction.text import CountVectorizer

corpus = [
    "This is the first document.",
    "This document is the second document.",
    "And this is the third one.",
    "Is this the first document?",
]

# keep terms that appear in at least 2 documents and are at least 3 characters long
vectorizer = CountVectorizer(min_df=2, token_pattern=r"(?u)\b[a-zA-Z]{3,}\b")
X = vectorizer.fit_transform(corpus)
print(X.toarray())
[[1 1 1 1]
[2 0 1 1]
[0 0 1 1]
[1 1 1 1]]
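To double-check which terms survived both filters, you can print the fitted vocabulary. A minimal sketch continuing the demo above; note that get_feature_names_out() is available in scikit-learn >= 1.0, while older releases use get_feature_names() instead:
# inspect the retained vocabulary after both filters were applied
print(vectorizer.get_feature_names_out())
['document' 'first' 'the' 'this']
This confirms that 'is' was dropped by the length constraint and that 'and', 'one', 'second', and 'third' were dropped by min_df=2, since each appears in only a single document.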