Clustering text documents using k-means
This is an example showing how scikit-learn can be used to cluster documents by topic, using a bag-of-words approach. This example uses a scipy.sparse matrix to store the features instead of standard numpy arrays.
Two feature extraction methods can be used in this example:
TfidfVectorizer uses an in-memory vocabulary (a Python dict) to map the most frequent words to feature indices and hence compute a word occurrence frequency (sparse) matrix. The word frequencies are then reweighted using the Inverse Document Frequency (IDF) vector collected feature-wise over the corpus.
HashingVectorizer hashes word occurrences to a fixed-dimensional space, possibly with collisions. The word count vectors are then normalized so that each has l2-norm equal to one (projected onto the Euclidean unit sphere), which seems to be important for k-means to work in high-dimensional space.
HashingVectorizer does not provide IDF weighting, as it is a stateless model (the fit method does nothing). When IDF weighting is needed, it can be added by piping its output to a TfidfTransformer instance; both routes are contrasted in the short sketch after this list.
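A minimal sketch (not part of the original script) contrasting the two feature-extraction routes on a toy corpus; the example documents and the 2 ** 10 hash space are illustrative choices only:
from sklearn.feature_extraction.text import HashingVectorizer, TfidfTransformer, TfidfVectorizer
from sklearn.pipeline import make_pipeline
docs = ["the cat sat on the mat",
        "the dog chased the cat",
        "a graphics card renders the image"]
# Route 1: in-memory vocabulary plus IDF reweighting
tfidf = TfidfVectorizer(stop_words='english')
X_tfidf = tfidf.fit_transform(docs)           # sparse (n_docs, n_terms) matrix
# Route 2: stateless hashing; IDF weighting added by piping into TfidfTransformer
hashing_idf = make_pipeline(
    HashingVectorizer(n_features=2 ** 10, alternate_sign=False, norm=None),
    TfidfTransformer())
X_hashed = hashing_idf.fit_transform(docs)    # sparse (n_docs, 1024) matrix
print(X_tfidf.shape, X_hashed.shape)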
Two algorithms are demonstrated: ordinary k-means and its more scalable cousin, minibatch k-means.
Additionally, latent semantic analysis (LSA) can be used to reduce dimensionality and discover latent patterns in the data.
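As a rough illustration (not part of the original script) of what the LSA step below does: it is a TruncatedSVD on the tf-idf matrix followed by re-normalization, because the SVD output is no longer l2-normalized; the 100 components here are an arbitrary illustrative choice:
from sklearn.decomposition import TruncatedSVD
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import Normalizer
svd = TruncatedSVD(n_components=100, random_state=42)   # component count is illustrative
lsa = make_pipeline(svd, Normalizer(copy=False))        # SVD, then project back onto the unit sphere
# X_lsa = lsa.fit_transform(X_tfidf)          # X_tfidf: a sparse tf-idf matrix such as the one above
# print(svd.explained_variance_ratio_.sum())  # fraction of variance retained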
It can be noted that k-means (and minibatch k-means) are very sensitive to feature scaling, and that in this case the IDF weighting helps improve the quality of the clustering by quite a lot, as measured against the "ground truth" provided by the class label assignments of the 20 newsgroups dataset.
This improvement is not visible in the Silhouette Coefficient, which is small for both cases: this measure seems to suffer from the phenomenon called "concentration of measure" or "curse of dimensionality" for high-dimensional datasets such as text data. Other measures, such as the V-measure and the Adjusted Rand Index, are information-theoretic based evaluation scores: as they are only based on cluster assignments rather than distances, they are not affected by the curse of dimensionality.
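A small self-contained sketch (with made-up labels, not taken from this example) shows why these scores only need the two label assignments: a clustering that is perfect up to a relabelling scores 1.0 no matter how many feature dimensions were involved:
from sklearn import metrics
true_labels = [0, 0, 1, 1, 2, 2]
pred_labels = [1, 1, 0, 0, 2, 2]   # same grouping, different cluster ids
# Both scores compare the two assignments directly and never compute distances,
# so high feature dimensionality does not degrade them.
print(metrics.v_measure_score(true_labels, pred_labels))      # 1.0
print(metrics.adjusted_rand_score(true_labels, pred_labels))  # 1.0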
Note: as k-means is optimizing a non-convex objective function, it will likely end up in a local optimum. Several runs with independent random initializations might be necessary to get a good convergence.
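In scikit-learn this amounts to raising n_init, so that the best of several independent initializations (by inertia) is kept; the sketch below runs on random toy data and is not part of the script, which keeps n_init=1 for speed:
import numpy as np
from sklearn.cluster import KMeans
rng = np.random.RandomState(0)
X_toy = rng.rand(200, 5)            # purely illustrative toy data
# n_init=10 runs k-means from 10 independent initializations and keeps
# the solution with the lowest inertia (sum of squared distances to centroids).
km = KMeans(n_clusters=4, init='k-means++', n_init=10, max_iter=100, random_state=0)
km.fit(X_toy)
print(km.inertia_)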
# Author: Peter Prettenhofer <peter.prettenhofer@gmail.com>
# Lars Buitinck
# License: BSD 3 clause
from sklearn.datasets import fetch_20newsgroups
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import Normalizer
from sklearn import metrics
from sklearn.cluster import KMeans, MiniBatchKMeans
import logging
from optparse import OptionParser
import sys
from time import time
import numpy as np
# Display progress logs on stdout
logging.basicConfig(level=logging.INFO,
                    format='%(asctime)s %(levelname)s %(message)s')
# parse commandline arguments
op = OptionParser()
op.add_option("--lsa",
dest="n_components", type="int",
help="Preprocess documents with latent semantic analysis.")
op.add_option("--no-minibatch",
action="store_false", dest="minibatch", default=True,
help="Use ordinary k-means algorithm (in batch mode).")
op.add_option("--no-idf",
action="store_false", dest="use_idf", default=True,
help="Disable Inverse Document Frequency feature weighting.")
op.add_option("--use-hashing",
action="store_true", default=False,
help="Use a hashing feature vectorizer")
op.add_option("--n-features", type=int, default=10000,
help="Maximum number of features (dimensions)"
" to extract from text.")
op.add_option("--verbose",
action="store_true", dest="verbose", default=False,
help="Print progress reports inside k-means algorithm.")
print(__doc__)
op.print_help()
def is_interactive():
    return not hasattr(sys.modules['__main__'], '__file__')
# work-around for Jupyter notebook and IPython console
argv = [] if is_interactive() else sys.argv[1:]
(opts, args) = op.parse_args(argv)
if len(args) > 0:
    op.error("this script takes no arguments.")
    sys.exit(1)
# #############################################################################
# Load some categories from the training set
categories = [
'alt.atheism',
'talk.religion.misc',
'comp.graphics',
'sci.space',
]
# Uncomment the following line to use a larger set (11k+ documents)
# categories = None
print("Loading 20 newsgroups dataset for categories:")
print(categories)
dataset = fetch_20newsgroups(subset='all', categories=categories,
                             shuffle=True, random_state=42)
print("%d documents" % len(dataset.data))
print("%d categories" % len(dataset.target_names))
print()
labels = dataset.target
true_k = np.unique(labels).shape[0]
print("Extracting features from the training dataset "
"using a sparse vectorizer")
t0 = time()
if opts.use_hashing:
    if opts.use_idf:
        # Perform an IDF normalization on the output of HashingVectorizer
        hasher = HashingVectorizer(n_features=opts.n_features,
                                   stop_words='english', alternate_sign=False,
                                   norm=None)
        vectorizer = make_pipeline(hasher, TfidfTransformer())
    else:
        vectorizer = HashingVectorizer(n_features=opts.n_features,
                                       stop_words='english',
                                       alternate_sign=False, norm='l2')
else:
    vectorizer = TfidfVectorizer(max_df=0.5, max_features=opts.n_features,
                                 min_df=2, stop_words='english',
                                 use_idf=opts.use_idf)
X = vectorizer.fit_transform(dataset.data)
print("done in %fs" % (time() - t0))
print("n_samples: %d, n_features: %d" % X.shape)
print()
if opts.n_components:
    print("Performing dimensionality reduction using LSA")
    t0 = time()
    # Vectorizer results are normalized, which makes KMeans behave as
    # spherical k-means for better results. Since LSA/SVD results are
    # not normalized, we have to redo the normalization.
    svd = TruncatedSVD(opts.n_components)
    normalizer = Normalizer(copy=False)
    lsa = make_pipeline(svd, normalizer)
    X = lsa.fit_transform(X)
    print("done in %fs" % (time() - t0))
    explained_variance = svd.explained_variance_ratio_.sum()
    print("Explained variance of the SVD step: {}%".format(
        int(explained_variance * 100)))
    print()
# #############################################################################
# Do the actual clustering
if opts.minibatch:
    km = MiniBatchKMeans(n_clusters=true_k, init='k-means++', n_init=1,
                         init_size=1000, batch_size=1000, verbose=opts.verbose)
else:
    km = KMeans(n_clusters=true_k, init='k-means++', max_iter=100, n_init=1,
                verbose=opts.verbose)
print("Clustering sparse data with %s" % km)
t0 = time()
km.fit(X)
print("done in %0.3fs" % (time() - t0))
print()
print("Homogeneity: %0.3f" % metrics.homogeneity_score(labels, km.labels_))
print("Completeness: %0.3f" % metrics.completeness_score(labels, km.labels_))
print("V-measure: %0.3f" % metrics.v_measure_score(labels, km.labels_))
print("Adjusted Rand-Index: %.3f"
% metrics.adjusted_rand_score(labels, km.labels_))
print("Silhouette Coefficient: %0.3f"
% metrics.silhouette_score(X, km.labels_, sample_size=1000))
print()
if not opts.use_hashing:
    print("Top terms per cluster:")
    if opts.n_components:
        original_space_centroids = svd.inverse_transform(km.cluster_centers_)
        order_centroids = original_space_centroids.argsort()[:, ::-1]
    else:
        order_centroids = km.cluster_centers_.argsort()[:, ::-1]
    terms = vectorizer.get_feature_names()
    for i in range(true_k):
        print("Cluster %d:" % i, end='')
        for ind in order_centroids[i, :10]:
            print(' %s' % terms[ind], end='')
        print()
Output:
Usage: plot_document_clustering.py [options]
Options:
  -h, --help            show this help message and exit
  --lsa=N_COMPONENTS    Preprocess documents with latent semantic analysis.
  --no-minibatch        Use ordinary k-means algorithm (in batch mode).
  --no-idf              Disable Inverse Document Frequency feature weighting.
  --use-hashing         Use a hashing feature vectorizer
  --n-features=N_FEATURES
                        Maximum number of features (dimensions) to extract
                        from text.
  --verbose             Print progress reports inside k-means algorithm.
Loading 20 newsgroups dataset for categories:
['alt.atheism', 'talk.religion.misc', 'comp.graphics', 'sci.space']
3387 documents
4 categories
Extracting features from the training dataset using a sparse vectorizer
done in 0.913811s
n_samples: 3387, n_features: 10000
Clustering sparse data with MiniBatchKMeans(batch_size=1000, init_size=1000, n_clusters=4, n_init=1,
                                            verbose=False)
done in 0.082s
Homogeneity: 0.412
Completeness: 0.491
V-measure: 0.448
Adjusted Rand-Index: 0.289
Silhouette Coefficient: 0.006
Top terms per cluster:
Cluster 0: graphics image file thanks files 3d university format gif software
Cluster 1: space nasa henry access digex toronto gov pat alaska shuttle
Cluster 2: com god article don people just sandvik university know think
Cluster 3: sgi keith livesey morality jon solntze wpd caltech objective moral
Total running time of the script: (0 minutes 1.376 seconds)