Clustering doesn't look right

Question

I don't know where my code went wrong; not all 4 clusters are showing up in my plot. Any ideas?

[image: Scrolling]

[image: Clustering]

Answer

Is your data continuous or categorical? It looks categorical. Computing distances between binary variables doesn't make much sense, and not all data is suitable for clustering.
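As a rough illustration of why that matters, here is a minimal sketch (using a made-up toy array, not your data) showing that Euclidean distances between binary/one-hot rows collapse to only a couple of distinct values, so k-means centroids have very little geometry to work with:

import numpy as np
from scipy.spatial.distance import pdist

# toy binary "observations" (hypothetical, just for illustration)
binary_rows = np.array([[1, 0, 0, 1],
                        [0, 1, 0, 1],
                        [1, 0, 1, 0],
                        [0, 1, 1, 0]])

# the pairwise Euclidean distances take only a handful of values,
# so "closeness" between rows is almost meaningless here
print(np.unique(np.round(pdist(binary_rows), 3)))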

I don't have your actual data, but I'll show you how to do the clustering both well and badly using the canonical mtcars example data set.

# Import the mtcars data from the web and do some clustering on the data set


import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.cluster import MiniBatchKMeans 


# Import CSV mtcars
data = pd.read_csv('https://gist.githubusercontent.com/ZeccaLehn/4e06d2575eb9589dbe8c365d61cb056c/raw/64f1660f38ef523b2a1a13be77b002b98665cdfe/mtcars.csv')
# Edit element of column header
data.rename(columns={'Unnamed: 0':'brand'},inplace=True)


X1= data.iloc[:,1:-1]   # all numeric columns except 'brand' and the target 'carb'
Y1= data.iloc[:,-1]     # 'carb' is the target

# Fit a decision tree to estimate feature importance
from sklearn.tree import DecisionTreeClassifier
tree= DecisionTreeClassifier(criterion='entropy',random_state=1)
tree.fit(X1,Y1)



imp= pd.DataFrame(index=X1.columns,data=tree.feature_importances_,columns=['Imp'])
imp= imp.sort_values(by='Imp',ascending=False)

sns.barplot(x=imp.index.tolist(),y=imp.values.ravel(),palette='coolwarm')

# First experiment: cluster on 'cyl' and 'drat'
X=data[['cyl','drat']]
Y=data['carb']

#Let's try to create segments using K-means clustering
from sklearn.cluster import KMeans
#Use the elbow method to find the number of clusters
wcss=[]
for i in range(1,7):
    kmeans= KMeans(n_clusters=i,init='k-means++',random_state=1)
    kmeans.fit(X)
    wcss.append(kmeans.inertia_)


plt.plot(range(1,7),wcss,linestyle='--',marker='o',label='WCSS value')
plt.title('WCSS value- Elbow method')
plt.xlabel('no of clusters- K value')
plt.ylabel('Wcss value')
plt.legend()
plt.show()

# note: at this point 'kmeans' is still the k=6 model from the last loop iteration
kmeans.predict(X)


#Cluster Center
kmeans = MiniBatchKMeans(n_clusters = 5)
kmeans.fit(X)

centroids = kmeans.cluster_centers_
labels = kmeans.labels_

print(centroids)
print(labels)


colors = ["green","red","blue","yellow","orange"]
plt.scatter(X.iloc[:,0],X.iloc[:,1],c=np.array(colors)[labels],s = 10,alpha=.1)
plt.scatter(centroids[:,0],centroids[:,1],marker = "x",s=150,linewidths = 5,zorder = 10,c=colors)
plt.show()

[resulting plot: clusters on cyl vs drat]


Now I am just changing the two features (the two independent variables) and re-running the same experiment:

X=data[['wt','qsec']]
Y=data['carb']

# Re-run the same K-means workflow as above on the new feature pair
# (the elbow-method loop is identical to the one above, so it is not repeated here)
kmeans = MiniBatchKMeans(n_clusters = 5)
kmeans.fit(X)

centroids = kmeans.cluster_centers_
labels = kmeans.labels_

plt.scatter(X.iloc[:,0],X.iloc[:,1],c=np.array(colors)[labels],s = 10,alpha=.1)
plt.scatter(centroids[:,0],centroids[:,1],marker = "x",s=150,linewidths = 5,zorder = 10,c=colors)
plt.show()

[resulting plot: clusters on wt vs qsec]

As you can see, the choice of features used for clustering has a huge effect on the result (obviously). The first example looks a bit like your result, while the second looks like a much more useful/interesting clustering experiment.
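If you want a number instead of eyeballing the plots, one quick sanity check (a sketch that assumes the data frame and MiniBatchKMeans import from the code above are still in scope) is to compare silhouette scores for the two feature pairs; higher means better-separated clusters:

from sklearn.metrics import silhouette_score

for cols in (['cyl', 'drat'], ['wt', 'qsec']):
    X_pair = data[cols]
    km = MiniBatchKMeans(n_clusters=5, random_state=1).fit(X_pair)
    print(cols, round(silhouette_score(X_pair, km.labels_), 3))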
