Very poor results from an sklearn decision-tree ensemble tuned with Bayesian optimization

Problem description

Without Bayesian optimization:

model = BaggingClassifier(
    base_estimator=DecisionTreeClassifier(min_samples_split=15),
    n_estimators=100,
    random_state=7,
)

Results:

Training set - Matthews correlation coefficient: 0.93
Test set - Matthews correlation coefficient: 0.45584530253849204
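For reference, the metric above is sklearn's matthews_corrcoef. A quick toy sketch (illustrative labels, not my data) shows the two extremes it can report:

```python
import numpy as np
from sklearn.metrics import matthews_corrcoef

# Illustrative labels only -- not the real dataset.
y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])
y_pred_good = y_true.copy()         # perfect predictions
y_pred_bad = np.zeros_like(y_true)  # constant predictor: always class 0

print(matthews_corrcoef(y_true, y_pred_good))  # 1.0: perfect agreement
print(matthews_corrcoef(y_true, y_pred_bad))   # 0.0: a constant predictor carries no signal
```

A large gap between the train value (0.93) and the test value (0.46) is the usual sign of overfitting.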

Model parameters - model.get_params():

{
    'base_estimator__ccp_alpha': 0.0,
    'base_estimator__class_weight': None,
    'base_estimator__criterion': 'gini',
    'base_estimator__max_depth': None,
    'base_estimator__max_features': None,
    'base_estimator__max_leaf_nodes': None,
    'base_estimator__min_impurity_decrease': 0.0,
    'base_estimator__min_impurity_split': None,
    'base_estimator__min_samples_leaf': 1,
    'base_estimator__min_samples_split': 15,
    'base_estimator__min_weight_fraction_leaf': 0.0,
    'base_estimator__presort': 'deprecated',
    'base_estimator__random_state': None,
    'base_estimator__splitter': 'best',
    'base_estimator': DecisionTreeClassifier(min_samples_split=15),
    'bootstrap': True,
    'bootstrap_features': False,
    'max_features': 1.0,
    'max_samples': 1.0,
    'n_estimators': 100,
    'n_jobs': None,
    'oob_score': False,
    'random_state': 7,
    'verbose': 0,
    'warm_start': False
}

I decided to run Bayesian optimization to reduce the overfitting:

# NB: several range arguments were garbled in the original post;
# the bounds below are a best-guess reconstruction of the search space.
param_hyperopt = {
    'ccp_alpha': hp.uniform('ccp_alpha', 0, 1),
    'max_depth': scope.int(hp.quniform('max_depth', 5, 20, 1)),
    'n_estimators': scope.int(hp.quniform('n_estimators', 10, 200, 10)),
    'max_features': scope.int(hp.quniform('max_features', 2, 10, 1)),
    'min_samples_leaf': scope.int(hp.quniform('min_samples_leaf', 1, 40, 1)),
    'splitter': hp.choice('splitter', ['best', 'random']),
    'criterion': hp.choice('criterion', ['gini', 'entropy']),
    'max_leaf_nodes': scope.int(hp.quniform('max_leaf_nodes', 2, 100, 1)),
    'min_impurity_decrease': hp.uniform('min_impurity_decrease', 0, 1),
    'min_samples_split': scope.int(hp.quniform('min_samples_split', 3, 50, 1)),
    'min_weight_fraction_leaf': hp.uniform('min_weight_fraction_leaf', 0, 0.5),
    'max_samples': scope.int(hp.quniform('max_samples', 100, 1000, 100)),
}

def objective_function(params):
    n_estimators = params["n_estimators"]
    max_samples = params["max_samples"]
    del params["n_estimators"]
    del params["max_samples"]
    clf = BaggingClassifier(
        base_estimator=DecisionTreeClassifier(**params),
        n_estimators=n_estimators,
        max_samples=max_samples,
        random_state=7,
    )
    score = cross_val_score(clf,X_train,np.ravel(y_train),cv=5).mean()
    return {'loss': -score,'status': STATUS_OK}
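The objective follows the usual hyperopt pattern: build the model from the sampled parameters, score it with cross-validation, and return the negated score as the loss, since fmin minimizes. A self-contained sketch of that pattern on synthetic stand-in data (make_classification here replaces my real X_train / y_train, and 'ok' mirrors hyperopt's STATUS_OK constant):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Stand-in data; the real X_train / y_train come from my dataset.
X_train, y_train = make_classification(n_samples=200, n_features=10, random_state=0)

def objective_function(params):
    # fmin minimizes, so a higher CV score must map to a lower loss.
    clf = BaggingClassifier(
        DecisionTreeClassifier(**params),  # base estimator passed positionally
        n_estimators=10,
        random_state=7,
    )
    score = cross_val_score(clf, X_train, y_train, cv=3).mean()
    return {'loss': -score, 'status': 'ok'}

result = objective_function({'min_samples_split': 15})
print(result['loss'])  # negative of the mean CV accuracy, so always in [-1, 0]
```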

trials = Trials()
best_param = fmin(objective_function, param_hyperopt, algo=tpe.suggest, max_evals=200, trials=trials, rstate=np.random.RandomState(1))
loss = [x['result']['loss'] for x in trials.trials]

best_param_values = [x for x in best_param.values()]

I got these results:

{
    'ccp_alpha': 0.5554600863908586,
    'criterion': 1,
    'max_depth': 15.0,
    'max_features': 9,
    'max_leaf_nodes': 3,
    'min_impurity_decrease': 0.6896630931867213,
    'min_samples_leaf': 38,
    'min_samples_split': 4,
    'min_weight_fraction_leaf': 0.48094992349222787,
    'splitter': 1
}

The model with the tuned parameters:

clf = BaggingClassifier(
    base_estimator=DecisionTreeClassifier(
        ccp_alpha=best_param["ccp_alpha"],
        criterion="entropy",
        max_depth=best_param["max_depth"],
        max_features=best_param["max_features"],
        max_leaf_nodes=best_param["max_leaf_nodes"],
        min_impurity_decrease=best_param["min_impurity_decrease"],
        min_samples_leaf=best_param["min_samples_leaf"],
        min_samples_split=best_param["min_samples_split"],
        min_weight_fraction_leaf=best_param["min_weight_fraction_leaf"],
        splitter="random",
    ),
    n_estimators=int(n_estimators),
    max_samples=int(max_samples),
    random_state=702120,
)

clf.fit(X_train,np.ravel(y_train))

And here is the result I got - the confusion matrix:

array([[   0, 5897],
       [   0, 5974]])
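That shape - one all-zero column - is exactly what a constant classifier produces: all 5897 + 5974 samples land in the second column because class 0 is never predicted. A minimal sketch of how such a matrix arises (toy counts, not my data):

```python
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([0, 0, 0, 1, 1])  # toy labels
y_pred = np.ones_like(y_true)       # classifier that always predicts class 1

cm = confusion_matrix(y_true, y_pred)  # rows = true class, columns = predicted class
print(cm)
# [[0 3]
#  [0 2]]  -> first column all zeros: class 0 is never predicted
```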

It puts everything into the same class! Why does this happen?

Solution

No working solution for this problem has been found yet.
