Why doesn't the best loss update?

Problem description

I'm trying to run a hyperparameter optimization with Hyperopt, but the printed best loss value never changes.

I tried flipping the sign of the accuracy, but that didn't help. When I tested the model with my own random trials, the results were much better. How do I get the parameters to actually optimize?

I followed this notebook.

Minimal code example:

import pandas as pd
from sklearn.metrics import roc_auc_score
from hyperopt import STATUS_OK,Trials,fmin,hp,tpe
import xgboost as xgb

def objective(space):
    clf = xgb.XGBClassifier(
        n_estimators=space['n_estimators'],
        max_depth=int(space['max_depth']),
        gamma=space['gamma'],
        reg_alpha=int(space['reg_alpha']),
        min_child_weight=int(space['min_child_weight']),
        colsample_bytree=int(space['colsample_bytree']))

    evaluation = [(train, train_labels), (test, test_labels)]

    clf.fit(train, train_labels, eval_set=evaluation,
            eval_metric="auc", early_stopping_rounds=10, verbose=True)

    pred = clf.predict(test)
    accuracy = roc_auc_score(test_labels, pred)
    print("ROC:", accuracy)
    return {'loss': -accuracy, 'status': STATUS_OK}

space = {'max_depth': hp.quniform("max_depth", 3, 300, 1),
         'gamma': hp.uniform('gamma', 1, 9),
         'reg_alpha': hp.quniform('reg_alpha', 5, 180, 1),
         'reg_lambda': hp.uniform('reg_lambda', 0, 1),
         'colsample_bytree': hp.uniform('colsample_bytree', 0.1, 1),
         'min_child_weight': hp.quniform('min_child_weight', 10, 600, 1),
         'n_estimators': 300,
         'seed': 0
         }

train,train_Ids = pd.read_csv("train.csv")
test,test_labels,test_Ids = pd.read_csv("test.csv")

trials = Trials()

best_hyperparams = fmin(fn=objective, space=space, algo=tpe.suggest,
                        max_evals=400, trials=trials)

print("The best hyperparameters are : ","\n")
print(best_hyperparams)

The progress output keeps repeating the same best loss from the very first iterations, for example:

2%|▏         | 9/400 [00:07<05:31,1.18trial/s,best loss: -0.5]
...
5%|▍         | 19/400 [00:17<05:58,1.06trial/s,best loss: -0.5]
...

Solution

Without your dataset I can't reproduce your exact problem, but I tried it with sklearn's load_breast_cancer dataset. I quickly got scores above 0.5, but many trials were tied at that baseline score. I believe that is because your reg_alpha range goes too high, so some models end up pruned down to nothing. Hopefully, once the optimization samples a few smaller alphas, the TPE algorithm will start focusing on the more useful values.

You might check:

import numpy as np
alphas = [(trial['misc']['vals']['reg_alpha'][0], trial['result']['loss'])
          for trial in trials.trials]
print(np.array([alpha for alpha, score in alphas if score == -0.5]).min())
print(np.array([alpha for alpha, score in alphas if score != -0.5]).max())

For me those came out to 85.0 and 89.0; there is a little overlap, but broadly speaking, alphas above 85 kill the model.
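One possible remedy (my suggestion, not something the original post tried) is to sample reg_alpha on a log scale, e.g. hp.loguniform('reg_alpha', np.log(5), np.log(180)), so most of the search budget lands below the ~85 cutoff. A numpy-only sketch of the difference in sampling mass:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Uniform over [5, 180]: more than half of the draws land above the
# ~85 threshold where models get pruned to nothing.
uniform_draws = rng.uniform(5, 180, n)

# Log-uniform over the same range (what hp.loguniform samples):
# the mass shifts toward the small, useful alphas.
log_draws = np.exp(rng.uniform(np.log(5), np.log(180), n))

print((uniform_draws < 85).mean())  # ≈ 0.46
print((log_draws < 85).mean())      # ≈ 0.79
```

With a log-uniform prior TPE starts with far more "surviving" models to learn from, so the best loss should begin moving much sooner.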
