Keras History callback loss does not match the loss printed to the console

Problem description

I am currently training a CNN in Keras, and I want to record the training history so I can visualize it later:

history_callback = model.fit(train_generator,
                             steps_per_epoch=EPOCH_STEP_TRAIN,
                             validation_data=test_generator,
                             validation_steps=EPOCH_STEP_TEST,
                             epochs=NUM_OF_EPOCHS,
                             callbacks=callbacks)

val_loss_history = history_callback.history['val_loss']
loss_history = history_callback.history['loss']

numpy_val_loss_history = np.array(val_loss_history)
numpy_loss_history = np.array(loss_history)

np.savetxt(checkpoint_folder + "valid_loss_history.txt",numpy_val_loss_history,delimiter=",")
np.savetxt(checkpoint_folder + "loss_history.txt",numpy_loss_history,")

The validation loss is saved correctly and matches the console output exactly.

However, the stored training loss values do not match the values printed to the console during training. See here:

121/121 [==============================] - 61s 438ms/step - loss: 0.9004 - recall: 0.5097 - precision: 0.0292 - acc: 0.8391 - val_loss: 0.8893 - val_recall: 0.0000e+00 - val_precision: 0.0000e+00 - val_acc: 0.9995
Epoch 2/3
121/121 [==============================] - 52s 428ms/step - loss: 0.5830 - recall: 0.1916 - precision: 0.3660 - acc: 0.9898 - val_loss: 0.5422 - val_recall: 0.3007 - val_precision: 0.7646 - val_acc: 0.9996
Epoch 3/3
121/121 [==============================] - 52s 428ms/step - loss: 0.3116 - recall: 0.3740 - precision: 0.7848 - acc: 0.9920 - val_loss: 0.5248 - val_recall: 0.3119 - val_precision: 0.6915 - val_acc: 0.9996

while the output of history_callback.history['loss'] is:

0.8124346733093262 
0.4653359651565552 
0.30956554412841797
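
One way to narrow this down is to print the values Keras hands to callbacks at the end of each epoch and compare them with both the progress bar and history_callback.history (the History callback stores exactly these epoch-end values). A minimal sketch, assuming the same fit() call as above; EpochLossLogger is just a name made up for this example:

from tensorflow.keras.callbacks import Callback

class EpochLossLogger(Callback):
    # Print the logs dict exactly as Keras reports it at the end of every epoch;
    # these are the same values that end up in History.history
    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        print(f"epoch {epoch + 1}: loss={logs.get('loss')} val_loss={logs.get('val_loss')}")

# add it to the existing callbacks list before calling model.fit
callbacks.append(EpochLossLogger())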

My loss function is:

from tensorflow.keras import backend as K

def dice_coef(y_true, y_pred, smooth=1e-9):
    # Soft Dice coefficient over the flattened tensors (squared terms in the denominator)
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    intersection = K.sum(y_true_f * y_pred_f)
    return (2. * intersection + smooth) / (K.sum(y_true_f**2) + K.sum(y_pred_f**2) + smooth)

def dice_loss(y_true, y_pred):
    return 1 - dice_coef(y_true, y_pred)
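
For reference, the loss function can be sanity-checked in eager mode on a toy tensor, independently of fit(); a small sketch using the functions above (the example values are arbitrary):

import tensorflow as tf

# With a good prediction the Dice loss should be close to 0,
# with an all-zero prediction it should be close to 1
y_true = tf.constant([[1., 0., 1., 1.]])
y_pred = tf.constant([[0.9, 0.1, 0.8, 0.7]])
print(float(dice_loss(y_true, y_pred)))                 # ~0.03
print(float(dice_loss(y_true, tf.zeros_like(y_true))))  # ~1.0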

I also tried:

import tensorflow as tf

def dice_loss(y_true, y_pred):
    return tf.reduce_mean(1 - dice_coef(y_true, y_pred))

Nothing changed.

Can anyone explain this strange behavior?

Solution

No effective solution to this problem has been found yet.
