EarlyStopping based on convergence of trainable variables in TF/Keras

Problem Description

Suppose I have a custom layer, using TF 2.4, that computes my loss for me via external trainable variables (yes, I know this is a silly example and a silly loss; it is only for reproducibility, the actual loss is much more complex):

import numpy as np
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
from tensorflow.keras.layers import Dense,Layer,Input
from tensorflow.keras import Model
from tensorflow.keras.callbacks import EarlyStopping
import tensorflow as tf

n_col = 10
n_row = 1000
X = np.random.normal(size=(n_row,n_col))
beta = np.arange(10)
y = X @ beta

X_train,X_test,y_train,y_test = train_test_split(X,y,test_size=0.2,random_state=42)

class MyLoss(Layer):
    def __init__(self,var1,var2):
        super(MyLoss,self).__init__()
        self.var1 = tf.Variable(var1)
        self.var2 = tf.Variable(var2)

    def get_vars(self):
        return self.var1,self.var2

    def custom_loss(self,y_true,y_pred):
        return self.var1 ** 2 * tf.math.reduce_mean(tf.math.square(y_true-y_pred)) + self.var2 ** 2

    def call(self,y_true,y_pred):
        self.add_loss(self.custom_loss(y_true,y_pred))
        return y_pred


inputs = Input(shape=(X_train.shape[1],))
y_input = Input(shape=(1,))
hidden1 = Dense(10)(inputs)
output = Dense(1)(hidden1)
my_loss = MyLoss(0.5,0.5)(y_input,output) # var1 and var2 can also be initialized here
model = Model(inputs=[inputs,y_input],outputs=my_loss)

model.compile(optimizer='adam') # no loss passed here; MyLoss adds it via add_loss

Training this model is straightforward:

history = model.fit([X_train,y_train],None,batch_size=32,epochs=100,validation_split=0.1,verbose=0,callbacks=[EarlyStopping(monitor='val_loss',patience=5)])

If we write a custom callback, or train epoch by epoch as below, we can see how var1 and var2 converge to 0 as expected (the loss var1**2 * MSE + var2**2 is minimized by driving both variables to 0):

var1_list = []
var2_list = []
for i in range(100):
    if i % 10 == 0:
        print('step %d' % i)
    model.fit([X_train,y_train],None,batch_size=32,epochs=1,verbose=0)
    var1,var2 = model.layers[-1].get_vars()
    var1_list.append(var1.numpy())
    var2_list.append(var2.numpy())

plt.plot(var1_list,label='var1')
plt.plot(var2_list,'r',label='var2')
plt.legend()
plt.show()

(plot: var1 and var2 decreasing towards 0 over the training epochs)

Shorter question: how do I make the model stop, with some EarlyStopping-style patience, based on the convergence of var1 and var2, i.e. of their vector magnitude self.var1**2 + self.var2**2? (And again, assume the loss is much more complex, so you can't just add this magnitude to the loss.)

Longer question: (if you have the time/patience)

  • Is it possible to implement a custom Metric and have EarlyStopping track it? (one possible direction is sketched after this list)
  • In that case, how would you make mode focus on "convergence", when all EarlyStopping accepts is "min" or "max"? (I wonder whether we could extend EarlyStopping rather than extending Callback)
  • Can we do this with a custom callback, without a metric?
  • How would we combine this with the custom loss above, telling EarlyStopping to watch both, i.e. "stop if you see no improvement in the loss AND no further movement in the variables, with patience = 10"?
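
For reference on the first two bullets, here is an untested sketch of one possible direction (my own assumption, not part of the solution below): tf.keras passes the same logs dict to each callback in list order, so a small callback can inject the norm into logs, and the stock EarlyStopping with mode='min' and a min_delta then acts as a convergence test for a shrinking norm. VarsNormLogger is a hypothetical name:

import tensorflow as tf

class VarsNormLogger(tf.keras.callbacks.Callback):
    # Hypothetical helper: write the squared norm of (var1,var2) into the
    # shared logs dict so that downstream callbacks can monitor it.
    def on_epoch_end(self,epoch,logs=None):
        var1,var2 = self.model.layers[-1].get_vars()
        if logs is not None:
            logs['vars_norm'] = (var1 ** 2 + var2 ** 2).numpy()

# Placed after the logger, EarlyStopping sees 'vars_norm' like any metric;
# min_delta makes 'min' mode stop once the norm stops shrinking.
callbacks = [VarsNormLogger(),
             tf.keras.callbacks.EarlyStopping(monitor='vars_norm',mode='min',min_delta=0.01,patience=5)]

Note this only detects a norm that has stopped decreasing, not convergence to an arbitrary value; the custom callback in the solution below handles that more directly.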

Solution

Well, at least for the "shorter question", this turned out to be fairly simple: following this example from the TF docs, we implement EarlyStopping by watching the variables' norm:

class EarlyStoppingAtVarsConvergence(tf.keras.callbacks.Callback):
    def __init__(self,norm_thresh=0.01,patience=0):
        super(EarlyStoppingAtVarsConvergence,self).__init__()
        self.norm_thresh = norm_thresh
        self.patience = patience

    def on_train_begin(self,logs=None):
        # Number of epochs waited while the norm has not yet converged.
        self.wait = 0
        # The epoch the training stops at.
        self.stopped_epoch = 0
        # Initialize the variables' norm.
        self.vars_norm = self.get_vars_norm()

    def get_vars_norm(self):
        var1,var2 = self.model.layers[-1].get_vars()
        return var1**2 + var2**2
    
    def on_epoch_end(self,epoch,logs=None):
        current_norm = self.get_vars_norm()
        if np.abs(current_norm - self.vars_norm) > self.norm_thresh:
            self.vars_norm = current_norm
            self.wait = 0
        else:
            self.wait += 1
            if self.wait >= self.patience:
                self.stopped_epoch = epoch
                self.model.stop_training = True

    def on_train_end(self,logs=None):
        if self.stopped_epoch > 0:
            print("Epoch %05d: early stopping" % (self.stopped_epoch + 1))

Then the model is run with:

history = model.fit([X_train,y_train],None,batch_size=32,epochs=100,validation_split=0.1,verbose=0,callbacks=[EarlyStoppingAtVarsConvergence(patience=5)])
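
As for the last bullet (loss AND convergence): simply passing both EarlyStopping and EarlyStoppingAtVarsConvergence in the callbacks list gives OR semantics, since each callback sets model.stop_training on its own, so training stops as soon as either criterion fires. The AND behaviour seems to need a single callback tracking both conditions. A minimal untested sketch, reusing get_vars() from above (EarlyStoppingOnLossAndConvergence is a hypothetical name):

import numpy as np
import tensorflow as tf

class EarlyStoppingOnLossAndConvergence(tf.keras.callbacks.Callback):
    # Hypothetical sketch: stop only when BOTH the monitored loss and the
    # variables' norm have stalled for `patience` consecutive epochs.
    def __init__(self,monitor='val_loss',min_delta=0.0,norm_thresh=0.01,patience=0):
        super(EarlyStoppingOnLossAndConvergence,self).__init__()
        self.monitor = monitor
        self.min_delta = min_delta
        self.norm_thresh = norm_thresh
        self.patience = patience

    def on_train_begin(self,logs=None):
        self.wait = 0
        self.best_loss = np.inf
        self.vars_norm = self.get_vars_norm()

    def get_vars_norm(self):
        var1,var2 = self.model.layers[-1].get_vars()
        return (var1 ** 2 + var2 ** 2).numpy()

    def on_epoch_end(self,epoch,logs=None):
        current_loss = (logs or {}).get(self.monitor)
        current_norm = self.get_vars_norm()
        loss_improved = (current_loss is not None
                         and current_loss < self.best_loss - self.min_delta)
        norm_moved = np.abs(current_norm - self.vars_norm) > self.norm_thresh
        if loss_improved:
            self.best_loss = current_loss
        if norm_moved:
            self.vars_norm = current_norm
        if loss_improved or norm_moved:
            self.wait = 0  # at least one signal is still moving
        else:
            self.wait += 1
            if self.wait >= self.patience:
                self.model.stop_training = True

history = model.fit([X_train,y_train],None,batch_size=32,epochs=100,validation_split=0.1,verbose=0,callbacks=[EarlyStoppingOnLossAndConvergence(patience=10)])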