How to fix "RuntimeError: Trying to eval in EAGER mode" when using a custom learning rate?

Problem description

I am using a custom learning rate scheduler, but while training I run into the error RuntimeError: Trying to eval in EAGER mode.

I wrote a function that computes the learning rate for each epoch, and I pass it to the model through the LearningRateScheduler() callback that TensorFlow provides.

Here is my code for the problem above.

Model architecture:

import numpy as np
import tensorflow as tf
from tensorflow.keras import models
from tensorflow.keras.layers import Conv2D, MaxPool2D, Dropout, Flatten, Dense

model_learning_rate = models.Sequential()
model_learning_rate.add(Conv2D(16, kernel_size=5, activation='relu', input_shape=(64,64,1)))
model_learning_rate.add(MaxPool2D())
model_learning_rate.add(Dropout(0.4))
model_learning_rate.add(Conv2D(32, kernel_size=3, activation='relu'))  # kernel_size is required by Conv2D
model_learning_rate.add(MaxPool2D())
model_learning_rate.add(Dropout(0.4))
model_learning_rate.add(Conv2D(64, kernel_size=3, activation='relu'))
model_learning_rate.add(MaxPool2D())
model_learning_rate.add(Dropout(0.4))
model_learning_rate.add(Flatten())
model_learning_rate.add(Dense(256, activation='relu'))
model_learning_rate.add(Dense(62, activation='softmax'))
model_learning_rate.compile(loss='categorical_crossentropy',
                            optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),  # 'lr' is deprecated
                            metrics=['accuracy'])

Custom learning rate function:

def lr_schedule(epoch,lr):
    """Learning Rate Schedule
    # Arguments
        epoch (int): index of the current epoch
    # Returns
        lr (float32): learning rate
    """
    global K1,K2

    Kz = 0.  # max penultimate activation
    S = 0.
    
    sess = tf.compat.v1.keras.backend.get_session()
    max_wt = 0.
    for weight in model_learning_rate.weights:
        # weight.eval() requires a graph-mode session; under TF2 eager
        # execution this is the line that raises "Trying to eval in EAGER mode"
        norm = np.linalg.norm(weight.eval(sess))
        if norm > max_wt:
            max_wt = norm
    
    for i in range((len(X_train) - 1) // batch_size + 1):
        start_i = i * batch_size
        end_i = start_i + batch_size
        xb = X_train[start_i:end_i]
        
        tmp = np.array(func([xb]))  # func: a backend function returning the penultimate activations (defined elsewhere)
        activ = np.linalg.norm(tmp)
        sq = np.linalg.norm(np.square(tmp))

        if sq > S:
            S = sq
        
        if activ > Kz:
            Kz = activ

    K_ = ((num_classes - 1) * Kz) / (num_classes * batch_size) #+ 1e-4 * max_wt
    S_ = (num_classes - 1) ** 2 / (num_classes * batch_size) ** 2 * S #+ 2e-4 * max_wt * ((num_classes - 1) * Kz) / (num_classes * batch_size)
    
    K1 = beta1 * K1 + (1 - beta1) * K_
    K2 = beta2 * K2 + (1 - beta2) * S_

    lr = (np.sqrt(K2) + K.epsilon()) / K1
    print('S =', S, ', K1 =', K1, ', K2 =', K2, ', K_ =', K_, ', lr =', lr)
    lrs.append(lr)
    print('Epoch',epoch,'LR =',lr)
    return lr
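To check the update rule at the end of the function in isolation, here is a minimal numpy sketch of one K1/K2 update step; the values beta1 = 0.9, beta2 = 0.999, the epsilon, and the initial K1, K2, K_, S_ are assumed purely for illustration:

```python
import numpy as np

beta1, beta2 = 0.9, 0.999   # assumed EMA decay rates
eps = 1e-7                  # stand-in for K.epsilon()

def lr_step(K_, S_, K1, K2):
    """One EMA update of K1/K2 and the resulting learning rate."""
    K1 = beta1 * K1 + (1 - beta1) * K_
    K2 = beta2 * K2 + (1 - beta2) * S_
    lr = (np.sqrt(K2) + eps) / K1
    return lr, K1, K2

# One step starting from assumed state K1 = K2 = 1.0
lr, K1, K2 = lr_step(K_=2.0, S_=4.0, K1=1.0, K2=1.0)
print(lr, K1, K2)
```

Note that lr grows when the second-moment estimate K2 dominates and shrinks as the first-moment estimate K1 grows, mirroring the ratio computed inside lr_schedule.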

Using the scheduler:

lr_scheduler = LearningRateScheduler(lr_schedule)
history_learning_rate = model_learning_rate.fit(datagen.flow(X_train, y_train, batch_size=8), epochs=30, steps_per_epoch=len(X_train) // 8, validation_data=(X_test, y_test), callbacks=[lr_scheduler])

Workaround

Try suppressing it with this, right after importing tensorflow:

tf.compat.v1.disable_v2_behavior()
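Note that this flag forces graph mode globally, which disables all TF2 eager features. An alternative is to keep eager execution and read weight values with .numpy() instead of weight.eval(session). A minimal sketch of the weight-norm loop, assuming TF2 with eager execution enabled (the tiny Dense model here is a stand-in for model_learning_rate):

```python
import numpy as np
import tensorflow as tf

# Tiny stand-in model; in the question this would be model_learning_rate.
model = tf.keras.Sequential([tf.keras.layers.Dense(4, input_shape=(3,))])

# Eager-mode replacement for the session-based loop:
# weight.numpy() returns the variable's value directly, no session needed.
max_wt = 0.
for weight in model.weights:
    norm = np.linalg.norm(weight.numpy())
    if norm > max_wt:
        max_wt = norm
print('max weight norm:', max_wt)
```

With this change, the sess = tf.compat.v1.keras.backend.get_session() line and the disable_v2_behavior() workaround are no longer needed.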