TensorFlow v2 gradients not showing in TensorBoard histograms

Problem description

I have a simple neural network and I'm trying to plot the gradients with TensorBoard, using a callback as shown below:

class GradientCallback(tf.keras.callbacks.Callback):
    console = False
    count = 0
    run_count = 0

    def on_epoch_end(self, epoch, logs=None):
        weights = [w for w in self.model.trainable_weights if 'dense' in w.name and 'bias' in w.name]
        self.run_count += 1
        run_dir = logdir + "/gradients/run-" + str(self.run_count)
        with tf.summary.create_file_writer(run_dir).as_default(), tf.GradientTape() as g:
            # use test data to calculate the gradients
            _x_batch = test_images_scaled_reshaped[:100]
            _y_batch = test_labels_enc[:100]
            g.watch(_x_batch)
            _y_pred = self.model(_x_batch)  # forward propagation
            per_sample_losses = tf.keras.losses.categorical_crossentropy(_y_batch, _y_pred)
            average_loss = tf.reduce_mean(per_sample_losses)  # compute the loss value
            gradients = g.gradient(average_loss, self.model.weights)  # compute the gradients

        for t in gradients:
            tf.summary.histogram(str(self.count), data=t)
            self.count += 1
            if self.console:
                print('Tensor: {}'.format(t.name))
                print('{}\n'.format(K.get_value(t)[:10]))

# Set up logging
!rm -rf ./logs/  # clear old logs
from datetime import datetime
import os
root_logdir = "logs"
run_id = datetime.now().strftime("%Y%m%d-%H%M%S")
logdir = os.path.join(root_logdir, run_id)


# register callbacks; these will be used for TensorBoard later
callbacks = [
    tf.keras.callbacks.TensorBoard(log_dir=logdir, histogram_freq=1, write_images=True, write_grads=True),
    GradientCallback()
]
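(Worth noting, beyond the original post: in TF 2.x the built-in TensorBoard callback ignores write_grads and only emits a deprecation warning, which is why a custom callback is needed to log gradient histograms in the first place.)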

Then I pass these callbacks to the fit call:

network.fit(train_pipe, epochs=epochs, batch_size=batch_size, validation_data=val_pipe, callbacks=callbacks)

Now when I check TensorBoard, I can see the gradient tags in the filter on the left, but nothing shows up in the Histograms tab:

[Screenshot: TensorBoard Histograms tab with the gradient runs listed but no histogram data shown]

What am I missing here? Am I logging the gradients correctly?

Solution

The problem seems to be that you are writing the histograms outside the context of the tf summary writer. I have changed your code accordingly, but I have not tested it.
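As a side illustration (my own minimal sketch, not part of the original answer): in TF2, tf.summary.histogram records nothing unless a writer has been installed as the default via as_default(), and it also needs a step value; the log directory below is just a placeholder for demonstration:

import tensorflow as tf

writer = tf.summary.create_file_writer("logs/ctx-demo")  # hypothetical directory

with writer.as_default():
    tf.summary.histogram("inside", tf.random.normal([100]), step=0)   # recorded

tf.summary.histogram("outside", tf.random.normal([100]), step=0)     # silently dropped, no error

With that in mind, the adjusted callback: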

class GradientCallback(tf.keras.callbacks.Callback):
    console = False
    count = 0
    run_count = 0

    def on_epoch_end(self, epoch, logs=None):
        weights = [w for w in self.model.trainable_weights if 'dense' in w.name and 'bias' in w.name]
        self.run_count += 1
        run_dir = logdir + "/gradients/run-" + str(self.run_count)
        with tf.summary.create_file_writer(run_dir).as_default():
            with tf.GradientTape() as g:
                # use test data to calculate the gradients
                _x_batch = test_images_scaled_reshaped[:100]
                _y_batch = test_labels_enc[:100]
                g.watch(_x_batch)
                _y_pred = self.model(_x_batch)  # forward propagation
                per_sample_losses = tf.keras.losses.categorical_crossentropy(_y_batch, _y_pred)
                average_loss = tf.reduce_mean(per_sample_losses)  # compute the loss value
            # compute the gradients after the tape context has closed
            gradients = g.gradient(average_loss, self.model.weights)

            # still inside the writer context, so the histograms are actually recorded
            for nr, grad in enumerate(gradients):
                tf.summary.histogram(str(nr), data=grad, step=epoch)  # TF2 requires a step
                if self.console:
                    print('Gradient {}'.format(nr))
                    print('{}\n'.format(grad.numpy()[:10]))
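After fitting, you can point TensorBoard at the root log directory to check that the histograms now appear (a standard notebook invocation, not from the original post):

%load_ext tensorboard
%tensorboard --logdir logs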
