How to generate a new random vector for every batch in TensorFlow Keras?

Problem description

In the code below, the vector rand is initialized once, when I first call the function create_model():

import tensorflow as tf
from tensorflow.keras import layers as tfl

def create_model(num_columns):
    inp_layer = tfl.Input((num_columns,))
    # random 0/1 mask, generated only once when the model is built
    rand = tf.cast(tf.random.uniform((1, num_columns), minval=0, maxval=2, dtype=tf.int32), tf.float32)
    inp_rand = tfl.Multiply()([inp_layer, rand])
    dense = tfl.Dense(256, activation='relu')(inp_rand)
    dense = tfl.Dense(128, activation='relu')(dense)
    dense = tfl.Dense(64, activation='sigmoid')(dense)
    model = tf.keras.Model(inputs=inp_layer, outputs=dense)
    model.compile(optimizer='adam', loss='binary_crossentropy')
    return model

model = create_model(num_columns)
model.fit()

I would like it to be regenerated with new random values every time model.fit() is called, or even better, on every batch during the execution of model.fit().

Do you know how I could do that?

Solution

You can change what happens inside the call() method of a subclassed Keras model.

    def call(self, x, training=None, **kwargs):
        # a fresh random 0/1 mask is drawn on every forward pass, i.e. for every batch
        rand = tf.cast(tf.random.uniform((1, *x.shape[1:]), minval=0, maxval=2, dtype=tf.int32), tf.float32)
        x = tf.multiply(x, rand)
        x = self.conv1(x)
        x = self.maxp1(x)
        x = self.conv2(x)
        x = self.maxp2(x)
        x = self.flatt(x)
        x = self.dens1(x)
        x = self.drop1(x)
        x = self.dens2(x)
        return x

Here I'm doing this with MNIST, multiplying the input tensor by a random tensor of the same shape inside call():

import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
import tensorflow as tf
import tensorflow_datasets as tfds
from tensorflow import keras as K
from tensorflow.keras.layers import Conv2D,Flatten,Dense,MaxPooling2D,Dropout
from tensorflow import nn as nn
from functools import partial

dataset,info = tfds.load('mnist',with_info=True)

train,test = dataset['train'],dataset['test']

def prepare(dataset):
    inputs = tf.divide(x=dataset['image'],y=255)
    targets = tf.one_hot(indices=dataset['label'],depth=10)
    return inputs,targets

train = train.take(5_000).batch(4).map(prepare)
test = test.take(1_000).batch(4).map(prepare)

class MyCNN(K.Model):
    def __init__(self):
        super(MyCNN,self).__init__()
        Conv = partial(Conv2D,kernel_size=(3,3),activation=nn.relu)
        MaxPool = partial(MaxPooling2D,pool_size=(2,2))

        self.conv1 = Conv(filters=8)
        self.maxp1 = MaxPool()
        self.conv2 = Conv(filters=16)
        self.maxp2 = MaxPool()
        self.flatt = Flatten()
        self.dens1 = Dense(64,activation=nn.relu)
        self.drop1 = Dropout(.5)
        self.dens2 = Dense(10,activation=nn.softmax)

    def call(self, x, training=None, **kwargs):
        # regenerate the random 0/1 mask for every batch
        rand = tf.cast(tf.random.uniform((1, *x.shape[1:]), minval=0, maxval=2, dtype=tf.int32), tf.float32)
        x = tf.multiply(x, rand)
        x = self.conv1(x)
        x = self.maxp1(x)
        x = self.conv2(x)
        x = self.maxp2(x)
        x = self.flatt(x)
        x = self.dens1(x)
        x = self.drop1(x)
        x = self.dens2(x)
        return x

model = MyCNN()

model.compile(loss='categorical_crossentropy',optimizer='adam',metrics=['accuracy'])

model.fit(train,validation_data=test,epochs=10,steps_per_epoch=1250,validation_steps=250)
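
The same pattern can be transferred back to the tabular model from the question. Below is a minimal sketch, reusing the imports from the example above and assuming num_columns and a suitable (features, labels) dataset are defined elsewhere; the class name MaskedMLP is purely illustrative. Because the mask is drawn inside call(), a new one is generated for every batch:

class MaskedMLP(K.Model):
    def __init__(self, num_columns):
        super(MaskedMLP, self).__init__()
        self.num_columns = num_columns
        self.dens1 = Dense(256, activation=nn.relu)
        self.dens2 = Dense(128, activation=nn.relu)
        self.dens3 = Dense(64, activation=nn.sigmoid)

    def call(self, x, training=None, **kwargs):
        # new random 0/1 mask for every forward pass, i.e. every batch
        rand = tf.cast(tf.random.uniform((1, self.num_columns), minval=0, maxval=2, dtype=tf.int32), tf.float32)
        x = tf.multiply(x, rand)
        x = self.dens1(x)
        x = self.dens2(x)
        x = self.dens3(x)
        return x

model = MaskedMLP(num_columns)
model.compile(optimizer='adam', loss='binary_crossentropy')

Note that with shape (1, num_columns) the same mask is shared by all samples within a batch; using tf.shape(x)[0] as the first dimension instead would draw a different mask for every sample.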

What I was trying to implement here is actually a Dropout layer, so there is no point in searching any further.
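
For completeness, a Dropout layer in the original functional model gives that behaviour directly. A minimal sketch follows; the rate of 0.5 is an arbitrary illustrative choice, and note that Dropout also rescales the surviving inputs by 1/(1 - rate) during training, unlike the plain 0/1 mask above:

import tensorflow as tf
from tensorflow.keras import layers as tfl

def create_model(num_columns, rate=0.5):
    inp_layer = tfl.Input((num_columns,))
    # Dropout zeroes a random fraction `rate` of the inputs on every batch
    # (only while training, i.e. during model.fit())
    inp_rand = tfl.Dropout(rate)(inp_layer)
    dense = tfl.Dense(256, activation='relu')(inp_rand)
    dense = tfl.Dense(128, activation='relu')(dense)
    dense = tfl.Dense(64, activation='sigmoid')(dense)
    model = tf.keras.Model(inputs=inp_layer, outputs=dense)
    model.compile(optimizer='adam', loss='binary_crossentropy')
    return model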