GAN: generate a regression output from a real image instead of from random noise

Problem description

Can this concept be implemented with a GAN?

I want the GAN to generate a regression output (G-Value) of shape (4,) from a real image rather than from random noise, and then compare the G-Value with the real regression value (R-Value), which has the same shape (4,). The R-Value comes from the "y-train" dataset.

In other words, if an image contains a circular pattern, it typically has four features: the x, y, z and alpha positions. I call these the real values (R-Values), and I want the GAN to produce fake values (G-Values) that fool the discriminator.
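For illustration, here is a minimal sketch of the intended data flow, assuming a hypothetical 128x128x1 grayscale input (matching the input_shape used in the code below) and a 4-value regression target; the array contents are made up:

import numpy as np

# One hypothetical training pair: a 128x128 grayscale image and its
# four real regression targets (x, y, z, alpha). Values are invented.
image = np.random.rand(1, 128, 128, 1).astype("float32")            # shape (1, 128, 128, 1)
r_value = np.array([[-0.19, 0.21, -0.03, -0.01]], dtype="float32")  # shape (1, 4)

# Intended roles:
#   generator(image)        -> G-Value, shape (1, 4)
#   discriminator(r_value)  -> probability that a real 4-vector is real
#   discriminator(g_value)  -> probability that a generated 4-vector is real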

I tried to implement it as follows:

import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Conv2D, BatchNormalization, LeakyReLU, Dropout, Flatten, Dense
from tensorflow.keras.losses import BinaryCrossentropy
from tensorflow.keras.optimizers import Adam


class UTModel:
    def __init__(self):
        optimizer__ = Adam(2e-4)

        self.__dropout = .3

        self.optimizerGenerator = Adam(1e-4)
        self.optimizerdiscriminator = Adam(1e-4)

        self.generator,self.discriminator = self.build()

    def build(self):
        # build the generator: maps a 128x128x1 image to a 4-value regression output
        g = Sequential()
        g.add(Conv2D(512, kernel_size=3, strides=2, input_shape=(128, 128, 1), padding='same'))
        g.add(BatchNormalization(momentum=0.8))
        g.add(LeakyReLU(alpha=0.2))
        g.add(Dropout(self.__dropout))
        # NOTE: kernel_size (and some strides) were missing from the listing below;
        # kernel_size=3 and strides=2 are assumed to match the first layer
        g.add(Conv2D(256, kernel_size=3, strides=2, padding='same'))
        g.add(BatchNormalization(momentum=0.8))
        g.add(LeakyReLU(alpha=0.2))
        g.add(Dropout(self.__dropout))
        g.add(Conv2D(128, kernel_size=3, strides=2, padding='same'))
        g.add(BatchNormalization(momentum=0.8))
        g.add(LeakyReLU(alpha=0.2))
        g.add(Dropout(self.__dropout))
        g.add(Conv2D(64, kernel_size=3, strides=1, padding='same'))
        g.add(BatchNormalization(momentum=0.8))
        g.add(LeakyReLU(alpha=0.2))
        g.add(Dropout(self.__dropout))
        g.add(Flatten())
        g.add(Dense(4, activation='linear'))

        # build the discriminator: takes a 4-vector and outputs the probability that it is real
        d = Sequential()
        d.add(Dense(128, input_shape=(4,)))
        d.add(LeakyReLU(alpha=0.2))
        d.add(Dropout(self.__dropout))
        d.add(Dense(64))
        d.add(LeakyReLU(alpha=0.2))
        d.add(Dropout(self.__dropout))
        d.add(Dense(64))
        d.add(LeakyReLU(alpha=0.2))
        d.add(Dropout(self.__dropout))
        d.add(Dense(32))
        d.add(LeakyReLU(alpha=0.2))
        d.add(Dropout(self.__dropout))
        d.add(Dense(1, activation='sigmoid'))

        return g,d

    def computeLosses(self,rValid,fValid):
        bce = BinaryCrossentropy(from_logits=True)

        # discriminator loss
        rLoss = bce(tf.ones_like(rValid),rValid)
        fLoss = bce(tf.zeros_like(fValid),fValid)
        dLoss = rLoss + fLoss

        # Generator loss
        gLoss = bce(tf.zeros_like(fValid),fValid)

        return dLoss,gLoss

    def train(self,images,rValues):
        with tf.GradientTape() as gTape,tf.GradientTape() as dTape:
            gValues = self.generator(images,training=True)

            rValid = self.discriminator(rValues,training=True)
            fValid = self.discriminator(gValues,training=True)

            dLoss,gLoss = self.computeLosses(rValid,fValid)

        dGradients = dTape.gradient(dLoss,self.discriminator.trainable_variables)
        gGradients = gTape.gradient(gLoss,self.generator.trainable_variables)

        self.optimizerdiscriminator.apply_gradients(zip(dGradients,self.discriminator.trainable_variables))
        self.optimizerGenerator.apply_gradients(zip(gGradients,self.generator.trainable_variables))

        print(dLoss, gLoss)


class UTTrainer:
    def __init__(self):
        # 3DPatterns is my own dataset/environment class (not shown here)
        self.env = 3DPatterns()
        self.model = UTModel()

    def start(self):
        if not self.env.available:
            return

        batch = 32

        for epoch in range(1):
            # start a new episode
            while self.env.setEpisod():
                for i in range(0,self.env.episodelen,batch):
                    self.model.train(self.env.episode[i:i+batch],self.env.y[i:i+batch])

However, the G-Values are not generated as valid values; they always converge to 1 or -1. The correct values should look like [-0.192798, 0.212887, -0.034519, -0.015000]. Please help me find the right approach.
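For comparison, below is a minimal sketch of how the standard (non-saturating) GAN losses are usually written with tf.keras. It mirrors the computeLosses signature above, but it is only an illustration under two assumptions: the generator is scored against labels of 1 (it wants the discriminator to call its output real), and from_logits is set to match the sigmoid activation on the discriminator's last layer.

import tensorflow as tf
from tensorflow.keras.losses import BinaryCrossentropy

def compute_losses_reference(rValid, fValid):
    # The discriminator above ends in a sigmoid, so its outputs are
    # probabilities, not logits.
    bce = BinaryCrossentropy(from_logits=False)

    # Discriminator: real 4-vectors labelled 1, generated ones labelled 0.
    rLoss = bce(tf.ones_like(rValid), rValid)
    fLoss = bce(tf.zeros_like(fValid), fValid)
    dLoss = rLoss + fLoss

    # Generator (non-saturating form): its fake outputs are labelled 1,
    # so the loss falls when the discriminator is fooled.
    gLoss = bce(tf.ones_like(fValid), fValid)

    return dLoss, gLoss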

Thank you.
