Unable to reduce the loss in an optical-flow model (FlowNet)

Problem description

I have been trying to reproduce the DL architecture called FlowNet, shown in the figure below.

[Figure: FlowNetSimple architecture]

This model is called FlowNetSimple. Its refinement structure is shown below.

[Figure: refinement network]

I have built the model layers using Keras with the TensorFlow backend. The error I am minimizing is the mean squared error loss. The model code I used is:

    #### Imports assumed for this snippet (Keras with the TensorFlow backend)
    from tensorflow.keras.layers import (Input, Lambda, Conv2D, BatchNormalization,
                                         Dropout, MaxPooling2D, Conv2DTranspose,
                                         Concatenate)
    from tensorflow.keras.models import Model

    #### The input layer, normalized between 0 and 1
    inputs = Input((IMG_HEIGHT, IMG_WIDTH, IMG_CHANNELS*2))
    inputs_norm = Lambda(lambda x: x / 255)(inputs)

    # Downscaling the input (Encoder)
    # Note: Conv2D layers that were missing a kernel size use (3, 3) here, and
    # Conv2DTranspose layers missing their arguments use a (2, 2) kernel with
    # strides of (2, 2), so that the skip-connection resolutions line up.

    # Convolutional layer 1
    conv1 = Conv2D(64, (7, 7), activation='elu', kernel_initializer='he_normal', padding='same')(inputs_norm)
    conv1 = BatchNormalization()(conv1)
    conv1 = Dropout(0.1)(conv1)
    conv1_pool = MaxPooling2D((2, 2))(conv1)

    # Convolutional layer 1_a
    conv1_1 = Conv2D(64, (3, 3), padding='same')(conv1_pool)
    conv1_1 = BatchNormalization()(conv1_1)
    conv1_1 = Dropout(0.1)(conv1_1)

    # Convolutional layer 2
    conv2 = Conv2D(128, (5, 5), padding='same')(conv1_1)
    conv2 = BatchNormalization()(conv2)
    conv2 = Dropout(0.1)(conv2)
    conv2_pool = MaxPooling2D((2, 2))(conv2)

    # Convolutional layer 2_a
    conv2_1 = Conv2D(128, (3, 3), padding='same')(conv2_pool)
    conv2_1 = BatchNormalization()(conv2_1)
    conv2_1 = Dropout(0.1)(conv2_1)

    # Convolutional layer 3
    conv3 = Conv2D(256, (3, 3), padding='same')(conv2_1)
    conv3 = BatchNormalization()(conv3)
    conv3 = Dropout(0.1)(conv3)
    conv3_pool = MaxPooling2D((2, 2))(conv3)

    # Convolutional layer 3_a
    conv3_1 = Conv2D(256, (3, 3), padding='same')(conv3_pool)
    conv3_1 = BatchNormalization()(conv3_1)
    conv3_1 = Dropout(0.1)(conv3_1)

    # Convolutional layer 4
    conv4 = Conv2D(512, (3, 3), padding='same')(conv3_1)
    conv4 = BatchNormalization()(conv4)
    conv4 = Dropout(0.1)(conv4)
    conv4_pool = MaxPooling2D((2, 2))(conv4)

    # Convolutional layer 4_a
    conv4_1 = Conv2D(512, (3, 3), padding='same')(conv4_pool)
    conv4_1 = BatchNormalization()(conv4_1)
    conv4_1 = Dropout(0.1)(conv4_1)

    # Convolutional layer 5
    conv5 = Conv2D(512, (3, 3), padding='same')(conv4_1)
    conv5 = BatchNormalization()(conv5)
    conv5 = Dropout(0.1)(conv5)
    conv5_pool = MaxPooling2D((2, 2))(conv5)

    # Convolutional layer 5_a
    conv5_1 = Conv2D(512, (3, 3), padding='same')(conv5_pool)
    conv5_1 = BatchNormalization()(conv5_1)
    conv5_1 = Dropout(0.1)(conv5_1)
    conv5_1 = Conv2D(512, (3, 3), padding='same')(conv5_1)
    conv5_1 = BatchNormalization()(conv5_1)

    # Convolutional layer 6
    conv6 = Conv2D(1024, (3, 3), padding='same')(conv5_1)
    conv6 = BatchNormalization()(conv6)
    conv6 = Dropout(0.1)(conv6)
    conv6_pool = MaxPooling2D((2, 2))(conv6)

    # Convolutional layer 6_a
    conv6_1 = Conv2D(1024, (3, 3), padding='same')(conv6_pool)
    conv6_1 = BatchNormalization()(conv6_1)
    conv6_1 = Dropout(0.1)(conv6_1)

    # Upscaling the extracted features (Decoder)
    # Deconvolution layer 5
    deconv5 = Conv2DTranspose(512, (2, 2), strides=(2, 2), padding='same')(conv6_1)
    deconv5 = Concatenate()([deconv5, conv5_1])
    flow5 = Conv2DTranspose(3, (2, 2), strides=(2, 2), padding='same')(deconv5)
    deconv5 = Conv2D(512, (3, 3), padding='same')(deconv5)
    deconv5 = Dropout(0.1)(deconv5)
    deconv5 = Conv2D(512, (3, 3), padding='same')(deconv5)

    # Deconvolution layer 4
    deconv4 = Conv2DTranspose(256, (2, 2), strides=(2, 2), padding='same')(deconv5)
    deconv4 = Concatenate()([deconv4, conv4_1, flow5])
    flow4 = Conv2DTranspose(3, (2, 2), strides=(2, 2), padding='same')(deconv4)
    deconv4 = Conv2D(256, (3, 3), padding='same')(deconv4)
    deconv4 = Dropout(0.1)(deconv4)
    deconv4 = Conv2D(256, (3, 3), padding='same')(deconv4)

    # Deconvolution layer 3
    deconv3 = Conv2DTranspose(128, (2, 2), strides=(2, 2), padding='same')(deconv4)
    deconv3 = Concatenate()([deconv3, conv3_1, flow4])
    flow3 = Conv2DTranspose(3, (2, 2), strides=(2, 2), padding='same')(deconv3)
    deconv3 = Conv2D(128, (3, 3), padding='same')(deconv3)
    deconv3 = Dropout(0.1)(deconv3)
    deconv3 = Conv2D(128, (3, 3), padding='same')(deconv3)

    # Deconvolution layer 2
    deconv2 = Conv2DTranspose(64, (2, 2), strides=(2, 2), padding='same')(deconv3)
    deconv2 = Concatenate()([deconv2, conv2_1, flow3])
    flow2 = Conv2DTranspose(3, (2, 2), strides=(2, 2), padding='same')(deconv2)
    deconv2 = Conv2D(64, (3, 3), padding='same')(deconv2)
    deconv2 = Dropout(0.1)(deconv2)
    deconv2 = Conv2D(64, (3, 3), padding='same')(deconv2)

    # Deconvolution layer 1
    deconv1 = Conv2DTranspose(32, (2, 2), strides=(2, 2), padding='same')(deconv2)
    deconv1 = Concatenate()([deconv1, conv1_1, flow2])
    flow1 = Conv2DTranspose(16, (2, 2), strides=(2, 2), padding='same')(deconv1)
    deconv1 = Conv2D(32, (3, 3), padding='same')(flow1)
    deconv1 = Dropout(0.1)(deconv1)
    deconv1 = Conv2D(32, (3, 3), padding='same')(deconv1)

    outputs = Conv2D(3, (1, 1), activation='sigmoid')(deconv1)

    model = Model(inputs=[inputs], outputs=[outputs])
    model.compile(optimizer='adam', loss='mean_squared_error', metrics=['accuracy'])
    model.summary()

The input here is a pair of RGB images taken from consecutive video frames containing moving objects, and the output is the optical-flow image representation of the two overlaid frames. The problem I face is that the loss does not decrease during training even though the accuracy improves. Also, when I run a prediction on an image, the returned image is completely white. I am not sure whether the model I coded is correct. Please point out if I am missing something here.
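One thing worth double-checking, given the all-white predictions: the final layer is a sigmoid (output range 0 to 1), while only the inputs are divided by 255. If the flow targets are still on a 0-255 scale, mean squared error is minimized by saturating every output at 1.0, which renders as pure white. A small NumPy sketch (with hypothetical target shapes, not the actual training data) illustrates the effect:

```python
import numpy as np

np.random.seed(0)

# Hypothetical flow targets stored as 0..255 image values.
targets = np.random.randint(0, 256, size=(4, 8, 8, 3)).astype(np.float32)

def mse_of_constant(pred_value, y):
    """MSE if the network predicted the same constant everywhere."""
    return float(np.mean((pred_value - y) ** 2))

# The best constant prediction a sigmoid output can produce lies in [0, 1]:
candidates = np.linspace(0.0, 1.0, 101)
best = min(candidates, key=lambda c: mse_of_constant(c, targets))
print(best)  # 1.0 -- the output saturates at the top of the sigmoid (white)

# Normalizing the targets the same way as the inputs removes the mismatch:
targets_norm = targets / 255.0
best_norm = min(candidates, key=lambda c: mse_of_constant(c, targets_norm))
print(best_norm)  # near the mean of the normalized targets, inside (0, 1)
```

With normalized targets the MSE optimum sits comfortably inside the sigmoid's range, so the gradient no longer pushes every pixel toward white.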

The idea behind optical flow is explained on this Wiki page. (Link)
