How should I train a model for image segmentation?

Problem description

I am working on an image segmentation task: given a spliced image, I need to locate which part is original and which is spliced, so every image has two classes, the authentic region and the spliced region. For the dataset I am using CASIA v1.0 together with its ground-truth masks. I use a VGG-16 model as the backbone of an FCN-8 model. Here is the code for the model:

import tensorflow as tf

model = tf.keras.applications.VGG16(include_top=False, weights='imagenet', input_shape=(224, 224, 3))

x = tf.keras.layers.Conv2D(4096, (7, 7), padding="SAME", activation="relu", name="fc5")(model.layers[-1].output)
x = tf.keras.layers.Conv2D(4096, (1, 1), name="fc6")(x)
x = tf.keras.layers.Conv2D(2, (1, 1), name="score_fr")(x)
Conv_size = x.shape[2]  # 7 for a 224x224 input (16 if the input size were 512)
x = tf.keras.layers.Conv2DTranspose(2, kernel_size=(4, 4), strides=(2, 2), padding="valid", activation=None, name="score2")(x)
Deconv_size = x.shape[2]
Extra = Deconv_size - 2 * Conv_size

x = tf.keras.layers.Cropping2D(cropping=((0, Extra), (0, Extra)))(x)
model1 = tf.keras.Model(inputs=model.input, outputs=x)

skip_conv1 = tf.keras.layers.Conv2D(2, kernel_size=(1, 1), padding="SAME", name="score_pool4")
summed = tf.keras.layers.Add()([skip_conv1(model1.layers[14].output), model1.layers[-1].output])  # layers[14] is block4_pool

x = tf.keras.layers.Conv2DTranspose(2, kernel_size=(4, 4), strides=(2, 2), padding="valid", activation=None, name="score4")(summed)
x = tf.keras.layers.Cropping2D(cropping=((0, 2), (0, 2)))(x)

skip_con2 = tf.keras.layers.Conv2D(2, kernel_size=(1, 1), padding="same", name="score_pool3")
Summed = tf.keras.layers.Add()([skip_con2(model.layers[10].output), x])  # layers[10] is block3_pool

Up = tf.keras.layers.Conv2DTranspose(2, kernel_size=(16, 16), strides=(8, 8), padding="valid", activation=None, name="upsample")(Summed)
final = tf.keras.layers.Cropping2D(cropping=((0, 8), (0, 8)))(Up)

final_model = tf.keras.Model(inputs=model.input, outputs=final)
final_model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=['accuracy'])
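With a 224x224 input, the successive crops should bring the logits back to the input resolution (7 -> 14 -> 28 -> 224 along each spatial axis). As a quick sanity check (a sketch added here, not part of the original post):

print(final_model.output_shape)  # expected: (None, 224, 224, 2)
final_model.summary()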

My images and masks are in separate folders: train and train_label for training, val and val_label for validation. I am using ImageDataGenerator for image augmentation. Here is the code:

from keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(
        rescale=1./255,shear_range=0.2,zoom_range=0.2,horizontal_flip=True)
        
val_datagen = ImageDataGenerator(rescale=1./255)
train_image_generator = train_datagen.flow_from_directory("final/train/",target_size=(224,224),batch_size = 32 )

train_mask_generator = train_datagen.flow_from_directory("final/train_label/",batch_size = 32)

val_image_generator = val_datagen.flow_from_directory("final/val/",batch_size = 32)


val_mask_generator = val_datagen.flow_from_directory("final/val_label/",batch_size = 32)



train_generator = (pair for pair in zip(train_image_generator,train_mask_generator))
val_generator = (pair for pair in zip(val_image_generator,val_mask_generator))
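One thing worth noting (a diagnostic sketch, not part of the original post): with the default class_mode='categorical', each flow_from_directory iterator yields an (images, labels) tuple, so every pair produced by the zipped generators contains tuples rather than plain arrays. Pulling a single batch makes this visible:

image_batch, mask_batch = next(train_generator)
print(type(image_batch), len(image_batch))   # an (images, labels) tuple, not a single array
print(image_batch[0].shape, mask_batch[0].shape)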

When I try to train the model with the generator, I get the following error:

model_history = final_model.fit(train_generator,epochs = 50,steps_per_epoch = 23,validation_data = val_generator,validation_steps = 2)
ValueError: Error when checking model target: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 1 array(s),but instead got the following list of 2 arrays: [array([[[[0.,0.,0.],[0.,...,0.]],[[0.,...
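The error message says the model received a list of two arrays as its target, which is consistent with the mask generator yielding (mask images, class labels) tuples. For segmentation, the mask generator is usually created with class_mode=None so that only the mask arrays are yielded; the target_size, color_mode and seed below are assumptions for illustration, not settings confirmed by the post:

train_mask_generator = train_datagen.flow_from_directory(
    "final/train_label/",
    target_size=(224, 224),   # assumed to match the image generator
    color_mode="grayscale",   # assumed single-channel masks
    class_mode=None,          # yield arrays only, no class labels
    batch_size=32,
    seed=1)                   # the same seed would also be needed on the image generator to keep pairs aligned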

Can you help me? I am new to image segmentation, so any suggestions are welcome.

Solution

No working solution has been posted for this question yet.

