Problem description
A CNN model classifies between two classes, with 5974 training samples and 1987 validation samples.
I am using datagen.flow_from_directory, and the model predicts on a separate test set. I am running the code in Google Colab for 200 epochs, but after 5 epochs the training and validation accuracy stop improving.
Accuracy
Epoch 45/200
186/186 [==============================] - 138s 744ms/step - loss: 0.6931 - acc: 0.4983 - val_loss: 0.6931 - val_acc: 0.5000
Epoch 46/200
186/186 [==============================] - 137s 737ms/step - loss: 0.6931 - acc: 0.4990 - val_loss: 0.6931 - val_acc: 0.5000
Epoch 47/200
186/186 [==============================] - 142s 761ms/step - loss: 0.6931 - acc: 0.4987 - val_loss: 0.6931 - val_acc: 0.5000
Epoch 48/200
186/186 [==============================] - 140s 752ms/step - loss: 0.6931 - acc: 0.4993 - val_loss: 0.6931 - val_acc: 0.5005
Epoch 49/200
186/186 [==============================] - 139s 745ms/step - loss: 0.6931 - acc: 0.4976 - val_loss: 0.6931 - val_acc: 0.5010
Epoch 50/200
186/186 [==============================] - 143s 768ms/step - loss: 0.6931 - acc: 0.4992 - val_loss: 0.6931 - val_acc: 0.5000
Epoch 51/200
186/186 [==============================] - 140s 755ms/step - loss: 0.6931 - acc: 0.4980 - val_loss: 0.6931 - val_acc: 0.5000
Epoch 52/200
186/186 [==============================] - 141s 758ms/step - loss: 0.6931 - acc: 0.4990 - val_loss: 0.6931 - val_acc: 0.4995
Epoch 53/200
186/186 [==============================] - 141s 759ms/step - loss: 0.6931 - acc: 0.4985 - val_loss: 0.6931 - val_acc: 0.5000
Epoch 54/200
186/186 [==============================] - 143s 771ms/step - loss: 0.6931 - acc: 0.4987 - val_loss: 0.6931 - val_acc: 0.4995
Epoch 55/200
186/186 [==============================] - 143s 771ms/step - loss: 0.6931 - acc: 0.4992 - val_loss: 0.6931 - val_acc: 0.5005
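The flat loss of 0.6931 is a telling number: it equals ln 2, the cross-entropy of a two-class model whose predictions are stuck at a uniform 50/50 guess. A quick check in plain Python (no Keras required):

```python
import math

# Categorical cross-entropy for one sample is -log(p_true).
# A two-class model stuck at uniform predictions assigns p = 0.5
# to the true class for every sample, so every sample contributes
# -log(0.5) = log(2) to the loss.
stuck_prediction = [0.5, 0.5]
loss = -math.log(stuck_prediction[0])
print(round(loss, 4))  # matches the constant loss in the logs above
```

So the logs say the network is not learning at all, not merely learning slowly.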
# Imports assumed from the Keras API used below (the original snippet omitted them)
from keras.models import Sequential
from keras.layers import Activation, BatchNormalization, Conv2D, Dense, Dropout, Flatten, MaxPooling2D
from keras.preprocessing.image import ImageDataGenerator

train_data_path = "/content/drive/My Drive/snk_tod/train"
valid_data_path = "/content/drive/My Drive/snk_tod/valid"
test_data_path = "/content/drive/My Drive/snk_tod/test"
img_rows = 100
img_cols = 100
epochs = 200
print(epochs)
batch_size = 32
num_of_train_samples = 5974
num_of_valid_samples = 1987
# Image generators
train_datagen = ImageDataGenerator(
    rescale=1. / 255,
    rotation_range=40,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
    fill_mode='nearest')
valid_datagen = ImageDataGenerator(rescale=1. / 255)
test_datagen = ImageDataGenerator(rescale=1. / 255)
# target_size must match the model's input_shape; the original code omitted it
# for the validation and test generators (flow_from_directory defaults to 256x256)
train_generator = train_datagen.flow_from_directory(train_data_path, target_size=(img_rows, img_cols), batch_size=batch_size, shuffle=True, class_mode='categorical')
validation_generator = valid_datagen.flow_from_directory(valid_data_path, target_size=(img_rows, img_cols), batch_size=batch_size, class_mode='categorical')
test_generator = test_datagen.flow_from_directory(test_data_path, target_size=(img_rows, img_cols), shuffle=False, class_mode='categorical')
model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=(img_rows, img_cols, 3), kernel_initializer="glorot_uniform", bias_initializer="zeros"))  # 3-channel (RGB) input assumed
model.add(Activation('relu'))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.5))
model.add(Conv2D(32, (2, 2)))
model.add(Dropout(0.5))
model.add(Conv2D(64, (2, 2)))
model.add(Dropout(0.5))
model.add(Flatten()) # this converts our 3D feature maps to 1D feature vectors
model.add(Dropout(0.5))
model.add(Dense(512))
model.add(Dense(2))
model.add(Activation('sigmoid'))
model.compile(loss='categorical_crossentropy',optimizer='adam',metrics=['acc'])
# Train
history = model.fit_generator(
    train_generator,
    steps_per_epoch=num_of_train_samples // batch_size,
    epochs=epochs,
    validation_data=validation_generator,
    validation_steps=num_of_valid_samples // batch_size)
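One thing worth noting about the head of this model (an observation about the code above, not a confirmed fix): it pairs Dense(2) with a sigmoid activation while compiling with categorical_crossentropy. That loss assumes the outputs form a probability distribution over the classes, which softmax guarantees and independent sigmoid units do not. A small plain-Python sketch of the difference:

```python
import math

def sigmoid(x):
    # Each output is squashed independently into (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def softmax(xs):
    # Outputs are normalized so they sum to 1 (a distribution)
    exps = [math.exp(x) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

logits = [0.3, -1.2]  # example logits for the two classes
sig = [sigmoid(x) for x in logits]
soft = softmax(logits)
print(round(sum(sig), 4))   # independent sigmoids need not sum to 1
print(round(sum(soft), 4))  # softmax always sums to 1
```

The conventional pairings are Dense(2) + softmax with categorical_crossentropy, or Dense(1) + sigmoid with binary_crossentropy.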