Why is my validation accuracy so low when I switch from all-in-memory training to a data generator?

Problem description

I have a dataset with two columns:

1.) A string column: each string is built from 21 different letters.
2.) A categorical column: each string is associated with a number between 1 and 7.

Using the following code, I first perform integer encoding.

import numpy as np

codes = ['A','C','D','E','F','G','H','I','K','L','M','N','P','Q','R','S','T','V','W','Y']

def create_dict(codes):
    char_dict = {}
    for index,val in enumerate(codes):
        char_dict[val] = index+1
    return char_dict

def integer_encoding(data):
    """
    - Encodes code sequence to integer values.
    - 20 common amino acids are taken into consideration
      and rest 4 are categorized as 0.
    """
    encode_list = []
    for row in data['Sequence'].values:
        row_encode = []
        for code in row:
            row_encode.append(char_dict.get(code,0))
        encode_list.append(np.array(row_encode))
    return encode_list
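
As a quick sanity check of what this produces, here is a tiny made-up example (the one-row frame and the sequence 'ACDX' are mine, just to illustrate the mapping; 'X' is not in codes, so it maps to 0):

import pandas as pd

char_dict = create_dict(codes)              # {'A': 1, 'C': 2, 'D': 3, ...}
toy = pd.DataFrame({'Sequence': ['ACDX']})  # hypothetical one-row frame
print(integer_encoding(toy))                # [array([1, 2, 3, 0])]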

Using this, I first do the integer encoding and then the one-hot encoding, all in memory:

char_dict = create_dict(codes)
train_encode = integer_encoding(balanced_train_df.reset_index()) 
val_encode = integer_encoding(val_df.reset_index()) 
train_pad = pad_sequences(train_encode,maxlen=max_length,padding='post',truncating='post')
val_pad = pad_sequences(val_encode,truncating='post')
train_ohe = to_categorical(train_pad)
val_ohe = to_categorical(val_pad)
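
For orientation, these are the shapes the training arrays end up with (my assumptions: max_length is 660, as used in the generator below, and all 20 letters occur somewhere, so to_categorical infers 21 classes, i.e. 20 amino acids plus the 0 padding/other bucket):

print(train_pad.shape)   # (431403, 660)      integer-encoded, padded
print(train_ohe.shape)   # (431403, 660, 21)  one-hot encoded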

Then I train my model like this:

es = EarlyStopping(monitor='val_loss',patience=3,verbose=1)

history2 = model2.fit(
    train_ohe,y_train,epochs=50,batch_size=64,validation_data=(val_ohe,y_val),callbacks=[es]
)

This got me to a validation accuracy of roughly 86% on this first attempt.

Here is what the first epoch looks like:

 Train on 431403 samples, validate on 50162 samples
 Epoch 1/50
 431403/431403 [==============================] - 187s 434us/sample - loss: 1.3532 - accuracy: 0.6947 - val_loss: 0.9443 - val_accuracy: 0.7730

Note that the validation accuracy after the first epoch is already 77%.

But because my dataset is fairly large, this ends up consuming around 50+ GB of RAM. That is because I am loading the entire dataset into memory and also materializing every transformation of it in memory.
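
A rough back-of-the-envelope estimate of why the footprint is that large (assuming float32 one-hot arrays, which is to_categorical's default, and padding to 660 as in the generator below):

# Size of the materialized one-hot arrays alone (not counting the encode lists,
# the padded integer arrays, or copies made along the way).
maxlen, channels, bytes_per_float32 = 660, 21, 4
train_gb = 431403 * maxlen * channels * bytes_per_float32 / 1e9   # ~23.9 GB
val_gb   =  50162 * maxlen * channels * bytes_per_float32 / 1e9   # ~2.8 GB
print(train_gb, val_gb)

Together with the padded integer arrays, the original dataframe, and the extra copies the transforms and model.fit make, the observed 50+ GB is plausible.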

To train in a more memory-efficient way, I am introducing a data generator, as follows:

class DataGenerator(Sequence):
    'Generates data for Keras'
    def __init__(self,list_IDs,data_col,labels,batch_size=32,dim=(32,32,32),n_channels=1,n_classes=10,shuffle=True):
        'Initialization'
        self.dim = dim
        self.batch_size = batch_size
        self.data_col_name = data_col
        self.labels = labels
        self.list_IDs = list_IDs
        self.n_channels = n_channels
        self.n_classes = n_classes
        self.shuffle = shuffle
        self.on_epoch_end()

    def __len__(self):
        'Denotes the number of batches per epoch'
        return int(np.floor(len(self.list_IDs) / self.batch_size))

    def __getitem__(self,index):
        'Generate one batch of data'
        # Generate indexes of the batch
        indexes = self.indexes[index*self.batch_size:(index+1)*self.batch_size]

        # Find list of IDs
        list_IDs_temp = [self.list_IDs[k] for k in indexes]

        # Generate data
        X,y = self.__data_generation(list_IDs_temp)

        return X,y

    def on_epoch_end(self):
        'Updates indexes after each epoch'
        self.indexes = np.arange(len(self.list_IDs))
        if self.shuffle == True:
            np.random.shuffle(self.indexes)

    def __data_generation(self,list_IDs_temp):
        'Generates data containing batch_size samples' # X : (n_samples,*dim,n_channels)
        # Initialization
        X = np.empty((self.batch_size,*self.dim))
        y = np.empty(self.batch_size,dtype=int)

        # Generate data
        for i,ID in enumerate(list_IDs_temp):
            # Store sample
            # Read sequence string and convert to array
            # of padded categorical data in array
            int_encode_dt = integer_encoding(integer_encoding([balanced_train_df.loc[ID,self.data_col_name]]))
            padded_dt = pad_sequences(int_encode_dt,maxlen=660,truncating='post')
            categorical_dt = to_categorical(padded_dt)
            X[i,] = categorical_dt
            # Store class
            y[i] = self.labels[ID]-1
        return X,to_categorical(y,num_classes=self.n_classes)

The code is adapted from here: https://stanford.edu/~shervine/blog/keras-how-to-generate-data-on-the-fly

Then I kick off training like this:

params = {'dim': (660,21),  # sequences are at most 660 long and are encoded in 20 common amino acids
          'batch_size': 32,
          'n_classes': 7,
          'n_channels': 1,
          'shuffle': False}

training_generator = DataGenerator(balanced_train_df.index,'Sequence',balanced_train_df['ec_lvl_1'],**params)
validate_generator = DataGenerator(val_df.index,'Sequence',val_df['ec_lvl_1'],**params)

# Early Stopping
es = EarlyStopping(monitor='val_loss',verbose=1)

history2 = model2.fit(
    training_generator,validation_data=validate_generator,use_multiprocessing=True,workers=6,callbacks=[es]
    )

The problem is that with the data generator my validation accuracy never gets above 15%.

Epoch 1/10
13469/13481 [============================>.] - ETA: 0s - loss: 2.0578 - accuracy: 0.1427
13481/13481 [==============================] - 242s 18ms/step - loss: 2.0578 - accuracy: 0.1427 - val_loss: 1.9447 - val_accuracy: 0.0919

Note that the validation accuracy here is only about 9%.

My question is: why does this happen? One thing I cannot explain:

When I train entirely in memory, I set the batch size to 32 or 64, yet the number shown per epoch is still ~431k (the total number of training samples). When I use the data generator, however, I get a much smaller number, roughly 431k samples / batch size. Does that mean that in the in-memory case I was not really using the batch size parameter at all? Explanations appreciated.

Solution

A series of silly mistakes caused the difference, and they all sit in this one line:

int_encode_dt = integer_encoding(integer_encoding([balanced_train_df.loc[ID,self.data_col_name]]))

Mistake 1: I should have been passing in the dataframe to be processed, so that I could feed in either the training or the validation data. The way I had written it, even when I thought I was passing in the validation data, the generator was still reading from the training data (note the hard-coded balanced_train_df).

Mistake 2: I was integer-encoding the data twice (duh!).
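
A minimal sketch of the corrected generator, for completeness. The class name FixedDataGenerator and the dataframe parameter are my own names for illustration, the imports assume TF 2.x-style tf.keras, and it reuses the integer_encoding function defined above:

import numpy as np
from tensorflow.keras.utils import Sequence, to_categorical
from tensorflow.keras.preprocessing.sequence import pad_sequences

class FixedDataGenerator(Sequence):
    'Like DataGenerator above, but reads from the dataframe it is given.'
    def __init__(self, list_IDs, dataframe, data_col, labels,
                 batch_size=32, dim=(660, 21), n_classes=7, shuffle=True):
        self.dataframe = dataframe        # fix for mistake 1: no hard-coded balanced_train_df
        self.data_col_name = data_col
        self.list_IDs = list_IDs
        self.labels = labels
        self.batch_size = batch_size
        self.dim = dim
        self.n_classes = n_classes
        self.shuffle = shuffle
        self.on_epoch_end()

    def __len__(self):
        return int(np.floor(len(self.list_IDs) / self.batch_size))

    def on_epoch_end(self):
        self.indexes = np.arange(len(self.list_IDs))
        if self.shuffle:
            np.random.shuffle(self.indexes)

    def __getitem__(self, index):
        indexes = self.indexes[index*self.batch_size:(index+1)*self.batch_size]
        list_IDs_temp = [self.list_IDs[k] for k in indexes]
        X = np.empty((self.batch_size, *self.dim))
        y = np.empty(self.batch_size, dtype=int)
        for i, ID in enumerate(list_IDs_temp):
            # Fix for mistake 2: integer-encode exactly once; .loc[[ID]] keeps a
            # one-row frame (with its 'Sequence' column), which integer_encoding expects.
            int_encode_dt = integer_encoding(self.dataframe.loc[[ID]])
            padded_dt = pad_sequences(int_encode_dt, maxlen=self.dim[0], truncating='post')
            # Pin num_classes so every batch has the same channel count (dim[1] = 21).
            X[i,] = to_categorical(padded_dt, num_classes=self.dim[1])[0]
            y[i] = self.labels[ID] - 1
        return X, to_categorical(y, num_classes=self.n_classes)

With this, the validation generator is built from val_df itself, e.g. FixedDataGenerator(val_df.index, val_df, 'Sequence', val_df['ec_lvl_1'], batch_size=32, n_classes=7, shuffle=False), so it can no longer silently read the training frame.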