ValueError: expected dense_22 to have shape (None, 37) but got array with shape (1000, 2)

Problem description

I am currently working on a question-answering system. I created a synthetic dataset in which the answers contain multiple words. However, the answers are not spans of the given context.

Initially, I planned to test it with a deep-learning-based model, but I ran into some problems while building the model. This is how I vectorize the data:

def vectorize(data, word2idx, story_maxlen, question_maxlen, answer_maxlen):
    """Create the story and question vectors and the label."""
    Xs, Xq, Y = [], [], []
    for story, question, answer in data:
        xs = [word2idx[word] for word in story]
        xq = [word2idx[word] for word in question]
        y = [word2idx[word] for word in answer]
        # y = np.zeros(len(word2idx) + 1)
        # y[word2idx[answer]] = 1
        Xs.append(xs)
        Xq.append(xq)
        Y.append(y)
    return (pad_sequences(Xs, maxlen=story_maxlen),
            pad_sequences(Xq, maxlen=question_maxlen),
            pad_sequences(Y, maxlen=answer_maxlen))
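For illustration, here is a rough sketch of what vectorize() produces, using a hand-rolled stand-in for Keras' pad_sequences (default behavior: left-pad with 0) and a hypothetical toy vocabulary, not the real dataset:

```python
def pad(seqs, maxlen):
    # Left-pad with 0 and keep the last `maxlen` entries,
    # mimicking pad_sequences' defaults.
    return [([0] * maxlen + s)[-maxlen:] for s in seqs]

# Hypothetical toy vocabulary and sample (index 0 is reserved for padding).
word2idx = {"mary": 1, "went": 2, "home": 3, "where": 4, "is": 5}
story = ["mary", "went", "home"]
answer = ["home"]

xs = [word2idx[w] for w in story]   # [1, 2, 3]
y = [word2idx[w] for w in answer]   # [3]

assert pad([xs], maxlen=5) == [[0, 0, 1, 2, 3]]
assert pad([y], maxlen=2) == [[0, 3]]
```

So each label row in Y is a short sequence of padded word indices, not a one-hot vector, which matters for the error below.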

Here is how I create the model:

# story encoder. Output dim: (None, story_maxlen, EMBED_HIDDEN_SIZE)
story_encoder = Sequential()
story_encoder.add(Embedding(input_dim=vocab_size, output_dim=EMBED_HIDDEN_SIZE,
                            input_length=story_maxlen))
story_encoder.add(Dropout(0.3))

# question encoder. Output dim: (None, question_maxlen, EMBED_HIDDEN_SIZE)
question_encoder = Sequential()
question_encoder.add(Embedding(input_dim=vocab_size, output_dim=EMBED_HIDDEN_SIZE,
                               input_length=question_maxlen))
question_encoder.add(Dropout(0.3))

# episodic memory (facts): story * question
# Output dim: (None, story_maxlen)
facts_encoder = Sequential()
facts_encoder.add(Merge([story_encoder, question_encoder],
                        mode="dot", dot_axes=[2, 2]))
facts_encoder.add(Permute((2, 1)))

## combine response and question vectors and do logistic regression
answer = Sequential()
answer.add(Merge([facts_encoder, question_encoder],
                 mode="concat", concat_axis=-1))
answer.add(LSTM(LSTM_OUTPUT_SIZE, return_sequences=True))
answer.add(Dropout(0.3))
answer.add(Flatten())
answer.add(Dense(vocab_size, activation="softmax"))


answer.compile(optimizer="rmsprop", loss="categorical_crossentropy",
               metrics=["accuracy"])

answer.fit([Xs_train, Xq_train], Y_train, batch_size=BATCH_SIZE,
           nb_epoch=NBR_EPOCHS, validation_data=([Xs_test, Xq_test], Y_test))
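With categorical_crossentropy, the final Dense(vocab_size) layer expects one one-hot row of width vocab_size per sample, whereas Y_train here holds two padded word indices per sample. A minimal sketch of that mismatch, in plain Python with hypothetical numbers:

```python
vocab_size = 37

def to_one_hot(index, size):
    # One-hot row of width `size` with a 1.0 at position `index`.
    row = [0.0] * size
    row[index] = 1.0
    return row

# Hypothetical padded answer indices: shape (batch, 2) --
# what the error reports as (1000, 2).
Y_train = [[0, 3], [5, 12]]

# categorical_crossentropy instead wants shape (batch, vocab_size),
# one one-hot row per sample -- the (None, 37) side of the error.
one_hot_row = to_one_hot(3, vocab_size)
assert len(one_hot_row) == vocab_size
assert sum(one_hot_row) == 1.0 and one_hot_row[3] == 1.0
```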

Here is the model summary:

_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
merge_46 (Merge)             (None, 5, 616)            0
_________________________________________________________________
lstm_23 (LSTM)               (None, 5, 32)             83072
_________________________________________________________________
dropout_69 (Dropout)         (None, 5, 32)             0
_________________________________________________________________
flatten_9 (Flatten)          (None, 160)               0
_________________________________________________________________
dense_22 (Dense)             (None, 37)                5957
=================================================================
Total params: 93,765.0
Trainable params: 93,765.0
Non-trainable params: 0.0
_________________________________________________________________

It gives the following error:

ValueError: Error when checking model target: expected dense_22 to have shape (None,37) but got array with shape (1000,2)

I think the error is related to Y_train and Y_test. Should I encode them as categorical values? The answer is not a text span but a sequence of words, and I don't know how to encode it. How can I fix this? Any ideas?

Edit:

When I use sparse_categorical_crossentropy as the loss and add Reshape((2, -1)), answer.summary() gives:

_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
merge_94 (Merge)             (None, 5, 616)            0
_________________________________________________________________
lstm_65 (LSTM)               (None, 5, 32)             83072
_________________________________________________________________
dropout_139 (Dropout)        (None, 5, 32)             0
_________________________________________________________________
reshape_22 (Reshape)         (None, 2, 80)             0
_________________________________________________________________
dense_44 (Dense)             (None, 2, 37)             2997
=================================================================
Total params: 90,805.0
Trainable params: 90,805.0
Non-trainable params: 0.0
_________________________________________________________________

Edit 2: the modified model

# story, question and facts encoders are unchanged from above

## combine response and question vectors and do logistic regression
answer = Sequential()
answer.add(Merge([facts_encoder, question_encoder],
                 mode="concat", concat_axis=-1))
answer.add(LSTM(LSTM_OUTPUT_SIZE, return_sequences=True))
answer.add(Dropout(0.3))
# answer.add(Flatten())
answer.add(keras.layers.Reshape((2, -1)))
answer.add(Dense(vocab_size, activation="softmax"))

answer.compile(optimizer="rmsprop", loss="sparse_categorical_crossentropy",
               metrics=["accuracy"])

answer.fit([Xs_train, Xq_train], Y_train, batch_size=BATCH_SIZE,
           nb_epoch=NBR_EPOCHS, validation_data=([Xs_test, Xq_test], Y_test))

It still gives:

ValueError: Error when checking model target: expected dense_46 to have 3 dimensions,but got array with shape (1000,2)
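This last error is a rank mismatch: with Reshape((2, -1)) the model's output is 3-D, (batch, 2, vocab_size), so under sparse_categorical_crossentropy the targets need a matching trailing axis, (batch, 2, 1), rather than the 2-D (1000, 2). A hedged sketch of that target reshaping, with plain lists standing in for arrays and hypothetical indices:

```python
# Hypothetical padded answer indices, shape (batch=2, answer_len=2).
Y_train = [[0, 3], [5, 12]]

# Add a trailing axis so the targets become (batch, 2, 1),
# matching a 3-D model output under sparse_categorical_crossentropy
# (with NumPy arrays this would be np.expand_dims(Y_train, -1)).
Y_train_3d = [[[idx] for idx in seq] for seq in Y_train]

assert Y_train_3d == [[[0], [3]], [[5], [12]]]
```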

Solution

As far as I can tell, Y_train and Y_test consist of indices (not one-hot vectors). If so, change the loss to sparse_categorical_crossentropy:

answer.compile(optimizer="rmsprop",loss="sparse_categorical_crossentropy",metrics=["accuracy"])
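The two losses compute the same quantity from differently shaped targets: categorical_crossentropy reads a one-hot row, while sparse_categorical_crossentropy reads a bare integer index. A minimal sketch of that equivalence with toy probabilities (not the Keras implementation):

```python
import math

def cce(probs, one_hot):
    # Cross-entropy against a one-hot target row.
    return -sum(t * math.log(p) for t, p in zip(one_hot, probs))

def sparse_cce(probs, index):
    # Same quantity, but the target is just the class index.
    return -math.log(probs[index])

probs = [0.1, 0.7, 0.2]
assert abs(cce(probs, [0, 1, 0]) - sparse_cce(probs, 1)) < 1e-12
```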

As far as I can tell, Y_train and Y_test also have a sequence dimension, and the length of the question (5) is not equal to the length of the answer (2). That dimension is removed by Flatten(). Try replacing Flatten() with Reshape():

# answer.add(Flatten())
answer.add(tf.keras.layers.Reshape((2,-1)))    
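A sketch of the shape bookkeeping behind that swap, using the sizes from the summaries above (plain functions, not the real layers): Flatten() collapses the time axis entirely, while Reshape((2, -1)) keeps a length-2 axis so the final Dense can emit one softmax row per answer word.

```python
def flatten_shape(shape):
    # (batch, timesteps, features) -> (batch, timesteps * features)
    batch, t, d = shape
    return (batch, t * d)

def reshape_shape(shape, timesteps):
    # (batch, t, d) -> (batch, timesteps, t*d // timesteps),
    # the effect of Reshape((timesteps, -1)).
    batch, t, d = shape
    return (batch, timesteps, (t * d) // timesteps)

# With the sizes from the model summaries:
assert flatten_shape((1000, 5, 32)) == (1000, 160)       # Dense sees (None, 160)
assert reshape_shape((1000, 5, 32), 2) == (1000, 2, 80)  # Dense sees (None, 2, 80)
```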
