Keras accuracy for a sequence autoencoder differs from manual calculation

Problem description

I am using a GRU-based seq2seq model with byte-pair encoding for a spelling-correction problem. Here is my model architecture/code:

from tensorflow.keras.layers import Input, Embedding, GRU, Dense, TimeDistributed
from tensorflow.keras.models import Model

encoder_inputs = Input(shape=(padding_length,), name="EncoderInput_1")
embedded_encoder_inputs = Embedding(num_encoder_tokens, latent_dim, mask_zero=True)(encoder_inputs)
encoder = GRU(latent_dim, return_state=True)
_, state_h = encoder(embedded_encoder_inputs)

encoder_states = state_h

decoder_inputs = Input(shape=(padding_length,), name="DecoderInput_1")
embedded_decoder_inputs = Embedding(num_decoder_tokens, latent_dim, mask_zero=True)(decoder_inputs)
decoder_gru = GRU(latent_dim, return_sequences=True, return_state=True)
x, _ = decoder_gru(embedded_decoder_inputs, initial_state=encoder_states)
decoder_dense = TimeDistributed(Dense(num_decoder_tokens, activation='softmax'))
decoder_outputs = decoder_dense(x)

model = Model([encoder_inputs, decoder_inputs], decoder_outputs)

model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=["accuracy"])
model.fit([ip_encoded, op_encoded], op_offset_encoded, epochs=epochs, verbose=1,
          batch_size=batch_size, validation_split=0.2)

I trained it on about 700K training examples for 10 epochs; the final validation accuracy was around 97%.

I then examined the predictions and accuracies of individual samples (using model.evaluate to get the accuracy of a single example), and the result differs from my manual calculation.

Let me state my understanding of accuracy, which is what I use in the manual calculation:

  • For every decoder input token, the model predicts the next token. If the predicted token == the expected token, a "count" of correct predictions is incremented by 1. I apply this to all decoder input tokens and finally compute accuracy by dividing "count" by the total number of input tokens.
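The counting rule described above can be sketched as follows (a minimal sketch, assuming pad id 0; the function name is hypothetical):

```python
import numpy as np

def manual_token_accuracy(predicted_ids, expected_ids, pad_id=0):
    """Token-level accuracy as described above: compare predicted vs.
    expected ids position by position, skipping padding positions."""
    correct = 0
    total = 0
    for p, e in zip(predicted_ids, expected_ids):
        if e == pad_id:          # padding tokens excluded from the metric
            continue
        total += 1
        if p == e:
            correct += 1
    return correct / total if total else 0.0
```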

Examples (ip: model input, op: model output, ex: expected):

(Note: I assume padding tokens are not used in the metric computation.) (Note: judging from the arrays below, the special token ids appear to be start-of-sequence: 1, padding: 0, end-of-sequence: 2.)

Code used to find the accuracy of a single example: model.evaluate([ip_encoded[idx:idx+1],op_encoded[idx:idx+1]],op_offset_encoded[idx:idx+1],batch_size=64)
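To see which tokens the model actually produced for one example (rather than relying on the aggregate number from evaluate), the per-timestep argmax of the softmax output gives the predicted ids. A sketch with a stand-in probability array in place of the real model.predict output:

```python
import numpy as np

# Stand-in for model.predict([ip_encoded[idx:idx+1], op_encoded[idx:idx+1]]):
# shape (batch=1, timesteps=3, num_decoder_tokens=5), values are softmax probs.
probs = np.array([[[0.1, 0.7, 0.1, 0.05, 0.05],
                   [0.2, 0.1, 0.6, 0.05, 0.05],
                   [0.9, 0.02, 0.03, 0.03, 0.02]]])

predicted_ids = probs.argmax(axis=-1)  # token id with highest probability
print(predicted_ids)                   # [[1 2 0]]
```

These ids can then be compared token by token against the expected sequence.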

  1. (All tokens predicted correctly; manual and Keras accuracies match.)
ip:  Do you get a pattern similar to t hat shown in fig.
op:  Do you get a pattern similar to that shown in fig.
ex:  Do you get a pattern similar to that shown in fig.

ip:  [1,1313,416,742,261,1953,1239,299,259,308,270,911,283,814,16,2,0]
op:  [1313,336,2]
ex:  [1313,0]

1/1 [==============================] - 0s 1ms/step - loss: 0.0031 - accuracy: 1.0000
[0.003069164464250207,1.0]
  2. (Manual accuracy: 17/23 = 0.73; Keras: 0.95)
ip:  The equal part that we get in is,therefore,larger than the equal part we get in.
op:  The equal part that we get in  is,larger than the equal part in which we get .
ex:  The equal part that we get in  is,larger than the equal part we get in .

ip:  [1,316,752,496,370,294,14,1578,2115,621,264,0]
op:  [316,223,406,1964,2]
ex:  [316,0]

1/1 [==============================] - 0s 1ms/step - loss: 0.0289 - accuracy: 0.9565
[0.028948727995157242,0.95652174949646]
  3. (Manual accuracy: 10/13 = 0.77; Keras: 0.92)
ip:  Can you tell what advanxtage the fungus derives from this association?
op:  Can you tell what advantage affect the sewage from this association?
ex:  Can you tell what advantage the fungus derives from this association?

ip:  [1,1321,2797,985,2923,282,1424,596,9590,19016,389,428,6330,33,0]
op:  [1321,6089,2211,4739,2]
ex:  [1321,0]

1/1 [==============================] - 0s 1ms/step - loss: 0.0121 - accuracy: 0.9286
[0.012137999758124352,0.9285714030265808]
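One possible source of the discrepancy (an assumption, not confirmed by the logs above): model.evaluate scores the decoder with teacher forcing and averages the metric over timesteps, while the manual count compares a free-running decode against the expected text, and the two can also disagree on which positions enter the denominator (e.g. padding positions). A toy numpy sketch contrasting the two denominators, with hypothetical ids and pad id 0:

```python
import numpy as np

# Toy sequences: one wrong token at position 2, last two positions are padding.
expected  = np.array([4, 7, 9, 2, 0, 0])
predicted = np.array([4, 7, 5, 2, 0, 0])

# Accuracy over every timestep (padding positions trivially "correct"):
acc_all_steps = (predicted == expected).mean()            # 5/6 ≈ 0.833

# Accuracy over non-padding positions only, as in the manual count:
mask = expected != 0
acc_non_pad = (predicted[mask] == expected[mask]).mean()  # 3/4 = 0.75

print(acc_all_steps, acc_non_pad)
```

Whether Keras excludes the padded steps here depends on whether the Embedding mask actually propagates to the metric in the installed version, which is worth checking.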
  • It would be very helpful if the community could help me understand the internals of Keras's accuracy metric and verify my manual metric-calculation procedure.
