How to add self-attention to a seq2seq model in Keras

Problem description

I have this model with a dot-product attention layer; I have commented that part out in the code below. How can I use self-attention instead of the attention layer I have now? Basically, I want to replace the commented-out block with a self-attention layer.

I am open to either the keras-self-attention package or a manually added layer; anything that works. (A rough sketch of what I have in mind follows the model code below.)

# Imports assumed for completeness; the original snippet omitted them.
# This uses the tf.keras API; adjust if you use standalone keras.
from tensorflow.keras.layers import (Input, Embedding, LSTM, Bidirectional,
                                     Concatenate, Activation, Dense,
                                     TimeDistributed, dot)
from tensorflow.keras.models import Model

# Encoder
encoder_inputs = Input(shape=(max_text_len,))

# Embedding layer
enc_emb = Embedding(x_voc,embedding_dim,trainable=True)(encoder_inputs)

# Encoder LSTM 1
encoder_lstm1 = Bidirectional(LSTM(latent_dim,return_sequences=True,return_state=True,dropout=0.4,recurrent_dropout=0.4))
(encoder_output1,forward_h1,forward_c1,backward_h1,backward_c1) = encoder_lstm1(enc_emb)

# Encoder LSTM 2
encoder_lstm2 = Bidirectional(LSTM(latent_dim,return_sequences=True,return_state=True,recurrent_dropout=0.4))
(encoder_output2,forward_h2,forward_c2,backward_h2,backward_c2) = encoder_lstm2(encoder_output1)

# Encoder LSTM 3
encoder_lstm3 = Bidirectional(LSTM(latent_dim,return_sequences=True,return_state=True,recurrent_dropout=0.4))
(encoder_outputs,forward_h,forward_c,backward_h,backward_c) = encoder_lstm3(encoder_output2)

state_h = Concatenate()([forward_h,backward_h])
state_c = Concatenate()([forward_c,backward_c])

# Set up the decoder,using encoder_states as the initial state
decoder_inputs = Input(shape=(None,))

# Embedding layer
dec_emb_layer = Embedding(y_voc,embedding_dim,trainable=True)
dec_emb = dec_emb_layer(decoder_inputs)


# Decoder LSTM
decoder_lstm = LSTM(latent_dim*2,return_sequences=True,return_state=True,recurrent_dropout=0.2)
(decoder_outputs,decoder_fwd_state,decoder_back_state) = \
    decoder_lstm(dec_emb,initial_state=[state_h,state_c])

#Start attention layer
# attention = dot([decoder_outputs,encoder_outputs],axes=[2,2])
# attention = Activation('softmax')(attention)
# context = dot([attention,encoder_outputs],axes=[2,1])
# decoder_outputs = Concatenate()([context,decoder_outputs])
#End attention layer

# Dense layer
decoder_dense = TimeDistributed(Dense(y_voc,activation='softmax'))(decoder_outputs)

# Define the model
model = Model([encoder_inputs,decoder_inputs],decoder_dense)
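
For example, with the keras-self-attention package I imagine something like the following as a drop-in replacement for the commented-out block. This is untested, and applying SeqSelfAttention over decoder_outputs is only my guess at a sensible placement (the package may also need TF_KERAS=1 set when used with tf.keras):

from keras_self_attention import SeqSelfAttention

# Self-attention over the decoder sequence, in place of the dot-product block
attn_out = SeqSelfAttention(attention_activation='sigmoid')(decoder_outputs)
decoder_outputs = Concatenate()([attn_out,decoder_outputs])

With this placement the encoder reaches the decoder only through the initial states, so the same SeqSelfAttention layer could instead be applied to encoder_outputs and combined with the decoder through the existing dot-product context, if cross-attention over the source sequence should be kept.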

Solution

No working solution for this problem has been confirmed yet.

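
As a possible starting point rather than a confirmed fix, the "manually added layer" route could use tf.keras's built-in MultiHeadAttention (available from TF 2.4): self-attention over the encoder outputs followed by cross-attention from the decoder, replacing the commented-out block. The num_heads and key_dim values below are illustrative guesses, not taken from the question:

from tensorflow.keras.layers import MultiHeadAttention

# Self-attention over the encoder outputs (query, key and value are the same sequence)
enc_attn = MultiHeadAttention(num_heads=4, key_dim=latent_dim)(
    query=encoder_outputs, value=encoder_outputs)

# Cross-attention: the decoder sequence attends over the self-attended encoder sequence
context = MultiHeadAttention(num_heads=4, key_dim=latent_dim)(
    query=decoder_outputs, value=enc_attn)

# Concatenate the attention context with the decoder outputs, as in the original block
decoder_outputs = Concatenate()([context,decoder_outputs])

The TimeDistributed(Dense(...)) layer and the Model definition would then stay the same, applied to the new decoder_outputs.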