Problem description
I wrote a custom encoder. This is my first time using the Bidirectional LSTM wrapper, so I may have done something wrong here. When I compile the model I get an error:

import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Embedding, Concatenate, LSTM

class Encoder(tf.keras.Model):
    def __init__(self, inp_vocab_size, embedding_size, lstm_size, input_length):
        super().__init__()
        self.inp_vocab_size = inp_vocab_size
        self.embedding_size = embedding_size
        self.input_length = input_length
        self.lstm_size = lstm_size
        self.lstm_output = 0
        self.lstm_state_h = 0
        self.lstm_state_c = 0
        # Frozen pre-trained embedding; embedding_matrix_questions is defined
        # elsewhere in my notebook. mask_zero=True masks the padding token.
        self.embedding = Embedding(input_dim=self.inp_vocab_size,
                                   output_dim=self.embedding_size,
                                   input_length=self.input_length,
                                   mask_zero=True,
                                   name="embedding_layer_encoder",
                                   weights=[embedding_matrix_questions],
                                   trainable=False)
        self.concatenate = Concatenate()

    def build(self, input_shape):
        self.forward_layer = LSTM(self.lstm_size, return_state=True,
                                  return_sequences=True, go_backwards=False,
                                  name="Encoder_LSTM_forward")
        self.backward_layer = LSTM(self.lstm_size, go_backwards=True,
                                   name="Encoder_LSTM_backward")
        self.bidirectional = tf.keras.layers.Bidirectional(
            self.forward_layer, backward_layer=self.backward_layer,
            merge_mode="sum")

    def call(self, input_sequence, states):
        input_embedding = self.embedding(input_sequence)
        # Expecting five tensors: the merged sequence output plus the
        # h/c states of the forward and backward directions.
        (bidirectional_lstm_output,
         forward_h, forward_c,
         backward_h, backward_c) = self.bidirectional(input_embedding)
        state_h = self.concatenate([forward_h, backward_h])
        state_c = self.concatenate([forward_c, backward_c])
        print(bidirectional_lstm_output.shape)
        print(state_h.shape)
        print(state_c.shape)
        return bidirectional_lstm_output, state_h, state_c

    def initialize_states(self, batch_size):
        # Zero-valued initial hidden and cell states.
        h_state_init = np.zeros((batch_size, self.lstm_size))
        c_state_init = np.zeros((batch_size, self.lstm_size))
        return h_state_init, c_state_init
Any help resolving this error would be appreciated. What I actually want is to feed the bidirectional LSTM's output (along with its hidden and cell states) into the decoder. Where did I go wrong?
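For what it's worth, the failure seems reproducible outside the model, using just the two LSTM layers configured exactly as above (a minimal sketch, assuming TensorFlow 2.x; the sizes are made up):

from tensorflow.keras.layers import LSTM, Bidirectional

fwd = LSTM(32, return_state=True, return_sequences=True, go_backwards=False)
bwd = LSTM(32, go_backwards=True)  # return_state / return_sequences left unset

# Constructing the wrapper with this pair of layers raises a ValueError.
try:
    Bidirectional(fwd, backward_layer=bwd, merge_mode="sum")
except ValueError as err:
    print(err)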
Solution
No confirmed fix for this problem has been posted yet.
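That said, tf.keras.layers.Bidirectional validates its two layers when it is constructed: the forward and backward layers must agree on return_sequences and return_state, and differ only in go_backwards. In the code above, only the forward layer sets return_state=True and return_sequences=True, which would explain the error. A minimal sketch of a configuration that passes this check (assuming TensorFlow 2.x, with made-up sizes):

import numpy as np
from tensorflow.keras.layers import LSTM, Bidirectional, Concatenate

lstm_size, batch, seq_len, emb_size = 32, 4, 10, 16  # made-up sizes

# Both directions return sequences and states; only go_backwards differs.
fwd = LSTM(lstm_size, return_sequences=True, return_state=True,
           go_backwards=False, name="Encoder_LSTM_forward")
bwd = LSTM(lstm_size, return_sequences=True, return_state=True,
           go_backwards=True, name="Encoder_LSTM_backward")
bidi = Bidirectional(fwd, backward_layer=bwd, merge_mode="sum")

x = np.random.rand(batch, seq_len, emb_size).astype("float32")
output, fwd_h, fwd_c, bwd_h, bwd_c = bidi(x)
state_h = Concatenate()([fwd_h, bwd_h])
state_c = Concatenate()([fwd_c, bwd_c])
print(output.shape)   # (4, 10, 32): merge_mode="sum" keeps lstm_size
print(state_h.shape)  # (4, 64): forward and backward states concatenated
print(state_c.shape)  # (4, 64)

Note the size mismatch this leaves for a decoder: with merge_mode="sum" the encoder output has lstm_size units while the concatenated states have 2*lstm_size; using merge_mode="concat" (the default) or summing the states instead would keep them consistent.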