How to get the embedding of the input as output from an LSTM layer

Problem description

I created two identical encoders to train on two different texts, job_description and resume, with cosine similarity as the loss function:

import tensorflow as tf
from tensorflow.keras.layers import Input, Embedding, LSTM, Dense, Dot
from tensorflow.keras.models import Model

# Encoder 1: job_description
encoder_input1 = Input(shape=(None,), name='encoder1')
embed_dim = 200
embedded_seq_encoder1 = Embedding(input_dim=9000, output_dim=embed_dim)(encoder_input1)
encoder1 = LSTM(256, return_state=True)
encoder_output1, state_h1, state_c1 = encoder1(embedded_seq_encoder1)

# Encoder 2: resume (with return_state=True the LSTM returns three
# tensors: output, hidden state, cell state)
encoder_input2 = Input(shape=(None,), name='encoder2')
embedded_seq_encoder2 = Embedding(input_dim=9000, output_dim=embed_dim)(encoder_input2)
encoder2 = LSTM(256, return_state=True)
encoder_output2, state_h2, state_c2 = encoder2(embedded_seq_encoder2)

# merged_Encoder_layer was undefined in the original snippet; a normalized
# dot product (cosine similarity of the two encodings) is an assumed,
# plausible merge given the stated objective
merged_Encoder_layer = Dot(axes=1, normalize=True)([encoder_output1, encoder_output2])

# softmax over a single unit is constant 1.0; sigmoid pairs correctly
# with binary cross-entropy
encoder_dense = Dense(1, activation='sigmoid', name='Final-Output-Dense')
encoder_outputs = encoder_dense(merged_Encoder_layer)

model = Model([encoder_input1, encoder_input2], [encoder_outputs])
model.compile(optimizer='rmsprop', loss=tf.keras.losses.BinaryCrossentropy())
model.summary()
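
Note that the embedding being asked about is already a tensor in this graph: encoder_output1 is the LSTM's final hidden state for the first branch. A minimal sketch of reading it out, assuming the functional model above (the name embedding_model1 is hypothetical):

# Sub-model sharing the same (trained) layers as `model`: maps padded
# integer sequences for branch 1 to their 256-dimensional encodings
embedding_model1 = Model(encoder_input1, encoder_output1)
# vectors = embedding_model1.predict(padded_ids)   # shape: (batch, 256)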

Then I converted both encoders into custom layers; I plan to build a sequential model out of these encoders plus some dense layers:

class Custom_Encoder_Layer1(tf.keras.layers.Layer):

    def __init__(self, shape, embed_dim, input_dim, n_units):
        super(Custom_Encoder_Layer1, self).__init__()
        self.shape = shape
        self.embed_dim = embed_dim
        self.input_dim = input_dim
        self.n_units = n_units

    def Encoder1(self):
        # constructor arguments must be read back through self
        encoder_input1 = Input(shape=self.shape, name='encoder_layer1')
        embedded_seq_encoder1 = Embedding(input_dim=self.input_dim,
                                          output_dim=self.embed_dim)(encoder_input1)
        encoder1 = LSTM(self.n_units, return_state=True)
        # three return values with return_state=True
        encoder_output1, state_h, state_c = encoder1(embedded_seq_encoder1)
        return encoder_input1, encoder_output1



class Custom_Encoder_Layer2(tf.keras.Model):

    # same constructor arguments as Custom_Encoder_Layer1, so the four
    # attributes assigned below are all defined
    def __init__(self, shape, embed_dim, input_dim, n_units):
        super(Custom_Encoder_Layer2, self).__init__()
        self.shape = shape
        self.embed_dim = embed_dim
        self.input_dim = input_dim
        self.n_units = n_units

    def Encoder2(self):
        encoder_input2 = Input(shape=self.shape, name='encoder_layer2')
        embedded_seq_encoder2 = Embedding(input_dim=self.input_dim,
                                          output_dim=self.embed_dim)(encoder_input2)
        encoder2 = LSTM(self.n_units, return_state=True)
        encoder_output2, state_h, state_c = encoder2(embedded_seq_encoder2)
        return encoder_input2, encoder_output2

# Reconstructed call, assuming shape=(None,), embed_dim=200,
# input_dim=9000, n_units=256; the snippet also referenced
# Model.layers[4], Model.layers[5] and Model.layers[7] (presumably layers
# of the trained model above), which do not fit these constructors
model = tf.keras.models.Sequential([
    Custom_Encoder_Layer1((None,), 200, 9000, 256),
    Custom_Encoder_Layer2((None,), 200, 9000, 256),
])
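
As written, neither class defines call(), which is the only method Keras invokes when a layer runs inside Sequential, so the Encoder1()/Encoder2() graph builders would never execute during training. They can, however, be exercised by hand and wired into a functional model; a sketch under that assumption (enc1, inp1, out1 and embedder1 are illustrative names):

enc1 = Custom_Encoder_Layer1((None,), 200, 9000, 256)
inp1, out1 = enc1.Encoder1()
embedder1 = Model(inp1, out1)  # maps token ids to the 256-dim encoding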

Now my question is: how do I train this sequential model and obtain the embeddings of the input texts?

Solution

No effective solution to this problem has been found yet.
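
One direction worth sketching (an assumption-laden outline, not a verified fix): give each encoder a call() method so Keras can actually run it, and use the functional API rather than Sequential, since the model has two inputs. The class name EncoderLayer and all names and hyperparameters below are illustrative:

class EncoderLayer(tf.keras.layers.Layer):
    # hypothetical rewrite: embedding + LSTM as one reusable layer
    def __init__(self, input_dim, embed_dim, n_units, **kwargs):
        super().__init__(**kwargs)
        self.embedding = Embedding(input_dim=input_dim, output_dim=embed_dim)
        self.lstm = LSTM(n_units, return_state=True)

    def call(self, inputs):
        # Keras invokes call() during training and inference; the final
        # hidden state is a fixed-size embedding of the whole sequence
        embedded = self.embedding(inputs)
        output, state_h, state_c = self.lstm(embedded)
        return output

job_in = Input(shape=(None,), name='job_description')
resume_in = Input(shape=(None,), name='resume')
job_vec = EncoderLayer(9000, 200, 256, name='job_encoder')(job_in)
resume_vec = EncoderLayer(9000, 200, 256, name='resume_encoder')(resume_in)
similarity = Dot(axes=1, normalize=True)([job_vec, resume_vec])
score = Dense(1, activation='sigmoid', name='match')(similarity)
siamese = Model([job_in, resume_in], score)
siamese.compile(optimizer='rmsprop', loss=tf.keras.losses.BinaryCrossentropy())

# after siamese.fit(...), per-text embeddings fall out of a sub-model
job_embedder = Model(job_in, job_vec)

Under these assumptions, fit() trains the whole graph end to end, and the sub-model returns the learned embedding for any input text.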
