ValueError: Dimensions must be equal, but are 256 and 64; sequence-to-sequence LSTM

Problem description

I am trying to build a sequence-to-sequence model, but I am not sure how to resolve the error I keep getting from the decoder LSTM.

I am following this tutorial:

ValueError: Dimensions must be equal, but are 256 and 64 for '{{node mul/mul}} = Mul[T=DT_FLOAT](Sigmoid_1,init_c)' with input shapes: [?,25,256], [?,64].
import os
os.environ["TF_FORCE_GPU_ALLOW_GROWTH"] = "true"
from speech_preprocess import sound_files,phrases,tokenizer,labels
from tensorflow.keras.layers import Input,LSTM,Dense,Embedding,Conv1D,MaxPool1D,Reshape
from tensorflow.keras.models import Model

input_length_encoder = max(len(arr) for arr in sound_files)
output_tokens = tokenizer.num_words
batch = 4

# Encoder: a single timestep of audio features; return_state=True also yields the hidden and cell states
encoder_input = Input(shape=(1,sound_files.shape[-1]))
encoder,state_1,state_2 = LSTM(64,return_state=True)(encoder_input)

# Decoder: embedded phrase tokens, initialised with the encoder states (the error is raised in this LSTM)
decoder_input = Input(shape=(None,phrases.shape[1]))
decode_embedding = Embedding(input_dim=output_tokens,output_dim=64)(decoder_input)
decoder,_,_ = LSTM(64,return_state=True,return_sequences=True)(decode_embedding,initial_state=[state_1,state_2])
decoder_dense = Dense(output_tokens,activation="softmax")
decoder_output = decoder_dense(decoder)

model = Model([encoder_input,decoder_input],decoder_output)
for layer in model.layers:
    print(layer.output_shape)
model.summary()
model.compile(loss="categorical_crossentropy",optimizer="adam",metrics=["accuracy"])
model.fit([sound_files,phrases],labels,epochs=1,batch_size=batch,validation_split=0.2)

Solution

No effective solution to this problem has been found yet; we are still looking for one.
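
For reference while an answer is pending, below is a minimal sketch of how a Keras encoder-decoder (seq2seq) model of this shape is commonly wired. One point worth comparing against the code above: Embedding is normally fed a 2-D tensor of integer token ids (batch, timesteps) and returns a 3-D tensor (batch, timesteps, output_dim), and LSTM expects 3-D input, so declaring decoder_input as shape=(None, phrases.shape[1]) makes the embedded tensor 4-D. That looks consistent with the [?,25,256] vs [?,64] mismatch in the traceback, but it has not been verified against the asker's data. All sizes and arrays below (encoder_features, decoder_timesteps, output_tokens, the dummy inputs) are placeholders standing in for the real sound_files / phrases / labels, not a confirmed fix.

import numpy as np
from tensorflow.keras.layers import Input, LSTM, Dense, Embedding
from tensorflow.keras.models import Model

# Placeholder sizes standing in for the real preprocessed data
num_samples = 8
encoder_features = 128      # sound_files.shape[-1] in the original code
decoder_timesteps = 25      # phrases.shape[1] in the original code
output_tokens = 1000        # tokenizer.num_words in the original code
units = 64

# Encoder: a single timestep of audio features, returning the hidden and cell states
encoder_input = Input(shape=(1, encoder_features))
_, state_h, state_c = LSTM(units, return_state=True)(encoder_input)

# Decoder: 2-D integer input, so the Embedding output is 3-D (batch, timesteps, units)
decoder_input = Input(shape=(None,))
decoder_embedded = Embedding(input_dim=output_tokens, output_dim=units)(decoder_input)
decoder_seq, _, _ = LSTM(units, return_sequences=True, return_state=True)(
    decoder_embedded, initial_state=[state_h, state_c])
decoder_output = Dense(output_tokens, activation="softmax")(decoder_seq)

model = Model([encoder_input, decoder_input], decoder_output)
model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"])
model.summary()

# Dummy data with the assumed shapes, just to show that the graph builds and trains
enc_x = np.random.rand(num_samples, 1, encoder_features)
dec_x = np.random.randint(0, output_tokens, size=(num_samples, decoder_timesteps))
y = np.eye(output_tokens)[np.random.randint(0, output_tokens, size=(num_samples, decoder_timesteps))]
model.fit([enc_x, dec_x], y, epochs=1, batch_size=4)

The decoder keeps 64 units so that initial_state from the 64-unit encoder matches. If the real phrases array holds one-hot rows rather than integer token ids, the Embedding layer would not apply and the decoder input would instead stay shape=(None, phrases.shape[1]) with no Embedding in between.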
