Problem description
I am struggling to scale the output of a Lambda layer. The code is as follows: my X_train is 100 × 15 × 24 and Y_train is 100 × 1 (the network consists of an LSTM layer plus Dense layers).
input_shape=(timesteps,num_feat)
data_input = Input(shape=input_shape,name="input_layer")
lstm1 = LSTM(10,name="lstm_layer")(data_input)
dense1 = Dense(4,activation="relu",name="dense1")(lstm1)
dense2 = Dense(1,activation = "custom_activation_1",name = "dense2")(dense1)
dense3 = Dense(1,activation = "custom_activation_2",name = "dense3")(dense1)
# dense2 and dense3 have custom activation functions whose range is the whole real line (so I need to normalize the output)
## custom lambda layer/ loss function ##
def custom_layer(new_input):
    add_input = new_input[0] + new_input[1]
    # the three lines below are where the problem occurs and the program fails
    ###############################################
    scaler = MinMaxScaler()
    scaler.fit(add_input)
    normalized = scaler.transform(add_input)
    ###############################################
    return normalized
lambda_layer = Lambda(custom_layer,name="lambda_layer")([dense2,dense3])
model = Model(inputs=data_input,outputs=lambda_layer)
model.compile(loss='mse',optimizer='adam',metrics=['accuracy'])
model.fit(X_train,Y_train,epochs=2,batch_size=216)
How do I correctly normalize the output of lambda_layer? Any ideas or suggestions are appreciated!
Solution
I don't think Scikit-learn transformers work inside a Lambda layer: they operate eagerly on NumPy arrays, not on symbolic Keras tensors. If you just want a min-max normalized version of the data passed through, you can do it with TensorFlow ops like this:
from tensorflow.keras.layers import Input,LSTM,Dense,Lambda
from tensorflow.keras.models import Model
import tensorflow as tf
timesteps = 3
num_feat = 12
input_shape=(timesteps,num_feat)
data_input = Input(shape=input_shape,name="input_layer")
lstm1 = LSTM(10,name="lstm_layer")(data_input)
dense1 = Dense(4,activation="relu",name="dense1")(lstm1)
dense2 = Dense(1, activation="linear", name="dense2")(dense1)  # stand-in for your custom_activation_1
dense3 = Dense(1, activation="linear", name="dense3")(dense1)  # stand-in for your custom_activation_2
# dense2 and dense3 have custom activation functions whose range is the whole real line (so I need to normalize the output)
## custom lambda layer/ loss function ##
def custom_layer(new_input):
    add_input = new_input[0] + new_input[1]
    # min-max normalization over the batch axis: (x - min) / (max - min)
    min_val = tf.reduce_min(add_input, axis=0, keepdims=True)
    max_val = tf.reduce_max(add_input, axis=0, keepdims=True)
    normalized = (add_input - min_val) / (max_val - min_val)
    return normalized
lambda_layer = Lambda(custom_layer,name="lambda_layer")([dense2,dense3])
model = Model(inputs=data_input,outputs=lambda_layer)
model.compile(loss='mse',optimizer='adam',metrics=['accuracy'])
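As a quick sanity check, the min-max formula used inside the Lambda layer can be verified on a plain tensor, outside any model (a minimal sketch, assuming TensorFlow 2.x eager mode; the toy values are made up):

```python
import tensorflow as tf

# Toy batch of shape (4, 1), mimicking the summed dense2 + dense3 output
x = tf.constant([[2.0], [4.0], [6.0], [10.0]])

# Min-max normalization over the batch axis: (x - min) / (max - min)
x_min = tf.reduce_min(x, axis=0, keepdims=True)
x_max = tf.reduce_max(x, axis=0, keepdims=True)
normalized = (x - x_min) / (x_max - x_min)

print(normalized.numpy().ravel())  # → [0.   0.25 0.5  1.  ]
```

One caveat: reducing over axis 0 normalizes across the batch, so the layer's output for a given sample depends on which other samples share its batch, and a batch of size 1 at inference time would divide by zero. If that matters for your use case, a fixed scaling (e.g. a sigmoid activation) avoids the batch dependence.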