Problem description
I am using Keras Tuner with RandomSearch, on TensorFlow, but I don't think the random search itself is the problem:
For some reason, extra layers are being added automatically.
latenteVariable = 24  ##### IMPORTANT: module-level (global) variable

class MyTuner(kerastuner.tuners.RandomSearch):
    def run_trial(self, trial, *args, **kwargs):
        ...

def sampling(args):
    z_mean, z_log_var = args
    epsilon = K.random_normal(shape=(K.shape(z_mean)[0], latenteVariable),
                              mean=0., stddev=epsilon_std)
    return z_mean + K.exp(z_log_var / 2) * epsilon

def build_model(hp):
    ...
    h = Dense(units=hp.Int('units4', min_value=48, max_value=64, step=8),
              activation=activation)(h)
    h = BatchNormalization(name="encoder_norm_4")(h)
    schicht4 = hp.get('units4')
    z_mean = Dense(latenteVariable)(h)
    z_log_var = Dense(latenteVariable)(h)
    z = Lambda(sampling, output_shape=(latenteVariable,))([z_mean, z_log_var])  ###### variable is used here
    b = Dense(units=schicht4, activation=activation)(z)
    b = BatchNormalization(name="decoder_norm_1")(b)
Output:
__________________________________________________________________________________________________
encoder_norm_4 (Batchnormalizat (None,48) 192 dense_3[0][0]
__________________________________________________________________________________________________
dense_4 (Dense) (None,24) 1176 encoder_norm_4[0][0]
__________________________________________________________________________________________________
dense_5 (Dense) (None,24) 1176 encoder_norm_4[0][0]
__________________________________________________________________________________________________
lambda (Lambda) (None,24) 0 dense_4[0][0]
dense_5[0][0]
__________________________________________________________________________________________________
dense_6 (Dense) (None,48) 1200 lambda[0][0]
__________________________________________________________________________________________________
Below, latenteVariable is a local variable instead:
def sampling(args):
    z_mean, z_log_var, latenteVariable = args
    epsilon = K.random_normal(shape=(K.shape(z_mean)[0], latenteVariable),
                              mean=0., stddev=epsilon_std)
    return z_mean + K.exp(z_log_var / 2) * epsilon

def build_model(hp):
    ...
    h = Dense(units=hp.Int('units4', min_value=48, max_value=64, step=8),
              activation=activation)(h)
    h = BatchNormalization(name="encoder_norm_4")(h)
    schicht4 = hp.get('units4')
    latenteVariable = 24  ########## local variable
    z_mean = Dense(latenteVariable)(h)
    z_log_var = Dense(latenteVariable)(h)
    z = sampling([z_mean, z_log_var, latenteVariable])  # sampling is called directly here; no Lambda layer appears in the summary below
    b = Dense(units=schicht4, activation=activation)(z)
    b = BatchNormalization(name="decoder_norm_1")(b)
I get the result:
encoder_norm_4 (Batchnormalizat (None,64) 256 dense_3[0][0]
__________________________________________________________________________________________________
dense_4 (Dense) (None,24) 1560 encoder_norm_4[0][0]
__________________________________________________________________________________________________
tf_op_layer_Shape (TensorFlowOp [(2,)] 0 dense_4[0][0]
__________________________________________________________________________________________________
tf_op_layer_strided_slice (Tens [()] 0 tf_op_layer_Shape[0][0]
__________________________________________________________________________________________________
tf_op_layer_shape_1 (TensorFlow [(2,)] 0 tf_op_layer_strided_slice[0][0]
__________________________________________________________________________________________________
dense_5 (Dense) (None,24) 1560 encoder_norm_4[0][0]
__________________________________________________________________________________________________
tf_op_layer_RandomStandardnorma [(None,24)] 0 tf_op_layer_shape_1[0][0]
__________________________________________________________________________________________________
tf_op_layer_RealDiv (TensorFlow [(None,24)] 0 dense_5[0][0]
__________________________________________________________________________________________________
tf_op_layer_Mul (TensorFlowOpLa [(None,24)] 0 tf_op_layer_RandomStandardnormal[
__________________________________________________________________________________________________
tf_op_layer_Exp (TensorFlowOpLa [(None,24)] 0 tf_op_layer_RealDiv[0][0]
__________________________________________________________________________________________________
tf_op_layer_Add (TensorFlowOpLa [(None,24)] 0 tf_op_layer_Mul[0][0]
__________________________________________________________________________________________________
tf_op_layer_Mul_1 (TensorFlowOp [(None,24)] 0 tf_op_layer_Exp[0][0]
tf_op_layer_Add[0][0]
__________________________________________________________________________________________________
tf_op_layer_AddV2 (TensorFlowOp [(None,24)] 0 dense_4[0][0]
tf_op_layer_Mul_1[0][0]
__________________________________________________________________________________________________
dense_6 (Dense) (None,64) 1600 tf_op_layer_AddV2[0][0]
So in the first example I get three layers of size 24 (dense_4, dense_5, lambda). In the second example I get eight. How can I use a local variable without automatically getting eight layers instead of three?
Thanks
Workaround
No verified solution has been found for this problem yet.
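One likely direction (a sketch under assumptions, not a confirmed fix): Keras's `Lambda` layer accepts an `arguments` dict of extra keyword arguments that are forwarded to the wrapped function, so a local Python variable can reach the sampling function without being placed in the tensor input list or calling the function directly. When the sampling math runs outside a `Lambda` (as in the second example above, where `sampling` is called as a plain function on symbolic tensors), TF 2.x traces each backend op as its own `tf_op_layer_*` layer, which is where the extra layers come from. The names below (`latent_dim`, the toy input size) are illustrative, not from the original code:

```python
import tensorflow as tf
from tensorflow.keras import Model
from tensorflow.keras.layers import Dense, Input, Lambda


def sampling(args, latent_dim):
    # latent_dim arrives via Lambda(arguments=...), as a plain Python int,
    # not as part of the tensor inputs.
    z_mean, z_log_var = args
    epsilon = tf.random.normal(shape=(tf.shape(z_mean)[0], latent_dim))
    return z_mean + tf.exp(z_log_var / 2) * epsilon


latent_dim = 24  # local variable, analogous to latenteVariable

inp = Input(shape=(48,))
z_mean = Dense(latent_dim)(inp)
z_log_var = Dense(latent_dim)(inp)

# The whole reparameterization stays a single Lambda layer; the local
# variable is injected through the `arguments` dict.
z = Lambda(sampling,
           output_shape=(latent_dim,),
           arguments={'latent_dim': latent_dim})([z_mean, z_log_var])

model = Model(inp, z)
model.summary()
```

With this pattern the summary should again show only three layers of size 24 (the two `Dense` projections and one `Lambda`), with no `tf_op_layer_*` entries.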