How do I avoid NaN in the loss?

Problem description

I am getting NaN from the loss function during training. I have already checked that the dataset contains no NA values. Please find the model code below:

# imports assumed from standalone Keras; adjust to tensorflow.keras if needed
import keras
from keras.layers import (Input, Embedding, LSTM, Flatten, Dense, Dropout,
                          LeakyReLU, BatchNormalization, concatenate)
from keras.models import Model
from keras.regularizers import l2

keras.backend.clear_session()
input1 = Input(shape=(pad_essay_idf.shape[1],),dtype='int32',name="essay_input")
embedding_layer_text = Embedding(vocab_size_text,300,weights=[embedding_matrix_idf],input_length=max_length_text_idf,trainable=False)(input1)
lstm1 = LSTM(300,activation='relu',return_sequences=True,kernel_initializer=keras.initializers.he_normal(seed=0))(embedding_layer_text)
flatten1 = Flatten()(lstm1)

input2 = Input(shape=(1,), name="school_state")
embedding_layer_school_state = Embedding(unique_school_state-1,20,input_length=1)(input2)
flatten2 = Flatten()(embedding_layer_school_state)

input3 = Input(shape=(1,), name="project_grade_category")
embedding_layer_grade = Embedding(unique_project_grade_category-1, 5, input_length=1)(input3)  # output dim was missing in the original; 5 is a placeholder
flatten3 = Flatten()(embedding_layer_grade)

input4 = Input(shape=(1,), name='categories')
embedding_layer_categories = Embedding(unique_project_subject_categories-1,10,input_length=1)(input4)
flatten4 = Flatten()(embedding_layer_categories)

input5 = Input(shape=(1,), name='sub_categories')
embedding_layer_sub_categories = Embedding(unique_project_subject_subcategories-1,30,input_length=1)(input5)
flatten5 = Flatten()(embedding_layer_sub_categories)

input6 = Input(shape=(1,), name='teacher_prefix')
embedding_layer_teacher_prefix = Embedding(unique_teacher_prefix, 5, input_length=1)(input6)  # output dim was missing in the original; 5 is a placeholder
flatten6 = Flatten()(embedding_layer_teacher_prefix)


input7 = Input(shape=(4,), name='numerical_input')
dense_layer1 = Dense(64,kernel_initializer=keras.initializers.he_normal(seed=0))(input7)

concate_layer = concatenate(inputs=[flatten1,flatten2,flatten3,flatten4,flatten5,flatten6,dense_layer1],name="concat")

x = BatchNormalization()(concate_layer)
x= Dense(256,kernel_initializer='glorot_normal',kernel_regularizer=l2(0.01))(x)
x= LeakyReLU(alpha = 0.3)(x)
x= Dense(128,kernel_regularizer=l2(0.01))(x)
x= LeakyReLU(alpha = 0.3)(x)
x= Dropout(0.5)(x)
x= Dense(64,kernel_regularizer=l2(0.01))(x)
x= LeakyReLU(alpha = 0.3)(x)
x= Dense(32,kernel_regularizer=l2(0.01))(x)
x= LeakyReLU(alpha = 0.3)(x)
x= Dropout(0.5)(x)
x = BatchNormalization()(x)
x= Dense(16,kernel_regularizer=l2(0.01))(x)
x= LeakyReLU(alpha = 0.3)(x)
output=Dense(2,activation='softmax')(x)
second_model = Model(inputs=[input1,input2,input3,input4,input5,input6,input7],outputs=output)

Below is my optimizer:

optimizer=keras.optimizers.Adam(lr=0.0006,decay = 1e-4)

I have tried different dropout rates and regularizers, but I still get the same NaN. How can I fix this?

Solution

No confirmed fix has been posted for this problem yet.
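One frequent cause of NaN (or silently wrong) behavior in models like this is feeding an Embedding layer indices outside its table: `Embedding(input_dim, ...)` requires `input_dim` to be strictly greater than the largest index, so the `unique_school_state - 1`-style sizes in the question are suspect whenever the labels are encoded as 0 to `unique_school_state - 1`. A minimal check, using hypothetical stand-in data (`school_state_idx` is not from the question):

```python
import numpy as np

def check_embedding_inputs(indices, input_dim, name):
    """Raise if any index would fall outside Embedding(input_dim, ...)."""
    indices = np.asarray(indices)
    if indices.min() < 0 or indices.max() >= input_dim:
        raise ValueError(
            f"{name}: indices span [{indices.min()}, {indices.max()}] "
            f"but Embedding input_dim={input_dim} only covers 0..{input_dim - 1}"
        )

# hypothetical stand-in data: 4 distinct states label-encoded as 0..3
school_state_idx = np.array([0, 1, 2, 3])
unique_school_state = 4

# sized as in the question, Embedding(unique_school_state - 1, ...): index 3 is out of range
try:
    check_embedding_inputs(school_state_idx, unique_school_state - 1, "school_state")
except ValueError as e:
    print("out of range:", e)

# sized to cover every index: passes silently
check_embedding_inputs(school_state_idx, unique_school_state, "school_state")
```

Running the same check on every categorical input (and `np.isfinite(...).all()` on the numerical block) rules this class of error in or out before touching the architecture.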

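Another common NaN mechanism here is the `activation='relu'` inside the LSTM: unlike the default tanh, relu leaves the recurrent state unbounded, so activations can grow over timesteps until they overflow float32, and once a logit reaches inf the softmax/cross-entropy produces NaN. Keras computes its softmax stably, but no stabilization helps once the inputs themselves are inf. A small numpy sketch of the overflow mechanism (illustration only, not code from the question):

```python
import numpy as np

logits = np.array([1000.0, 10.0, -5.0])  # one huge logit, e.g. from an unbounded relu state

# naive softmax: exp(1000) overflows to inf, and inf/inf is nan
with np.errstate(over="ignore", invalid="ignore"):
    naive = np.exp(logits) / np.exp(logits).sum()
print(naive)   # contains nan

# numerically stable softmax: shift by the max logit before exponentiating
shifted = logits - logits.max()
stable = np.exp(shifted) / np.exp(shifted).sum()
print(stable)  # finite probabilities summing to 1
```

Common mitigations, in roughly increasing order of invasiveness: pass `clipnorm=1.0` (or `clipvalue`) to `keras.optimizers.Adam`, lower the learning rate, or drop `activation='relu'` from the LSTM and keep its default tanh. Which one applies depends on where the NaN first appears (activations, gradients, or weights).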