Why does a single-time-step LSTM perform better than an MLP?

Problem description

Out of curiosity, I compared a stacked LSTM network using a single time step against an MLP with tanh activations, expecting the two to perform identically. The architectures used for the comparison are listed below; both were trained on the same regression dataset, with MSE as the loss function:

from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(50, input_dim=num_features, activation='tanh'))
model.add(Dense(100, activation='tanh'))
model.add(Dense(150, activation='tanh'))
model.add(Dense(50, activation='tanh'))
model.add(Dense(1))


from keras.models import Sequential
from keras.layers import LSTM, Dense

model = Sequential()
# return_sequences=True feeds the full sequence to the next LSTM layer
model.add(LSTM(50, return_sequences=True, input_shape=(None, num_features)))
model.add(LSTM(100, return_sequences=True))
model.add(LSTM(150, return_sequences=True))
model.add(LSTM(100, return_sequences=True))
model.add(LSTM(50))
model.add(Dense(1))
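
For context, the only preprocessing difference between the two models is an extra length-1 time axis on the LSTM input. A minimal sketch of that reshape (the array name X, the sample count 1000, and num_features = 10 are hypothetical values, not from the original post):

import numpy as np

num_features = 10  # placeholder; the original post does not state its value

# 2-D design matrix as the MLP consumes it: (samples, num_features)
X = np.random.rand(1000, num_features)

# An LSTM expects 3-D input: (samples, timesteps, features).
# A single time step just inserts a length-1 axis in the middle.
X_lstm = X[:, np.newaxis, :]   # shape: (1000, 1, num_features)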

Surprisingly, the LSTM model's loss dropped much faster than the MLP's:

MLP loss:

Epoch   Training Loss   Validation Loss
1       0.011504        0.010708
2       0.010739        0.010623
3       0.010598        0.010189
4       0.010046        0.009651
5       0.009305        0.008502
6       0.007388        0.004334
7       0.002576        0.001686
8       0.001375        0.001217
9       0.000921        0.000916
10      0.000696        0.000568
11      0.000560        0.000479
12      0.000493        0.000451
13      0.000439        0.000564
14      0.000402        0.000478
15      0.000377        0.000366
16      0.000351        0.000240
17      0.000340        0.000352
18      0.000327        0.000203
19      0.000311        0.000323
20      0.000299        0.000264

LSTM loss:

Epoch   Training Loss   Validation Loss
1       0.011345        0.010634
2       0.008128        0.003692
3       0.001488        0.000668
4       0.000440        0.000232
5       0.000260        0.000160
6       0.000200        0.000137
7       0.000165        0.000093
8       0.000140        0.000104
9       0.000127        0.000139
10      0.000116        0.000091
11      0.000106        0.000095
12      0.000099        0.000082
13      0.000091        0.000135
14      0.000085        0.000099
15      0.000082        0.000055
16      0.000079        0.000062
17      0.000075        0.000045
18      0.000073        0.000121
19      0.000069        0.000045
20      0.000065        0.000052
After 100 epochs, the MLP's validation loss settles around 1e-4, while the LSTM's drops to around 1e-5. This difference doesn't make much sense to me, because with a single time step the LSTM cells never use any memory from earlier steps, so I expected the two architectures to be effectively equivalent. On top of that, training the MLP is about 3x faster than training the LSTM. Can someone explain the mathematics behind this?
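
Not a full answer, but one concrete difference is easy to check. Even with h_0 = c_0 = 0, a single LSTM step computes h_1 = o_1 ⊙ tanh(i_1 ⊙ tanh(W_c x_1 + b_c)), where o_1 and i_1 are sigmoid gates of x_1 -- a gated product of two nonlinearities rather than a single tanh. An LSTM layer also carries one weight set per gate, i.e. 4·(n·(n+m)+n) parameters for n units and m inputs, roughly 4x a Dense layer of the same width. A quick sketch comparing layer parameter counts in Keras (num_features = 10 is a placeholder value, not from the original post):

from keras.models import Sequential
from keras.layers import Dense, LSTM

num_features = 10  # placeholder value

dense_model = Sequential([Dense(50, activation='tanh', input_dim=num_features)])
lstm_model = Sequential([LSTM(50, input_shape=(1, num_features))])

# Dense(50): 50*(num_features + 1) parameters
# LSTM(50):  4*(50*(50 + num_features) + 50) parameters -- one set per gate
print(dense_model.count_params())  # 550
print(lstm_model.count_params())   # 12200

More weights per layer also means more computation per update, which is consistent with the ~3x slower LSTM training; whether the gating itself explains the better convergence is exactly the open question here.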

Solution

No effective solution to this problem has been found yet.
