Keras LSTM network raises InvalidArgumentError during a training epoch

Problem description

I am quite new to Keras and LSTMs and am trying to build a very simple LSTM network to predict AQI, but I cannot get the training to run: an error is reported during the very first epoch.

Epoch 1/150
37/41 [==========================>...] - ETA: 0s - loss: nan
...(trace back information)
InvalidArgumentError:  [_Derived_]  Invalid input_h shape: [1,20,24] [1,10,24]
     [[{{node CudnnRNN}}]]
     [[n_network/lstm_18/StatefulPartitionedCall]] [Op:__inference_train_function_36684]

Function call stack:
train_function -> train_function -> train_function

The code is below.

The network-building function:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense
from tensorflow.keras import optimizers

def nn(input_size, learning_rate, batch_size, time_step):
    model = Sequential(name='n_network')

    # batch_input_shape pins the batch dimension to batch_size
    model.add(LSTM(24, activation='tanh',
                   batch_input_shape=(batch_size, time_step, input_size),
                   return_sequences=True))
    model.add(LSTM(24, activation='tanh'))
    model.add(Dense(1, activation='linear'))
    RMSprop = optimizers.RMSprop(lr=learning_rate)
    model.compile(loss='mse', optimizer=RMSprop)

    return model
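As a quick sanity check of the shape contract this builder creates (a minimal sketch, assuming TensorFlow 2.x; the argument values simply mirror the parameter settings listed further down):

import numpy as np

model = nn(input_size=30, learning_rate=0.001, batch_size=20, time_step=5)
print(model.input_shape)                  # (20, 5, 30) - the batch dimension is fixed to 20
dummy = np.random.rand(20, 5, 30).astype('float32')
print(model(dummy).shape)                 # (20, 1) for a batch of exactly 20 sequences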

The data-loading function is as follows:

import pandas as pd

def get_data(filename):
    data = pd.read_csv(filename, encoding='utf-8', header=None, skiprows=1)
    # Exclude the natural order (row index) column
    data = data.iloc[:1300, 1:]
    dt = data.values
    return dt
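For reference, loading the file this way should yield a plain NumPy array with the dimensions below (assuming the CSV matches the 32-column preview shown later, with the first column being the running index that gets dropped):

dt = get_data('demo_AQI.csv')
print(dt.shape)   # expected (1300, 31): 30 feature columns plus the AQI target in the last column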

Next is the get_batches function, which scales the data and returns the training x and y:

import math
from sklearn.preprocessing import MinMaxScaler

def get_batches(dt, time_step):
    scaler_for_x = MinMaxScaler(feature_range=(0, 1))
    scaler_for_y = MinMaxScaler(feature_range=(0, 1))
    scaled_x_data = scaler_for_x.fit_transform(dt[:, :-1])
    scaled_y_data = scaler_for_y.fit_transform(dt[:, -1].reshape(-1, 1))
    scaled_y_data = scaled_y_data.flatten()
    label = scaled_y_data
    normalized_data = scaled_x_data
    input_size = normalized_data.shape[1]
    x_, y_ = [], []
    for i in range(len(normalized_data) - time_step):
        x = normalized_data[i:i + time_step, :input_size]
        y = [label[i + time_step - 1]]
        x_.append(x.tolist())
        y_.append(y)
    # Trim the sample count down to a round number of hundreds
    x_ = x_[:-math.ceil(time_step / 100) * 100 + time_step]
    y_ = y_[:-math.ceil(time_step / 100) * 100 + time_step]
    return x_, y_, scaler_for_y
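To see the array shapes this produces, it can be exercised on synthetic data of the same size as the training slice (a sketch assuming 1000 rows of 30 features plus 1 target, matching train_end and input_size below):

import numpy as np

fake = np.random.rand(1000, 31)        # stand-in for AQI_data[:train_end, :]
x_, y_, scaler_y = get_batches(fake, time_step=5)
print(np.array(x_).shape)              # (900, 5, 30) after the trimming slice
print(np.array(y_).shape)              # (900, 1)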

Finally, the training function:

import numpy as np

def train(name, input_size, epochs):
    model = nn(input_size, learning_rate, batch_size, time_step)
    AQI_data = get_data('demo_AQI.csv')
    train_x, train_y, scaler_for_y1 = get_batches(AQI_data[:train_end, :], time_step)
    history = model.fit(x=np.array(train_x), y=np.array(train_y), batch_size=batch_size,
                        epochs=epochs, validation_split=0.1, verbose=1, shuffle=False)
    output = model.predict(np.array(train_x), batch_size=batch_size)
    test_x, test_y, scaler_for_y2 = get_batches(AQI_data[train_end:, :], time_step)
    predicted = model.predict(np.array(test_x), batch_size=batch_size)
    model.save('model/' + 'demo_LSTM.h5')
    Preds = scaler_for_y1.inverse_transform(predicted.reshape(-1, 1))
    test_y = scaler_for_y2.inverse_transform(test_y)
    return Preds, history, output, train_y

The parameter settings are as follows:

name = 'chengdu'
batch_size = 20     # Sequences per batch
time_step = 5
lstm_size = 128    # Size of hidden layers in LSTMs
learning_rate = 0.001   # Learning rate
Epochs = [150]
train_end = 1000
input_size = 30
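How train() is actually invoked is not shown; presumably it is started roughly like this, relying on the module-level settings and the Epochs list above (an assumed call, not code from the original post):

# Assumed invocation - loops over the Epochs list and trains once per entry
for epochs in Epochs:
    Preds, history, output, train_y = train(name, input_size, epochs)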

A preview of the dataset (5 rows × 32 columns):

    0   1   2   3   4   5   6   7   8   9   ... 22  23  24  25  26  27  28  29  30  31
1334    1334    30.4753 114.1525    25  45.5833 56.2000 13.7083 8.41667 28.5833 1.162500    ... 0   0   1546214400000000000 4.283844    63.229125   53.1000 2.805557    14.29165    11.62500    1036.61
1335    1335    30.4753 114.1525    25  23.2500 39.5833 59.0417 4.91667 31.2500 1.183330    ... 0   0   1562889600000000000 18.450531   33.214286   39.5833 1.638890    15.62500    11.83330    1001.08
1336    1336    30.4753 114.1525    25  59.4348 155.7390    79.8261 9.65217 43.8696 0.839130    ... 0   0   1557705600000000000 24.945656   80.543500   102.8695    3.217390    21.93480    8.39130 1007.83
1337    1337    30.4753 114.1525    25  24.4348 41.4500 52.6957 6.82609 34.0435 0.800000    ... 0   0   1538006400000000000 16.467406   34.906857   41.4500 2.275363    17.02175    8.00000 1012.96
1338    1338    30.4753 114.1525    25  75.6522 117.5830    36.8750 17.58330    76.5652 0.754167    ... 0   0   1575590400000000000 11.523438   100.815250  83.7915 5.861100    38.28260    7.54167 1031.33

When I try to train the network, the error shown at the top of this post is raised partway through the first epoch.

The output of model.summary() is:

Model: "n_network"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
lstm_12 (LSTM)               (20,5,24)               5280      
_________________________________________________________________
lstm_13 (LSTM)               (20,24)                  4704      
_________________________________________________________________
dense_6 (Dense)              (20,1)                   25        
=================================================================
Total params: 10,009
Trainable params: 10,009
Non-trainable params: 0
_________________________________________________________________

The input shape of each layer is:

(20,30)
(20,24)
(20,24)

I guess the problem lies somewhere between two of the layers, but I cannot find a way to pin down exactly where. How can an invalid input shape error still occur when the epoch is already almost 80% done? I am confused, and I could not find anyone on Stack Overflow with the same problem. I would be very grateful for any help.
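Redoing the batch arithmetic implied by the settings above (just a back-of-the-envelope check, using the same numbers as the parameter block):

import math

samples = 1000 - 5                                    # rows in the training slice minus time_step
samples -= math.ceil(5 / 100) * 100 - 5               # trimming in get_batches -> 900 samples
train_samples = int(samples * (1 - 0.1))              # validation_split=0.1 -> 810 used for fitting
full_batches, last_batch = divmod(train_samples, 20)  # batch_size = 20
print(full_batches, last_batch)                       # 40 full batches plus a final batch of 10

That works out to 41 steps per epoch with a final batch of only 10 sequences, which at least matches the 37/41 progress bar and the [1,10,24] shape in the trace.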

Solution

No effective way to solve this problem has been found yet; the editor is still looking for one.
