Finding the right input and output shapes for a Conv1D Keras NN

Problem description

I built an LSTM model to analyze a time series, using an input matrix X of shape (1750, 20, 28): 1750 samples, a sequence length of 20, and 28 features. In practice I took the original X matrix with 28 features and built a 3D matrix from it with a sliding window of length 20. The y matrix has shape (1750,). I got this working with LSTM (input_shape=(X_train.shape[1], X_train.shape[2])).
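For reference, the sliding-window construction looks roughly like this (a simplified stand-in for my LSTM_create_dataset helper, shown only to illustrate the shapes):

import numpy as np

def sliding_window(dataX, dataY, seq_length=20):
    # Turn (n_rows, 28) data into windows of shape (n_samples, seq_length, 28)
    Xs, ys = [], []
    for i in range(len(dataX) - seq_length):
        Xs.append(dataX[i:i + seq_length])   # one window of 20 time steps
        ys.append(dataY[i + seq_length])     # target just after the window
    return np.array(Xs), np.array(ys)

# e.g. 1770 rows x 28 features -> X of shape (1750, 20, 28), y of shape (1750,)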

It works perfectly with a first layer model.add(layer_LSTM1) or with stacked LSTMs, but the results are poor (very unstable if I run the same NN twice). So I tried applying a Conv1D NN to the same dataset with the same input shape, and I get the error message below. Here are the model definition and the message:

# available layers
layer_drop = keras.layers.Dropout(rate = dropout)
layer_dense1 = Dense(units= layer_1,activation = 'relu')
layer_LSTM1 = keras.layers.LSTM(units=layer_1,activation = 'relu',return_sequences = False,input_shape=(X_train.shape[1],X_train.shape[2]))
layer_LSTMstack1 = keras.layers.LSTM(units=layer_2,return_sequences = True,input_shape=(X_train.shape[1],X_train.shape[2]))
layer_LSTMstack2 = keras.layers.LSTM(units=layer_2,return_sequences = True)
layer_LSTMstackend = keras.layers.LSTM(units=layer_2,activation = 'relu')
layer_conv1D1 = keras.layers.Conv1D(filters = 28,kernel_size= 3,input_shape=(X_train.shape[1],X_train.shape[2]))
layer_output = Dense(units = 1)

# Model architecture 
model.add(layer_conv1D1)
model.add(layer_dense1)
model.add(layer_output)

Here is the feedback I get (I have added model.summary() to it):

 runfile('C:/GD/AI/Conv1D_1stock.py',wdir='C:/GD/AI')
Reloaded modules: util_prepa,util_model,util_DENSE
Time preparing data =  Time: 3.784785270690918
Traceback (most recent call last):

  File "C:\GD\AI\Conv1D_1stock.py",line 133,in <module>
    model,history = compile_train_model(model,loss,optimizer,X_train,y_train,epochs,batch_size,validation_split,verbose)

  File "C:\GD\AI\util_LSTM.py",line 89,in compile_train_model
    history = model.fit(X_train,y_train,epochs = epochs,batch_size = batch_size,validation_split = validation_split,verbose = verbose)

  File "C:\Users\Nav\anaconda3\lib\site-packages\tensorflow\python\keras\engine\training.py",line 709,in fit
    shuffle=shuffle)

  File "C:\Users\Nav\anaconda3\lib\site-packages\tensorflow\python\keras\engine\training.py",line 2692,in _standardize_user_data
    y,self._feed_loss_fns,feed_output_shapes)

  File "C:\Users\Nav\anaconda3\lib\site-packages\tensorflow\python\keras\engine\training_utils.py",line 549,in check_loss_and_target_compatibility
    ' while using as loss `' + loss_name + '`. '

ValueError: A target array with shape (1750,1) was passed for an output of shape (None,18,1) while using as loss `mean_squared_error`. This loss expects targets to have the same shape as the output.


print(model.summary())
Model: "sequential_10"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv1d_7 (Conv1D)            (None, 18, 28)            2380      
_________________________________________________________________
dense_26 (Dense)             (None, 18, 128)           3712      
_________________________________________________________________
dense_28 (Dense)             (None, 18, 1)             129       
=================================================================
Total params: 6,221
Trainable params: 6,221
Non-trainable params: 0
_________________________________________________________________

What am I doing wrong? Can someone point me in the right direction? Thanks in advance.

NB. As requested, here are the parameters I use (the code is written from scratch - it is long, sorry):

# ====  PART 0. Installing libraries ============
import numpy as np
import pandas as pd
import sqlite3 as sq
import time
from itertools import chain
import tensorflow as tf
from tensorflow import keras
import seaborn as sns
from pylab import rcParams
import matplotlib.pyplot as plt
from matplotlib import rc
from tensorflow.keras.layers import Bidirectional,Dropout,Activation,Dense,LSTM,Flatten,ConvLSTM2D
from tensorflow.python.keras.layers import CuDNNLSTM
from tensorflow.keras.models import Sequential
from sklearn.metrics import confusion_matrix
from util_prepa import *
from util_model import *
from util_LSTM import *

start_time = time.time()
rcParams['figure.figsize'] = 14,8

### ====   PART 0.A Defining hyperparameters & parameters  =  INPUT REQUIRED ============
## SQL parameters
dbInput = 'Inputlist.db'           ### Database with input data
dbList = "TRlistInput"              ### table with list of datasets
ric = "ATOS"                        ### RIC code of the underlying item
dbOutput = 'saveLSTMoutput.db'       ### Database for saving output
saveX = "savX"                     ### Table for saving X output in dbOutput
saveY = "savY"                     ### Table for saving Y output in dbOutput

## Dataset parameters
horiz = 10                          ### time horizon of the prediction 
seq_length = 20                     ### number of days for enriching the LSTM
step = 1                            ### time lag within LSTM memory batch

tested_model = 'Conv1D'           ### 'LSTM' / 'STACKED' / 'ConvLSTM' / 'BAYES' / 'Conv1D' / 'Conv2D' / 'DEEP'

## Parameters LSTM & CNN
drop_rows = 50                      ### Number of unrelevant rows given technical indicators computation
lstmStart = 0                    ### initial value of X and Y matrices out of the total dataset
lstmSize = 2000                     ### length of the X & Y matrices starting from lstmStart index
proportionTrain = 0.875 

X_plot = 0                          ### 1 for plot close price  /  0 for no plot

### ====   PART 1.A Connecting to SQL DB and loading lists ============
dataX,dataY = get_model_data(dbInput,dbList,ric,horiz,drop_rows)
dataX = get_model_cleanXset(dataX,trigger)                             # Clean X matrix for insufficient data
Xs,ys = LSTM_create_dataset(dataX,dataY,seq_length,step)

(X_train,y_train),(X_test,y_test),(res_train,res_test) = LSTM_train_test_size(Xs,ys,lstmStart,lstmSize,proportionTrain)
(X_train,X_test),(train_mean,train_std) = get_model_scaleX(X_train,X_test)

### ====   PART 2.B Input & define Model  =  INPUT REQUIRED ============
## Model & Hyper-parameters
validation_split = 0.1
model = keras.Sequential()
dropout = 0.1
optimizer = 'adam'               ### Optimizer of the compiled model
learning = 0.001
loss = 'mean_squared_error'
verbose = 0                      ### 0 = hidden computation  //  1 = computation printed
batch_size = 32
epochs = 15
layer_1 = 128
layer_2 = 256

# available layers
layer_drop = keras.layers.Dropout(rate = dropout)
layer_dense1 = Dense(units= layer_1,activation = 'relu')
layer_dense2 = Dense(units= layer_2,input_shape=(X_train.shape[1],X_train.shape[2]))
layer_conv1D1 = keras.layers.Conv1D(filters = 28,kernel_size= 3,input_shape=(X_train.shape[1],X_train.shape[2]))
layer_output = Dense(units = 1)

# Model architecture 
model.add(layer_conv1D1)
model.add(layer_dense1)
model.add(layer_output)
model_arch = 'LSTM1-128+D1-128+Out'

### ====   PART 4.B Compile and Train model + predict   ============
model,history = compile_train_model(model,loss,optimizer,X_train,y_train,epochs,batch_size,validation_split,verbose)
eval_train,eval_test,y_pred = model_predict(model,history,X_test,y_test,res_test)

plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train','test'],loc='upper left')
plt.show()

Solution

First, let's talk about the input shape. My interpretation of "an input matrix X of shape (1750, 20, 28)" is that your batch size is 1750, each sample is a 1D series of 20 time steps, and each time step carries 28 features.

When you add a convolutional layer, the batch size stays the same, the number of time steps usually changes slightly (it depends on how the filters fit across the time steps: with kernel_size=3 and the default 'valid' padding, 20 steps become 18), and the number of output features equals the number of filters you used.
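To make this concrete, here is a quick shape check on a toy model with your dimensions (a minimal sketch, not your actual code):

import tensorflow as tf
from tensorflow import keras

# Toy model with the same dimensions as your data, just to inspect the output shape
toy = keras.Sequential([
    keras.layers.Conv1D(filters=28, kernel_size=3, input_shape=(20, 28))
])
print(toy.output_shape)   # (None, 18, 28): 20 - 3 + 1 = 18 time steps, 28 filters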

So when you add a Dense layer after the convolution, you are adding a 2D dense layer: the same weights are applied to the feature vector at every time step, which is why your output has shape (None, 18, 1) rather than (None, 1). To avoid this, you need to add keras.layers.Flatten() somewhere in your code. Flatten takes the 2D convolution output and turns it into 1D. To get what I believe you want, I would modify the code like this:

model.add(layer_conv1D1)
model.add(layer_dense1)
model.add(keras.layers.Flatten())
model.add(layer_output)
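With Flatten in place the output matches your (1750, 1) targets. A sketch of the corrected stack with the shape after each layer (again a toy rebuild under the same assumptions):

fixed = keras.Sequential([
    keras.layers.Conv1D(filters=28, kernel_size=3, input_shape=(20, 28)),  # (None, 18, 28)
    keras.layers.Dense(128, activation='relu'),                            # (None, 18, 128)
    keras.layers.Flatten(),                                                # (None, 2304)
    keras.layers.Dense(1),                                                 # (None, 1)
])
fixed.compile(optimizer='adam', loss='mean_squared_error')
print(fixed.output_shape)   # (None, 1), matching targets of shape (1750, 1)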
