Dimension out of range (expected to be in range of [-1, 0], but got 1)

Problem description

I have been trying to train a simple neural network (784, 512, 128, 10) on the MNIST dataset using cross-entropy loss. I am using Keras to load the MNIST dataset, but I am running into the error

RuntimeError: 1D target tensor expected, multi-target not supported

with my main training loop being:

for epoch in range(num_epochs):
  for x,y in train_data:
    x=Variable(x)
    y=Variable(y)
    print(x.shape)
    y_pred=model(x)
    optimizer.zero_grad()
    loss=criterion(y_pred,y)
    loss.backward()
    optimizer.step()
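
From what I understand of the docs, nn.CrossEntropyLoss wants raw logits of shape (N, C) and a 1-D target of shape (N,) holding integer class indices, not one-hot vectors. A minimal sketch of shapes that it accepts (the tensors here are made up purely for illustration):

import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()
logits = torch.randn(4, 10)            # (N, C): raw scores for 4 samples, 10 classes
targets = torch.tensor([3, 7, 0, 9])   # (N,): integer class indices, not one-hot rows
print(criterion(logits, targets))      # scalar loss, no error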

So, to get rid of that error, I changed it to:

y=y[0][0:]
y_pred=y_pred[0][0:]
loss=criterion(y_pred,y)

But after that I got this error:

IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)
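
As far as I can tell, the indexing y[0][0:] / y_pred[0][0:] just takes the first (and only) sample out of the batch, so both tensors end up 1-D with shape (10,). On the PyTorch version I am using, CrossEntropyLoss then tries to apply log-softmax along dimension 1, which a 1-D tensor does not have, which would explain the [-1, 0] range in the message. A quick shape check (illustrative values only):

import torch

y_pred = torch.randn(1, 10)[0][0:]   # shape becomes torch.Size([10]); dim 1 no longer exists
y = torch.rand(1, 10)[0][0:]         # shape becomes torch.Size([10])
print(y_pred.shape, y.shape)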

I have read many articles on how to fix this error, but none of them helped.

Is this error occurring because of the Keras dataset, or is there something wrong with my code? Can someone help me find the mistake? My code:

import torch
import torch.nn as nn
import numpy as np
from sklearn.preprocessing import StandardScaler
import matplotlib.pyplot as plt
from torch.utils.data import DataLoader,Dataset
import keras
import torch.nn.functional as F
from torch.autograd import Variable

class Netz(nn.Module):
  def __init__(self,n_input_features):
    super(Netz,self).__init__()
    self.linear=nn.Linear(784,512,bias=True)
    self.l1=nn.Linear(512,128,bias=True)
    self.l2=nn.Linear(128,10,bias=True)
    self.relu=nn.ReLU()
    self.relu2=nn.ReLU()
    self.softmax=nn.Softmax(dim=-1)
  def forward(self,x):
    # x=x.view(-1,784)
    x=self.relu(self.linear(x))
    x=self.relu2(self.l1(x))
    x=self.softmax(self.l2(x))
    return x

model=Netz(784)

class Data(Dataset):
    def __init__(self):
        self.x=x_train
        self.y=y_train
        self.len=self.x.shape[0]
    def __getitem__(self,index):
        return self.x[index],self.y[index]
    def __len__(self):
        return self.len

mnist = keras.datasets.mnist
#copying data
(x_train,y_train),(x_test,y_test) = mnist.load_data()
#One-hot encoding the labels
y_train = keras.utils.to_categorical(y_train,10)
y_test = keras.utils.to_categorical(y_test,10)
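# (note: after to_categorical the labels are 2-D one-hot float arrays of shape
#  (60000, 10) and (10000, 10), not 1-D arrays of integer class indices)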
#Flattening the images
x_train_reshaped = x_train.reshape((60000,784))
x_test_reshaped = x_test.reshape((10000,784))
#normalizing the inputs
x_train = x_train_reshaped/255.0 
x_test = x_test_reshaped/255.0

x_train=torch.from_numpy(x_train.astype(np.float32))
x_test=torch.from_numpy(x_test.astype(np.float32))
y_train=torch.from_numpy(y_train.astype(np.float32))
y_test=torch.from_numpy(y_test.astype(np.float32))

criterion=nn.CrossEntropyLoss()
print(criterion)
optimizer=torch.optim.SGD(model.parameters(),lr=0.05)
dataset=Data()
train_data=DataLoader(dataset=dataset,batch_size=1,shuffle=False)
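# (with batch_size=1, each batch comes out as x of shape (1, 784) and y of shape (1, 10))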

num_epochs=5
for epoch in range(num_epochs):
  for x,y in train_data:
    x=Variable(x)
    y=Variable(y)
    y_pred=model(x)
    optimizer.zero_grad()
    y=y[0][0:]
    y_pred=y_pred[0][0:]
    loss=criterion(y_pred,y)
    loss.backward()
    optimizer.step()

Solution

No confirmed solution to this problem has been posted yet.

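One direction that seems consistent with how nn.CrossEntropyLoss works (a sketch under my own assumptions, not a verified fix for the exact code above): keep the MNIST labels as integer class indices instead of one-hot encoding them with keras.utils.to_categorical, drop the final Softmax layer (CrossEntropyLoss already applies log-softmax internally), and feed the raw (N, C) logits together with the (N,) long targets to the loss.

import torch
import torch.nn as nn

# stand-in model for the (784, 512, 128, 10) architecture; no final softmax, because
# nn.CrossEntropyLoss combines log-softmax and NLL loss itself
model = nn.Sequential(
    nn.Linear(784, 512), nn.ReLU(),
    nn.Linear(512, 128), nn.ReLU(),
    nn.Linear(128, 10),
)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)

# made-up batch: flattened images and integer labels (no to_categorical)
x = torch.randn(32, 784)           # (N, 784) float inputs
y = torch.randint(0, 10, (32,))    # (N,) long class indices in [0, 9]

optimizer.zero_grad()
loss = criterion(model(x), y)      # logits (N, 10) vs. indices (N,)
loss.backward()
optimizer.step()

If the one-hot labels are kept for other reasons, converting them back with torch.argmax(y, dim=1) before calling the loss should have the same effect.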