PyTorch optimizer.step() does not update the weights

Problem description

I think the problem may lie in the loss / loss.backward() step, because when I walk the loss.grad_fn chain it does not show the sigmoid and linear layers I expect.

import torch
import torch.nn as nn
import torch.optim as optim


class Neuron(nn.Module):

    def __init__(self):
        super(Neuron, self).__init__()
        # an affine operation: y = Wx + b
        self.linear = nn.Linear(3, 1, bias=False)
        self.sig = nn.Sigmoid()

    def forward(self, x):
        linear_output = self.linear(x)
        f = self.sig(linear_output)
        return f

my_neuron = Neuron()


# data from above, appended a 1 to each row to match my implementation
# (some entries were garbled in the original post; those values are best guesses)
x = torch.tensor([[ 1.2,  1.0, 1.0],
                  [ 0.2,  1.4, 1.0],
                  [ 0.5,  0.5, 1.0],
                  [-1.5, -1.3, 1.0],
                  [-1.4, -0.5, 1.0],
                  [-0.7, -0.5, 1.0]])

y = torch.tensor([0, 0, 0, 1, 1, 1])  # targets, one label per data point (reconstructed)

criterion = nn.CrossEntropyLoss()


optimizer = optim.SGD(my_neuron.parameters(), lr=0.01)

n = 10
for epoch in range(n): # loop over dataset n times

  optimizer.zero_grad()  # zero the gradient buffers

  # run each data point through the network
  output = torch.empty((6, 6), dtype=torch.float)
  for idx, pt in enumerate(x):
    output[idx, :] = my_neuron(pt)  # the (1,)-shaped output broadcasts across the row
  print(my_neuron.linear.weight[0])

  loss = criterion(output, y)
  loss.backward()
  optimizer.step()  # does the update step
print(loss.grad_fn)  
print(loss.grad_fn.next_functions[0][0])  
print(loss.grad_fn.next_functions[0][0].next_functions[0][0])
print(loss.item())

Here I print the linear layer's weights and the model's parameters just for comparison. I also print the loss.grad_fn chain, where you can see that the linear layer does not appear.

my_neuron.linear.weight:  tensor([-0.0055, 0.1782, 0.2472], grad_fn=<SelectBackward>)
my_neuron.parameters:  tensor([-0.0055, 0.1782, 0.2472], grad_fn=<SelectBackward>)
my_neuron.linear.weight:  tensor([-0.0055, 0.1782, 0.2472], grad_fn=<SelectBackward>)
<NllLossBackward object at 0x7fcfeae411d0>
<LogSoftmaxBackward object at 0x7fcfeafd36a0>
<CopySlices object at 0x7fcfeae411d0>
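
The linear layer is not actually missing from the graph: the in-place assignment output[idx, :] = my_neuron(pt) records a CopySlices node as the last operation on output, and the per-point sigmoid/linear subgraphs hang off those nodes rather than sitting on the next_functions[0][0] path printed above. Below is a small sketch (assuming the Neuron class from the question; the batch is stand-in data, not the original points) that pushes the whole dataset through the module in one call, which nn.Linear and nn.Sigmoid support via a leading batch dimension, and then walks the chain:

out = my_neuron(torch.randn(6, 3))  # stand-in batch; output shape (6, 1)

# follow the first non-None parent at each step of the grad_fn chain
node = out.grad_fn
while node is not None:
    print(node)  # the sigmoid node first, then the linear op's nodes
    node = next((fn for fn, _ in node.next_functions if fn is not None), None)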

Solution

No working solution for this problem has been posted yet.
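
One way to narrow the problem down is to snapshot the weights around a single optimizer.step() and inspect the gradient that drives it. The following is a minimal diagnostic sketch, not a confirmed fix: it assumes the Neuron class from the question, and the data is a stand-in rather than the original points.

import torch
import torch.nn as nn
import torch.optim as optim

model = Neuron()                        # the module defined in the question
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.01)

x = torch.randn(6, 3)                   # stand-in data
y = torch.tensor([0, 0, 0, 1, 1, 1])

optimizer.zero_grad()
output = torch.empty((6, 6), dtype=torch.float)
for idx, pt in enumerate(x):
    output[idx, :] = model(pt)          # the single sigmoid value fills the whole row

loss = criterion(output, y)
loss.backward()

before = model.linear.weight.detach().clone()
print(model.linear.weight.grad)         # all zeros means step() has nothing to apply
optimizer.step()
print(torch.equal(before, model.linear.weight.detach()))  # True -> weights unchanged

If every logit in a row is identical, the softmax over that row is uniform and the per-row gradient contributions cancel exactly, so a zero gradient here would be consistent with the unchanged weights printed above.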
