TypeError and ValueError in Newton's method for gradient descent with backtracking

Problem Description

I am trying to apply Newton's method to a gradient descent algorithm with backtracking.

Gradient descent algorithm:

[image: gradient descent update rule]

Gradient descent with backtracking:

[image: backtracking line search condition]

Newton's method:

[image: Newton's method update rule]
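
The original images are not reproduced here. Judging from the code below, the three update rules were presumably the following (a reconstruction, not the original figures):

    % Plain gradient descent step with learning rate \alpha:
    w_{k+1} = w_k - \alpha \nabla f(w_k)

    % Backtracking: halve \alpha until the sufficient-decrease condition holds:
    f\bigl(w_k - \alpha \nabla f(w_k)\bigr) \le f(w_k) - \frac{\alpha}{2} \lVert \nabla f(w_k) \rVert^2

    % Newton's method, preconditioning the step with the inverse Hessian:
    w_{k+1} = w_k - \alpha \, H_f(w_k)^{-1} \nabla f(w_k)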

    import numpy as np
    from scipy import optimize as opt

    def newton_gd_backtracking(w, itmax, tol):
        # You may set bounds on "learnrate"
        max_learnrate = 0.1
        min_learnrate = 0.001

        for i in range(itmax):
            grad = opt.rosen_der(w)
            grad2 = (np.linalg.norm(grad))**2
            hess = opt.rosen_hess(w)

            # you have to decide "learnrate"
            learnrate = max_learnrate
            while True:
                f0 = opt.rosen(w)
                f1 = opt.rosen(w - learnrate * grad)
                if f1 <= (f0 - (learnrate/2)*grad2):
                    break
                else:
                    learnrate /= 2
                if learnrate < min_learnrate:
                    learnrate = min_learnrate
                    break

            # Now, Newton's method
            deltaw = -learnrate * np.linalg.inv(hess) * grad
            w = w + deltaw

            if np.linalg.norm(deltaw) < tol:
                break

        return w, i, learnrate


    # You can call the above function by adding a main block

    if __name__ == "__main__":
        w0 = np.array([0, 0])

        itmax = 10000
        tol = 1.e-5

        w, i, learnrate = newton_gd_backtracking(w0, itmax, tol)
        print('Weight: ', w)
        print('Iterations: ', i)
        print('Learning Rate: ', learnrate)

After running the program, I receive the following error message:

    TypeError: only size-1 arrays can be converted to Python scalars

    The above exception was the direct cause of the following exception:

    Traceback (most recent call last):
      File "c:/Users/Desfios 5/Desktop/Python/Homework submission/Deyeon/GD_BT_Newton/Main_Newton_GD_Backtracking.py", line 43, in <module>
        w, i, learnrate = newton_gd_backtracking(w0, itmax, tol)
      File "c:/Users/Desfios 5/Desktop/Python/Homework submission/Deyeon/GD_BT_Newton/Main_Newton_GD_Backtracking.py", line 12, in newton_gd_backtracking
        hess = opt.rosen_hess(w)
      File "C:\Users\Desfios 5\AppData\Roaming\Python\Python38\site-packages\scipy\optimize\optimize.py", line 373, in rosen_hess
        diagonal[0] = 1200 * x[0]**2 - 400 * x[1] + 2
    ValueError: setting an array element with a sequence.

When I run this without the Hessian, as plain gradient descent with backtracking, the code works fine. Here is the code I used for plain gradient descent with backtracking:

    import numpy as np
    from scipy import optimize as opt

    def gd_backtracking(w, itmax, tol):
        # You may set bounds on "learnrate"
        max_learnrate = 0.1
        min_learnrate = 0.001

        for i in range(itmax):
            grad = opt.rosen_der(w)
            grad2 = (np.linalg.norm(grad))**2

            # you have to decide "learnrate"
            learnrate = max_learnrate
            while True:
                f0 = opt.rosen(w)
                f1 = opt.rosen(w - learnrate * grad)
                if f1 <= (f0 - (learnrate/2)*grad2):
                    break
                else:
                    learnrate /= 2
                if learnrate < min_learnrate:
                    learnrate = min_learnrate
                    break

            # Now, march
            deltaw = -learnrate * grad
            w = w + deltaw

            if np.linalg.norm(deltaw) < tol:
                break

        return w, i, learnrate

Is there something about the Hessian matrix that I am not aware of? As far as I understand, opt.rosen_hess should generate a 1-D array for us, just like opt.rosen_der. Maybe I am using opt.rosen_hess in the wrong way. What am I missing here?
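
A quick way to test that assumption is to inspect the shapes directly (expected output shown as comments):

    import numpy as np
    from scipy import optimize as opt

    w0 = np.array([0.0, 0.0])
    print(opt.rosen_der(w0).shape)   # (2,)   -- the gradient is a 1-D array
    print(opt.rosen_hess(w0).shape)  # (2, 2) -- the Hessian is a 2-D matrix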

Solution

After going over it again and again, I realized my mistake. In newton_gd_backtracking, I was multiplying the inverse Hessian and the gradient with `*`. These are not scalars, so I should have taken the dot product. Once I used the dot product, I got the desired results.
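
For reference, a minimal self-contained sketch of the corrected step (the `np.linalg.solve` variant is a suggested alternative, not part of the original fix):

    import numpy as np
    from scipy import optimize as opt

    w = np.array([0.0, 0.0])
    learnrate = 0.1
    grad = opt.rosen_der(w)    # shape (2,)
    hess = opt.rosen_hess(w)   # shape (2, 2)

    # Element-wise `*` broadcasts inv(hess) against grad into a (2, 2)
    # array, which is what corrupted w in the original loop.
    # A matrix-vector (dot) product keeps the step a (2,) vector:
    deltaw = -learnrate * np.linalg.inv(hess) @ grad
    # equivalent: deltaw = -learnrate * np.dot(np.linalg.inv(hess), grad)

    # Solving the linear system avoids forming the explicit inverse:
    deltaw = -learnrate * np.linalg.solve(hess, grad)

    w = w + deltaw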