Multivariate regression gradient descent error

Problem description

X = df.drop(columns="Math") 
y = df.iloc[:,4] 
theta = np.array([0]*len(X.columns))

def hypothesis(theta,X):
    return theta*X

def computeCost(X,y,theta):
    y1 = hypothesis(theta,X)
    y1=np.sum(y1,axis=1)
    return sum(np.sqrt((y1-y)**2))/(2*47)

def gradientDescent(X,y,theta,alpha,i):
    J = []  # cost at each iteration
    k = 0
    while k < i:        
        y1 = hypothesis(theta,X)
        y1 = np.sum(y1,axis=1)
        for c in range(0,len(X.columns)):
            theta[c] = theta[c] - alpha*(sum((y1-y)*X.iloc[:,c])/len(X))
        j = computeCost(X,y,theta)
        J.append(j)
        k += 1
    return J,j,theta

J,j,theta = gradientDescent(X,y,theta,0.05,10000)

The dataset consists of five columns. The first is a column of ones for the bias term. The second through the last are int64, with values from 1-100. The second column holds Physics scores, the third Science, the fourth Stats, and the last Math. I am trying to predict column 5 (Math) from columns 1 through 4.
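As a side note (my sketch, not part of the question): `hypothesis(theta, X)` returns the element-wise product `theta*X` via broadcasting, so the `np.sum(..., axis=1)` that follows it turns each row into the usual dot-product prediction, equivalent to `X @ theta`:

```python
import numpy as np

X = np.array([[1., 2., 3.],
              [1., 4., 5.]])
theta = np.array([0.5, 0.1, 0.2])

# hypothesis() + np.sum(axis=1): element-wise product, then row sums
y1 = np.sum(theta * X, axis=1)

# Same predictions as a matrix-vector product
y2 = X @ theta
print(y1)   # [1.3 1.9]
```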

The following error occurs:

    OverflowError                             Traceback (most recent call last)
<ipython-input-26-d17a8fb83984> in <module>()
----> 1 J,j,theta = gradientDescent(X,y,theta,0.05,10000)

<ipython-input-25-bfec0d0edcfa> in gradientDescent(X,y,theta,alpha,i)
      6         y1 = np.sum(y1,axis=1)
      7         for c in range(0,len(X.columns)):
----> 8             theta[c] = theta[c] - alpha*(sum((y1-y)*X.iloc[:,c])/len(X))
      9         j = computeCost(X,y,theta)
     10         J.append(j)

OverflowError: Python int too large to convert to C long

Solution

The error you are hitting most likely comes from one of the following:

  1. theta is set to integers by theta = np.array([0]*len(X.columns)), so every update is truncated to an int. You can do something like np.zeros(np.shape(X)[1]) instead.

  2. The learning rate is too high. You can check the cost J: if it keeps increasing, the learning rate is too high.

  3. I'm not sure your bias term should be 1; that may depend on the range of your values.
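Points 1 and 2 combine to produce the OverflowError: with an integer theta, every fractional gradient step is truncated, and once the updates diverge the values grow beyond what fits in a C long. A minimal sketch of the truncation (my example, not from the answer above):

```python
import numpy as np

# An array built from Python ints gets an integer dtype
theta_int = np.array([0] * 4)
print(theta_int.dtype)             # int64 on most platforms

# Writing a float update back into an int array truncates toward zero
theta_int[0] = theta_int[0] - 0.05 * 3.7
print(theta_int[0])                # 0 — the gradient step is silently lost

# A float array, e.g. from np.zeros, keeps the step
theta_float = np.zeros(4)
theta_float[0] = theta_float[0] - 0.05 * 3.7
print(theta_float[0])              # -0.185
```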

So if I test your code with a simple example:

import pandas as pd
import numpy as np
np.random.seed(111)

df = pd.DataFrame(np.random.randint(0,100,(50,4)),columns=['const','Physics','Science','Stats'])

df['const'] = 1
df['Math'] = 0.2*df['Physics'] + 0.4*df['Science'] + 0.5*df['Stats']

Then initialize:

X = df.drop(columns="Math") 
y = df.iloc[:,4] 

theta = np.ones(X.shape[1])

Then run it with a smaller learning rate:

J,j,theta = gradientDescent(X,y,theta,0.0001,100)

theta

array([0.98851902, 0.1950524 , 0.39639991, 0.49143374])
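As a sanity check (my addition, reusing the seeded data from above), the generating weights can also be recovered in closed form with np.linalg.lstsq, since Math was built as an exact linear combination with no intercept:

```python
import numpy as np
import pandas as pd

np.random.seed(111)
df = pd.DataFrame(np.random.randint(0, 100, (50, 4)),
                  columns=['const', 'Physics', 'Science', 'Stats'])
df['const'] = 1
df['Math'] = 0.2*df['Physics'] + 0.4*df['Science'] + 0.5*df['Stats']

A = df.drop(columns='Math').to_numpy(dtype=float)
b = df['Math'].to_numpy(dtype=float)

# Ordinary least squares: recovers [0, 0.2, 0.4, 0.5] up to rounding;
# the bias weight is 0 because Math was generated without an intercept
theta_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(theta_ls, [0.0, 0.2, 0.4, 0.5]))   # True
```

This agrees with the theta gradient descent converged toward; the bias weight above is still near its starting value of 1 after only 100 iterations at a small learning rate.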