Why am I getting a huge cost in my stochastic gradient descent implementation?

Problem description

I'm running into a problem while trying to implement stochastic gradient descent: the cost grows wildly with every iteration and I can't figure out why.

MSE implementation:

import numpy as np

def mse(x, y, w, b):
    # halved mean squared error of the linear model x @ w + b
    predictions = x @ w
    summed = (np.square(y - predictions - b)).mean(0)
    cost = summed / 2
    return cost
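
For reference (my restatement of what the function above computes), this is the halved mean squared error; the factor of 1/2 is a common convention so it cancels the 2 that appears when differentiating:

J(w, b) = \frac{1}{2n} \sum_{i=1}^{n} \left( y_i - x_i^\top w - b \right)^2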

Gradients:

def grad_w(y, x, w, b, n_samples):
    # gradient of the halved MSE with respect to the weights w
    return -y @ x / n_samples + x.T @ x @ w / n_samples + b * x.mean(0)

def grad_b(y, x, w, b, n_samples):
    # gradient of the halved MSE with respect to the bias b
    return -y.mean(0) + x.mean(0) @ w + b
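
Both gradients can also be written more compactly in terms of the residual X @ w + b - y. The sketch below is my own equivalent formulation (the names grad_w_alt and grad_b_alt are not from the original post); it expands algebraically to exactly the expressions above:

def grad_w_alt(y, x, w, b, n_samples):
    # x.T @ residual / n  ==  -y @ x / n + x.T @ x @ w / n + b * x.mean(0)
    residual = x @ w + b - y
    return x.T @ residual / n_samples

def grad_b_alt(y, x, w, b, n_samples):
    # residual.mean(0)  ==  -y.mean(0) + x.mean(0) @ w + b
    residual = x @ w + b - y
    return residual.mean(0)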

SGD implementation:

def stochastic_gradient_descent(X, y, w, b, learning_rate=0.01, iterations=500, batch_size=100):

    length = len(y)
    cost_history = np.zeros(iterations)
    n_batches = int(length / batch_size)

    for it in range(iterations):
        cost = 0
        # reshuffle the data at the start of every epoch
        indices = np.random.permutation(length)
        X = X[indices]
        y = y[indices]
        for i in range(0, length, batch_size):
            X_i = X[i:i+batch_size]
            y_i = y[i:i+batch_size]

            w -= learning_rate * grad_w(y_i, X_i, w, b, length)
            b -= learning_rate * grad_b(y_i, X_i, w, b, length)

            cost = mse(X_i, y_i, w, b)
        cost_history[it] = cost
        if cost_history[it] <= 0.0052: break

    return w, cost_history[:it]

Random variables:

w_true = np.array([0.2,0.5,-0.2])
b_true = -1
first_feature = np.random.normal(0,1,1000)
second_feature = np.random.uniform(size=1000)
third_feature = np.random.normal(1,2,1000)
arrays = [first_feature,second_feature,third_feature]
x = np.stack(arrays,axis=1) 
y = x @ w_true + b_true + np.random.normal(0,0.1,1000)
w = np.asarray([0.0,0.0,0.0],dtype='float64')
b = 1.0

After running this:

theta, cost_history = stochastic_gradient_descent(x, y, w, b)

print('Final cost/MSE:  {:0.3f}'.format(cost_history[-1]))

I get:

Final cost/MSE:  3005958172614261248.000

Here is the plot of the cost history:

Solution

Here are some suggestions:

  • Your learning rate is too large for this training setup: changing it to something like 1e-3 should be fine.
  • Your update step can be modified as follows:

Final result:

def stochastic_gradient_descent(X, y, w, b, learning_rate=0.01, iterations=500, batch_size=100):

    length = len(y)
    cost_history = np.zeros(iterations)
    n_batches = int(length / batch_size)

    for it in range(iterations):
        cost = 0
        indices = np.random.permutation(length)
        X = X[indices]
        y = y[indices]
        for i in range(0, length, batch_size):
            X_i = X[i:i+batch_size]
            y_i = y[i:i+batch_size]

            w -= learning_rate * grad_w(y_i, X_i, w, b, len(X_i))  # the denominator should be the actual batch size
            b -= learning_rate * grad_b(y_i, X_i, w, b, len(X_i))

            cost += mse(X_i, y_i, w, b) * len(X_i)  # accumulate the batch loss
        cost_history[it] = cost / length  # this is a running average of your batch losses, which is statistically more stable
        if cost_history[it] <= 0.0052: break

    return w, cost_history[:it]
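
A minimal way to run the corrected function with the smaller learning rate suggested above (this call is my own illustration; it reuses the x, y, w, b variables defined in the question):

w = np.asarray([0.0, 0.0, 0.0], dtype='float64')
b = 1.0
w, cost_history = stochastic_gradient_descent(x, y, w, b, learning_rate=1e-3)
print('Final cost/MSE:  {:0.3f}'.format(cost_history[-1]))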



Hey @TQCH, thanks for that. I came up with another way to implement SGD without the inner loop, and the results are also quite good.

def stochastic_gradient_descent(X, y, w, b, learning_rate=0.35, iterations=3000, batch_size=100):

    length = len(y)
    cost_history = np.zeros(iterations)
    n_batches = int(length / batch_size)
    marker = 0
    cost = mse(X, y, w, b)
    print(cost)
    for it in range(iterations):
        cost = 0
        # draw one random mini-batch per iteration instead of looping over epochs
        indices = np.random.choice(length, batch_size)
        X_i = X[indices]
        y_i = y[indices]

        w -= learning_rate * grad_w(y_i, X_i, w, b, batch_size)
        b -= learning_rate * grad_b(y_i, X_i, w, b, batch_size)

        cost = mse(X_i, y_i, w, b)
        cost_history[it] = cost
        if cost_history[it] <= 0.0075 and cost_history[it] > 0.0071: marker = it
        if cost <= 0.0052: break
    print(f'{w},{b}')
    return w, cost_history, marker, cost

w = np.asarray([0.0, 0.0, 0.0], dtype='float64')
b = 1.0
w, cost_history, marker, cost = stochastic_gradient_descent(x, y, w, b)

print(f'Number of iterations: {marker}')
print('Final cost/MSE:  {:0.3f}'.format(cost))

This gives me the following results:

1.9443112664859845
[0.19592532 0.31735225 -0.20044424],-0.9059800816290591
Number of iterations: 68
Final cost/MSE:  0.005

But you were right: I had missed that I was dividing by the total length of the vector y instead of by the batch size, and I forgot to accumulate the batch losses!

Thanks!
