Implementing an SGD classifier with log loss and L2 regularization, without using sklearn

Problem description

import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=50000, n_features=15, n_informative=10, n_redundant=5, n_classes=2, weights=[0.7], class_sep=0.7, random_state=15)
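The post never shows the train/test split, but the shapes in the traceback below (37,500 x 50 = 1,875,000) suggest a 75/25 split; a hedged reconstruction of that setup:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=50000, n_features=15, n_informative=10,
                           n_redundant=5, n_classes=2, weights=[0.7],
                           class_sep=0.7, random_state=15)

# assumption: a 75/25 split, which matches the 37,500 training rows
# implied by the ValueError later in the post
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    random_state=15)
```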

Initialize the weights

def initialize_weights(dim):
    '''In this function, we will initialize our weights and bias'''
    w = np.zeros_like(dim)  # zero vector with the same shape as one sample
    b = 0
    return w, b
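As a quick sanity check (a self-contained sketch), initializing from one training row should give a zero vector of the same shape and a zero bias:

```python
import numpy as np

def initialize_weights(dim):
    '''Initialize weights as a zero vector shaped like one sample, bias as 0.'''
    w = np.zeros_like(dim, dtype=float)
    b = 0
    return w, b

row = np.ones(15)              # stands in for X_train[0] (15 features)
w, b = initialize_weights(row)
```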

Compute the sigmoid

def sigmoid(z):
    '''In this function, we will return the sigmoid of z'''
    sig = 1 / (1 + np.exp(-z))
    return sig
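For very negative z, np.exp(-z) overflows float64 and NumPy emits a RuntimeWarning; one common workaround (an optional tweak, not in the original post) is to clip z first:

```python
import numpy as np

def sigmoid(z):
    '''Sigmoid of z, with z clipped so np.exp cannot overflow.'''
    z = np.clip(z, -500, 500)   # exp(500) is still representable in float64
    return 1 / (1 + np.exp(-z))
```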

Compute the log loss

def logloss(y_true, y_pred):
    '''In this function, we will compute log loss'''
    n = len(y_true)
    # use the natural log (np.log); np.log10 only rescales the loss by a constant
    log_loss = (-1 / n) * ((y_true * np.log(y_pred)) + (1 - y_true) * np.log(1 - y_pred)).sum()
    return log_loss
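A worked check of the loss on a tiny example. Clipping y_pred away from exactly 0 and 1 (an addition, not in the original post) avoids log(0) when the model is very confident:

```python
import numpy as np

def logloss(y_true, y_pred, eps=1e-15):
    '''Mean log loss, with predictions clipped away from 0 and 1.'''
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.clip(np.asarray(y_pred, dtype=float), eps, 1 - eps)
    n = len(y_true)
    return (-1 / n) * (y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred)).sum()

# both samples give probability 0.9 to the correct class,
# so the mean loss is exactly -log(0.9)
loss = logloss([1, 0], [0.9, 0.1])
```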

Compute the gradient w.r.t. w

def gradient_dw(x, y, w, b, alpha, N):
    '''In this function, we will compute the gradient w.r.t. w'''
    # per-sample gradient; no .sum() here, dw must stay a vector like w
    dw = x * (y - sigmoid(np.dot(w.T, x) + b)) - (alpha / N) * w
    return dw
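The update direction can be sanity-checked against a finite-difference gradient of the per-sample regularized loss (a sketch; since dw is the ascent direction used in `w = w + eta0 * dw`, it should equal minus the numerical gradient of the loss):

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def gradient_dw(x, y, w, b, alpha, N):
    # per-sample ascent direction: x*(y - sigma) minus the L2 term
    return x * (y - sigmoid(np.dot(w, x) + b)) - (alpha / N) * w

def sample_loss(x, y, w, b, alpha, N):
    # per-sample log loss plus the matching L2 penalty (alpha/(2N)) * ||w||^2
    p = sigmoid(np.dot(w, x) + b)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p)) + (alpha / (2 * N)) * np.dot(w, w)

rng = np.random.RandomState(0)
x = rng.randn(5); y = 1.0
w = rng.randn(5); b = 0.1
alpha, N, h = 0.01, 100, 1e-6

analytic = gradient_dw(x, y, w, b, alpha, N)
numeric = np.zeros(5)
for i in range(5):
    wp, wm = w.copy(), w.copy()
    wp[i] += h; wm[i] -= h
    numeric[i] = (sample_loss(x, y, wp, b, alpha, N)
                  - sample_loss(x, y, wm, b, alpha, N)) / (2 * h)
# dw is the ascent direction, so it equals minus the loss gradient
```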

Compute the gradient w.r.t. b

def gradient_db(x, y, w, b):
    '''In this function, we will compute the gradient w.r.t. b'''
    db = y - sigmoid(np.dot(w.T, x) + b)
    return db

Implement logistic regression

def train(X_train,y_train,X_test,y_test,epochs,eta0,tol = 1e-3):
    ''' In this function,we will implement logistic regression'''
    #Here eta0 is learning rate
    #implement the code as follows
    # initialize the weights (call the initialize_weights(X_train[0]) function)
    # for every epoch
        # for every data point(X_train,y_train)
           #compute gradient w.r.to w (call the gradient_dw() function)
           #compute gradient w.r.to b (call the gradient_db() function)
           #update w,b
        # predict the output of x_train[for all data points in X_train] using w,b
        #compute the loss between predicted and actual values (call the loss function)
        # store all the train loss values in a list
        # predict the output of x_test[for all data points in X_test] using w,b
        #compute the loss between predicted and actual values (call the loss function)
        # store all the test loss values in a list
        # you can also compare previous loss and current loss,if loss is not updating then stop the process and return w,b
    
    w,b = initialize_weights(X_train[0])
    train_loss = []
    test_loss = []
    for e in range(epochs):
        for x,y in zip(X_train,y_train):
            dw = gradient_dw(x, y, w, b, alpha, N)
            db = gradient_db(x, y, w, b)
            w = w + (eta0 * dw)
            b = b + (eta0 * db)
        for i in X_train:
            y_pred = sigmoid(np.dot(w,i) + b)
            train_loss.append(logloss(y_train,y_pred))
        for j in X_test:
            y_pred_test = sigmoid(np.dot(w,j) + b)
            test_loss.append(logloss(y_test,y_pred_test))
    return w,train_loss,test_loss
alpha = 0.0001
eta0 = 0.0001
N = len(X_train)
epochs = 50
w, train_loss_arr, test_loss_arr = train(X_train, y_train, X_test, y_test, epochs, eta0)

Plot number of epochs vs. train/test loss

plt.plot(range(epochs), train_loss_arr, 'g', label='Training loss')
plt.plot(range(epochs), test_loss_arr, label='Test loss')
plt.title('Epoch vs Loss')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plt.show()

Error when plotting epochs vs. train_loss/test_loss

ValueError                                Traceback (most recent call last)
<ipython-input-138-7d80027b5139> in <module>
      1 import matplotlib.pyplot as plt
----> 2 plt.plot(range(epochs), train_loss_arr, 'g', label='Training loss')
      3 plt.plot(range(epochs), test_loss_arr, label='Test loss')
      4 plt.title('Epoch vs Training Loss')
      5 plt.xlabel('Epoch')

~\anaconda3\lib\site-packages\matplotlib\pyplot.py in plot(*args, scalex, scaley, data, **kwargs)
   2838 @_copy_docstring_and_deprecators(Axes.plot)
   2839 def plot(*args, scalex=True, scaley=True, data=None, **kwargs):
-> 2840     return gca().plot(
   2841         *args, scalex=scalex, scaley=scaley,
   2842         **({"data": data} if data is not None else {}), **kwargs)

~\anaconda3\lib\site-packages\matplotlib\axes\_axes.py in plot(self, *args, scalex, scaley, data, **kwargs)
   1741         """
   1742         kwargs = cbook.normalize_kwargs(kwargs, mlines.Line2D)
-> 1743         lines = [*self._get_lines(*args, data=data, **kwargs)]
   1744         for line in lines:
   1745             self.add_line(line)

~\anaconda3\lib\site-packages\matplotlib\axes\_base.py in __call__(self, *args, data, **kwargs)
    271                 this += args[0],
    272                 args = args[1:]
--> 273             yield from self._plot_args(this, kwargs)
    274
    275     def get_next_color(self):

~\anaconda3\lib\site-packages\matplotlib\axes\_base.py in _plot_args(self, tup, kwargs)
    397
    398         if x.shape[0] != y.shape[0]:
--> 399             raise ValueError(f"x and y must have same first dimension, but "
    400                              f"have shapes {x.shape} and {y.shape}")
    401         if x.ndim > 2 or y.ndim > 2:

ValueError: x and y must have same first dimension, but have shapes (50,) and (1875000,)

I am confused about my logistic regression code. If it is correct, how should I plot epoch vs. train_loss/test_loss? There should be one loss value per epoch, and I don't know what changes I should make to my code to plot it.
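The shapes in the error are consistent with one loss value being appended per data point per epoch, assuming a 75/25 train split (37,500 training rows is an inference from the traceback, not stated in the post):

```python
epochs = 50
n_train = 37500                        # 75% of 50,000 samples (assumed split)
y_length = epochs * n_train            # one append per point per epoch
# plotting against range(epochs) instead needs one value per epoch,
# i.e. a list of length 50
```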

Solution

Try this: I added two lists to collect the train and test predicted values, to help with the iteration; the rest looks fine. Also, while iterating (in your code), you need to work with the full y_pred vector per epoch, and not append len(X_train) separate values.
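The answer's code is not reproduced above; a per-epoch version of the loop it describes might look like this (a sketch under the post's setup, not the answerer's exact code: one vectorized prediction pass per epoch, so train_loss and test_loss each end up with exactly `epochs` entries):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -500, 500)))

def logloss(y_true, y_pred, eps=1e-15):
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

def train(X_train, y_train, X_test, y_test, epochs, eta0, alpha):
    N = len(X_train)
    w = np.zeros(X_train.shape[1])
    b = 0.0
    train_loss, test_loss = [], []
    for _ in range(epochs):
        # SGD pass: one update per data point
        for x, y in zip(X_train, y_train):
            err = y - sigmoid(np.dot(w, x) + b)
            w = w + eta0 * (x * err - (alpha / N) * w)
            b = b + eta0 * err
        # ONE loss value per epoch, computed on all points at once
        train_loss.append(logloss(y_train, sigmoid(X_train.dot(w) + b)))
        test_loss.append(logloss(y_test, sigmoid(X_test.dot(w) + b)))
    return w, b, train_loss, test_loss

# small synthetic demo (stand-in for the post's make_classification data)
rng = np.random.RandomState(0)
Xd = rng.randn(200, 5)
yd = (Xd.dot(rng.randn(5)) > 0).astype(float)
w, b, tr_loss, te_loss = train(Xd[:150], yd[:150], Xd[150:], yd[150:],
                               epochs=20, eta0=0.05, alpha=0.0001)
```

With lists of length `epochs`, `plt.plot(range(epochs), train_loss)` then works without the shape mismatch.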
