Why doesn't the DNN learn?

Problem description

Do you know why this network won't learn? The idea is that it uses ReLU as the activation function in the earlier layers and sigmoid as the activation function in the last layer. The network learns fine when I use only sigmoid. To verify the network, I used MNIST.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    # derivative of the sigmoid: sigmoid(z) * (1 - sigmoid(z))
    return sigmoid(z) * (1 - sigmoid(z))

def RELU(z):
    return z * (z > 0)

def RELU_Prime(z):
    # boolean mask; NumPy promotes it to 0/1 when it is multiplied with delta
    return (z > 0)

    # x - training data, e.g. a (784, 1) vector for an MNIST image
    # y - training label, e.g. a (10, 1) one-hot vector for MNIST
    # returns the gradient (nabla_b, nabla_w) for the current x and y
    def backprop(self, x, y):
        nabla_b = [np.zeros(b.shape) for b in self.biases]
        nabla_w = [np.zeros(w.shape) for w in self.weights]
        # feedforward
        activation = x
        activations = [x]  # list to store all the activations, layer by layer
        zs = []  # list to store all the z vectors, layer by layer
        index = 0
        for b, w in zip(self.biases, self.weights):
            z = np.dot(w, activation) + b
            zs.append(z)
            if index == len(self.weights) - 1:
                # last layer is sigmoid
                activation = sigmoid(z)
            else:
                # previous layers are ReLU
                activation = RELU(z)
            activations.append(activation)
            index += 1
        # backward pass: the output layer uses the sigmoid derivative
        delta = self.cost_derivative(activations[-1], y) * sigmoid_prime(zs[-1])
        nabla_b[-1] = delta
        nabla_w[-1] = np.dot(delta, activations[-2].transpose())
        # hidden layers use the ReLU derivative
        for l in range(2, self.num_layers):
            z = zs[-l]
            sp = RELU_Prime(z)
            delta = np.dot(self.weights[-l + 1].transpose(), delta) * sp
            nabla_b[-l] = delta
            nabla_w[-l] = np.dot(delta, activations[-l - 1].transpose())
        return (nabla_b, nabla_w)
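
To rule out a bug in the backward pass itself, a numerical gradient check helps. Below is a self-contained sketch (the toy two-layer network, shapes, and variable names are hypothetical, not from the original code) that compares the analytic gradient of a ReLU-into-sigmoid network against a finite-difference estimate:

import numpy as np

np.random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def relu(z):
    return z * (z > 0)

# hypothetical 784 -> 30 -> 10 toy network and a single random sample
x = np.random.randn(784, 1)
y = np.zeros((10, 1))
y[3] = 1.0
W1 = np.random.randn(30, 784) * np.sqrt(2.0 / 784)
W2 = np.random.randn(10, 30) * np.sqrt(2.0 / 30)

def loss(W1, W2):
    a1 = relu(np.dot(W1, x))      # hidden layer: ReLU
    a2 = sigmoid(np.dot(W2, a1))  # output layer: sigmoid
    return 0.5 * np.sum((a2 - y) ** 2)

# analytic gradient w.r.t. W2, using the same chain rule as backprop above
a1 = relu(np.dot(W1, x))
a2 = sigmoid(np.dot(W2, a1))
delta = (a2 - y) * a2 * (1 - a2)
grad_W2 = np.dot(delta, a1.transpose())

# finite-difference estimate for a single entry of W2
eps = 1e-6
W2_plus, W2_minus = W2.copy(), W2.copy()
W2_plus[0, 0] += eps
W2_minus[0, 0] -= eps
numeric = (loss(W1, W2_plus) - loss(W1, W2_minus)) / (2 * eps)
print(grad_W2[0, 0], numeric)  # the two values should agree closely

If the two numbers agree, the backprop code is consistent with the forward pass, which points the search toward the data, the learning rate, or (as it turned out here) the initialization.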

--------------- Edit -----------------------------

    def cost_derivative(self, output_activations, y):
        # derivative of the quadratic cost with respect to the output activations
        return (output_activations - y)

--------------- Edit 2 -----------------------------

    self.weights = [w - (eta / len(mini_batch)) * nw
                    for w, nw in zip(self.weights, nabla_w)]
    self.biases = [b - (eta / len(mini_batch)) * nb
                   for b, nb in zip(self.biases, nabla_b)]

with eta > 0.
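
For context, here is a minimal sketch of the kind of update_mini_batch method these update lines usually sit in, assuming a Nielsen-style Network class like the one the backprop code above suggests (the surrounding method structure is an assumption, not taken from the question):

    def update_mini_batch(self, mini_batch, eta):
        # accumulate gradients over the mini-batch, then take one SGD step
        nabla_b = [np.zeros(b.shape) for b in self.biases]
        nabla_w = [np.zeros(w.shape) for w in self.weights]
        for x, y in mini_batch:
            delta_nabla_b, delta_nabla_w = self.backprop(x, y)
            nabla_b = [nb + dnb for nb, dnb in zip(nabla_b, delta_nabla_b)]
            nabla_w = [nw + dnw for nw, dnw in zip(nabla_w, delta_nabla_w)]
        self.weights = [w - (eta / len(mini_batch)) * nw
                        for w, nw in zip(self.weights, nabla_w)]
        self.biases = [b - (eta / len(mini_batch)) * nb
                       for b, nb in zip(self.biases, nabla_b)]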

Solution

For those who land here in the future, the answer to this question is simple but well hidden :). It turns out the weight initialization was wrong. To make it work, you have to use Xavier initialization and multiply it by 2 (in other words, He initialization, which is the standard choice for ReLU layers).
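
Concretely, scaling the Xavier variance (1/n_in) by 2 means drawing each weight from N(0, 2/n_in), which is what He initialization does for ReLU layers. A minimal sketch of such an initializer (the he_init name and the [784, 30, 10] layer sizes are illustrative assumptions, not from the original code):

import numpy as np

def he_init(sizes):
    # He initialization: weights ~ N(0, 2/n_in), biases start at zero
    weights = [np.random.randn(n_out, n_in) * np.sqrt(2.0 / n_in)
               for n_in, n_out in zip(sizes[:-1], sizes[1:])]
    biases = [np.zeros((n_out, 1)) for n_out in sizes[1:]]
    return weights, biases

weights, biases = he_init([784, 30, 10])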
