My gradient descent gives huge numbers

Problem description

This is my first time coding a neural network in Python, and also my first time coding gradient descent. I'm trying to build a neural network that analyzes a stock and predicts what it does next. Because I'm lazy, instead of computing the true derivative I just estimate it numerically.

The problem is that when I run it, the difference between the cost of the network's output computed with a tiny change to one weight and the cost computed without that change is far larger than it should be.
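
By "estimating" the derivative I mean a forward-difference approximation: nudge one weight by a small step, re-run the network, and divide the change in cost by that step. A minimal sketch of the idea (the function name and step size below are just for illustration, they are not part of my code):

def forward_difference(cost_of_weights, w, i, h=1e-4):
    # cost_of_weights maps a full weight list to a scalar cost
    # returns an estimate of d(cost)/d(w[i]) via a forward difference
    w_plus = w[:]       # copy so the original weights stay untouched
    w_plus[i] += h      # nudge only the weight being tested
    return (cost_of_weights(w_plus) - cost_of_weights(w)) / h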

Here is my code:

import csv
import numpy as np

input = []
inputEnd = 199 #start at 0
inputLength = 200

metric = 4 #date,open,high,low,close,volume (start at 0)

output = []
predictionLength = 50
outputGoal = []

weights = []
biases = []

#hiddenLayersNum: int = [16,16,16]
hiddenLayersNum = [16]
hiddenLayers = []


#get inputs from file
with open('MacroTrends_Data_Download_IBM.csv',newline = '') as f:
    reader = csv.reader(f)
    i = 0
    for row in reader:
        if i >= inputEnd - inputLength and i <= inputEnd:
            input.append(float(row[metric]))
        i += 1

#set weights and biases to random from normal distribution
i = 0
while i < predictionLength:
    output.append(0.0)
    i += 1

weightsLength = len(output) * len(input)
biasesLength = len(output)
for i in hiddenLayersNum:
    weightsLength *= i
    biasesLength += i


i = 0
while i <= weightsLength:
    weights.append(np.random.normal(0,1.0))
    i += 1

i = 0
while i <= biasesLength:
    biases.append(np.random.normal(0,1.0))
    i += 1

for i in hiddenLayersNum:
    x = 0
    while x <= i:
        hiddenLayers.append(0.0)
        x += 1

def Run(inputs = [],weight = [],bias = []):
    #set hidden layers
    i = 0
    while i < len(hiddenLayers):
        neuron = 0.0
        if i < hiddenLayersNum[0]:#first layer
            x = 0
            while x < len(inputs):
                neuron += inputs[x] * weight[int(i * len(inputs) + x)]#add up input neurons times weights
                x += 1
            neuron += bias[int(i)]#add bias

        else:
            start = 0
            end = hiddenLayersNum[0]
            for x in hiddenLayersNum:# other layers
                end += x
                start += x
                if i < end and i >= start:
                    y = 0
                    while y < end - start:
                        neuron += hiddenLayers[int(start - x + y)] * weight[int(len(inputs) * hiddenLayersNum[0] + (start - hiddenLayersNum[0]) * hiddenLayersNum[0] + y)]#add up neurons from previous layer multiplied by weights
                        y += 1
                x += 1
            neuron += bias[int(start + i)]# add bias
        
        hiddenLayers[int(i)] = neuron
        i += 1

    #set output
    i = 0
    while i < predictionLength:
        neuron = 0.0
        x = 0
        while x < hiddenLayersNum[-1]:
            neuron += hiddenLayers[int(len(hiddenLayers) - hiddenLayersNum[-1] + x)] * weight[int(len(weight) - (len(output) * hiddenLayersNum[-1]) + x)]#add up neurons from last hidden layer * weights
            x += 1
        neuron += biases[int(len(hiddenLayers) + i)]#add bias
        output[int(i)] = neuron
        i += 1
    return output

with open('MacroTrends_Data_Download_IBM.csv',newline = '') as f:
        reader = csv.reader(f)
        i = 0
        for row in reader:
            if i > inputEnd and i <= inputEnd + predictionLength:
                outputGoal.append(float(row[metric]))
            i += 1

def Cost(inputs = []):#add up all of the (outputs - correct outputs)^2
    cost = 0.0
    x = 0
    for i in inputs:
        cost += (i - outputGoal[x])**2
        x += 1
    return cost

def GradientDescent(learningRate = 0.01,derivativeDifference = 0.0001):
    i = 0
    origWeights = weights[:]
    while i < weightsLength:#loop through each weight and change it to make the outputs closer to what they should be
        weightsPlus = origWeights[:]
        weightsPlus[i] += derivativeDifference#make the weight I am testing be a little bit more

        weights[i] -= learningRate * (Cost(Run(input,weightsPlus,biases)) - Cost(Run(input,weights,biases)) / derivativeDifference)
        #find an approximation of the derivative, and then subtract that times the learning rate from the weight I am testing (see the sketch after this code)
        
        print(Cost(Run(input,weights,biases)))#print the cost after updating this weight

        i+=1

GradientDescent()
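
For reference, what I intend each pass of that loop to do is a standard finite-difference gradient-descent step. Here is a stripped-down, self-contained sketch of that step on a toy quadratic cost (every name and value below, like toy_cost and the target vector, is made up for illustration and has nothing to do with my stock data):

import numpy as np

def toy_cost(w):
    # stand-in cost: squared distance from an arbitrary target
    target = np.array([1.0, -2.0, 0.5])
    return float(np.sum((np.array(w) - target) ** 2))

def finite_difference_step(cost, w, learningRate=0.01, h=1e-4):
    # one gradient-descent step; each partial derivative is estimated
    # with a forward difference: (cost(w + h*e_i) - cost(w)) / h
    base = cost(w)
    new_w = w[:]
    for i in range(len(w)):
        w_plus = w[:]
        w_plus[i] += h
        grad_i = (cost(w_plus) - base) / h   # the whole cost difference is divided by h
        new_w[i] = w[i] - learningRate * grad_i
    return new_w

w = [0.0, 0.0, 0.0]
for _ in range(100):
    w = finite_difference_step(toy_cost, w)
print(toy_cost(w))   # shrinks toward 0 as w approaches the target

On this toy cost the printed number steadily shrinks, which is the behaviour I'd expect from my real network as well.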

I have a feeling this is a syntax error, since my background is C# and Unity, but I can't find anything.

Please help! Thanks in advance.

Solution

No working solution to this problem has been found yet.
