Problem description
- Why do I have to pass var_list to minimize, given that I already set trainable=True explicitly? If I remove var_list, I get an error.
- No matter how many iterations I run minimize for, or what learning rate I use, I always get the same result. I believe my variables are not in the graph, so they cannot be updated on each iteration. This is essentially related to point 1.
Here is the code. Any help would be greatly appreciated! Thanks, and stay safe!!
import tensorflow as tf
from matplotlib import pyplot as plt
import numpy as np

y_train = [2.48647765732222, -0.303258845569525, -4.05313903208675, -4.33589931314926,
           -6.17420514970129, -5.60395159162841, -3.50690652820209, -2.32567755531334,
           -4.63772499017313, -0.232708358092122, -1.98576143074415, 1.02839153133005,
           -2.26396081470418, -0.450826849993658, 1.16715548976294, 6.65243873752780,
           4.14520332259566, 5.26766016927822, 6.34028299499329, 9.62643862532723,
           14.7841620484121]
x_train = [i + 1.0 for i in range(len(y_train))]

class LinReg:
    def __init__(self, x, y):
        self.tf_x = tf.constant(x)
        self.tf_y = tf.constant(y)
        self.tf_a = tf.Variable(0.1, trainable=True)
        self.tf_b = tf.Variable(0.2, trainable=True)
        self.tf_c = tf.Variable(0.3, trainable=True)

    @tf.function
    def DegreeTwo(self):
        tf_y_model = tf.multiply(self.tf_a, tf.pow(self.tf_x, 2)) + \
                     tf.multiply(self.tf_b, self.tf_x) + self.tf_c
        tf_error = tf.reduce_mean(tf.square(self.tf_y - tf_y_model))
        op = tf.compat.v1.train.GradientDescentOptimizer(learning_rate=0.000000002)
        for _ in range(100):
            op.minimize(loss=tf_error, var_list=[self.tf_a, self.tf_b, self.tf_c])
        return self.tf_a, self.tf_b, self.tf_c

x = LinReg(x_train, y_train)
a, b, c = x.DegreeTwo()
print("\nA=", a)
print("\nB=", b)
print("\nC=", c)
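As background for question 1: under TF2's eager execution, the tf.compat.v1 optimizers can no longer discover trainable variables through graph collections (those do not exist in TF2), so minimize requires an explicit var_list; in eager mode the loss must also be a zero-argument callable so it can be re-evaluated on every step. A minimal standalone sketch (toy variable w, not the poster's model):

```python
import tensorflow as tf

# Toy example: minimize (w - 1)^2 with the v1-compat optimizer in eager mode.
w = tf.Variable(3.0, trainable=True)
opt = tf.compat.v1.train.GradientDescentOptimizer(learning_rate=0.1)

for _ in range(50):
    # The loss is a callable, so it is recomputed from the current value of w
    # on each step; var_list must be given explicitly in eager mode.
    opt.minimize(loss=lambda: tf.square(w - 1.0), var_list=[w])

print(float(w))  # close to 1.0 after 50 steps
```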
I decided to change the code slightly to make sure all the variables are created "inside" the tf.function, but it made no difference.
Here is the code:
import tensorflow as tf
from matplotlib import pyplot as plt
import numpy as np

y_train = [2.48647765732222, 14.7841620484121]
x_train = [i + 1.0 for i in range(len(y_train))]
tf_y_train = tf.constant(y_train)
tf_x_train = tf.constant(x_train)

class LinReg:
    def __init__(self):
        self.first = None

    @tf.function
    def DegreeTwo(self, x, y):
        self.tf_x = x
        self.tf_y = y
        if self.first is None:
            self.first = 1
            self.tf_a = tf.Variable(0.1, trainable=True)
            self.tf_b = tf.Variable(0.2, trainable=True)
            self.tf_c = tf.Variable(0.3, trainable=True)
        tf_y_model = tf.multiply(self.tf_a, self.tf_x) + self.tf_c
        tf_error = tf.reduce_mean(tf.square(self.tf_y - tf_y_model))
        op = tf.compat.v1.train.GradientDescentOptimizer(learning_rate=0.00000002)
        for _ in range(100):
            op.minimize(loss=tf_error, var_list=[self.tf_a, self.tf_c])
        return self.tf_a, self.tf_c

x = LinReg()
a, c = x.DegreeTwo(tf_x_train, tf_y_train)
print("\nA=", a)
print("\nC=", c)
Solution
No working fix for this problem has been found yet.
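For reference, the idiomatic TF2 pattern for this kind of fit is to recompute the loss inside a tf.GradientTape on every iteration and apply the gradients explicitly, which guarantees that the variables are actually updated each step. The following is a sketch only; the synthetic quadratic data and the learning rate are illustrative choices, not taken from the original post:

```python
import tensorflow as tf

# Synthetic quadratic data for illustration (not the poster's data).
x = tf.constant([float(i + 1) for i in range(20)])
y = 0.05 * x ** 2 - 1.0 * x + 2.0

a = tf.Variable(0.1, trainable=True)
b = tf.Variable(0.2, trainable=True)
c = tf.Variable(0.3, trainable=True)

opt = tf.keras.optimizers.SGD(learning_rate=1e-5)

def loss_fn():
    # The loss is rebuilt from the *current* variable values on every call.
    y_model = a * tf.pow(x, 2) + b * x + c
    return tf.reduce_mean(tf.square(y - y_model))

loss_before = float(loss_fn())
for _ in range(2000):
    with tf.GradientTape() as tape:
        loss = loss_fn()
    grads = tape.gradient(loss, [a, b, c])
    opt.apply_gradients(zip(grads, [a, b, c]))
loss_after = float(loss_fn())

print(loss_before, "->", loss_after)  # the loss drops as a, b, c are updated
```

Unlike the v1-compat optimizer inside a @tf.function, this loop makes the update explicit, so it is easy to verify that the variables change on each iteration.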