TensorFlow minimize function not working properly

Problem description

I am trying to apply TensorFlow's minimize function like this:

train_op = optimizer_dict[optimizer][0](*optimizer_dict[optimizer][1]).minimize(cost)

However, since TensorFlow was updated, the requirements of the code seem to have changed. I have tried to adapt to the new version, but I get the following error:

Shape must be rank 1 but is rank 2 for '{{node BiasAdd_589}} = BiasAdd[T=DT_FLOAT,data_format="NHWC"](Placeholder_398,BiasAdd_589/ReadVariableOp)' with input shapes: [?,8],[8,1000].
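For context, `tf.nn.bias_add` expects the bias to be a rank-1 tensor whose length matches the last dimension of the value tensor; here the value is `[?,8]` but the bias variable has shape `[8,1000]` (rank 2). The check that produces this message can be sketched in plain Python (a hypothetical helper for illustration, not TensorFlow's actual implementation):

```python
def check_bias_add_shapes(value_shape, bias_shape):
    """Sketch of the shape validation behind tf.nn.bias_add
    (hypothetical helper, not TensorFlow source). The bias must be
    rank 1 and its length must equal the last dim of the value."""
    if len(bias_shape) != 1:
        raise ValueError(
            "Shape must be rank 1 but is rank %d" % len(bias_shape))
    if value_shape[-1] != bias_shape[0]:
        raise ValueError(
            "bias length %d does not match last dimension %d"
            % (bias_shape[0], value_shape[-1]))

# The shapes from the error message: value [?,8], bias [8,1000].
try:
    check_bias_add_shapes([None, 8], [8, 1000])
except ValueError as err:
    print(err)  # Shape must be rank 1 but is rank 2
```

This suggests the bias variable built inside `make_model` has the wrong shape, independent of how `minimize` is called.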

I have included the full function below so you can see the complete context of the problem. Let me know if any other information would help.

def train_tensorflow(sess,trX,trY,train_steps,full_train,train_size,net_type,transform_dict,loss_type,optimizer,optimizer_dict):
    '''
    Automatically constructs, trains, and tests a tensorflow neural network, returning the r squared value of the output.
    :param sess: A tensorflow session.
    :param trX: Numpy array that contains the training features.
    :param trY: Numpy array that contains the training outputs. Must be 2-D, with at least one column.
    :param train_steps: Integer value denoting the number of times to iterate through training.
    :param full_train: Boolean value denoting whether to use the full training set for each iteration.
    :param train_size: Integer value denoting the number of samples to pull from the training set for each iteration of training.
    :param net_type: List of alternating string values and integer values. Must always start and end with a string value. The strings denote the type of each layer. The integer values denote the end size of each layer, though this is constrained for certain layer types. Sizes of zero drop that layer out.
    :param transform_dict: Dictionary of strings to tuples of tensors that encode how to set up the layers of the neural network.
    :param loss_type: String denoting the type of tensor to use for the loss. Use l2_loss for regression, cross_entropy for classification.
    :param optimizer: String denoting the type of optimization tensor to use for training the neural network.
    :param optimizer_dict: Dictionary of strings to tuples of tensors that encode how to set up the optimizers of the neural networks.
    :return: predict_op: Tensor that encodes the neural network.
        X: Placeholder tensor for the features array.
        y: Placeholder tensor for the output array.
    '''

    # Set up input and output tensors.
    X = tf.compat.v1.placeholder("float",[None,trX.shape[1]])
    y = tf.compat.v1.placeholder("float",[None,trY.shape[1]])

    # Set up network.
    tmp_model = make_model(X,trX.shape[1],trY.shape[1],transform_dict)
    py_x = tmp_model[0]

    # Set up cost and training type.
    if (loss_type == "l2_loss"):
        cost = tf.nn.l2_loss(tf.subtract(py_x,y))
    elif (loss_type == "cross_entropy"):
        cost = -tf.reduce_sum(y*tf.math.log(py_x))

    # Gets the optimizer to be used for training and set it up.
    if (type(optimizer) == str):
        train_op = optimizer_dict[optimizer][0](*optimizer_dict[optimizer][1]).minimize(cost,tape=tf.GradientTape(persistent=True).gradient(cost,[tmp_model[1],tmp_model[2]]))
    else:
        #train_op = optimizer[0](*optimizer[1]).minimize(cost,var_list=[py_x],tmp_model[1]))
        print("in else")
    predict_op = py_x

    init = tf.compat.v1.global_variables_initializer()
    sess.run(init)

    # Trains given number of times
    try:
        for i in range(train_steps):

            # If full_train is selected, trains on the full set of training data in 100-sample increments.
            if (full_train):
                for start,end in zip(range(0,len(trX),100),range(100,len(trX)+100,100)):
                    sess.run(train_op,feed_dict={X: trX[start:end],y: trY[start:end]})

            # If full_train is not selected, trains on a random set of samples from the training data.
            else:
                indices = random_index_list(train_size,len(trY))
                sess.run(train_op,feed_dict={X: trX[indices],y: trY[indices]})

    except Exception as err:
        print("Error during training:",err)
        sess.close()
        return None

    return predict_op,X,y
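As an aside, the per-batch slicing in the full-train loop can be sketched on its own. A hypothetical helper (not part of the code above) that yields the `(start, end)` slice bounds the `zip(range(...), range(...))` idiom is meant to produce:

```python
def batch_bounds(n, batch_size=100):
    """Yield (start, end) slice bounds covering n samples in order,
    in increments of batch_size, including any short final batch.
    Hypothetical helper mirroring the intended batching in
    train_tensorflow."""
    for start in range(0, n, batch_size):
        yield start, min(start + batch_size, n)

print(list(batch_bounds(250)))  # [(0, 100), (100, 200), (200, 250)]
```

Note that `zip(range(0,len(trX),100), range(100,100))` pairs every start index with an empty range, so a loop written that way never executes a single training step.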
