TF-Agents: no gradients provided for any variable during training

Problem description

I am experimenting with reinforcement learning and picked the 2048 game to start with. I followed the guide for the TF-Agents package and copied most of the code from the cartpole environment and REINFORCE agent example.

In the tutorial they use the ActorDistributionNetwork that comes with TF-Agents:

actor_net = actor_distribution_network.ActorDistributionNetwork(
    train_env.observation_spec(),
    train_env.action_spec(),
    fc_layer_params=fc_layer_params)

That does not seem to fit my needs, because the input is a (16, 18) tensor, a one-hot encoding of the 18 possible states of each of the 16 grid cells, and the output is a (4,) tensor that should be a softmax over the four move directions. In between I just want a few dense layers.
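(For concreteness, the specs implied above would look roughly like this; this is only a sketch, and the exact dtypes and names are whatever environment.py defines:)

import numpy as np
from tf_agents.specs import array_spec

# Sketch of the specs described above; dtypes and names are assumptions.
observation_spec = array_spec.BoundedArraySpec(
    shape=(16, 18), dtype=np.float32, minimum=0, maximum=1,
    name='observation')  # one-hot encoding of the 18 states of each grid cell
action_spec = array_spec.BoundedArraySpec(
    shape=(), dtype=np.int32, minimum=0, maximum=3,
    name='action')       # the four move directions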

The agent is copied straight from the tutorial:

optimizer = tf.compat.v1.train.AdamOptimizer(learning_rate=learning_rate)
train_step_counter = tf.compat.v2.Variable(0)
tf_agent = reinforce_agent.ReinforceAgent(
    train_env.time_step_spec(),
    train_env.action_spec(),
    actor_network=actor_net,
    optimizer=optimizer,
    normalize_returns=True,
    use_advantage_loss=False,
    train_step_counter=train_step_counter)
tf_agent.initialize()
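(These snippets assume the usual imports from the tutorial, roughly:)

import tensorflow as tf
import tqdm
from tf_agents.agents.reinforce import reinforce_agent
from tf_agents.networks import actor_distribution_network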

And I have a training loop, also copied from the tutorial:

for _ in tqdm.tqdm(range(num_iterations)):
    # Collect a few episodes using collect_policy and save to the replay buffer.
    collect_episode(
        train_env, tf_agent.collect_policy, collect_episodes_per_iteration, replay_buffer)

    # Use data from the buffer and update the agent's network.
    experience = replay_buffer.gather_all()
    train_loss = tf_agent.train(experience)
    replay_buffer.clear()
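(Here collect_episode is the helper from the tutorial: it plays a fixed number of episodes with the given policy and writes every transition to the replay buffer. A sketch of the tutorial version, for reference:)

from tf_agents.trajectories import trajectory

# Sketch of the tutorial's helper: play num_episodes episodes with the given
# policy and append every transition to the replay buffer.
def collect_episode(environment, policy, num_episodes, replay_buffer):
    episode_counter = 0
    environment.reset()
    while episode_counter < num_episodes:
        time_step = environment.current_time_step()
        action_step = policy.action(time_step)
        next_time_step = environment.step(action_step.action)
        traj = trajectory.from_transition(time_step, action_step, next_time_step)
        replay_buffer.add_batch(traj)
        # A boundary trajectory marks the end of an episode.
        if traj.is_boundary():
            episode_counter += 1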

With the given actor_net, training works fine, but the results are just nonsense. The actor ends up with an essentially random policy, because the action output is a vector of four elements that are all around 0.5; apparently there is no softmax at the end.
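(As far as I can tell, this is because ActorDistributionNetwork returns a distribution parameterised by raw logits rather than softmax probabilities, which might explain the four un-normalised values. A hedged sketch of how one could inspect it, assuming the usual network call signature:)

# Sketch only (not part of the original code): inspect what the tutorial's
# actor network actually returns for a fresh observation.
time_step = train_env.reset()
dist, _ = actor_net(time_step.observation, time_step.step_type)
print(dist)  # e.g. a categorical distribution over the 4 actions, built from logits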

I tried to replace the network with a simple stack of Keras layers, like this:

actor_net = tf_agents.networks.Sequential(
    layers=[
        # tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(32, activation=tf.keras.activations.relu),
        tf_agents.keras_layers.InnerReshape((16, 32), (16 * 32,)),
        tf.keras.layers.Dense(32, activation=tf.keras.activations.relu),
        tf.keras.layers.Dense(4, activation=tf.keras.activations.softmax),
    ],
    input_spec=train_env.observation_spec(),
)

The InnerReshape is there because during experience collection (i.e., while playing) the input shape is always (B, 16, 18), whereas during training it is (B, T, 16, 18), where B is the batch size and T is the number of time steps taken in an episode. A plain Keras Reshape or Flatten layer would also try to flatten the time axis, which has a varying number of elements because of the open-ended nature of the game.
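(A small sketch of the difference, assuming InnerReshape only rewrites the innermost dimensions and leaves any leading batch/time axes alone:)

import tensorflow as tf
from tf_agents.keras_layers import inner_reshape

reshape = inner_reshape.InnerReshape((16, 32), (16 * 32,))

# During collection the leading axis is just the batch:
print(reshape(tf.zeros((5, 16, 32))).shape)                        # (5, 512)
# During training there is an extra time axis, which is left untouched:
print(reshape(tf.zeros((5, 7, 16, 32))).shape)                     # (5, 7, 512)
# A plain Flatten would collapse everything after the batch axis instead:
print(tf.keras.layers.Flatten()(tf.zeros((5, 7, 16, 32))).shape)   # (5, 3584)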

When I try to train this, I am told that no gradients are provided for any variable:

ValueError: No gradients provided for any variable: ["<tf.Variable 'sequential/dense/kernel:0' shape=(18, 32) dtype=float32>", "<tf.Variable 'sequential/dense/bias:0' shape=(32,) dtype=float32>", "<tf.Variable 'sequential/dense_1/kernel:0' shape=(512, 32) dtype=float32>", "<tf.Variable 'sequential/dense_1/bias:0' shape=(32,) dtype=float32>", "<tf.Variable 'sequential/dense_2/kernel:0' shape=(32, 4) dtype=float32>", "<tf.Variable 'sequential/dense_2/bias:0' shape=(4,) dtype=float32>"].

The full traceback:

Traceback (most recent call last):
  File "/home/mu/reinforcement-2048/main.py", line 3, in <module>
    ri2048.__main__.main()
  File "/home/mu/reinforcement-2048/ri2048/__main__.py", line 16, in main
    ri2048.training.make_agent()
  File "/home/mu/reinforcement-2048/ri2048/training.py", line 103, in make_agent
    train_loss = tf_agent.train(experience)
  File "/home/mu/reinforcement-2048/venv/lib64/python3.7/site-packages/tensorflow/python/eager/def_function.py", line 828, in __call__
    result = self._call(*args, **kwds)
  File "/home/mu/reinforcement-2048/venv/lib64/python3.7/site-packages/tensorflow/python/eager/def_function.py", line 871, in _call
    self._initialize(args, kwds, add_initializers_to=initializers)
  File "/home/mu/reinforcement-2048/venv/lib64/python3.7/site-packages/tensorflow/python/eager/def_function.py", line 726, in _initialize
    *args, **kwds))
  File "/home/mu/reinforcement-2048/venv/lib64/python3.7/site-packages/tensorflow/python/eager/function.py", line 2969, in _get_concrete_function_internal_garbage_collected
    graph_function, _ = self._maybe_define_function(args, kwargs)
  File "/home/mu/reinforcement-2048/venv/lib64/python3.7/site-packages/tensorflow/python/eager/function.py", line 3361, in _maybe_define_function
    graph_function = self._create_graph_function(args, kwargs)
  File "/home/mu/reinforcement-2048/venv/lib64/python3.7/site-packages/tensorflow/python/eager/function.py", line 3206, in _create_graph_function
    capture_by_value=self._capture_by_value),
  File "/home/mu/reinforcement-2048/venv/lib64/python3.7/site-packages/tensorflow/python/framework/func_graph.py", line 990, in func_graph_from_py_func
    func_outputs = python_func(*func_args, **func_kwargs)
  File "/home/mu/reinforcement-2048/venv/lib64/python3.7/site-packages/tensorflow/python/eager/def_function.py", line 634, in wrapped_fn
    out = weak_wrapped_fn().__wrapped__(*args, **kwds)
  File "/home/mu/reinforcement-2048/venv/lib64/python3.7/site-packages/tf_agents/agents/tf_agent.py", line 519, in train
    experience=experience, weights=weights, **kwargs)
  File "/home/mu/reinforcement-2048/venv/lib64/python3.7/site-packages/tf_agents/utils/common.py", line 185, in with_check_resource_vars
    return fn(*fn_args, **fn_kwargs)
  File "/home/mu/reinforcement-2048/venv/lib64/python3.7/site-packages/tf_agents/agents/reinforce/reinforce_agent.py", line 289, in _train
    grads_and_vars, global_step=self.train_step_counter)
  File "/home/mu/reinforcement-2048/venv/lib64/python3.7/site-packages/tensorflow/python/training/optimizer.py", line 595, in apply_gradients
    ([str(v) for _, v, _ in converted_grads_and_vars],))
ValueError: No gradients provided for any variable: ["<tf.Variable 'sequential/dense/kernel:0' shape=(18, 32) dtype=float32>", "<tf.Variable 'sequential/dense/bias:0' shape=(32,) dtype=float32>", "<tf.Variable 'sequential/dense_1/kernel:0' shape=(512, 32) dtype=float32>", "<tf.Variable 'sequential/dense_1/bias:0' shape=(32,) dtype=float32>", "<tf.Variable 'sequential/dense_2/kernel:0' shape=(32, 4) dtype=float32>", "<tf.Variable 'sequential/dense_2/bias:0' shape=(4,) dtype=float32>"].

My entire code is on GitHub, mainly in the environment.py and training.py files.

I assume it is something minor. How do I get the gradients needed for training?

Solution

No effective solution for this problem has been found yet.
