AssertionError: defaultdict(<function mc_control_importance_sampling.<locals>.<lambda> at 0x7f31699ffe18>

Problem Description

I have been building a DQN with Stable Baselines on a discrete environment with 3 actions.

I am using the RL tutorial https://github.com/dennybritz/reinforcement-learning/blob/master/MC/MC%20Control%20with%20Epsilon-Greedy%20Policies%20Solution.ipynb for reference.
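For context, the off-policy MC control notebook in that repository builds its behaviour and target policies with two small factory functions. The sketch below is reproduced from memory of that tutorial (the names create_random_policy and create_greedy_policy are the tutorial's; details may differ slightly):

import numpy as np

def create_random_policy(nA):
    # Uniform random behaviour policy: always returns the same probability vector.
    A = np.ones(nA, dtype=float) / nA
    def policy_fn(observation):
        return A
    return policy_fn

def create_greedy_policy(Q):
    # Greedy target policy: puts all probability mass on argmax_a Q[state][a].
    def policy_fn(state):
        A = np.zeros_like(Q[state], dtype=float)
        A[np.argmax(Q[state])] = 1.0
        return A
    return policy_fn

If I remember correctly, the notebook creates the target policy inside the control function with create_greedy_policy(Q) and passes a create_random_policy(env.action_space.n) behaviour policy in as an argument.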

But I am running into some problems with the helper function for my Monte Carlo method.


import gym
from stable_baselines import DQN                        # assumption: the TF-based stable-baselines package
from stable_baselines.deepq.policies import MlpPolicy

env = gym.make('fishing-v0')  # custom discrete environment with 3 actions
model = DQN(MlpPolicy, env, verbose=2)
trained_model = model.learn(total_timesteps=10000)

When I call the function,


from collections import defaultdict
import numpy as np

def mc_control_importance_sampling(env, num_episodes, discount=0.99):
    """
    Off-policy Monte Carlo control using weighted importance sampling.
    Finds an optimal greedy policy.
    """
    
    # creates Q dictionary that maps obs to action values
    Q = defaultdict(lambda: np.zeros(env.action_space))
    # dictionary of cumulative importance-sampling weights
    C = defaultdict(lambda: np.zeros(env.action_space))
    
    # learn greedy policy
    target_policy = env.step(Q)
        
    for i_episode in range(1,num_episodes + 1):
        if i_episode % 1 == 0:
            print("\rEpisode {}/{}.".format(i_episode,num_episodes),end="")

        # Generate an episode: a list of (state, action, reward) tuples
        episode = []
        obs = env.reset()
        for t in range(100):
            # Sample an action from our policy
            action,_states = trained_model.predict(obs)
            next_state,reward,done,_ = env.step(action)
            episode.append((state,reward))
            if done:
                break
            obs = next_obs
        
        # Sum of discounted returns
        G = 0.0
        # weights for return
        W = 1.0
        for t in range(len(episode))[::-1]:
            obs,reward = episode[t]
            G = discount * G + reward
            #  Add weights
            C[obs][action] += W
            # Update policy
            Q[obs][action] += (W / C[obs][action]) * (G - Q[obs][action])
            if action !=  np.argmax(target_policy(obs)):
                break
            W = W * 1./behavior_policy(obs)[action]
        
    return Q,target_policy

I get the error when I run:

Q,policy = mc_control_importance_sampling(env,num_episodes=500000)
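For comparison, the corresponding loops in the referenced notebook store (state, action, reward) triples during episode generation and unpack all three in the backward pass, so action always refers to the step being updated. Below is a sketch from memory, assuming Q, C, discount, behavior_policy and target_policy are defined as above; variable names follow the tutorial and details may differ:

episode = []
state = env.reset()
for t in range(100):
    probs = behavior_policy(state)                      # probability vector over actions
    action = np.random.choice(np.arange(len(probs)), p=probs)
    next_state, reward, done, _ = env.step(action)
    episode.append((state, action, reward))             # keep the action with the transition
    if done:
        break
    state = next_state

G, W = 0.0, 1.0
for state, action, reward in reversed(episode):
    G = discount * G + reward
    C[state][action] += W                               # accumulate the importance weight
    Q[state][action] += (W / C[state][action]) * (G - Q[state][action])
    if action != np.argmax(target_policy(state)):       # target policy disagrees: stop
        break
    W = W * 1.0 / behavior_policy(state)[action]        # update the importance ratio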

I am not sure how to fix this,

Thanks

