How can I reduce the time per episode in a DQN?

Problem description

I have modified the CartPole environment from OpenAI Gym so that it starts in the inverted (hanging-down) position and has to learn to swing up. I run it on Google Colab because I assumed it would be faster than my laptop, but it is far too slow: one episode takes about 40 seconds, which is roughly the same as on my laptop. I even tried targeting a Google TPU, but nothing changed. As far as I can tell, the main time consumers are .fit() and .predict(). This is where I use .predict():

def get_qs(self, state):
    return self.model.predict(np.array(state).reshape(-1, *state.shape),
                              workers=8, use_multiprocessing=True)[0]
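
For context, get_qs() is called once per environment step in the usual epsilon-greedy action-selection loop, so .predict() runs on a single state many times per episode. A rough sketch of that loop is below; env, agent, EPSILON and update_replay_memory() are the surrounding training-loop pieces, not shown above:

import random
import numpy as np

EPSILON = 0.1  # placeholder; in the real loop this decays over episodes
step = 1
current_state = env.reset()
done = False
while not done:
    if random.random() > EPSILON:
        # one .predict() call per step
        action = np.argmax(agent.get_qs(current_state))
    else:
        action = np.random.randint(0, env.action_space.n)
    new_state, reward, done, _ = env.step(action)
    agent.update_replay_memory((current_state, action, reward, new_state, done))
    agent.train(done, step)
    current_state, step = new_state, step + 1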

And here is the .fit() call, inside the train() method:

@tf.function 
def train(self, terminal_state, step):
    "For training it is always worth using a larger batch of data to prevent overfitting"
    if len(self.replay_memory) < MIN_REPLAY_MEMORY_SIZE:
        return

    # Get a minibatch of random samples from memory replay table
    minibatch = random.sample(self.replay_memory, MINIBATCH_SIZE)

    # Get current states from minibatch, then query NN model for Q values
    current_states = np.array([transition[0] for transition in minibatch])
    current_qs_list = self.model.predict(current_states)

    # Get future states from minibatch, then query NN model for Q values
    # When using target network, query it, otherwise main network should be queried
    new_current_states = np.array([transition[3] for transition in minibatch])
    future_qs_list = self.target_model.predict(new_current_states, use_multiprocessing=True)

    X = []
    y = []

    # Now we need to enumerate our batches
    for index, (current_state, action, reward, new_current_state, done) in enumerate(minibatch):

        # If not a terminal state,get new q from future states,otherwise set it to 0
        # almost like with Q Learning,but we use just part of equation here
        if not done:
            max_future_q = np.max(future_qs_list[index])
            new_q = reward + DISCOUNT * max_future_q
        else:
            new_q = reward

        # Update Q value for given state
        current_qs = current_qs_list[index]
        current_qs[action] = new_q

        # And append to our training data
        X.append(current_state)
        y.append(current_qs)
    
    # Fit on all samples as one batch, log only on terminal state
    self.model.fit(np.array(X), np.array(y), batch_size=MINIBATCH_SIZE, verbose=0, shuffle=False,
                   use_multiprocessing=True,
                   callbacks=[self.tensorboard] if terminal_state else None)

    # Update target network counter every episode
    if terminal_state:
        self.target_update_counter += 1

    # If counter reaches set value,update target network with weights of main network
    if self.target_update_counter > UPDATE_TARGET_EVERY:
        self.target_model.set_weights(self.model.get_weights())
        self.target_update_counter = 0
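
In case it matters, the per-sample loop above only builds the standard Bellman targets (reward + DISCOUNT * max future Q, or just the reward on terminal transitions). The same computation can be written as one vectorized NumPy step, sketched below with the same variable names as above, although the loop itself is probably not where most of the time goes:

import numpy as np

# Vectorized version of the target-building loop (sketch, same variables as above)
actions = np.array([t[1] for t in minibatch])
rewards = np.array([t[2] for t in minibatch])
dones = np.array([t[4] for t in minibatch], dtype=np.float32)

max_future_qs = np.max(future_qs_list, axis=1)
targets = rewards + DISCOUNT * max_future_qs * (1.0 - dones)  # just r on terminal steps

y = current_qs_list.copy()
y[np.arange(len(minibatch)), actions] = targets
X = current_states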

Can anyone help me speed this up?

Solution

No effective solution has been found for this problem yet.
