Execution time is too long

Problem description

My data has a total length of l = 132011. Initially, when I tried to run this code as-is, length = 100 iterations took 35 minutes and length = 1000 iterations took about 6 hours, which means running the full length (132011 iterations) would take more than 600 hours. Here is my main code:

import time
import warnings

import numpy as np
import pandas as pd
import tensorflow as tf

start = time.perf_counter()  # start timer

v1 = pd.read_excel('filename', header=None)   # voltage data (placeholder filename)
i1 = pd.read_excel('filename', header=None)   # current data (placeholder filename)

v = np.array(v1)
i = np.array(i1)

columnnames = ['S','V','K','I','Pr']
data = pd.DataFrame(0,index=np.arange(19),columns = columnnames)

P  = 0.01         
Q  = 0.000251      
R  = 0.1         
CC = 0
Cn = 10260
S = 0         
K = 0     
Pr = 0 

X = []   # creating empty lists to use in the for loop
Y = []
Z = []
A = []
warnings.filterwarnings("ignore", category=np.VisibleDeprecationWarning)  # ignore the deprecation warning

l = 132011  # total data length (from the description above)
for k in range(l):
    x  = i[k]   # get each current value 
    CC = CC + (x/Cn) # calculate SoC using CC
    CC = np.round(CC,8)
    X.append(CC)                 

    V = v[k]
    V = np.round(V,6)
    I = i[k]
    I = np.round(I,6)

    data.loc[-1] = [S,V,K,I,Pr]     # add these values from the previous cycle to the end of 'data'
    data = data.reset_index(drop=True)      # reset the index of the dataframe 
    data = data.round(decimals=6)           # round all values to 6 decimals
#prior estimation  
    data_predict = data.tail(20)            # shape is (20,5)
    data_predict = np.array(data_predict)   # convert into ndarray
    data_predict = tf.convert_to_tensor(data_predict,np.float32) # convert into tensor 
    data_predict = tf.expand_dims(data_predict,0)   # shape here is (1,20,5)
   
    Pr1 = Pr     

    with tf.GradientTape(persistent=True,watch_accessed_variables=True) as tape:

      tape.watch(data_predict)
      Pr = model(data_predict)  # Trained model (1 LSTM layer with 20 units,2 dense layers with 35,20 neurons,one output dense layer with 1 neuron)
   
    j = tape.jacobian(Pr,data_predict) # jacobian of output w.r.t input

    j = tf.reshape(j,[1,100])

    jt = tf.transpose(j)

    Pr = tf.reshape(Pr,[1])    # to convert into 1d array #added
    Pr = np.round(Pr,6)
    Y.append(Pr)

    F1 = j*P
    P1= tf.linalg.matmul(F1,jt) + Q #computes matrix multiplication of F1,jt and adds Q
    P1 = tf.reshape(P1,[1])   

    K = P1/(P1+R)       
    K = np.round(K,6)
    A.append(K)          # store new K value to the list of previous values

    S = Pr + K*(CC - Pr) #  Final Value
    S = np.round(S,6)
    P = (1-K)*P1 
    P = np.round(P,6)
    Z.append(S)   
elapsed = time.perf_counter() - start
print('Execution Time %.5f seconds.' % elapsed)    # give the time for execution
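For reference, the `data.loc[-1] = ...` / `reset_index` pattern in the loop above appends a row on every iteration, so the DataFrame keeps growing even though only the last 20 rows are ever read. A minimal, self-contained sketch comparing that pattern against a fixed-length buffer (N = 2000 is a hypothetical size, much smaller than the full 132011):

```python
import time
from collections import deque

import numpy as np
import pandas as pd

N = 2000  # hypothetical iteration count

# Pattern from the question: append one row per iteration via loc[-1],
# then reset the index -- the frame keeps growing.
start = time.perf_counter()
data = pd.DataFrame(0, index=np.arange(19), columns=['S', 'V', 'K', 'I', 'Pr'])
for k in range(N):
    data.loc[-1] = [0, 1, 2, 3, 4]
    data = data.reset_index(drop=True)
    tail_df = np.array(data.tail(20))
t_df = time.perf_counter() - start

# Alternative: a fixed-length buffer; deque(maxlen=20) drops the oldest
# row automatically, so the cost per iteration stays constant.
start = time.perf_counter()
buf = deque([[0, 0, 0, 0, 0]] * 19, maxlen=20)
for k in range(N):
    buf.append([0, 1, 2, 3, 4])
    tail_buf = np.array(buf, dtype=np.float32)
t_buf = time.perf_counter() - start

print(f'growing DataFrame: {t_df:.3f} s, fixed deque: {t_buf:.3f} s')
```

Both loops produce the same (20, 5) window each iteration, but the deque version does not pay the cost of rebuilding an ever-larger DataFrame.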

When I executed each part separately, none of them took long to run. Can anyone help me figure out why it takes so long? Please let me know if there are any errors in my code that would consume this much time.
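Since each part is fast in isolation, one way to locate the bottleneck is to accumulate per-section wall time inside the loop itself, rather than timing the whole run. A minimal sketch of that pattern, with placeholder work standing in for the real sections (the section names and workloads here are hypothetical labels):

```python
import time
from collections import defaultdict

timings = defaultdict(float)

def timed(name, fn):
    """Run fn(), add its wall time to the running total for `name`."""
    t0 = time.perf_counter()
    result = fn()
    timings[name] += time.perf_counter() - t0
    return result

# Placeholder sections standing in for the real loop body
# (DataFrame update, jacobian, Kalman update).
for k in range(100):
    timed('dataframe_update', lambda: sum(range(1000)))
    timed('jacobian', lambda: sum(range(50000)))
    timed('kalman_update', lambda: sum(range(200)))

# Report the sections from slowest to fastest.
for name, t in sorted(timings.items(), key=lambda kv: -kv[1]):
    print(f'{name:20s} {t:.4f} s total')
```

Wrapping each stage of the real loop (`tail`/tensor conversion, `model(...)`, `tape.jacobian(...)`, the Kalman update) this way would show which one dominates the 35 minutes per 100 iterations.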

Thanks :)
