Universal Sentence Encoder Lite: slow inference speed

Problem description

I want to use USE-lite in an application where both speed and memory are constrained. I started with USE (v4), which is excellent speed-wise, but its memory footprint is high enough that I had to look for alternatives. USE-lite looked like a good compromise between performance and memory, so I rewrote my code to use it. However, speed is a big problem with this approach: a batch takes almost 1 second, and the batch size hardly matters. Do you have any optimization suggestions? The code I used for testing is below.

Taken from the Lite and v4 examples

Code

Universal Sentence Encoder Lite v2

import time

import tensorflow as tf            # TF 1.x graph/session API
import tensorflow_hub as hub
import sentencepiece as spm

graph = tf.Graph()
with tf.Session(graph=graph) as sess:
  # Build the USE-lite graph: the module consumes SentencePiece ids in
  # sparse form (values, indices, dense_shape).
  module = hub.Module("https://tfhub.dev/google/universal-sentence-encoder-lite/2")
  input_placeholder = tf.sparse_placeholder(tf.int64, shape=[None, None])
  encodings = module(
      inputs=dict(
          values=input_placeholder.values,
          indices=input_placeholder.indices,
          dense_shape=input_placeholder.dense_shape))

with tf.Session(graph=graph) as sess:
  spm_path = sess.run(module(signature="spm_path"))

sp = spm.SentencePieceProcessor()
sp.Load(spm_path)
print("SentencePiece model loaded at {}.".format(spm_path))


def process_to_IDs_in_sparse_format(sp, sentences):
  # A utility method that processes sentences with the SentencePiece processor
  # 'sp' and returns the results in tf.SparseTensor-like format:
  # (values, indices, dense_shape).
  ids = [sp.EncodeAsIds(x) for x in sentences]
  max_len = max(len(x) for x in ids)
  dense_shape = (len(ids), max_len)
  values = [item for sublist in ids for item in sublist]
  indices = [[row, col] for row in range(len(ids)) for col in range(len(ids[row]))]
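  # For example (the ids below are made up for illustration, not real
  # SentencePiece ids): two sentences tokenized to [10, 12, 5] and [7, 3]
  # would yield
  #   values      = [10, 12, 5, 7, 3]
  #   indices     = [[0, 0], [0, 1], [0, 2], [1, 0], [1, 1]]
  #   dense_shape = (2, 3)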
  return (values, indices, dense_shape)

# `messages` is assumed to be a list of test sentences defined elsewhere.
with tf.Session(graph=graph) as session:
  for examples in [1, 5, 10, 15, 20, 100, 500]:
    avg_times = []
    for _ in range(5):
      start = time.time()
      values, indices, dense_shape = process_to_IDs_in_sparse_format(sp, messages[:examples])
      # The variable and table initializers run inside the timed loop, so their
      # cost is included in every measurement.
      session.run([tf.global_variables_initializer(), tf.tables_initializer()])
      message_embeddings = session.run(
          encodings,
          feed_dict={input_placeholder.values: values,
                     input_placeholder.indices: indices,
                     input_placeholder.dense_shape: dense_shape})
      end = time.time()
      avg_times.append(end - start)
    print(f"{examples} examples took on average {sum(avg_times)/len(avg_times)} s")

Output

1 examples took on average 1.1000233173370362 s
5 examples took on average 1.0919271945953368 s
10 examples took on average 1.126042890548706 s
15 examples took on average 1.2352482795715332 s
20 examples took on average 1.2496394157409667 s
100 examples took on average 1.443721389770508 s
500 examples took on average 2.719187784194946 s
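One difference between this benchmark and the v4 one is that tf.global_variables_initializer() and tf.tables_initializer() are run on every iteration, so each measurement includes re-initializing the module's variables and lookup tables, not just encoding. That may account for much of the roughly constant ~1 s overhead. Below is a minimal sketch of the same loop with the initializers hoisted out of the timed region; it reuses the graph, encodings, input_placeholder, sp, and messages defined above and is an untested rearrangement for comparison, not a confirmed fix.

# Sketch: initialize once, then time only tokenization + encoding.
with tf.Session(graph=graph) as session:
  # One-time setup, excluded from the timings.
  session.run([tf.global_variables_initializer(), tf.tables_initializer()])
  for examples in [1, 5, 10, 15, 20, 100, 500]:
    avg_times = []
    for _ in range(5):
      start = time.time()
      values, indices, dense_shape = process_to_IDs_in_sparse_format(sp, messages[:examples])
      session.run(
          encodings,
          feed_dict={input_placeholder.values: values,
                     input_placeholder.indices: indices,
                     input_placeholder.dense_shape: dense_shape})
      avg_times.append(time.time() - start)
    print(f"{examples} examples took on average {sum(avg_times)/len(avg_times)} s")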

Universal Sentence Encoder v4

import time

import numpy as np
import tensorflow_hub as hub

module_url = "https://tfhub.dev/google/universal-sentence-encoder/4" 
model_USE = hub.load(module_url)
print("module %s loaded" % module_url)


def embed(input):
    return model_USE(input)

for examples in [1,500]:
  avg_times = []
  for _ in range(5):
    start = time.time()
    embed(messages[:examples])
    end = time.time()
    avg_times.append(end - start)
  print(f"{examples} examples took on average {sum(avg_times)/len(avg_times)} s")

Output

module https://tfhub.dev/google/universal-sentence-encoder/4 loaded
1 examples took on average 0.034186267852783205 s
5 examples took on average 0.012439298629760741 s
10 examples took on average 0.012084484100341797 s
15 examples took on average 0.01157207489013672 s
20 examples took on average 0.010838794708251952 s
100 examples took on average 0.011487102508544922 s
500 examples took on average 0.012101221084594726 s

Solution

No effective solution to this problem has been found yet.
