TensorFlow: TFLiteConverter SavedModel -> TFLite requires all operands and results to have compatible element types

Problem description

This problem has been bothering me for a few days: when I try to convert my saved_model.pb file to a .tflite model using the code below, it raises an error (stack trace follows).


Conversion code:

converter = tf.lite.TFLiteConverter.from_saved_model(
    "/tmp/test_saved_model2")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
quantized_model = converter.convert()
open("converted_model.tflite", "wb").write(quantized_model)

Stacktrace:

Traceback (most recent call last):
  File "C:\Users\Mr.Ace\AppData\Roaming\Python\python38\site-packages\tensorflow\lite\python\convert.py", line 196, in toco_convert_protos
    model_str = wrap_toco.wrapped_toco_convert(model_flags_str,
  File "C:\Users\Mr.Ace\AppData\Roaming\Python\python38\site-packages\tensorflow\lite\python\wrap_toco.py", line 32, in wrapped_toco_convert
    return _pywrap_toco_api.TocoConvert(
Exception: <unknown>:0: error: loc("Func/StatefulPartitionedCall/input/_0"): requires all operands and results to have compatible element types
<unknown>:0: note: loc("Func/StatefulPartitionedCall/input/_0"): see current operation: %1 = "tf.Identity"(%arg0) {device = ""} : (tensor<1x?x?x3x!tf.quint8>) -> tensor<1x?x?x3xui8>


During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "c:/Data/TFOD/tflite_converter.py", line 27, in <module>
    quantized_model = converter.convert()
  File "C:\Users\Mr.Ace\AppData\Roaming\Python\python38\site-packages\tensorflow\lite\python\lite.py", line 1076, in convert
    return super(TFLiteConverterV2, self).convert()
  File "C:\Users\Mr.Ace\AppData\Roaming\Python\python38\site-packages\tensorflow\lite\python\lite.py", line 899, in convert
    return super(TFLiteFrozenGraphConverterV2,
  File "C:\Users\Mr.Ace\AppData\Roaming\Python\python38\site-packages\tensorflow\lite\python\lite.py", line 629, in convert
    result = _toco_convert_impl(
  File "C:\Users\Mr.Ace\AppData\Roaming\Python\python38\site-packages\tensorflow\lite\python\convert.py", line 569, in toco_convert_impl
    data = toco_convert_protos(
  File "C:\Users\Mr.Ace\AppData\Roaming\Python\python38\site-packages\tensorflow\lite\python\convert.py", line 202, in toco_convert_protos
    raise ConverterError(str(e))
tensorflow.lite.python.convert.ConverterError: <unknown>:0: error: loc("Func/StatefulPartitionedCall/input/_0"): requires all operands and results to have compatible element types
<unknown>:0: note: loc("Func/StatefulPartitionedCall/input/_0"): see current operation: %1 = "tf.Identity"(%arg0) {device = ""} : (tensor<1x?x?x3x!tf.quint8>) -> tensor<1x?x?x3xui8>

I tried tf-nightly and, while the conversion ran, it did not produce the FlatBuffer model I need to use on an Android phone. How can I fix this?

Solution

I see two things worth noting:

  1. (tensor<1x?x?x3x!tf.quint8>) -> tensor<1x?x?x3xui8>

It looks like your model has dynamic input shapes, and TFLite does not work well with those. First, convert the model from saved_model to tflite with a fixed input shape. A good way to do it is, for example:

tflite_convert \
  --saved_model_dir="/tmp/test_saved_model2" \
  --output_file='model.tflite' \
  --input_shapes=1,256,256,3 \   # <-- here you set an
                                 #     arbitrary valid fixed shape
  --input_arrays='input' \
  --output_arrays='Softmax'

The other way is to give the saved_model fixed input/output shapes, so that you do not need to specify them during the saved_model -> tflite conversion. This is the only option for TF2.
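Pinning the shape in Python can be sketched like this. This is a minimal, self-contained illustration with a toy model standing in for the real one; the 256x256 size and the `serving_default` signature name are assumptions you should adapt to your own SavedModel:

```python
import tensorflow as tf

# Toy model with dynamic spatial dimensions, standing in for the real network.
class Toy(tf.Module):
    @tf.function(input_signature=[tf.TensorSpec([1, None, None, 3], tf.float32)])
    def __call__(self, x):
        return tf.reduce_mean(x, axis=[1, 2])

model = Toy()
tf.saved_model.save(model, "/tmp/toy_saved_model",
                    signatures=model.__call__.get_concrete_function())

# Reload, pin the dynamic dimensions to a fixed size, then convert.
loaded = tf.saved_model.load("/tmp/toy_saved_model")
concrete = loaded.signatures["serving_default"]
concrete.inputs[0].set_shape([1, 256, 256, 3])  # <-- fixed shape set here

converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete])
tflite_model = converter.convert()
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```

The idea is the same as the CLI flag above: the converter only sees a concrete function whose input shape is fully known.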

  2. converter.optimizations = [tf.lite.Optimize.DEFAULT]

While debugging, try to avoid any kind of optimization, so you narrow down where the error can come from. That is the general idea.
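As a concrete illustration of that debugging order (a sketch using a stand-in Keras model, not your detector): convert once with no optimizations at all, and only re-enable quantization after the plain conversion succeeds:

```python
import tensorflow as tf

# Stand-in model; replace with your own.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(4),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
plain = converter.convert()          # step 1: plain conversion, no optimizations

converter.optimizations = [tf.lite.Optimize.DEFAULT]
quantized = converter.convert()      # step 2: re-enable quantization once step 1 works
```

If step 1 already fails, the problem is in the model/shapes, not in the quantization settings.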


I ran into the same problem before, and now I can train a tflite model with the following three steps:

  1. Train the model:

    !python /content/models/research/object_detection/model_main_tf2.py \
        --pipeline_config_path={pipeline_config_path} \
        --model_dir={model_dir} \
        --alsologtostderr \
        --num_train_steps={num_steps} \
        --sample_1_of_n_eval_examples=1 \
        --num_eval_steps={num_eval_steps}

  2. Export the TF2 graph for tflite:

    !python models/research/object_detection/export_tflite_graph_tf2.py \
        --pipeline_config_path={pipeline_config_path} \
        --trained_checkpoint_dir={model_dir} \
        --output_directory=tflite_exported

  3. Convert the .pb to tflite:

    !tflite_convert --output_file='model.tflite' --saved_model_dir='tflite_exported/saved_model'