Object Detection API v2 TFLite model post-training quantization

Problem description

I am trying to convert and quantize a model trained with the Object Detection API v2 so that it can run on a Coral Dev Board.

Exporting object detection models to TFLite still seems to be quite problematic, but I hope someone here can give me some advice.

My converter looks like this; I am trying to convert "SSD MobileNet v2 320x320" from the TF2 Model Zoo:

import os

import cv2
import numpy as np
import tensorflow as tf
from object_detection.builders import model_builder
from object_detection.utils import config_util


def convertModel(input_dir, output_dir, pipeline_config="", checkpoint: int = -1, quantization=False):

    os.environ['CUDA_VISIBLE_DEVICES'] = '0'

    files = os.listdir(input_dir)
    if pipeline_config == "":
        pipeline_config = [pipe for pipe in files if pipe.endswith(".config")][0]
    pipeline_config_path = os.path.join(input_dir, pipeline_config)

    # Find the latest or the given checkpoint
    checkpoint_file = ""
    checkpointDir = os.path.join(input_dir, 'checkpoint')
    for chck in sorted(os.listdir(checkpointDir)):
        if chck.endswith(".index"):
            checkpoint_file = chck[:-6]  # strip the ".index" suffix
            # Stop the search when the requested checkpoint was found
            if chck.endswith(str(checkpoint)):
                break
    print("#####################################")
    print(checkpoint_file)
    print("#####################################")
    trained_checkpoint_prefix = os.path.join(checkpointDir, checkpoint_file)

    # Rebuild the detection model from the pipeline config and restore the weights
    configs = config_util.get_configs_from_pipeline_file(pipeline_config_path)
    detection_model = model_builder.build(configs['model'], is_training=False)

    ckpt = tf.compat.v2.train.Checkpoint(model=detection_model)
    ckpt.restore(trained_checkpoint_prefix).expect_partial()

    # Wrap preprocess/predict/postprocess in a Keras model with a fixed input shape
    class MyModel(tf.keras.Model):
        def __init__(self, model):
            super(MyModel, self).__init__()
            self.model = model
            self.seq = tf.keras.Sequential([
                tf.keras.Input(shape=[300, 300, 3], batch_size=1),
            ])

        def call(self, x):
            x = self.seq(x)
            images, shapes = self.model.preprocess(x)
            prediction_dict = self.model.predict(images, shapes)
            detections = self.model.postprocess(prediction_dict, shapes)
            return detections

    km = MyModel(detection_model)

    # Run one dummy prediction so the model is built before conversion
    y = km.predict(np.random.random((1, 300, 300, 3)).astype(np.float32))
    converter = tf.lite.TFLiteConverter.from_keras_model(km)
    if quantization:
        converter.optimizations = [tf.lite.Optimize.OPTIMIZE_FOR_SIZE]
        converter.target_spec.supported_ops = [tf.lite.OpsSet.SELECT_TF_OPS,
                                               tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
        converter.representative_dataset = _genDataset
    else:
        converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS]
    converter.experimental_new_converter = True
    converter.allow_custom_ops = True
    tflite_model = converter.convert()

    with open(os.path.join(output_dir, 'model.tflite'), 'wb') as f:
        f.write(tflite_model)
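
For completeness, the converter is invoked like this; the directory names are placeholders for an exported training run that contains the pipeline .config file and a checkpoint/ subfolder:

convertModel(input_dir="training/ssd_mobilenet_v2",  # placeholder path
             output_dir="export",
             quantization=True)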

My data generator loads roughly 100 images downloaded from the COCO dataset to produce sample inputs:

def _genDataset():
    sampleDir = os.path.join("Dataset", "Coco")
    for i in os.listdir(sampleDir):
        image = cv2.imread(os.path.join(sampleDir, i))
        image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
        image = cv2.resize(image, (300, 300))
        image = image.astype("float")
        # Add the batch dimension: (300, 300, 3) -> (1, 300, 300, 3)
        image = np.expand_dims(image, axis=0)
        yield [image.astype("float32")]
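
A quick sanity check, just to confirm the generator yields the shape the converter expects during calibration:

sample = next(_genDataset())
print(sample[0].shape, sample[0].dtype)  # expected: (1, 300, 300, 3) float32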

I first tried the code with TF 2.2.0, which returns

RuntimeError: Max and min for dynamic tensors should be recorded during calibration

Updating to TF 2.3.0, which should supposedly help with this, instead returns

<unknown>:0: error: failed while converting: 'main': Ops that can be supported by the flex runtime (enabled via setting the -emit-select-tf-ops flag):
tf.Size {device = ""}

I also tested tf-nightly (2.4.0), which again returns

RuntimeError: Max and min for dynamic tensors should be recorded during calibration
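
A minimal sketch of an alternative entry point that pins the input shape through a concrete function, in case the dynamic-tensor error is caused by an unknown batch dimension; km is the wrapped Keras model from the converter above, and the remaining converter settings stay the same:

# Pin a static (1, 300, 300, 3) input signature before conversion
run_model = tf.function(lambda x: km(x))
concrete_func = run_model.get_concrete_function(
    tf.TensorSpec([1, 300, 300, 3], tf.float32))
converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete_func])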

Now, this tf.Size operator seems to be the reason the model will not convert, because when I allow custom ops I can convert it to TFLite. Sadly, that is no solution for me, since neither the Coral compiler nor my interpreter can work with a model that has a missing custom op.

Does anyone know whether it is possible to remove this op from the postprocessing, or to simply ignore it during conversion?
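
For what it is worth, the Object Detection API also ships an export_tflite_graph_tf2.py script that re-exports a checkpoint into a TFLite-friendly SavedModel, with a fixed input shape and the postprocessing replaced by the TFLite_Detection_PostProcess custom op. A sketch of converting such an export, assuming it was written to exported/saved_model (the path is a placeholder, and the representative images must match the exported input size):

converter = tf.lite.TFLiteConverter.from_saved_model("exported/saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = _genDataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8    # full-integer I/O for the Edge TPU
converter.inference_output_type = tf.uint8
converter.allow_custom_ops = True  # TFLite_Detection_PostProcess is a custom op
tflite_model = converter.convert()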

Just converting it to TFLite without quantization, using tf.lite.OpsSet.TFLITE_BUILTINS, works fine.
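
To double-check that result, the converted file can be loaded with the TFLite interpreter and its input/output tensors inspected:

interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
print(interpreter.get_input_details()[0]["shape"])
print([d["name"] for d in interpreter.get_output_details()])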

Solution

No working solution has been found for this problem yet.

