Problem description
I am trying to run a custom object detection model with TensorFlow Lite in Android Studio, following the guide here: Running on mobile with TensorFlow Lite, but without success. The example model runs correctly and shows all detected labels. When I try my custom model, however, I get no labels at all. I also tried other models (from the internet) with the same result. It is as if the labels are not being passed through at all. I copied detect.tflite and labelmap.txt, and changed TF_OD_API_INPUT_SIZE and TF_OD_API_IS_QUANTIZED in DetectorActivity.java, but I still get no results (no detected classes with bounding boxes and scores).
2020-10-11 18:37:54.315 31681-31681/org.tensorflow.lite.examples.detection E/HAL: PATH3 /odm/lib64/hw/gralloc.qcom.so
2020-10-11 18:37:54.315 31681-31681/org.tensorflow.lite.examples.detection E/HAL: PATH2 /vendor/lib64/hw/gralloc.qcom.so
2020-10-11 18:37:54.315 31681-31681/org.tensorflow.lite.examples.detection E/HAL: PATH1 /system/lib64/hw/gralloc.qcom.so
2020-10-11 18:37:54.315 31681-31681/org.tensorflow.lite.examples.detection E/HAL: PATH3 /odm/lib64/hw/gralloc.msm8953.so
2020-10-11 18:37:54.315 31681-31681/org.tensorflow.lite.examples.detection E/HAL: PATH2 /vendor/lib64/hw/gralloc.msm8953.so
2020-10-11 18:37:54.315 31681-31681/org.tensorflow.lite.examples.detection E/HAL: PATH1 /system/lib64/hw/gralloc.msm8953.so
2020-10-11 18:37:54.859 31681-31681/org.tensorflow.lite.examples.detection E/tensorflow: CameraActivity: Exception!
java.lang.IllegalStateException: This model does not contain associated files, and is not a Zip file.
    at org.tensorflow.lite.support.metadata.MetadataExtractor.assertZipFile(MetadataExtractor.java:325)
    at org.tensorflow.lite.support.metadata.MetadataExtractor.getAssociatedFile(MetadataExtractor.java:165)
    at org.tensorflow.lite.examples.detection.tflite.TFLiteObjectDetectionAPIModel.create(TFLiteObjectDetectionAPIModel.java:118)
    at org.tensorflow.lite.examples.detection.DetectorActivity.onPreviewSizeChosen(DetectorActivity.java:96)
    at org.tensorflow.lite.examples.detection.CameraActivity.onPreviewFrame(CameraActivity.java:200)
    at android.hardware.Camera$EventHandler.handleMessage(Camera.java:1157)
    at android.os.Handler.dispatchMessage(Handler.java:102)
    at android.os.Looper.loop(Looper.java:165)
    at android.app.ActivityThread.main(ActivityThread.java:6375)
    at java.lang.reflect.Method.invoke(Native Method)
    at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:912)
    at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:802)
How can I get detections working? Do I need some additional file (metadata) alongside the labels, or am I doing something else wrong? The above was tested on an Android 7 device. Thanks!
Solution
That looks like a regression. Could you try the following?
<at your TF example repo>
$ git checkout de42482b453de6f7b6488203b20e7eec61ee722e^
This is a specific issue with that document, which has not been updated yet.
The main problem is that the example was updated to use models with Metadata attached, in particular models with the labels embedded as a model asset.
Once you add the label file to the model, everything should work.
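A quick way to check whether the labels were actually embedded: a TFLite file with associated files packed in is also a valid ZIP archive (which is exactly what the "is not a Zip file" exception above is complaining about). A minimal sketch, using only the standard library (the file name detect.tflite is just the one from this question):

```python
import zipfile

def packed_files(model_path):
    """Return the names of the files packed into a TFLite model, if any."""
    if not zipfile.is_zipfile(model_path):
        return []  # nothing embedded: the example app's label lookup will fail
    with zipfile.ZipFile(model_path) as zf:
        return zf.namelist()

# After populating metadata, packed_files("detect.tflite") should
# include "labelmap.txt"; an empty list reproduces the exception above.
```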
To make the solution Gusthema proposed easier to follow, here is the code that worked in my case:
pip install tflite-support
import os

from tflite_support import flatbuffers
from tflite_support import metadata as _metadata
from tflite_support import metadata_schema_py_generated as _metadata_fb
# Creates model info.
model_meta = _metadata_fb.ModelMetadataT()
model_meta.name = "MobileNetV1 image classifier"
model_meta.description = ("Identify Unesco Monuments Route "
                          "image from a set of 18 categories")
model_meta.version = "v1"
model_meta.author = "TensorFlow"
model_meta.license = ("Apache License. Version 2.0 "
                      "http://www.apache.org/licenses/LICENSE-2.0.")
# Creates input info.
input_meta = _metadata_fb.TensorMetadataT()
input_meta.name = "image"
input_meta.description = (
    "Input image to be classified. The expected image is {0} x {1}, with "
    "three channels (red, blue, and green) per pixel. Each value in the "
    "tensor is a single byte between 0 and 255.".format(300, 300))
input_meta.content = _metadata_fb.ContentT()
input_meta.content.contentProperties = _metadata_fb.ImagePropertiesT()
input_meta.content.contentProperties.colorSpace = (
    _metadata_fb.ColorSpaceType.RGB)
input_meta.content.contentPropertiesType = (
    _metadata_fb.ContentProperties.ImageProperties)
input_normalization = _metadata_fb.ProcessUnitT()
input_normalization.optionsType = (
    _metadata_fb.ProcessUnitOptions.NormalizationOptions)
input_normalization.options = _metadata_fb.NormalizationOptionsT()
input_normalization.options.mean = [127.5]
input_normalization.options.std = [127.5]
input_meta.processUnits = [input_normalization]
input_stats = _metadata_fb.StatsT()
input_stats.max = [255]
input_stats.min = [0]
input_meta.stats = input_stats
# Creates output info.
output_meta = _metadata_fb.TensorMetadataT()
output_meta.name = "probability"
output_meta.description = "Probabilities of the 18 labels respectively."
output_meta.content = _metadata_fb.ContentT()
output_meta.content.contentProperties = _metadata_fb.FeaturePropertiesT()
output_meta.content.contentPropertiesType = (
    _metadata_fb.ContentProperties.FeatureProperties)
output_stats = _metadata_fb.StatsT()
output_stats.max = [1.0]
output_stats.min = [0.0]
output_meta.stats = output_stats
label_file = _metadata_fb.AssociatedFileT()
label_file.name = os.path.basename('/content/gdrive/My Drive/models/research/deploy/labelmap.txt')
label_file.description = "Labels for objects that the model can recognize."
label_file.type = _metadata_fb.AssociatedFileType.TENSOR_AXIS_LABELS
output_meta.associatedFiles = [label_file]
# Creates subgraph info.
subgraph = _metadata_fb.SubGraphMetadataT()
subgraph.inputTensorMetadata = [input_meta]
# The SSD detection model has 4 output tensors (locations, classes,
# scores, number of detections), so the output metadata is repeated.
subgraph.outputTensorMetadata = 4 * [output_meta]
model_meta.subgraphMetadata = [subgraph]
b = flatbuffers.Builder(0)
b.Finish(
    model_meta.Pack(b),
    _metadata.MetadataPopulator.METADATA_FILE_IDENTIFIER)
metadata_buf = b.Output()
# metadata and the label file are written into the TFLite file
populator = _metadata.MetadataPopulator.with_model_file('/content/gdrive/My Drive/models/research/object_detection/exported_model/detect.tflite')
populator.load_metadata_buffer(metadata_buf)
populator.load_associated_files(['/content/gdrive/My Drive/models/research/deploy/labelmap.txt'])
populator.populate()
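A side note on the NormalizationOptions above: mean = std = 127.5 maps the byte range [0, 255] onto [-1.0, 1.0], which is what a float MobileNet-style model typically expects. A quick sketch of the mapping (the constants mirror the values set in the metadata; adjust them if your model was trained differently):

```python
MEAN, STD = 127.5, 127.5  # values used in NormalizationOptions above

def normalize(pixel):
    """Map a byte value in [0, 255] to the model's float input range."""
    return (pixel - MEAN) / STD

# The extremes of the byte range land exactly on -1.0 and 1.0:
print(normalize(0), normalize(127.5), normalize(255))  # -1.0 0.0 1.0
```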
Finally, if you want to create a json file that displays the result (the metadata file), you can use:
displayer = _metadata.MetadataDisplayer.with_model_file('/content/gdrive/My Drive/models/research/object_detection/exported_model/detect.tflite')
export_json_file = os.path.join('/content/gdrive/My Drive/models/research/object_detection/exported_model', os.path.splitext('detect.tflite')[0] + ".json")
json_file = displayer.get_metadata_json()
# Optional: write out the metadata as a json file
with open(export_json_file, "w") as f:
    f.write(json_file)
P.S.: Be careful to adapt some parts of the code to your own needs (for example, if you are using 512x512 images, you have to change that in the input_meta.description string).