TFLite: MicroMutableOpResolver 'does not name a type'

Problem Description

I am trying to compile a TFLite Micro based Arduino sketch that uses the MicroMutableOpResolver class (registering only the required ops in order to reduce memory usage).

I have seen similar usage in the TF Lite Micro examples - https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/micro/examples/micro_speech/micro_speech_test.cc

but I keep running into the following compilation errors:

IMU_Classifier_TinyML:22:1: error: 'micro_op_resolver' does not name a type
 micro_op_resolver.AddFullyConnected();
 ^~~~~~~~~~~~~~~~~
IMU_Classifier_TinyML:23:1: error: 'micro_op_resolver' does not name a type
 micro_op_resolver.Addsoftmax();
 ^~~~~~~~~~~~~~~~~
IMU_Classifier_TinyML:24:1: error: 'micro_op_resolver' does not name a type
 micro_op_resolver.AddRelu();
 ^~~~~~~~~~~~~~~~~
Using library Arduino_LSM9DS1 at version 1.1.0 in folder: /home/balaji/Arduino/libraries/Arduino_LSM9DS1 
Using library Wire in folder: /home/balaji/.arduino15/packages/arduino/hardware/mbed/1.3.2/libraries/Wire (legacy)
Using library Arduino_TensorFlowLite at version 2.4.0-ALPHA in folder: /home/balaji/Arduino/libraries/Arduino_TensorFlowLite 
exit status 1
'micro_op_resolver' does not name a type

The code snippet is shown below:

#include <Arduino_LSM9DS1.h>
#include <TensorFlowLite.h>
#include <tensorflow/lite/micro/micro_mutable_op_resolver.h>
#include <tensorflow/lite/micro/kernels/micro_ops.h>
#include <tensorflow/lite/micro/micro_error_reporter.h>
#include <tensorflow/lite/micro/micro_interpreter.h>
#include <tensorflow/lite/schema/schema_generated.h>
#include <tensorflow/lite/version.h>

// Include the TFlite converted model header file
#include "model.h"

const float accelThreshold = 2.5;
const int numOfSamples = 119; // acceleration sample-rate

int samplesRead = numOfSamples;

tflite::MicroErrorReporter tfLiteErrorReporter;

/*Import only the required ops to reduce the memory usage*/
static tflite::MicroMutableOpResolver<3> micro_op_resolver;
micro_op_resolver.AddFullyConnected();
micro_op_resolver.Addsoftmax();
micro_op_resolver.AddRelu();

Am I missing a dependency, or could this be caused by a TF Lite version mismatch?

Solution

Function calls such as micro_op_resolver.AddFullyConnected(); must be placed inside a function body. In C++, executable statements are not allowed at file scope, so the compiler tries to parse each call as a declaration and fails with "'micro_op_resolver' does not name a type". (Note also the typo: the method is AddSoftmax, not Addsoftmax.) Something like this should compile:

#include <Arduino_LSM9DS1.h>
#include <TensorFlowLite.h>
#include <tensorflow/lite/micro/micro_mutable_op_resolver.h>
#include <tensorflow/lite/micro/kernels/micro_ops.h>
#include <tensorflow/lite/micro/micro_error_reporter.h>
#include <tensorflow/lite/micro/micro_interpreter.h>
#include <tensorflow/lite/schema/schema_generated.h>
#include <tensorflow/lite/version.h>

// Include the TFlite converted model header file
#include "model.h"

const float accelThreshold = 2.5;
const int numOfSamples = 119; // acceleration sample-rate

int samplesRead = numOfSamples;

tflite::MicroErrorReporter tfLiteErrorReporter;

/*Import only the required ops to reduce the memory usage*/
static tflite::MicroMutableOpResolver<3> micro_op_resolver;

void setup() {
  micro_op_resolver.AddFullyConnected();
  micro_op_resolver.AddSoftmax();
  micro_op_resolver.AddRelu();
}

void loop() {
  // put your main code here, to run repeatedly:

}
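
For completeness, once the ops are registered the resolver is normally passed to a tflite::MicroInterpreter together with the model and a tensor arena, as in the linked micro_speech example. The sketch below is only a minimal illustration of that wiring under some assumptions: it assumes model.h exposes a flatbuffer byte array named model, and the kTensorArenaSize value plus the tflModel / tflInterpreter names are placeholders to adapt to your own project.

// Minimal sketch of the remaining interpreter setup (assumptions noted above).
constexpr int kTensorArenaSize = 8 * 1024;      // placeholder size, tune for your model
static uint8_t tensor_arena[kTensorArenaSize];  // working memory for the interpreter

const tflite::Model* tflModel = nullptr;
tflite::MicroInterpreter* tflInterpreter = nullptr;

void setup() {
  micro_op_resolver.AddFullyConnected();
  micro_op_resolver.AddSoftmax();
  micro_op_resolver.AddRelu();

  // Map the model data and check that its schema version matches the library.
  tflModel = tflite::GetModel(model);  // assumes model.h defines `model`
  if (tflModel->version() != TFLITE_SCHEMA_VERSION) {
    TF_LITE_REPORT_ERROR(&tfLiteErrorReporter, "Model schema version mismatch");
    while (true) {}
  }

  // Build the interpreter with the op resolver and allocate tensors from the arena.
  static tflite::MicroInterpreter interpreter(
      tflModel, micro_op_resolver, tensor_arena, kTensorArenaSize,
      &tfLiteErrorReporter);
  tflInterpreter = &interpreter;
  tflInterpreter->AllocateTensors();
}

After this, loop() can copy sensor samples into tflInterpreter->input(0), call tflInterpreter->Invoke(), and read the class scores from tflInterpreter->output(0).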
