Problem Description
I am developing a machine learning model with TensorFlow's Keras API that predicts the correct letter from input data. The training and test data are the voltage values of three flex sensors in voltage dividers; the received voltages are floats (X.XX), which were used to train a sequential model. I have converted the model file from Python into a .h file and placed it in the same folder as the Arduino sketch.
https://www.tensorflow.org/lite/microcontrollers
I am currently trying to integrate it into an Arduino Nano 33 BLE Sense. I expect the serial monitor to show each possible predicted class followed by a percentage or probability next to it. The possible predicted classes are "A", "B", "C", "D", and "RELAXED". However, I get no output on the serial monitor. I think the problem is in how I send the input data into the input buffer, or perhaps its format is wrong. I would greatly appreciate it if you could take a look at my code. Thank you.
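For reference, the Python-to-.h conversion mentioned above is typically done with `xxd` in the TFLM workflow (the guide linked above describes this step); a sketch, with the `.tflite` filename assumed:

```shell
# Convert the trained TFLite flatbuffer into a C array header
# (the filename Letter_Model5.tflite is illustrative; xxd names the
# array after the input file, with dots replaced by underscores)
xxd -i Letter_Model5.tflite > Letter_Model5.h
```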
My code:
/*
* Flex classifier to test how the model is integrated into the Arduino
*
*/
////////////////////MODEL////////////////////
// Including Libraries for TensorFlow
#include <TensorFlowLite.h>
#include <tensorflow/lite/micro/all_ops_resolver.h>
#include <tensorflow/lite/micro/micro_error_reporter.h>
#include <tensorflow/lite/micro/micro_interpreter.h>
#include <tensorflow/lite/schema/schema_generated.h>
#include <tensorflow/lite/version.h>
// Including Model
#include "Letter_Model5.h"
// global variables used for TensorFlow Lite (Micro)
tflite::MicroErrorReporter tflErrorReporter;
// pull in all the TFLM ops; you can remove this line and
// only pull in the TFLM ops you need, if you would like to reduce
// the compiled size of the sketch.
tflite::AllOpsResolver tflOpsResolver;
const tflite::Model* tflModel = nullptr;
tflite::MicroInterpreter* tflInterpreter = nullptr;
TfLiteTensor* tflInputTensor = nullptr;
TfLiteTensor* tflOutputTensor = nullptr;
// Create a static memory buffer for TFLM; the size may need to
// be adjusted based on the model you are using
constexpr int tensorArenaSize = 8 * 1024;
byte tensorArena[tensorArenaSize];
// array to map gesture index to a name
const char* GESTURES[] = {
"A", "B", "C", "D", "RELAXED"
};
#define NUM_GESTURES (sizeof(GESTURES) / sizeof(GESTURES[0]))
////////////////////BOARD////////////////////
const float V = 3.30; //Input voltage on board
////////////////////FLEX////////////////////
//Flex Sensor #1
const int FLEX_PIN1 = A0;
//Flex Sensor #2
const int FLEX_PIN2 = A1;
//Flex Sensor #3
const int FLEX_PIN3 = A2;
void setup()
{
// get the TFL representation of the model byte array
tflModel = tflite::GetModel(Letter_Model5);
if (tflModel->version() != TFLITE_SCHEMA_VERSION)
{
Serial.println("Model schema mismatch!");
while (1);
}
// Create an interpreter to run the model
tflInterpreter = new tflite::MicroInterpreter(tflModel, tflOpsResolver, tensorArena, tensorArenaSize, &tflErrorReporter);
// Allocate memory for the model's input and output tensors
tflInterpreter->AllocateTensors();
// Get pointers for the model's input and output tensors
tflInputTensor = tflInterpreter->input(0);
tflOutputTensor = tflInterpreter->output(0);
// Begin Serial
Serial.begin(9600);
while (!Serial); // Wait for the Serial to start
// Defining flex ports
pinMode(FLEX_PIN1, INPUT);
pinMode(FLEX_PIN2, INPUT);
pinMode(FLEX_PIN3, INPUT);
}
void loop() {
// Reading flex inputs
int flexADC1 = analogRead(FLEX_PIN1);
int flexADC2 = analogRead(FLEX_PIN2);
int flexADC3 = analogRead(FLEX_PIN3);
// Finding Voltages from flex inputs
float flexV1 = (flexADC1/1023.0)*V;
float flexV2 = (flexADC2/1023.0)*V;
float flexV3 = (flexADC3/1023.0)*V;
//Sending input data into the model
float InputData[] = {flexV1, flexV2, flexV3};
tflInputTensor->data.f[0] = InputData[0];
tflInputTensor->data.f[1] = InputData[1];
tflInputTensor->data.f[2] = InputData[2];
//Printing out predicted string from the output tensor
for (int i = 0; i < NUM_GESTURES; i++)
{
Serial.print(GESTURES[i]);
Serial.print(": ");
Serial.println(tflOutputTensor->data.f[i],2);
}
delay(500);
}
This is the output I get on the serial monitor. Also, why does the letter "B" show "nan"? https://imgur.com/a/eaSBLBe
Solution

No working solution has been found for this problem yet.