How to run wake-word detection with PocketSphinx on iOS?

Problem description

I am trying to run wake-word detection with PocketSphinx on iOS. As a starting point I used TLSphinx, and speech-to-text works (not great STT, but it does recognize words).

I extended decoder.swift with a new function:

public func detectWakeWord (complete: @escaping (Bool?) -> ()) throws {

    ps_set_keyphrase(psDecoder, "keyphrase_search", "ZWEI")
    ps_set_search(psDecoder, "keyphrase_search")
            
    do {
      if #available(iOS 10.0, *) {
          try AVAudioSession.sharedInstance().setCategory(.playAndRecord, mode: .voiceChat, options: [])
      } else {
          try AVAudioSession.sharedInstance().setCategory(.playAndRecord)
      }
    } catch let error as NSError {
        print("Error setting the shared AVAudioSession: \(error)")
        throw DecodeErrors.CantSetAudioSession(error)
    }

    engine = AVAudioEngine()

    let input = engine.inputNode
    let mixer = AVAudioMixerNode()
    let output = engine.outputNode
    engine.attach(mixer)
    engine.connect(input, to: mixer, format: input.outputFormat(forBus: 0))
    engine.connect(mixer, to: output, format: input.outputFormat(forBus: 0))

    // We force-unwrap these because the docs for AVAudioFormat specify that this initializer
    // returns nil only when the channel count is greater than 2.
    let formatIn = AVAudioFormat(commonFormat: .pcmFormatFloat32, sampleRate: 44100, channels: 1, interleaved: false)!
    let formatOut = AVAudioFormat(commonFormat: .pcmFormatInt16, sampleRate: 16000, channels: 1, interleaved: false)!
    guard let bufferMapper = AVAudioConverter(from: formatIn, to: formatOut) else {
        // Returns nil if the format conversion is not possible.
        throw DecodeErrors.CantConvertAudioFormat
    }

    mixer.installTap(onBus: 0, bufferSize: 2048, format: formatIn, block: {
        [unowned self] (buffer: AVAudioPCMBuffer!, time: AVAudioTime!) in

        guard let sphinxBuffer = AVAudioPCMBuffer(pcmFormat: formatOut, frameCapacity: buffer.frameCapacity) else {
            // Returns nil in the following cases:
            //    - if the format has zero bytes per frame (format.streamDescription->mBytesPerFrame == 0)
            //    - if the buffer byte capacity (frameCapacity * format.streamDescription->mBytesPerFrame)
            //    cannot be represented by an uint32_t
            print("Can't create PCM buffer")
            return
        }

        // This is needed because the 'frameLength' default value is 0 (since iOS 10), which causes the
        // 'convert' call to fail with an error (Error Domain=NSOSStatusErrorDomain Code=-50 "(null)")
        // More here: http://stackoverflow.com/questions/39714244/avaudioconverter-is-broken-in-ios-10
        sphinxBuffer.frameLength = sphinxBuffer.frameCapacity

        var error: NSError?
        let inputBlock: AVAudioConverterInputBlock = { inNumPackets, outStatus in
            outStatus.pointee = AVAudioConverterInputStatus.haveData
            return buffer
        }
        bufferMapper.convert(to: sphinxBuffer, error: &error, withInputFrom: inputBlock)
        print("Error? ", error as Any)
      
        let audioData = sphinxBuffer.toData()
        self.process_raw(audioData)

        print("Process: \(buffer.frameLength) frames - \(audioData.count) bytes - sample time: \(time.sampleTime)")

        self.end_utt()
        
        let hypothesis = self.get_hyp()
          
        print("HYPOTHESIS: ", hypothesis)

        DispatchQueue.main.async {
          complete(hypothesis != nil)
        }
      
        self.start_utt()
    })

    start_utt()

    do {
        try engine.start()
    } catch let error as NSError {
        end_utt()
        print("Can't start AVAudioEngine: \(error)")
        throw DecodeErrors.CantStartAudioEngine(error)
    }
  }

There are no errors, but the hypothesis is always nil. My dictionary maps everything to "ZWEI", so if anything is detected at all, the wake word should be detected:

ZWEI AH P Z EH TS B AAH EX
ZWEI(2) HH IH T
ZWEI(3) F EH EX Q OE F EH N T L IH CC T
ZWEI(4) G AX V AH EX T AX T
...
ZWEI(12113) N AY NZWO B IIH T AX N
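For comparison, the CMUSphinx keyword-spotting examples keep a single utterance open and poll the hypothesis after every buffer, ending and restarting the utterance only once the keyphrase actually fires, whereas the tap block above ends the utterance on every 2048-frame callback. A hedged sketch of that documented pattern, reusing this question's TLSphinx helpers (`process_raw`, `get_hyp`, `start_utt`, `end_utt` are assumed to behave as in the code above, and the converter setup is elided):

```swift
// Sketch only: assumes the engine, mixer, converter, and `audioData`
// are set up exactly as in the function above.
self.start_utt()  // open one long-running utterance
mixer.installTap(onBus: 0, bufferSize: 2048, format: formatIn) { [unowned self] buffer, _ in
    // ... convert `buffer` to 16 kHz Int16 `audioData` as shown above ...
    self.process_raw(audioData)
    // In keyphrase search, get_hyp() stays nil until the phrase is spotted,
    // so the utterance does not need to be ended on every buffer.
    if self.get_hyp() != nil {
        self.end_utt()    // reset the decoder state ...
        self.start_utt()  // ... and keep listening
        DispatchQueue.main.async { complete(true) }
    }
}
```

As a related check: per the pocketsphinx headers, `ps_set_keyphrase` and `ps_set_search` return 0 on success and a negative value on failure, so verifying those return values (and tuning the `-kws_threshold` configuration parameter) is a common first debugging step when the hypothesis never becomes non-nil.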

Does anyone know why the hypothesis is always nil?

Solution

No working solution has been found for this problem yet.
