Problem description
Is it possible to pass a live audio stream to the IBM Watson Speech to Text API and get transcription results at the command prompt?
My code is below:

```
const params = {
  contentType: 'audio/l16;rate=16000',
  model: 'en-US_NarrowbandModel',
  interimResults: true,
  continuous: true,
};

// Handle WebSocket connection
wss.on('connection', function connection(ws) {
  console.log('New Connection Initiated');
  let recognizeStream = speechToText.recognizeUsingWebSocket(params);

  ws.on('message', function incoming(message) {
    const msg = JSON.parse(message);
    switch (msg.event) {
      case 'connected':
        console.log('A new call has connected.');
        recognizeStream.on('data', function (data) {
          console.log(data.results[0].alternatives[0].transcript);
        });
        break;
      case 'start':
        console.log(`Starting Media Stream ${msg.streamSid}`);
        break;
      case 'media':
        recognizeStream.write(msg.media.payload);
        break;
      case 'stop':
        console.log('Call Has Ended');
        break;
    }
  });
});
```
Nothing is printed when I write `msg.media.payload` to the recognize stream.
Am I doing something wrong?
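The event names in the code (`connected`, `start`, `media`, `stop`) and the `streamSid` field match Twilio's Media Streams protocol, which delivers `msg.media.payload` as a base64-encoded string of 8 kHz mu-law audio. One likely cause of the silence is writing that base64 string directly to the Watson stream while the params declare `audio/l16;rate=16000`. A minimal sketch of the decoding step, assuming a Twilio source (the `decodePayload` helper name is hypothetical, not part of either SDK):

```javascript
// Twilio Media Streams send msg.media.payload as a base64 string of
// 8 kHz mu-law audio; the Watson recognize stream expects raw bytes,
// so decode before writing.
function decodePayload(payload) {
  return Buffer.from(payload, "base64"); // raw mu-law bytes
}

// Content type matching that audio, instead of audio/l16;rate=16000:
const params = {
  contentType: "audio/mulaw;rate=8000",
  model: "en-US_NarrowbandModel",
  interimResults: true,
};

// In the "media" case the write would then become:
//   recognizeStream.write(decodePayload(msg.media.payload));
```

With this, the narrowband model receives audio in the format Twilio actually sends, rather than a base64 text stream mislabeled as 16 kHz linear PCM.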
Solution
No effective solution for this problem has been found yet.