No downchannel stream response returned via response.body().string() after sending an event HTTPS request to AVS with OkHttp in Android Studio

Problem Description

I am following this tutorial: https://developer.amazon.com/en-US/docs/alexa/alexa-voice-service/manage-http2-connection.html

In short: after sending an event HTTPS request to AVS with OkHttp in Android, I don't receive any downchannel stream response via response.body().string().

Here I establish the downchannel stream by creating a directives HTTP request which, according to the tutorial, should remain open:

private void establishDownChanDirective(String accesstoken,OkHttpClient downChannelClient) throws IOException {
    // OKHttp header creation.
    final Request getRequest = new Request.Builder()
            .url("https://alexa.na.gateway.devices.a2z.com/" + AVS_API_VERSION + "/directives")//endpoint url
            .get()
            .addHeader("authorization","Bearer " + accesstoken)
            .build();

    Log.d("Request_header",getRequest.toString());

    downChannelClient.newCall(getRequest).enqueue(new Callback() {
        @Override
        public void onFailure(@NotNull Call call,@NotNull IOException e) {
            Log.d("downChannelResp","failure: " + e.getMessage());
            call.cancel();
        }

        @Override
        public void onResponse(@NotNull Call call,@NotNull Response response) throws IOException {
            Log.d("downChannelResp", "Down channel received! Test 1");
            processResponse(response, "downChannelResp", true);
            Log.d("downChannelResp", "Down channel received! Test 2");

            responseDirective = response;
        }
    });
}

Next, I try to synchronize with AVS by sending an event:

private void sendSyncEvent(OkHttpClient downChannelClient,String accesstoken) throws IOException {
    String msgid = UUID.randomUUID().toString();
    String speakToken = "";
    long offsetMili = 20; // if it lags, drop this to 10.
    String playerActivity = "PLAYING";

    final String JSON_SYNC = "{\"context\":[{\"header\":{\"namespace\":\"SpeechRecognizer\",\"name\":\"RecognizerState\"},\"payload\":{\"wakeword\":\"ALEXA\"}},{\"header\":{\"namespace\":\"SpeechSynthesizer\",\"name\":\"SpeechState\"},\"payload\":{\"token\":\"" + speakToken + "\",\"offsetInMilliseconds\":" + offsetMili + ",\"playerActivity\":\"" + playerActivity + "\"}}],\"event\":{\"header\":{\"namespace\":\"System\",\"name\":\"SynchronizeState\",\"messageId\":\"" + msgid + "\"},\"payload\":{}}}";

    List<MultipartBody.Part> partList = new ArrayList<>();
    MultipartBody.Part syncPart = MultipartBody.Part.create(Headers.of(
            "Content-disposition","form-data; name=\"Metadata\""),RequestBody.create(JSON_SYNC,JSON_TYPE));
    partList.add(syncPart);

    // MultipartBody has no public constructor; build it via MultipartBody.Builder.
    MultipartBody.Builder bodyBuilder = new MultipartBody.Builder(BOUNDARY_TERM).setType(MultipartBody.FORM);
    for (MultipartBody.Part part : partList) {
        bodyBuilder.addPart(part);
    }
    RequestBody body = bodyBuilder.build();

    Log.d("part",syncPart.headers().toString());
    Log.d("body",body.contentType().toString());

    final Request postRequest = new Request.Builder()
            .url("https://alexa.na.gateway.devices.a2z.com/"+AVS_API_VERSION+"/events")//endpoint url
            .post(body)
            .addHeader("authorization","Bearer " + accesstoken)
            .addHeader("content-type","multipart/form-data; boundary=" + BOUNDARY_TERM) // Not sure whether this is needed.
            .build();

    Log.d("post_request",postRequest.toString());
    Log.d("post_req_body",JSON_SYNC);

    downChannelClient.newCall(postRequest).enqueue(new Callback() {
        @Override
        public void onFailure(@NotNull Call call,@NotNull IOException e) {
            Log.d("syncResp","failure: " + e.getMessage());
        }

        @Override
        public void onResponse(@NotNull Call call,@NotNull Response response) throws IOException {
            processResponse(response,"syncResp",false);
        }
    });
}

Then I try to send a test Recognize event which (according to the tutorial) is meant to return a response through the initial downchannel stream:

private void testRecognizeEventAVS(OkHttpClient downChannelClient,String accesstoken) throws IOException {
    final MediaType AUdio_TYPE = MediaType.parse("application/octet-stream");

    String audioMsgid = UUID.randomUUID().toString();
    String dialogId = UUID.randomUUID().toString();
    final String JSON_SPEECH_EVENT = "{\"event\": {\"header\": {\"namespace\": \"SpeechRecognizer\",\"name\": \"Recognize\",\"messageId\": \"" + audioMsgid + "\",\"dialogRequestId\": \"" + dialogId + "\"},\"payload\": {\"profile\": \"CLOSE_TALK\",\"format\": \"AUDIO_L16_RATE_16000_CHANNELS_1\"}},\"context\": [{\"header\": {\"namespace\": \"AudioPlayer\",\"name\": \"PlaybackState\"},\"payload\": {\"token\": \"\",\"offsetInMilliseconds\": 0,\"playerActivity\": \"FINISHED\"}},{\"header\": {\"namespace\": \"SpeechSynthesizer\",\"name\": \"SpeechState\"},\"payload\": {\"token\": \"\",\"offsetInMilliseconds\": 0,\"playerActivity\": \"FINISHED\"}},{\"header\": {\"namespace\": \"Alerts\",\"name\": \"AlertsState\"},\"payload\": {\"allAlerts\": [],\"activeAlerts\": []}},{\"header\": {\"namespace\": \"Speaker\",\"name\": \"VolumeState\"},\"payload\": {\"volume\": 25,\"muted\": false}}]}";

    List<MultipartBody.Part> partList = new ArrayList<>();

    // Metadata Part
    Map<String,String> MetaHeaders = new HashMap<String,String>();
    MetaHeaders.put("Content-disposition","form-data; name=\"Metadata\"");
    MultipartBody.Part MetaPart = MultipartBody.Part.create(Headers.of(MetaHeaders),RequestBody.create(JSON_SPEECH_EVENT,JSON_TYPE));
    partList.add(MetaPart);

    // Audio Part
    Map<String,String> audioHeaders = new HashMap<String,String>();
    audioHeaders.put("Content-disposition","form-data; name=\"audio\""); // The audio part must be named "audio", not "Metadata", per the AVS docs.
    MultipartBody.Part audioPart = MultipartBody.Part.create(Headers.of(audioHeaders),RequestBody.create(createTestFile(),AUdio_TYPE));
    partList.add(audioPart);

    // MultipartBody has no public constructor; build it via MultipartBody.Builder.
    MultipartBody.Builder reqBodyBuilder = new MultipartBody.Builder(BOUNDARY_TERM).setType(MultipartBody.FORM);
    for (MultipartBody.Part part : partList) {
        reqBodyBuilder.addPart(part);
    }
    RequestBody reqBody = reqBodyBuilder.build();

    Log.d("MetaPart",MetaPart.headers().toString());
    Log.d("audioPart",audioPart.headers().toString());
    Log.d("body",reqBody.contentType().toString());

    // https://developer.amazon.com/en-US/docs/alexa/alexa-voice-service/structure-http2-request.html

    Request speechRequest = new Request.Builder()
            .url("https://alexa.na.gateway.devices.a2z.com/"+AVS_API_VERSION+"/events")
            .addHeader("authorization","Bearer " + accesstoken)
            .addHeader("content-type","multipart/form-data; boundary=" + BOUNDARY_TERM) // Not sure whether this is needed.
            .post(reqBody)
            .build();

    Log.d("speech_request",speechRequest.toString());

    downChannelClient.newCall(speechRequest).enqueue(new Callback() {
        @Override
        public void onFailure(@NotNull Call call,@NotNull IOException e) {
            Log.d("speechResp","failure: " + e.getMessage());
        }

        @Override
        public void onResponse(@NotNull Call call,@NotNull Response response) throws IOException {
            processResponse(response,"speechResp",false);
        }
    });
}

This is the processResponse method used by each of the methods above to take a response and print information about it to the Android log:

private void processResponse(Response response,final String TAG,boolean readBodySource) throws IOException {
    //Log.d(TAG,"response-string: " + response.body().string()); // This never shows up and always stops the rest of this method running for the response from establishDownChanDirective().
    Log.d(TAG,"response-success: " + response.isSuccessful());
    Log.d(TAG,"response" + response.toString());

    // Tried this from Stack Overflow posts, but right now we aren't even receiving a response-string from the downchannel directive, so we need to figure that out first.
    if (readBodySource) {
        BufferedSource bufferedSource = response.body().source();

        Buffer buffer = new Buffer();

        while (!bufferedSource.exhausted()) {
            Log.w("bufferedSource","downchannel received!");
            long bs = bufferedSource.read(buffer,8192);
            Log.d("bufferedSource_read",String.valueOf(bs));
            Log.d("buffersize",String.valueOf(buffer.size()));
        }

        Log.d("buffer_response",buffer.toString());
    }
}

The response-string line of the method is commented out above; when it isn't, it only outputs D/syncResp: response-string:, where the response string is just an empty string for syncResp and speechResp. For downChannelResp, however, it produces no output at all and completely stops the rest of the code after Log.d(TAG,"response-string: " + response.body().string()); from running.
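That blocking behavior would be consistent with the downchannel being a long-lived stream: string() reads until end-of-stream, and the directives stream is deliberately never closed by the server, so string() never returns. A plain-Java analogy (stdlib only, not OkHttp; all names here are illustrative) of reading incrementally from a stream that has data available but is not yet closed:

```java
import java.io.IOException;
import java.io.PipedInputStream;
import java.io.PipedOutputStream;

public class DownchannelAnalogy {
    public static void main(String[] args) throws IOException {
        // The pipe stands in for the downchannel: data arrives, but the
        // stream stays open. Reading "everything" (like string()) would
        // wait for EOF forever; reading a chunk at a time returns whatever
        // has arrived so far.
        PipedOutputStream server = new PipedOutputStream();
        PipedInputStream client = new PipedInputStream(server);
        server.write("directive-1".getBytes("UTF-8")); // server pushes a directive

        byte[] buf = new byte[64];
        int n = client.read(buf); // returns as soon as bytes are available
        System.out.println(new String(buf, 0, n, "UTF-8")); // prints "directive-1"
    }
}
```

This is the same idea as the read-in-8192-byte-chunks loop in processResponse, which is why that loop does produce output while string() does not.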

Now, when I run this...

try {
    establishDownChanDirective(accesstoken,downChannelClient); // Establish a down channel directive that will remain open.
    sendSyncEvent(downChannelClient,accesstoken); // Send a SynchronizeState event through the same connection as the downchannel directive.
    testRecognizeEventAVS(downChannelClient,accesstoken); // Send a Recognize event through the same connection as the downchannel directive.
    Log.d("OkHttp","Test: Http stuff finished.");
    if (responseDirective != null) {
        Log.d("OkHttp","Response: " + responseDirective.body().string());
    } else {
        Log.d("OkHttp","No response!");
    }
} catch (IOException e) {
    Log.d("OkHttpError","error: START{" + e.toString() + "}END");
    e.printStackTrace();
}

...it produces this output:

D/Request_header: Request{method=GET,url=https://alexa.na.gateway.devices.a2z.com/v20160207/directives,headers=[authorization:Bearer <the access token - censored for this post>]}
D/part: Content-disposition: form-data; name="Metadata"
D/body: multipart/form-data; boundary=------------------------qM9tn4VZyj
D/post_request: Request{method=POST,url=https://alexa.na.gateway.devices.a2z.com/v20160207/events,headers=[authorization:Bearer <the access token - censored for this post>,content-type:multipart/form-data; boundary=------------------------qM9tn4VZyj]}
D/post_req_body: {"context":[{"header":{"namespace":"SpeechRecognizer","name":"RecognizerState"},"payload":{"wakeword":"ALEXA"}},{"header":{"namespace":"SpeechSynthesizer","name":"SpeechState"},"payload":{"token":"","offsetInMilliseconds":20,"playerActivity":"PLAYING"}}],"event":{"header":{"namespace":"System","name":"SynchronizeState","messageId":"2c46b1a9-8b41-47be-bd09-61166b78492e"},"payload":{}}}
D/parent: /storage/emulated/0/Android/data/aut.rnd.alexa/files
D/fileexists: true
D/media_file: successfully created: true
D/MetaPart: Content-disposition: form-data; name="Metadata"
D/audioPart: Content-disposition: form-data; name="Metadata"
D/body: multipart/form-data; boundary=------------------------qM9tn4VZyj
D/speech_request: Request{method=POST,url=https://alexa.na.gateway.devices.a2z.com/v20160207/events,headers=[authorization:Bearer <the access token - censored for this post>,content-type:multipart/form-data; boundary=------------------------qM9tn4VZyj]}
D/OkHttp: Test: Http stuff finished.
    No response!
D/downChannelResp: Down channel received! Test 1
    response-success: true
    responseResponse{protocol=h2,code=200,message=,url=https://alexa.na.gateway.devices.a2z.com/v20160207/directives}
W/bufferedSource: downchannel received!
D/bufferedSource_read: 18
D/buffersize: 18
D/syncResp: response-success: true
    responseResponse{protocol=h2,code=204,url=https://alexa.na.gateway.devices.a2z.com/v20160207/events}
D/speechResp: response-success: true
    responseResponse{protocol=h2,url=https://alexa.na.gateway.devices.a2z.com/v20160207/events}

This is unexpected, because the responses should return data that can be parsed as JSON, but it seems nothing is coming back at all.
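Note also why "No response!" is logged before "Down channel received!": enqueue() is asynchronous, so the responseDirective null-check in the try block runs before any callback has fired. A minimal sketch of waiting for an asynchronous callback with a latch (plain Java stdlib; the executor stands in for OkHttp's dispatcher thread, and all names are illustrative, not from the original code):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class WaitForCallback {
    public static void main(String[] args) throws InterruptedException {
        CountDownLatch latch = new CountDownLatch(1);
        ExecutorService pool = Executors.newSingleThreadExecutor();
        StringBuilder result = new StringBuilder();

        // Stand-in for enqueue(): the "callback" runs on another thread.
        pool.execute(() -> {
            result.append("response");
            latch.countDown(); // signal that the callback has completed
        });

        latch.await();               // block until the callback has run
        System.out.println(result);  // prints "response"
        pool.shutdown();
    }
}
```

On Android you would not block the main thread like this; the point is only that code placed directly after enqueue() cannot assume the response has arrived yet.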

Solution

This is probably your problem: https://square.github.io/okhttp/4.x/okhttp/okhttp3/-response-body/#the-response-body-can-be-consumed-only-once

The response body can be consumed only once.

This class may be used to stream very large responses. For example, it is possible to use this class to read a response that is larger than the entire memory allocated to the current process. It can even stream a response larger than the total storage on the current device, which is a common requirement for video streaming applications.
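In the code above, processResponse() already drains the downchannel body through response.body().source(), so the later responseDirective.body().string() call (and the commented-out string() line) would be reading a body that has already been consumed. The one-shot semantics are the same as any Java InputStream; a stdlib-only sketch (not OkHttp itself) of why the second read yields nothing:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class ConsumeOnce {
    public static void main(String[] args) throws IOException {
        // An HTTP response body is a stream: reading it drains it for good.
        InputStream body = new ByteArrayInputStream("directive".getBytes("UTF-8"));
        byte[] buf = new byte[32];
        int first = body.read(buf);   // reads the 9 available bytes
        int second = body.read(buf);  // -1: the stream is already exhausted
        System.out.println(first + " " + second); // prints "9 -1"
    }
}
```

With OkHttp the situation is stricter still: after the body has been consumed (or closed), a further string() call throws IllegalStateException rather than returning an empty result, so each response should be read exactly once, in the callback that receives it.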