Problem Description
I am building a REST API with Spring Boot that needs to transfer a large amount of JSON data to the client. To avoid excessive memory usage and instantiating huge arrays, I use StreamingResponseBody to send the data in chunks:
@Controller
class FooController {
    @Autowired
    FooService fooService;
    // ...
    private ResponseEntity<StreamingResponseBody> getPropertyVariants(@PathVariable(required = false) String propertyName, @RequestParam(required = false) String instruction) throws JsonProcessingException {
        StreamingResponseBody streamingResponseBody = out -> {
            if (propertyName == null) fooService.writeReportToOutStream(out);
            else fooService.writeReportToOutStream(propertyName, out);
        };
        return ResponseEntity.ok().contentType(MediaType.APPLICATION_JSON).body(streamingResponseBody);
    }
}
FooService holds a large amount of data, filters it, and then writes it to the stream using a JsonGenerator. I cannot show the actual service here. Rest assured that the generated JSON array is written to the stream entry by entry, everything is flushed correctly, and the JsonGenerator is closed. If my output array contains about 100,000 entries, everything works. However, if I increase that output to 1,000,000 entries, the request fails mid-transfer.
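Since the real service cannot be shown, here is a minimal stand-in illustrating the pattern described above. The class and method names mirror the question's `FooService.writeReportToOutStream`, but the body is an illustrative assumption: it writes the JSON by hand through a plain `Writer` instead of Jackson's `JsonGenerator`, which follows the same entry-by-entry, flush-periodically scheme (`writeStartArray` / one `writeObject` per entry / `flush` / `close`).

```java
import java.io.BufferedWriter;
import java.io.IOException;
import java.io.OutputStream;
import java.io.OutputStreamWriter;
import java.nio.charset.StandardCharsets;

// Hypothetical stand-in for the real FooService: streams a JSON array
// entry by entry instead of materializing it in memory first.
class FooService {
    void writeReportToOutStream(OutputStream out, int entries) throws IOException {
        BufferedWriter writer = new BufferedWriter(
                new OutputStreamWriter(out, StandardCharsets.UTF_8));
        writer.write("[");
        for (int i = 0; i < entries; i++) {
            if (i > 0) writer.write(",");
            // One small JSON object per entry; only this entry is in memory.
            writer.write("{\"id\":" + i + "}");
            if (i % 10_000 == 0) writer.flush(); // flush periodically, not per entry
        }
        writer.write("]");
        writer.flush(); // ensure the closing bracket reaches the client
    }
}
```

With 1,000,000 entries this writes roughly as much data as the failing request, yet never holds more than one entry (plus the writer's buffer) in memory, which is the property the question relies on.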
Partial stack trace:
org.apache.coyote.CloseNowException: Failed write
at org.apache.coyote.http11.Http11OutputBuffer$SocketOutputBuffer.doWrite(Http11OutputBuffer.java:548)
at org.apache.coyote.http11.filters.ChunkedOutputFilter.doWrite(ChunkedOutputFilter.java:110)
at org.apache.coyote.http11.Http11OutputBuffer.doWrite(Http11OutputBuffer.java:193)
at org.apache.coyote.Response.doWrite(Response.java:606)
at org.apache.catalina.connector.OutputBuffer.realWriteBytes(OutputBuffer.java:340)
at org.apache.catalina.connector.OutputBuffer.flushByteBuffer(OutputBuffer.java:783)
at org.apache.catalina.connector.OutputBuffer.doFlush(OutputBuffer.java:299)
at org.apache.catalina.connector.OutputBuffer.flush(OutputBuffer.java:273)
at org.apache.catalina.connector.CoyoteOutputStream.flush(CoyoteOutputStream.java:118)
at com.fasterxml.jackson.core.json.UTF8JsonGenerator.flush(UTF8JsonGenerator.java:1178)
at com.fasterxml.jackson.databind.ObjectMapper.writeValue(ObjectMapper.java:3060)
at com.fasterxml.jackson.core.base.GeneratorBase.writeObject(GeneratorBase.java:388)
at de.jmzb.ecomwdc.service.properties.FooService.lambda$null$2(FooService.java:37)
at java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175)
at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1384)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
at java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150)
at java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173)
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:485)
at de.jmzb.ecomwdc.service.properties.FooService.lambda$filter$10(KeyFilterStrategy.java:34)
at java.util.ArrayList.forEach(ArrayList.java:1259)
at de.jmzb.ecomwdc.service.properties.PropertyReportService.writeReportToOutStream(FooService.java:56)
at de.jmzb.ecomwdc.controller.WDCDataSourceController.lambda$getPropertyVariants$1(FooController.java:77)
at org.springframework.web.servlet.mvc.method.annotation.StreamingResponseBodyReturnValueHandler$StreamingResponseBodyTask.call(StreamingResponseBodyReturnValueHandler.java:111)
at org.springframework.web.servlet.mvc.method.annotation.StreamingResponseBodyReturnValueHandler$StreamingResponseBodyTask.call(StreamingResponseBodyReturnValueHandler.java:98)
at org.springframework.web.context.request.async.WebAsyncManager.lambda$startCallableProcessing$4(WebAsyncManager.java:337)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Error from httpie:
HTTP/1.1 200
Connection: keep-alive
content-encoding: gzip
Content-Type: application/json
Date: Thu, 27 May 2021 15:21:46 GMT
Keep-Alive: timeout=60
transfer-encoding: chunked
vary: origin, access-control-request-method, access-control-request-headers, accept-encoding
http: error: ChunkedEncodingError: ("Connection broken: InvalidChunkLength(got length b'', 0 bytes read)", InvalidChunkLength(got length b'', 0 bytes read))
I would like to be able to handle data of any size, since the service's input no longer has to be held in memory. Apparently the OutputStream stops working at some point?
Idea 1: for some reason one of the transferred chunks has a wrong length, the client aborts the request, and the OutputStream is closed. Hence the exception on the server.
Idea 2: for some reason the server terminates the OutputStream, so the response ends unexpectedly on the client side.
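If Idea 2 is on the right track, one setting worth checking is Spring MVC's async request timeout: a StreamingResponseBody is executed on the async dispatch, and if the timeout elapses before the stream finishes, the container aborts the response mid-transfer, which would match both the server-side CloseNowException and the truncated chunk seen by the client. A sketch of the relevant Spring Boot configuration (the 5-minute value is an arbitrary example, not a recommendation):

```properties
# application.properties
# StreamingResponseBody runs via Spring MVC's async support; a long
# transfer must complete within this timeout or the container aborts it.
spring.mvc.async.request-timeout=300000
```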
Any ideas on how to solve this? Sorry that I cannot share my original code.
Thanks for your help!