Why is the Spring Cloud Data Flow tutorial not showing the expected results?

Problem description

I am trying to work through the first Spring Cloud Data Flow tutorial, but I am not getting the results it describes.

https://dataflow.spring.io/docs/stream-developer-guides/streams/

The tutorial has me curl to the http source and then see the result in the log sink by tailing its stdout file.

I don't see the result. All I see in the log is the application starting up.

I tailed the log with docker exec -it skipper tail -f /path/from/stdout/textBox/in/dashboard

I entered curl http://localhost:20100 -H "Content-Type: text/plain" -d "Happy streaming"

All I see is:

2020-10-05 16:30:03.315  INFO 110 --- [           main] o.a.kafka.common.utils.AppInfoParser     : Kafka version : 2.0.1
2020-10-05 16:30:03.316  INFO 110 --- [           main] o.a.kafka.common.utils.AppInfoParser     : Kafka commitId : fa14705e51bd2ce5
2020-10-05 16:30:03.322  INFO 110 --- [           main] o.s.s.c.ThreadPoolTaskScheduler          : Initializing ExecutorService
2020-10-05 16:30:03.338  INFO 110 --- [           main] s.i.k.i.KafkaMessageDrivenChannelAdapter : started org.springframework.integration.kafka.inbound.KafkaMessageDrivenChannelAdapter@106faf11
2020-10-05 16:30:03.364  INFO 110 --- [container-0-C-1] org.apache.kafka.clients.Metadata        : Cluster ID: 2J0QTxzQQmm2bLxFKgRwmA
2020-10-05 16:30:03.574  INFO 110 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat started on port(s): 20041 (http) with context path ''
2020-10-05 16:30:03.584  INFO 110 --- [           main] o.s.c.s.a.l.s.k.LogSinkKafkaApplication  : Started LogSinkKafkaApplication in 38.086 seconds (JVM running for 40.251)
2020-10-05 16:30:05.852  INFO 110 --- [container-0-C-1] o.a.k.c.c.internals.AbstractCoordinator  : [Consumer clientId=consumer-3,groupId=http-ingest] discovered group coordinator kafka-broker:9092 (id: 2147482646 rack: null)
2020-10-05 16:30:05.857  INFO 110 --- [container-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator  : [Consumer clientId=consumer-3,groupId=http-ingest] Revoking prevIoUsly assigned partitions []
2020-10-05 16:30:05.858  INFO 110 --- [container-0-C-1] o.s.c.s.b.k.KafkaMessageChannelBinder$1  : partitions revoked: []
2020-10-05 16:30:05.858  INFO 110 --- [container-0-C-1] o.a.k.c.c.internals.AbstractCoordinator  : [Consumer clientId=consumer-3,groupId=http-ingest] (Re-)joining group
2020-10-05 16:30:08.943  INFO 110 --- [container-0-C-1] o.a.k.c.c.internals.AbstractCoordinator  : [Consumer clientId=consumer-3,groupId=http-ingest] Successfully joined group with generation 1
2020-10-05 16:30:08.945  INFO 110 --- [container-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator  : [Consumer clientId=consumer-3,groupId=http-ingest] Setting newly assigned partitions [http-ingest.http-0]
2020-10-05 16:30:08.964  INFO 110 --- [container-0-C-1] o.a.k.c.consumer.internals.Fetcher       : [Consumer clientId=consumer-3,groupId=http-ingest] Resetting offset for partition http-ingest.http-0 to offset 0.
2020-10-05 16:30:08.981  INFO 110 --- [container-0-C-1] o.s.c.s.b.k.KafkaMessageChannelBinder$1  : partitions assigned: [http-ingest.http-0]

No "Happy streaming".

Any suggestions?

Solution

Thanks for trying out the developer guide!

From what I can tell, it looks like the http | log stream definition was submitted to SCDF without an explicit port. In that case, when the http-source and log-sink applications start, Spring Boot assigns each of them a random port.

If you navigate to the http-source application's log, you will see the application's port listed there; that is the port to use in your curl command.
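To illustrate, here is a minimal sketch (not from the original post) of pulling that port out of a "Tomcat started" log line like the one shown in the question; the sample line and any container name are illustrative:

```shell
# In a real setup you would pipe the http-source container's log into sed, e.g.
#   docker logs <http-source-container> 2>&1 | sed -n 's/.*port(s): \([0-9][0-9]*\).*/\1/p'
# (the container name depends on your deployment). Here we use a sample line.
sample='2020-10-05 16:29:41.123  INFO 95 --- [main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 20100 (http) with context path'
# Extract just the port number that Spring Boot assigned.
printf '%s\n' "$sample" | sed -n 's/.*port(s): \([0-9][0-9]*\).*/\1/p'
# prints: 20100
```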

For reference, the guide includes the following note about this:

If you use the local Data Flow Server, add the following deployment properties to set the ports, to avoid port collisions.

Alternatively, you can deploy the stream with an explicit port in the definition, for example: http --server.port=9004 | log. With that, your curl would be:

curl http://localhost:9004 -H "Content-Type: text/plain" -d "Happy streaming"
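For completeness, a sketch of creating the stream with the explicit port from the Data Flow shell (the stream name http-ingest matches the consumer group visible in the log output in the question; adjust it to your own stream name):

```
dataflow:> stream create --name http-ingest --definition "http --server.port=9004 | log" --deploy
```

Once the stream deploys, the curl above should make "Happy streaming" appear in the log sink's stdout file.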