Problem description
I am trying to get an OpenTelemetry exporter working with the OpenTelemetry Collector.
I found this OpenTelemetry Collector demo.
So I copied these four config files
- docker-compose.yml (in my app I removed the generators section and Prometheus, which were having problems running)
- otel-agent-config.yaml
- otel-collector-config.yaml
- .env
into my app.
Also, based on these two demos in the open-telemetry/opentelemetry-js repository:
I came up with my own version (sorry, it is hard to put together a minimal working version given the lack of documentation):
.env
OTELCOL_IMG=otel/opentelemetry-collector-dev:latest
OTELCOL_ARGS=
docker-compose.yml
version: '3.7'
services:
  # Jaeger
  jaeger-all-in-one:
    image: jaegertracing/all-in-one:latest
    ports:
      - "16686:16686"
      - "14268"
      - "14250"
  # Zipkin
  zipkin-all-in-one:
    image: openzipkin/zipkin:latest
    ports:
      - "9411:9411"
  # Collector
  otel-collector:
    image: ${OTELCOL_IMG}
    command: ["--config=/etc/otel-collector-config.yaml", "${OTELCOL_ARGS}"]
    volumes:
      - ./otel-collector-config.yaml:/etc/otel-collector-config.yaml
    ports:
      - "1888:1888"   # pprof extension
      - "8888:8888"   # Prometheus metrics exposed by the collector
      - "8889:8889"   # Prometheus exporter metrics
      - "13133:13133" # health_check extension
      - "55678"       # OpenCensus receiver
      - "55680:55679" # zpages extension
    depends_on:
      - jaeger-all-in-one
      - zipkin-all-in-one
  # Agent
  otel-agent:
    image: ${OTELCOL_IMG}
    command: ["--config=/etc/otel-agent-config.yaml", "${OTELCOL_ARGS}"]
    volumes:
      - ./otel-agent-config.yaml:/etc/otel-agent-config.yaml
    ports:
      - "1777:1777"   # pprof extension
      - "8887:8888"   # Prometheus metrics exposed by the agent
      - "14268"       # Jaeger receiver
      - "55678"       # OpenCensus receiver
      - "55679:55679" # zpages extension
      - "13133"       # health_check
    depends_on:
      - otel-collector
otel-agent-config.yaml
receivers:
  opencensus:
  zipkin:
    endpoint: :9411
  jaeger:
    protocols:
      thrift_http:
exporters:
  opencensus:
    endpoint: "otel-collector:55678"
    insecure: true
  logging:
    loglevel: debug
processors:
  batch:
  queued_retry:
extensions:
  pprof:
    endpoint: :1777
  zpages:
    endpoint: :55679
  health_check:
service:
  extensions: [health_check, pprof, zpages]
  pipelines:
    traces:
      receivers: [opencensus, jaeger, zipkin]
      processors: [batch, queued_retry]
      exporters: [opencensus, logging]
    metrics:
      receivers: [opencensus]
      processors: [batch]
      exporters: [logging, opencensus]
otel-collector-config.yaml
receivers:
  opencensus:
exporters:
  prometheus:
    endpoint: "0.0.0.0:8889"
    namespace: promexample
    const_labels:
      label1: value1
  logging:
  zipkin:
    endpoint: "http://zipkin-all-in-one:9411/api/v2/spans"
    format: proto
  jaeger:
    endpoint: jaeger-all-in-one:14250
    insecure: true
processors:
  batch:
  queued_retry:
extensions:
  health_check:
  pprof:
    endpoint: :1888
  zpages:
    endpoint: :55679
service:
  extensions: [pprof, zpages, health_check]
  pipelines:
    traces:
      receivers: [opencensus]
      processors: [batch, queued_retry]
      exporters: [logging, zipkin, jaeger]
    metrics:
      receivers: [opencensus]
      processors: [batch]
      exporters: [logging]
After running docker-compose up -d, I can open the Jaeger UI (http://localhost:16686) and the Zipkin UI (http://localhost:9411).
My ConsoleSpanExporter works in both the web client and the Express.js server.
However, I have tried this OpenTelemetry exporter code on both the client and the server, and I still have trouble connecting to the OpenTelemetry Collector.
import { CollectorTraceExporter } from '@opentelemetry/exporter-collector';
// ...
tracerProvider.addSpanProcessor(new SimpleSpanProcessor(new ConsoleSpanExporter()));
tracerProvider.addSpanProcessor(
  new SimpleSpanProcessor(
    new CollectorTraceExporter({
      serviceName: 'my-service',
      // url: 'http://localhost:55680/v1/trace', // returns error 404
      // url: 'http://localhost:55681/v1/trace', // no response, nothing listening
      // url: 'http://localhost:14268/v1/trace', // nothing listening
    })
  )
);
Any ideas? Thanks.
Solution
The demo you tried is using an older configuration and OpenCensus, which should be replaced with the otlp receiver. That being said, here is a working example: https://github.com/open-telemetry/opentelemetry-js/tree/master/examples/collector-exporter-node/docker So I'm copying the files from there:
docker-compose.yaml
version: "3"
services:
  # Collector
  collector:
    image: otel/opentelemetry-collector:latest
    command: ["--config=/conf/collector-config.yaml", "--log-level=DEBUG"]
    volumes:
      - ./collector-config.yaml:/conf/collector-config.yaml
    ports:
      - "9464:9464"
      - "55680:55680"
      - "55681:55681"
    depends_on:
      - zipkin-all-in-one
  # Zipkin
  zipkin-all-in-one:
    image: openzipkin/zipkin:latest
    ports:
      - "9411:9411"
  # Prometheus
  prometheus:
    container_name: prometheus
    image: prom/prometheus:latest
    volumes:
      - ./prometheus.yaml:/etc/prometheus/prometheus.yml
    ports:
      - "9090:9090"
collector-config.yaml
receivers:
  otlp:
    protocols:
      grpc:
      http:
exporters:
  zipkin:
    endpoint: "http://zipkin-all-in-one:9411/api/v2/spans"
  prometheus:
    endpoint: "0.0.0.0:9464"
processors:
  batch:
  queued_retry:
service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [zipkin]
      processors: [batch, queued_retry]
    metrics:
      receivers: [otlp]
      exporters: [prometheus]
      processors: [batch, queued_retry]
prometheus.yaml
global:
  scrape_interval: 15s # Default is every 1 minute.
scrape_configs:
  - job_name: 'collector'
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      - targets: ['collector:9464']
This should work fine with opentelemetry-js version 0.10.2.
The default port for traces is 55680 and for metrics 55681.
The link I posted earlier is where you will always find the latest working example: https://github.com/open-telemetry/opentelemetry-js/tree/master/examples/collector-exporter-node For web examples you can use the same docker setup and find all the working examples here: https://github.com/open-telemetry/opentelemetry-js/tree/master/examples/tracer-web/
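A quick way to confirm that spans are actually flowing through this setup (a sketch; `my-service` is the serviceName used in the question's snippet, and these commands assume the stack above is running on localhost):

```shell
# Start the stack in the background.
docker-compose up -d

# Tail the collector; with --log-level=DEBUG it logs the spans it
# receives on the OTLP receiver, so you can see them arrive.
docker-compose logs -f collector

# Once the app has exported something, the spans should also be
# queryable through Zipkin's HTTP API:
curl 'http://localhost:9411/api/v2/traces?serviceName=my-service'
```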
Thanks so much @BObecny for the help! This is a supplement to @BObecny's answer.
Since I am more interested in integrating with Jaeger, here is the configuration that sets up all of Jaeger, Zipkin, and Prometheus together. It now works on both the frontend and the backend.
First, the frontend and the backend use the same exporter code:
import { CollectorTraceExporter } from '@opentelemetry/exporter-collector';

new SimpleSpanProcessor(
  new CollectorTraceExporter({
    serviceName: 'my-service',
  })
)
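For the backend, this fragment sits inside the usual tracer-provider setup. A minimal sketch of the full wiring against the 0.10.x API (package names are the ones used elsewhere in this thread; treat the exact defaults as assumptions):

```javascript
// Sketch: full tracer-provider wiring around the snippet above (0.10.x API).
const { NodeTracerProvider } = require('@opentelemetry/node');
const {
  SimpleSpanProcessor,
  ConsoleSpanExporter,
} = require('@opentelemetry/tracing');
const { CollectorTraceExporter } = require('@opentelemetry/exporter-collector');

const provider = new NodeTracerProvider();

// Console exporter: confirms the instrumentation itself works even
// when the collector container is down.
provider.addSpanProcessor(new SimpleSpanProcessor(new ConsoleSpanExporter()));

// Collector exporter: with no `url` option the exporter falls back to
// its built-in default endpoint, which is why none of the hand-written
// URLs from the question are needed here.
provider.addSpanProcessor(
  new SimpleSpanProcessor(
    new CollectorTraceExporter({ serviceName: 'my-service' })
  )
);

provider.register();
```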
docker-compose.yaml
version: "3"
services:
  # Collector
  collector:
    image: otel/opentelemetry-collector:latest
    command: ["--config=/conf/collector-config.yaml", "--log-level=DEBUG"]
    volumes:
      - ./collector-config.yaml:/conf/collector-config.yaml
    ports:
      - "9464:9464"
      - "55680:55680"
      - "55681:55681"
    depends_on:
      - jaeger-all-in-one
      - zipkin-all-in-one
  # Jaeger
  jaeger-all-in-one:
    image: jaegertracing/all-in-one:latest
    ports:
      - "16686:16686"
      - "14268"
      - "14250"
  # Zipkin
  zipkin-all-in-one:
    image: openzipkin/zipkin:latest
    ports:
      - "9411:9411"
  # Prometheus
  prometheus:
    container_name: prometheus
    image: prom/prometheus:latest
    volumes:
      - ./prometheus.yaml:/etc/prometheus/prometheus.yml
    ports:
      - "9090:9090"
collector-config.yaml
receivers:
  otlp:
    protocols:
      grpc:
      http:
exporters:
  jaeger:
    endpoint: jaeger-all-in-one:14250
    insecure: true
  zipkin:
    endpoint: "http://zipkin-all-in-one:9411/api/v2/spans"
  prometheus:
    endpoint: "0.0.0.0:9464"
processors:
  batch:
  queued_retry:
service:
  pipelines:
    traces:
      receivers: [otlp]
      # jaeger must be listed here too, otherwise spans never reach Jaeger
      exporters: [zipkin, jaeger]
      processors: [batch, queued_retry]
    metrics:
      receivers: [otlp]
      exporters: [prometheus]
      processors: [batch, queued_retry]
prometheus.yaml
global:
  scrape_interval: 15s # Default is every 1 minute.
scrape_configs:
  - job_name: 'collector'
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      - targets: ['collector:9464']
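Since this variant adds Jaeger and Prometheus, a couple of extra checks can help once the stack is up (a sketch; the ports are the ones mapped in the compose file above):

```shell
# Prometheus should list the collector as a healthy scrape target:
curl 'http://localhost:9090/api/v1/targets'

# The collector's Prometheus exporter endpoint itself:
curl 'http://localhost:9464/metrics'

# Traces are browsable in the Jaeger UI at http://localhost:16686
# and in the Zipkin UI at http://localhost:9411.
```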