Calculating the per-second peak in ClickHouse after summing values

Problem description

I am currently using a ClickHouse cluster (2 shards, 2 replicas) to read transaction logs from my servers. The logs contain fields such as timestamp, bytes delivered, ttms, and so on. My table structure is as follows:

CREATE TABLE db.log_data_local ON CLUSTER '{cluster}' (
  timestamp DateTime,
  bytes UInt64
  /* lots of other fields */
) ENGINE = ReplicatedMergeTree('/clickhouse/{cluster}/db/tables/logs/{shard}','{replica}')
PARTITION BY toYYYYMMDD(timestamp)
ORDER BY timestamp
TTL timestamp + INTERVAL 1 MONTH;

CREATE TABLE db.log_data ON CLUSTER '{cluster}'
AS db.log_data_local
ENGINE = Distributed('{cluster}','db','log_data_local',rand());

I am ingesting data from Kafka and populating this table through a materialized view. Now I need to calculate the peak per-second throughput from this table: essentially, sum the bytes field for each second, and then find the maximum of those per-second sums within each 5-minute window.
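Run ad hoc, the computation described above can be sketched as a direct query against the distributed table (a sketch based on the schema above; the alias names are mine, not from the original post):

```sql
-- Per-second totals first, then the maximum of those totals
-- within each 5-minute window.
SELECT
    toStartOfFiveMinute(ts) AS window_start,
    max(bytes_per_sec)      AS peak_throughput
FROM
(
    SELECT
        timestamp  AS ts,
        sum(bytes) AS bytes_per_sec
    FROM db.log_data
    GROUP BY timestamp
)
GROUP BY window_start
ORDER BY window_start;
```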

I tried using a ReplicatedAggregatingMergeTree with aggregate-function columns to store the throughput, but the peak values I get are much smaller than the ones I get when querying the raw table directly.

The problem is that in the materialized view that populates the peaks table, selecting from the distributed table produces no results at all, while selecting from the local table only covers part of the data set. I tried using an intermediate table to compute the per-second totals and building the materialized view on top of it, but I ran into the same problem.

Here is the schema of my peaks table and the materialized view I am trying to create:

CREATE TABLE db.peak_metrics_5m_local ON CLUSTER '{cluster}'
(
  timestamp DateTime,
  peak_throughput AggregateFunction(max, UInt64)
)
ENGINE = ReplicatedAggregatingMergeTree('/clickhouse/{cluster}/db/tables/peak_metrics_5m_local/{shard}','{replica}')
PARTITION BY toYYYYMMDD(timestamp)
ORDER BY (timestamp)
TTL timestamp + toIntervalDay(90);


CREATE TABLE db.peak_metrics_5m ON CLUSTER '{cluster}'
AS db.peak_metrics_5m_local
ENGINE = Distributed('{cluster}','db','peak_metrics_5m_local',rand());

CREATE MATERIALIZED VIEW db.peak_metrics_5m_mv ON CLUSTER '{cluster}'
TO db.peak_metrics_5m_local
AS SELECT
    toStartOfFiveMinute(timestamp) AS timestamp,
    maxState(bytes) AS peak_throughput
FROM (
    SELECT
        timestamp,
        sum(bytes) AS bytes
    FROM db.log_data_local
    GROUP BY timestamp
)
GROUP BY timestamp;
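For completeness, an AggregateFunction(max, UInt64) column cannot be read directly; it has to be finalized with the -Merge combinator. A readback query over the peaks table would look something like this (a sketch, not part of the original post):

```sql
SELECT
    timestamp,
    maxMerge(peak_throughput) AS peak_throughput
FROM db.peak_metrics_5m
GROUP BY timestamp
ORDER BY timestamp;
```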

Please help me solve this.

Solution

This is impossible to achieve with an MV. An MV is an insert trigger.

sum(bytes) as bytes, ... GROUP BY timestamp runs against the block of rows being inserted and does not read back from the log_data_local table. Each insert therefore produces only a partial per-second sum, and maxState records the maximum of those partial sums, which is why the computed peaks are smaller than the true values.

https://github.com/ClickHouse/ClickHouse/issues/14266#issuecomment-684907869
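A commonly suggested workaround for this pattern (a sketch under assumptions, not taken verbatim from the linked issue; all table and column names here are illustrative) is to have the MV accumulate only per-second sum states, which AggregatingMergeTree merges correctly across insert blocks, and to compute the 5-minute maximum at SELECT time, once merging has produced the true per-second totals:

```sql
-- Hypothetical alternative schema: store partial per-second sums
-- instead of per-block maxima.
CREATE TABLE db.bytes_per_second_local ON CLUSTER '{cluster}'
(
  timestamp DateTime,
  bytes_state AggregateFunction(sum, UInt64)
)
ENGINE = ReplicatedAggregatingMergeTree('/clickhouse/{cluster}/db/tables/bytes_per_second_local/{shard}','{replica}')
PARTITION BY toYYYYMMDD(timestamp)
ORDER BY timestamp;

CREATE MATERIALIZED VIEW db.bytes_per_second_mv ON CLUSTER '{cluster}'
TO db.bytes_per_second_local
AS SELECT
    timestamp,
    sumState(bytes) AS bytes_state
FROM db.log_data_local
GROUP BY timestamp;

-- Peaks are computed at query time, after the partial states
-- from all insert blocks have been merged:
SELECT
    toStartOfFiveMinute(timestamp) AS window_start,
    max(bytes_per_sec) AS peak_throughput
FROM
(
    SELECT
        timestamp,
        sumMerge(bytes_state) AS bytes_per_sec
    FROM db.bytes_per_second_local
    GROUP BY timestamp
)
GROUP BY window_start;
```

In a sharded setup the final SELECT would go through a Distributed table over bytes_per_second_local so that all shards contribute to each per-second total.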
