Problem description
Please help, I am trying to add a grok filter to my Logstash pipeline that transforms the log line below
2020-11-06 12:57:43,854 INFO Bandwidth: NASDAQ:224.0.130.65:30408 0.000059 Gb/S
into
{
  "ts": [
    [
      "2020-11-06 12:57:43,854"
    ]
  ],
  "YEAR": [
    [
      "2020"
    ]
  ],
  "MONTHNUM": [
    [
      "11"
    ]
  ],
  "MONTHDAY": [
    [
      "06"
    ]
  ],
  "HOUR": [
    [
      "12",
      null
    ]
  ],
  "MINUTE": [
    [
      "57",
      null
    ]
  ],
  "SECOND": [
    [
      "43,854"
    ]
  ],
  "ISO8601_TIMEZONE": [
    [
      null
    ]
  ],
  "loglevel": [
    [
      "INFO"
    ]
  ],
  "Metric": [
    [
      "Bandwidth"
    ]
  ],
  "Chanel": [
    [
      "NASDAQ:224.0.130.65:30408"
    ]
  ],
  "Data": [
    [
      "0.000059 Gb/S"
    ]
  ]
}
Below is my grok filter:
input{
    beats{
        port => "5044"
    }
}
filter{
    if "Bandwidth" in [message]{
        grok{
            match => {"message" => "%{TIMESTAMP_ISO8601:ts} %{LOGLEVEL:loglevel} %{WORD:Metric}: (?<Chanel>[A-Z]+:[0-9]+.[0-9]+.[0-9]+.[0-9]+:[0-9]+)"}
        }
    }
}
output{
    elasticsearch{
        hosts => [ "localhost:9200" ]
    }
}
When I try this filter in the Grok Debugger it works fine, but it does not work in Logstash: when I look in Kibana I do not get any of the named captures from the filter, only the message field. If I remove the custom regex part of the filter and use GREEDYDATA instead, everything works. I am sure I am doing something wrong in the regex part.
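For reference, this is roughly what the working GREEDYDATA version looks like (the rest field name here is just a placeholder for illustration):
grok{
    match => { "message" => "%{TIMESTAMP_ISO8601:ts} %{LOGLEVEL:loglevel} %{WORD:Metric}: %{GREEDYDATA:rest}" }
}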
Solution
Your regex pattern is correct and does produce the expected filter output. Try refreshing the index pattern in Kibana, or re-ingesting the data.
However, if the channel always has the form some-data:IP-address:port, I do not think you need a custom regex at all. Try the following pattern instead:
grok{
    match => { "message" => ["%{TIMESTAMP_ISO8601:ts} %{LOGLEVEL:loglevel} %{WORD:Metric}: (?<Channel>%{DATA}:%{HOSTPORT}) (?<Data>%{GREEDYDATA})"]}
}
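For example, dropped into your existing pipeline with your Bandwidth conditional kept as-is, the filter section would look roughly like this:
filter{
    if "Bandwidth" in [message]{
        grok{
            match => { "message" => ["%{TIMESTAMP_ISO8601:ts} %{LOGLEVEL:loglevel} %{WORD:Metric}: (?<Channel>%{DATA}:%{HOSTPORT}) (?<Data>%{GREEDYDATA})"]}
        }
    }
}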
The Logstash output will be:
{
            "ts" => "2020-11-06 12:57:43,854",
        "Metric" => "Bandwidth",
    "@timestamp" => 2020-11-06T22:47:20.383Z,
      "loglevel" => "INFO",
          "host" => "e7c15acec470",
          "Data" => "0.000059 Gb/S",
       "Channel" => "NASDAQ:224.0.130.65:30408",
      "@version" => "1",
       "message" => "2020-11-06 12:57:43,854 INFO Bandwidth: NASDAQ:224.0.130.65:30408 0.000059 Gb/S"
}
Add a stdout output alongside elasticsearch so that you can see what Logstash is actually sending to Elasticsearch.
output{
    stdout { codec => rubydebug }
}
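For example, a combined output section (reusing your original elasticsearch settings) could look like this:
output{
    elasticsearch{
        hosts => [ "localhost:9200" ]
    }
    stdout { codec => rubydebug }
}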