Update History
Date       | Content
2021-10-20 | Initial draft
2021-10-21 | Added the scheduled index-cleanup script
2021-10-24 | Added the log-platform status-check script
Software Versions
Software      | Version
elasticsearch | 7.2.0
Prologue
Today a developer asked me to pull up some logs in ELK, and a quick check showed that no new indices had been created since October 13. It has been an eventful stretch: a couple of days ago I changed the Docker Root Dir and knocked the Docker platform over, and now the log platform had fallen over as well.
So I went down the pipeline service by service: filebeat was fine, Kafka was fine, Logstash was fine, and Elasticsearch also looked fine. Strange. I finally pinned the problem down in the Logstash logs:
[WARN ] 2021-10-20 01:01:59.311 [[main]>worker2] elasticsearch - Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"java-cleaning-log-2021.10.19", :_type=>"_doc", :routing=>nil}, #<LogStash::Event:0x7deeb07c>], :response=>{"index"=>{"_index"=>"java-cleaning-log-2021.10.19", "_type"=>"_doc", "_id"=>nil, "status"=>400, "error"=>{"type"=>"illegal_argument_exception", "reason"=>"Validation Failed: 1: this action would add [2] total shards, but this cluster currently has [14999]/[15000] maximum shards open;"}}}}
From this message we can see that the cluster has exhausted its maximum number of open shards, so I quickly checked the index count:
➜ curl -s 192.168.189.166:9200/_cat/indices/ | wc -l
5116
➜ curl -s 192.168.189.166:9200/_cat/indices/*2020* | wc -l
715
Well, this one really was on ops: 700-plus indices from 2020 had never been deleted, which should not have happened.
And just as I was working out how to clean them up, one ES node went down and the whole ES cluster fell over...
➜ curl 192.168.189.166:9200/_cat/health
{"error":{"root_cause":[{"type":"master_not_discovered_exception","reason":null}],"type":"master_not_discovered_exception","reason":null},"status":503}
➜ curl 192.168.189.166:9200/_cat/health
1634716854 08:00:54 gjr-application red 2 2 5127 5105 0 6 9856 18 1.8m 34.2%
Fixing the Problem
① Start the ES cluster
[2021-10-20T15:59:05,057][INFO ][o.e.c.s.ClusterApplierService] [node-2] removed {{node-1}{-NviWhNwRF-8lveYQ7rTZA}{YFQaM1LlTdGqmC4z8dA1BA}{192.168.189.167}{192.168.189.167:9300}{ml.machine_memory=16656637952, ml.max_open_jobs=20, xpack.installed=true},}, term: 274, version: 218302, reason: ApplyCommitRequest{term=274, version=218302, sourceNode={master}{CRkxVk4ATJ6ted3qxnxz8w}{yPIsZn-qRTWWAPMsaDACrw}{192.168.189.166}{192.168.189.166:9300}{ml.machine_memory=16656801792, ml.max_open_jobs=20, xpack.installed=true}}
[2021-10-20T15:59:13,710][WARN ][o.e.x.m.e.l.LocalExporter] [node-2] unexpected error while indexing monitoring document
org.elasticsearch.xpack.monitoring.exporter.ExportException: RemoteTransportException[[master][192.168.189.166:9300][indices:admin/create]]; nested: IndexCreationException[failed to create index [.monitoring-es-7-2021.10.20]]; nested: IllegalArgumentException[Validation Failed: 1: this action would add [2] total shards, but this cluster currently has [14989]/[10000] maximum shards open;];
at org.elasticsearch.xpack.monitoring.exporter.local.LocalBulk.lambda$throwExportException$2(LocalBulk.java:125) ~[?:?]
Caused by: org.elasticsearch.transport.RemoteTransportException: [master][192.168.189.166:9300][indices:admin/create]
Caused by: org.elasticsearch.indices.IndexCreationException: failed to create index [.monitoring-es-7-2021.10.20]
Caused by: java.lang.IllegalArgumentException: Validation Failed: 1: this action would add [2] total shards, but this cluster currently has [14989]/[10000] maximum shards open;
These messages appeared while the ES node was starting. It turns out that a node still needs to create indices (the monitoring index above, for example) when it rejoins the cluster, so you cannot run the cluster with zero shard headroom.
The plan, then: stop all of the Logstash services first, delete all of the redundant indices, wait for the cluster to turn green again, and only then resume log collection.
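A side note, not something we did during this incident: the ceiling in the error message comes from the dynamic setting cluster.max_shards_per_node (limit = value × number of data nodes), and it can be raised temporarily through the cluster settings API to buy enough headroom for the monitoring/system indices while you clean up. A minimal sketch; the value 6000 is only illustrative:
➜ curl -XPUT -H 'Content-Type: application/json' http://192.168.189.166:9200/_cluster/settings \
    -d '{"transient": {"cluster.max_shards_per_node": 6000}}'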
② Stop Logstash
➜ ps -ef|grep logstash|grep -v grep|awk '{print $2}'|xargs kill -9
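(Worth noting: Logstash traps SIGTERM and tries to drain its pipelines before exiting, so a plain kill without -9 is usually the gentler first attempt; kill -9 risks dropping whatever events were in flight.)
➜ ps -ef|grep logstash|grep -v grep|awk '{print $2}'|xargs kill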
③ Delete indices
# Delete the redundant 2020 indices first; the rest can wait until the cluster has recovered
➜ curl -s 192.168.189.166:9200/_cat/indices/*2020* | wc -l
715
# At first I tried deleting with a wildcard directly, but it was rejected: wildcards are not allowed
➜ curl -XDELETE http://192.168.189.166:9200/*2020*
{"error":{"root_cause":[{"type":"illegal_argument_exception","reason":"Wildcard expressions or all indices are not allowed"}],"type":"illegal_argument_exception","reason":"Wildcard expressions or all indices are notallowed"},"status":400}
# Dump the 2020 index names to a file first, then delete them with a shell for loop
➜ curl -s 192.168.189.166:9200/_cat/indices/*2020* | awk '{print $3}' > del_index_2020
➜ for index in $(cat del_index_2020); do curl -XDELETE http://192.168.189.166:9200/${index}; done
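For the record, the awk step can also be avoided: the _cat APIs take an h parameter to select columns, so the index names can be streamed straight into the delete loop. A sketch of the same cleanup, not what was run at the time:
➜ curl -s '192.168.189.166:9200/_cat/indices/*2020*?h=index' | \
    while read index; do curl -s -XDELETE "http://192.168.189.166:9200/${index}"; echo; done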
④ Wait for recovery
➜ curl 192.168.189.166:9200/_cat/health
1634724402 10:06:42 gjr-application yellow 3 3 9262 5902 0 4 2503 7 925.1ms 78.7%
➜ curl 192.168.189.166:9200/_cat/health
1634727908 11:05:08 gjr-application yellow 3 3 11119 5902 0 2 648 3 1.9s 94.5%
# After a long wait, shard recovery finally reached 100.0% and the cluster turned green again
➜ curl 192.168.189.166:9200/_cat/health
1634730566 11:49:26 gjr-application green 3 3 11769 5902 2 0 0 2 222.5ms 100.0%
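The bare _cat/health numbers above are hard to read; adding v makes the API print column headers (the last column is the percentage of active shards):
➜ curl '192.168.189.166:9200/_cat/health?v'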
⑤ Restore the Logstash services
The Logstash startup commands were collected into a per-host script:
Host: P2-Elk-slave01
➜ vim /etc/logstash/start_logstash.sh
# log
cd /data/logstash/logs/household/; >nohup.out; nohup /usr/share/logstash/bin/logstash --path.config /etc/logstash/conf.d/household.conf --path.data /data/logstash/logs/household/ &
cd /data/logstash/logs/message/; >nohup.out; nohup /usr/share/logstash/bin/logstash --path.config /etc/logstash/conf.d/message.conf --path.data /data/logstash/logs/message/ &
# alarm
cd /data/logstash/logs/log_alarm/message/; >nohup.out; nohup /usr/share/logstash/bin/logstash --path.config /etc/logstash/conf.d/log_alarm/message_alarm.conf --path.data /data/logstash/logs/log_alarm/message_alarm/ &
Host: P2-Elk-slave02
➜ vim /etc/logstash/start_logstash.sh
# log
cd /data/logstash/logs/nginx/; >nohup.out; nohup /usr/share/logstash/bin/logstash --path.config /etc/logstash/conf.d/nginx-log.conf --path.data /data/logstash/logs/nginx/ &
cd /data/logstash/logs/Am-square-third/; >nohup.out; nohup /usr/share/logstash/bin/logstash --path.config /etc/logstash/conf.d/Am-square-third.conf --path.data /data/logstash/logs/Am-square-third/ &
cd /data/logstash/logs/MOSPT/; >nohup.out; nohup /usr/share/logstash/bin/logstash --path.config /etc/logstash/conf.d/M-O-S-P-T.conf --path.data /data/logstash/logs/MOSPT/ &
cd /data/logstash/logs/ECBUZ/; >nohup.out; nohup /usr/share/logstash/bin/logstash --path.config /etc/logstash/conf.d/E-C-B-U-Z.conf --path.data /data/logstash/logs/ECBUZ/ &
cd /data/logstash/logs/Gps-all/; >nohup.out; nohup /usr/share/logstash/bin/logstash --path.config /etc/logstash/conf.d/Cleaning-Processing-Communication.conf --path.data /data/logstash/logs/Gps-all/ &
cd /data/logstash/logs/Admin-Job-coin-management-coupon/; >nohup.out; nohup /usr/share/logstash/bin/logstash --path.config /etc/logstash/conf.d/Admin-Job-Coin-Coupon-management.conf --path.data /data/logstash/logs/Admin-Job-coin-management-coupon/ &
# alarm
cd /data/logstash/logs/log_alarm/order_alarm/; >nohup.out; nohup /usr/share/logstash/bin/logstash --path.config /etc/logstash/conf.d/log_alarm/order_alarm.conf --path.data /data/logstash/logs/log_alarm/order_alarm/ &
cd /data/logstash/logs/log_alarm/coupon_alarm/; >nohup.out; nohup /usr/share/logstash/bin/logstash --path.config /etc/logstash/conf.d/log_alarm/coupon_alarm.conf --path.data /data/logstash/logs/log_alarm/coupon_alarm/ &
cd /data/logstash/logs/log_alarm/user_alarm/; >nohup.out; nohup /usr/share/logstash/bin/logstash --path.config /etc/logstash/conf.d/log_alarm/user_alarm.conf --path.data /data/logstash/logs/log_alarm/user_alarm/ &
cd /data/logstash/logs/log_alarm/three-party_alarm/; >nohup.out; nohup /usr/share/logstash/bin/logstash --path.config /etc/logstash/conf.d/log_alarm/three-party_alarm.conf --path.data /data/logstash/logs/log_alarm/three-party_alarm/ &
cd /data/logstash/logs/log_alarm/admin_alarm/; >nohup.out; nohup /usr/share/logstash/bin/logstash --path.config /etc/logstash/conf.d/log_alarm/admin_alarm.conf --path.data /data/logstash/logs/log_alarm/admin_alarm/ &
cd /data/logstash/logs/log_alarm/third_alarm/; >nohup.out; nohup /usr/share/logstash/bin/logstash --path.config /etc/logstash/conf.d/log_alarm/third_alarm.conf --path.data /data/logstash/logs/log_alarm/third_alarm/ &
cd /data/logstash/logs/log_alarm/square_alarm/; >nohup.out; nohup /usr/share/logstash/bin/logstash --path.config /etc/logstash/conf.d/log_alarm/square_alarm.conf --path.data /data/logstash/logs/log_alarm/square_alarm/ &
cd /data/logstash/logs/log_alarm/school_alarm/; >nohup.out; nohup /usr/share/logstash/bin/logstash --path.config /etc/logstash/conf.d/log_alarm/school_alarm.conf --path.data /data/logstash/logs/log_alarm/school_alarm/ &
cd /data/logstash/logs/log_alarm/processing_alarm/; >nohup.out; nohup /usr/share/logstash/bin/logstash --path.config /etc/logstash/conf.d/log_alarm/processing_alarm.conf --path.data /data/logstash/logs/log_alarm/processing_alarm/ &
cd /data/logstash/logs/log_alarm/machine_alarm/; >nohup.out; nohup /usr/share/logstash/bin/logstash --path.config /etc/logstash/conf.d/log_alarm/machine_alarm.conf --path.data /data/logstash/logs/log_alarm/machine_alarm/ &
cd /data/logstash/logs/log_alarm/job_alarm/; >nohup.out; nohup /usr/share/logstash/bin/logstash --path.config /etc/logstash/conf.d/log_alarm/job_alarm.conf --path.data /data/logstash/logs/log_alarm/job_alarm/ &
cd /data/logstash/logs/log_alarm/eureka_alarm/; >nohup.out; nohup /usr/share/logstash/bin/logstash --path.config /etc/logstash/conf.d/log_alarm/eureka_alarm.conf --path.data /data/logstash/logs/log_alarm/eureka_alarm/ &
cd /data/logstash/logs/log_alarm/coin_alarm/; >nohup.out; nohup /usr/share/logstash/bin/logstash --path.config /etc/logstash/conf.d/log_alarm/coin_alarm.conf --path.data /data/logstash/logs/log_alarm/coin_alarm/ &
cd /data/logstash/logs/log_alarm/business_alarm/; >nohup.out; nohup /usr/share/logstash/bin/logstash --path.config /etc/logstash/conf.d/log_alarm/business_alarm.conf --path.data /data/logstash/logs/log_alarm/business_alarm/ &
cd /data/logstash/logs/log_alarm/am_alarm/; >nohup.out; nohup /usr/share/logstash/bin/logstash --path.config /etc/logstash/conf.d/log_alarm/am_alarm.conf --path.data /data/logstash/logs/log_alarm/am_alarm/ &
cd /data/logstash/logs/log_alarm/zuuls_alarm/; >nohup.out; nohup /usr/share/logstash/bin/logstash --path.config /etc/logstash/conf.d/log_alarm/zuuls_alarm.conf --path.data /data/logstash/logs/log_alarm/zuuls_alarm/ &
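One JVM per config file is heavy. As an aside, since Logstash 6.x a single instance can run several pipelines from pipelines.yml, which would collapse most of the commands above into one Logstash service per host. A minimal sketch (pipeline ids are illustrative, config paths taken from the script above):
# /etc/logstash/pipelines.yml
- pipeline.id: household
  path.config: "/etc/logstash/conf.d/household.conf"
- pipeline.id: message
  path.config: "/etc/logstash/conf.d/message.conf"
- pipeline.id: message_alarm
  path.config: "/etc/logstash/conf.d/log_alarm/message_alarm.conf"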
⑥ Verify the services
After starting Logstash with the script, a fresh look at the Elasticsearch indices showed that today's indices were flowing in again:
➜ curl -s 192.168.189.166:9200/_cat/indices/*2021.10.20*
green open java-processing-log-2021.10.20 F5o2_1efTHKAsD5HZ0cG2A 1 1 2 0 40.3kb 20.1kb
green open java-am-log-2021.10.20 YIXxVecdQ_qFtYg0kTiJEg 1 1 0 0 9.7kb 230b
green open java-communication-log-2021.10.20 ci8-CjdRTc-F370QBPJNYw 1 1 1334793 0 1003.6mb 501.8mb
green open slb-nginx-logs01-2021.10.20 sqTozWYWSOO9ac_Nph-s4g 1 1 0 0 10.8mb 5.3mb
green open .monitoring-es-7-2021.10.20 AjSknr03Q2CSUMdfjYx6tg 1 1 22659658 435721 9gb 4.2gb
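It is also worth confirming how much shard headroom is left after the cleanup; the cluster health API reports the active shard counts directly:
➜ curl -s '192.168.189.166:9200/_cluster/health?pretty' | grep active_shards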
Improvements
① Remove redundancy
Once bitten, twice shy: good ops should not just fix problems after they appear, but eliminate them before they can happen.
For the redundant-index problem, the fix is a cleanup script:
➜ vim remove_es_indexs.sh
#!/bin/bash
# Describe: Remove es indexs for day
# Create Date: 2021-10-21
# Create Time: 08:36
# Update Date: 2021-10-21
# Update Time: 14:13
# Author: MiaoCunFa
# Version: v0.0.4
#===================================================================
#delDate=$1
#yymm=$(echo ${delDate}| awk -F. '{print $1"."$2}')
delDate=$(date -d "90 days ago" +%Y.%m.%d)
yymm=$(date -d "90 days ago" +%Y.%m)
logFile=/script/es/remove_es_indexs.log
function __Write_LOG()
{
    echo "$(date "+%Y-%m-%d %H:%M:%S") [$1] $2" >> ${logFile}
}
#--------------------------Main Script------------------------------------
__Write_LOG "LOG" "${delDate}: Begin"
cd /script/es/
if [ ! -d delDate/${yymm} ]
then
    mkdir -p delDate/${yymm}
fi
curl -s 192.168.189.166:9200/_cat/indices/*${delDate}* | awk '{print $3}' > delDate/${yymm}/${delDate}
for index in $(cat delDate/${yymm}/${delDate})
do
    cmd="curl -s -XDELETE http://192.168.189.166:9200/${index}"
    __Write_LOG "LOG" "[${index}]: ${cmd}"
    result=$(${cmd})
    __Write_LOG "LOG" "[${index}]: ${result}"
done
__Write_LOG "LOG" "${delDate}: End"
This script deletes all indices for the single day that falls 90 days in the past, so a daily cron job is enough to keep things trimmed:
➜ crontab -e
0 1 * * * /bin/sh /script/es/remove_es_indexs.sh
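Since this cluster is on 7.x, Index Lifecycle Management could do the same ageing-out without cron at all. A sketch of a delete-after-90-days policy (the policy name is illustrative, and it would still need to be attached to the index templates via index.lifecycle.name):
➜ curl -XPUT -H 'Content-Type: application/json' http://192.168.189.166:9200/_ilm/policy/delete-after-90d \
    -d '{"policy": {"phases": {"delete": {"min_age": "90d", "actions": {"delete": {}}}}}}'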
What we need right now, though, is to clear out the historical backlog. A small tweak to the script, turning the date into a command-line argument, lets us purge all of the old data in one pass:
# Change these two variables in the script as follows
delDate=$1
yymm=$(echo ${delDate}| awk -F. '{print $1"."$2}')
# First dump the index dates that need deleting into text files
➜ curl -s 192.168.189.166:9200/_cat/indices/*2021.01* | awk '{print $3}' | awk -F'-2' '{print 2$2}' | sort -k2n | uniq > del_day_2021_01
➜ curl -s 192.168.189.166:9200/_cat/indices/*2021.02* | awk '{print $3}' | awk -F'-2' '{print 2$2}' | sort -k2n | uniq > del_day_2021_02
➜ curl -s 192.168.189.166:9200/_cat/indices/*2021.03* | awk '{print $3}' | awk -F'-2' '{print 2$2}' | sort -k2n | uniq > del_day_2021_03
➜ curl -s 192.168.189.166:9200/_cat/indices/*2021.04* | awk '{print $3}' | awk -F'-2' '{print 2$2}' | sort -k2n | uniq > del_day_2021_04
➜ curl -s 192.168.189.166:9200/_cat/indices/*2021.05* | awk '{print $3}' | awk -F'-2' '{print 2$2}' | sort -k2n | uniq > del_day_2021_05
➜ curl -s 192.168.189.166:9200/_cat/indices/*2021.06* | awk '{print $3}' | awk -F'-2' '{print 2$2}' | sort -k2n | uniq > del_day_2021_06
➜ curl -s 192.168.189.166:9200/_cat/indices/*2021.07* | awk '{print $3}' | awk -F'-2' '{print 2$2}' | sort -k2n | uniq > del_day_2021_07
# Then call the script from the terminal, feeding it one date at a time
➜ for day in $(cat del_day_2021_01); do ./remove_es_indexs.sh $day; done
➜ for day in $(cat del_day_2021_02); do ./remove_es_indexs.sh $day; done
➜ for day in $(cat del_day_2021_03); do ./remove_es_indexs.sh $day; done
➜ for day in $(cat del_day_2021_04); do ./remove_es_indexs.sh $day; done
➜ for day in $(cat del_day_2021_05); do ./remove_es_indexs.sh $day; done
➜ for day in $(cat del_day_2021_06); do ./remove_es_indexs.sh $day; done
➜ for day in $(cat del_day_2021_07); do ./remove_es_indexs.sh $day; done
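The seven near-identical loops above can also be collapsed into one nested loop (a sketch, assuming the del_day_* files were generated as shown):
➜ for month in 01 02 03 04 05 06 07; do
      for day in $(cat del_day_2021_${month}); do ./remove_es_indexs.sh ${day}; done
  done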
② Troubleshooting
Of course, the script only solves the redundancy problem. This incident also made it obvious that the log platform's services are spread across too many hosts, and tracing through them one by one takes real time.
To save that time next time, another script:
➜ vim check_log_platform.sh
#!/bin/bash
# Describe: check Log monitoring platform status
# Create Date: 2021-10-22
# Create Time: 16:50
# Update Date: 2021-10-24
# Update Time: 17:50
# Author: MiaoCunFa
# Version: v0.0.10
#===================================================================
curDate=$(date +'%Y.%m.%d')
#--------------------------Main Script------------------------------------
# filebeat
filebeats=(
192.168.189.171
192.168.189.175
192.168.189.176
192.168.189.177
192.168.189.164
)
echo
echo "filebeats Status:"
for host in "${filebeats[@]}"
do
# check process
process_num=$(ssh root@${host} "ps -ef|grep filebeat|grep -v grep|wc -l")
if [ ${process_num} -eq 1 ]
then
echo "host: ${host} status: OK!"
# check network
#echo "host: ${host} network"
#ssh root@${host} "lsof -i|grep filebeat"
else
echo "host: ${host} status: failed!"
fi
done
#---------------------------------------------------------
# kafka
kafka=(
192.168.196.82
192.168.196.83
192.168.196.84
)
echo
echo "Kafka Cluster Status:"
echo "zookeeper status:"
for host in "${kafka[@]}"
do
zookeeper_status=$(echo stat|nc ${host} 2181|grep Mode)
if [ $? -eq 0 ]
then
echo "host: ${host} ${zookeeper_status}"
else
echo "host: ${host} status: failed!"
fi
done
echo "kafka status:"
for host in "${kafka[@]}"
do
kafka_node_status=$(echo dump | nc ${host} 2181 | grep broker)
if [ $? -eq 0 ]
then
echo "host: ${host}"
echo "${kafka_node_status}"
else
echo "host: ${host} status: failed!"
fi
done
#---------------------------------------------------------
# logstash
slave01=(
household.conf
message.conf
message_alarm.conf
)
slave02=(
nginx-log.conf
Am-square-third.conf
M-O-S-P-T.conf
E-C-B-U-Z.conf
Cleaning-Processing-Communication.conf
Admin-Job-Coin-Coupon-management.conf
order_alarm.conf
coupon_alarm.conf
user_alarm.conf
three-party_alarm.conf
admin_alarm.conf
third_alarm.conf
square_alarm.conf
school_alarm.conf
processing_alarm.conf
machine_alarm.conf
job_alarm.conf
eureka_alarm.conf
coin_alarm.conf
business_alarm.conf
am_alarm.conf
zuuls_alarm.conf
)
echo
echo "logstash Status:"
echo "host: slave01"
for conf in "${slave01[@]}"
do
conf_num=$(ssh root@192.168.189.167 "ps -ef|grep ${conf}|grep -v grep|wc -l")
if [ ${conf_num} -eq 1 ]
then
echo "config: ${conf} status: OK!"
else
echo "config: ${conf} status: failed!"
fi
done
echo "host: slave02"
for conf in "${slave02[@]}"
do
conf_num=$(ssh root@192.168.189.168 "ps -ef|grep ${conf}|grep -v grep|wc -l")
if [ ${conf_num} -eq 1 ]
then
echo "config: ${conf} status: OK!"
else
echo "config: ${conf} status: failed!"
fi
done
#---------------------------------------------------------
# elasticsearch
elasticsearch=(
192.168.189.166
192.168.189.167
192.168.189.168
)
> normal_es_node
echo
echo "elasticsearch Cluster Status:"
echo "Cluster health: "
for host in "${elasticsearch[@]}"
do
result=$(curl -s ${host}:9200/_cat/health)
if [ $? -eq 0 ]
then
status=$(echo ${result} | awk '{print $4}')
echo "nodes: ${host} status: ${status}"
echo ${host} > normal_es_node
else
echo "nodes: ${host} status: failed"
fi
done
echo "Index status: "
host=$(cat normal_es_node)
curl ${host}:9200/_cat/indices/*${curDate}*
echo
# eof
Now the running state of the whole log platform can be seen at a glance:
➜ ./check_log_platform.sh
filebeats Status:
host: 192.168.189.171 status: OK!
host: 192.168.189.175 status: OK!
host: 192.168.189.176 status: OK!
host: 192.168.189.177 status: OK!
host: 192.168.189.164 status: OK!
Kafka Cluster Status:
zookeeper status:
host: 192.168.196.82 Mode: follower
host: 192.168.196.83 Mode: leader
host: 192.168.196.84 Mode: follower
kafka status:
host: 192.168.196.82
/brokers/ids/3
/brokers/ids/1
/brokers/ids/2
host: 192.168.196.83
/brokers/ids/3
/brokers/ids/1
/brokers/ids/2
host: 192.168.196.84
/brokers/ids/3
/brokers/ids/1
/brokers/ids/2
logstash Status:
host: slave01
config: household.conf status: OK!
config: message.conf status: failed!
config: message_alarm.conf status: failed!
host: slave02
config: nginx-log.conf status: OK!
config: Am-square-third.conf status: OK!
config: M-O-S-P-T.conf status: OK!
config: E-C-B-U-Z.conf status: OK!
config: Cleaning-Processing-Communication.conf status: OK!
config: Admin-Job-Coin-Coupon-management.conf status: OK!
config: order_alarm.conf status: OK!
config: coupon_alarm.conf status: OK!
config: user_alarm.conf status: OK!
config: three-party_alarm.conf status: OK!
config: admin_alarm.conf status: OK!
config: third_alarm.conf status: OK!
config: square_alarm.conf status: OK!
config: school_alarm.conf status: OK!
config: processing_alarm.conf status: OK!
config: machine_alarm.conf status: OK!
config: job_alarm.conf status: OK!
config: eureka_alarm.conf status: OK!
config: coin_alarm.conf status: OK!
config: business_alarm.conf status: OK!
config: am_alarm.conf status: OK!
config: zuuls_alarm.conf status: OK!
elasticsearch Cluster Status:
Cluster health:
nodes: 192.168.189.166 status: green
nodes: 192.168.189.167 status: green
nodes: 192.168.189.168 status: green
Index status:
green open java-household-log-2021.10.24 3klh9U9PTW-yvmW-zR_Lug 1 1 4 0 1002.1kb 509.8kb
green open java-coupon-log-2021.10.24 N7ejwbk4TjqooWLdCBBAvQ 1 1 92 0 195kb 97.5kb
green open java-message-log-2021.10.24 -v_r6EPwSdKmdpkayog67A 1 1 1338 0 1.3mb 722.1kb
green open java-square-log-2021.10.24 mQeN7eLOTdW9sloBWWS7Nw 1 1 119 0 275.1kb 137.5kb
green open java-coin-log-2021.10.24 S37r7NMJRh2RCuoVTXUsLQ 1 1 5055 0 4.8mb 2.4mb
green open java-am-log-2021.10.24 P7jclv3qTLiiqBByd0ouoQ 1 1 754 0 895.3kb 471.3kb
green open java-application-management-log-2021.10.24 Gxq0mkeVSCu4h1mcRsUcoA 1 1 2 0 608.1kb 300.1kb
green open .monitoring-es-7-2021.10.24 40FDkRCOS-G0a4EzXe7LuQ 1 1 8560559 1467429 7.8gb 3.8gb
green open java-user-log-2021.10.24 YfFVWrW3T-uRqIiz6-26XQ 1 1 39 0 31.4mb 15.6mb
green open .monitoring-kibana-7-2021.10.24 7ABfdO_0TEmaRB8sU91ZWA 1 1 3 0 657.7kb 321.5kb
green open java-zuuls-log-2021.10.24 YaW8CHXCTzimYueKgOdiew 1 1 5 0 6.6mb 3.3mb
green open java-craftsman-log-2021.10.24 e44caHySTracQYJy1hoUNQ 1 1 7140 0 68.6mb 34.5mb
green open slb-nginx-logs01-2021.10.24 eUPcnFG_Q7-Q5XKp6YNx0g 1 1 6 0 7.3mb 3.7mb
green open java-cleaning-log-2021.10.24 2Y_b6zQgSICZBCOJ2ej8Lg 1 1 33253396 0 28.1gb 14.1gb
green open java-third-log-2021.10.24 wmZAIVwRTL2JQQV09qjK-A 1 1 118 0 221.2kb 105.6kb
green open java-job-log-2021.10.24 HbwZtVYtTomCUIKd26Jj6Q 1 1 11 0 445.7kb 208.3kb
green open java-business-log-2021.10.24 nYltFD1tTUyBSDF1QBq8ng 1 1 1041 0 1.4mb 743.1kb
green open java-eureka-log-2021.10.24 m1D7K2kLRUeiiXVKWX2eXw 1 1 12 0 1.2mb 659.9kb
green open java-admin-log-2021.10.24 RWBh_ykHSoW9N_NFKXu6_w 1 1 7 0 411.9kb 182.7kb
green open java-communication-log-2021.10.24 cJTCZ1q6R1SJlSCdBIR2-w 1 1 12669171 0 9.2gb 4.6gb
green open java-processing-log-2021.10.24 AFJAhcs3R86oEjCifXRTUg 1 1 1 0 100.5kb 50.2kb