Problem description
Today my cluster suddenly complained about 38 scrub errors. ceph pg repair helped fix the inconsistencies, but ceph -s still reports a warning:
ceph -s
  cluster:
    id:     86bbd6c5-ae96-4c78-8a5e-50623f0ae524
    health: HEALTH_WARN
            Too many repaired reads on 1 OSDs

  services:
    mon: 4 daemons, quorum s0,mBox,s1,r0 (age 35m)
    mgr: s0(active, since 10d), standbys: s1, r0
    mds: fs:1 {0=s0=up:active} 3 up:standby
    osd: 10 osds: 10 up, 10 in

  data:
    pools:   6 pools, 289 pgs
    objects: 1.29M objects, 1.6 TiB
    usage:   3.3 TiB used, 7.4 TiB / 11 TiB avail
    pgs:     289 active+clean
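For reference, a typical sequence for locating and repairing scrub inconsistencies looks roughly like the following; the pool name and PG ID are placeholders, not values from this cluster:

# list the PGs that scrubbing flagged as inconsistent
rados list-inconsistent-pg <pool>
# inspect one of them, then ask its primary OSD to repair it
rados list-inconsistent-obj <pgid> --format=json-pretty
ceph pg repair <pgid>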
After reading the documentation, I tried:
ceph tell osd.8 clear_shards_repaired
no valid command found; 10 closest matches:
0
1
2
abort
assert
bench [<count:int>] [<size:int>] [<object_size:int>] [<object_num:int>]
bluefs stats
bluestore allocator dump block
bluestore allocator dump bluefs-db
bluestore allocator fragmentation block
Error EINVAL: invalid command
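From what I can tell, the clear_shards_repaired OSD command simply is not available in this release, which matches the error above; it only appeared in later Ceph versions. As a sketch of two commonly suggested workarounds (the threshold value below is an arbitrary example, and the systemd unit name assumes a non-containerized deployment):

# restarting the OSD is often reported to reset its repaired-reads counter
systemctl restart ceph-osd@8

# alternatively, raise the warning threshold for OSD_TOO_MANY_REPAIRS (default is 10)
ceph config set global mon_osd_warn_num_repaired 50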
As you can see, something is wrong there. My Ceph version is:
ceph version
ceph version 15.2.9 (357616cbf726abb779ca75a551e8d02568e15b17) octopus (stable)
ceph health detail
HEALTH_WARN Too many repaired reads on 1 OSDs
[WRN] OSD_TOO_MANY_REPAIRS: Too many repaired reads on 1 OSDs
    osd.8 had 38 reads repaired
How do I get rid of the warning, and how can I find out where the problem really lies? All disks are healthy, there is nothing in the logs, and smartctl -t short /dev/sdd comes back happy.
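For context, the only further checks I can think of are along these lines; they assume osd.8 is backed by /dev/sdd on this host and that the OSD runs as a systemd unit:

# full SMART attribute dump, looking for reallocated, pending or uncorrectable sectors
smartctl -a /dev/sdd | grep -i -E 'reallocated|pending|uncorrect'

# kernel-level I/O errors for the device
dmesg | grep -i sdd

# the OSD and cluster logs usually record which objects needed read repair
journalctl -u ceph-osd@8 | grep -i repair
grep -i repair /var/log/ceph/ceph.log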
Thanks for any help.
Magnus