Ceph raw usage is larger than the total used by all pools (ceph df detail)

Problem description

First of all, sorry for my bad English. In my Ceph cluster, when I run the ceph df detail command, it shows the result below:

RAW STORAGE:
    CLASS     SIZE        AVAIL       USED        RAW USED     %RAW USED 
    hdd        62 TiB      52 TiB      10 TiB       10 TiB         16.47 
    ssd       8.7 TiB     8.4 TiB     370 GiB      377 GiB          4.22 
    TOTAL      71 TiB      60 TiB      11 TiB       11 TiB         14.96 
 
POOLS:
    POOL                ID     STORED      OBJECTS     USED        %USED     MAX AVAIL     QUOTA OBJECTS     QUOTA BYTES     DIRTY       USED COMPR     UNDER COMPR 
    rbd-kubernetes      36     288 GiB      71.56k     865 GiB      1.73        16 TiB     N/A               N/A              71.56k            0 B             0 B 
    rbd-cache           41     2.4 GiB     208.09k     7.2 GiB      0.09       2.6 TiB     N/A               N/A             205.39k            0 B             0 B 
    cephfs-Metadata     51     529 MiB         221     1.6 GiB         0        16 TiB     N/A               N/A                 221            0 B             0 B 
    cephfs-data         52     1.0 GiB         424     3.1 GiB         0        16 TiB     N/A               N/A                 424            0 B             0 B 

So I have a question about this result. As you can see, the total storage used by my pools is less than 1 TB, but in the RAW STORAGE section the hdd class shows 10 TiB used, and it keeps growing every day. I think this is unusual and that something is wrong with the Ceph cluster.
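To make the gap concrete, here is a quick sum of the POOLS table above (my own arithmetic, not part of the ceph output):

    STORED total:  288 GiB + 2.4 GiB + 529 MiB + 1.0 GiB  ≈ 292 GiB
    USED total:    865 GiB + 7.2 GiB + 1.6 GiB + 3.1 GiB  ≈ 877 GiB ≈ 0.86 TiB

So even the USED column adds up to well under 1 TiB, while RAW STORAGE reports 10 TiB used on the hdd class alone.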

For reference, the output of ceph osd dump | grep replicated:

pool 36 'rbd-kubernetes' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 256 pgp_num 244 pg_num_target 64 pgp_num_target 64 last_change 1376476 lfor 2193/2193/2193 flags hashpspool,selfmanaged_snaps,creating tiers 41 read_tier 41 write_tier 41 stripe_width 0 application rbd
pool 41 'rbd-cache' replicated size 3 min_size 2 crush_rule 1 object_hash rjenkins pg_num 64 pgp_num 64 autoscale_mode on last_change 1376476 lfor 2193/2193/2193 flags hashpspool,incomplete_clones,creating tier_of 36 cache_mode writeback target_bytes 1000000000000 hit_set bloom{false_positive_probability: 0.05,target_size: 0,seed: 0} 3600s x1 decay_rate 0 search_last_n 0 min_read_recency_for_promote 1 min_write_recency_for_promote 1 stripe_width 0
pool 51 'cephfs-Metadata' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 31675 flags hashpspool stripe_width 0 pg_autoscale_bias 4 pg_num_min 16 recovery_priority 5 application cephfs
pool 52 'cephfs-data' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 742334 flags hashpspool,selfmanaged_snaps stripe_width 0 application cephfs
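As a cross-check (again my own arithmetic, assuming the USED column of ceph df detail already includes the replica copies): every pool above is replicated size 3, and the table indeed shows USED ≈ 3 × STORED per pool, so replication alone cannot explain the raw usage:

    rbd-kubernetes:  3 × 288 GiB ≈ 864 GiB   (table shows 865 GiB)
    all pools:       3 × 292 GiB ≈ 876 GiB ≈ 0.86 TiB   (vs 10 TiB raw used on hdd)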

Ceph version (ceph -v):

ceph version 14.2.10 (b340acf629a010a74d90da5782a2c5fe0b54ac20) nautilus (stable)

Ceph OSD versions: ceph tell osd.* version returns the following for every OSD, e.g.

osd.0: {
    "version": "ceph version 14.2.10 (b340acf629a010a74d90da5782a2c5fe0b54ac20) nautilus (stable)"
}

Ceph status (ceph -s):

  cluster:
    id:     6a86aee0-3171-4824-98f3-2b5761b09feb
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum ceph-sn-03,ceph-sn-02,ceph-sn-01 (age 37h)
    mgr: ceph-sn-01(active, since 4d), standbys: ceph-sn-03, ceph-sn-02
    mds: cephfs-shared:1 {0=ceph-sn-02=up:active} 2 up:standby
    osd: 63 osds: 63 up (since 41h), 63 in (since 41h)
 
  task status:
    scrub status:
        mds.ceph-sn-02: idle
 
  data:
    pools:   4 pools, 384 pgs
    objects: 280.29k objects, 293 GiB
    usage:   11 TiB used, 60 TiB / 71 TiB avail
    pgs:     384 active+clean
 

Solution

No effective solution for this problem has been found yet.
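Not from the original post, but for anyone investigating the same symptom on a Nautilus cluster, here is a minimal sketch of standard commands that can show where the raw space is actually going (the pool name rbd-kubernetes is taken from the output above):

    # per-pool usage as RADOS sees it, including replicated/cloned objects
    rados df

    # per-OSD utilisation laid out along the CRUSH tree
    ceph osd df tree

    # provisioned vs. actual usage of each RBD image (thin provisioning, snapshots)
    rbd du -p rbd-kubernetes

    # deleted images that were moved to the trash still consume space
    rbd trash ls -p rbd-kubernetes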
