Building Ceph on CentOS 7.4


This article uses the ceph-deploy tool to quickly stand up a Ceph cluster.


1. Environment Preparation


  • Set the hostname of each machine to match the table below (a hostname sketch follows the version check; the OS used throughout is CentOS 7.4)

  [root@admin-node ~]# cat /etc/redhat-release
  CentOS Linux release 7.4.1708 (Core)
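
The original only shows the release check; a minimal sketch for setting the hostnames themselves (run the matching command on each machine):

  [root@admin-node ~]# hostnamectl set-hostname admin-node
  [root@node1 ~]# hostnamectl set-hostname node1      # likewise node2 and node3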


  IP            Hostname      Role
  10.10.10.20   admin-node    ceph-deploy
  10.10.10.21   node1         mon
  10.10.10.22   node2         osd
  10.10.10.23   node3         osd


  • Set up name resolution (here we edit the /etc/hosts file; a push sketch follows the listing)

  • Configure on every node


  [root@admin-node ~]# cat /etc/hosts
  127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
  ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
  10.10.10.20 admin-node
  10.10.10.21 node1
  10.10.10.22 node2
  10.10.10.23 node3
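
Rather than editing the file four times, you can write it once on admin-node and push it out; a sketch (it prompts for passwords until the SSH keys below are in place):

  [root@admin-node ~]# for n in node1 node2 node3; do scp /etc/hosts $n:/etc/hosts; done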


  • Configure the yum repositories

  • Configure on every node


  [root@admin-node ~]# mv /etc/yum.repos.d{,.bak}
  [root@admin-node ~]# mkdir /etc/yum.repos.d
  [root@admin-node ~]# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo

  Then create a Ceph repo file (e.g. /etc/yum.repos.d/ceph.repo) with the following content:

  [ceph]
  name=Ceph packages for $basearch
  baseurl=http://download.ceph.com/rpm-jewel/el7/$basearch
  enabled=1
  priority=2
  gpgcheck=1
  type=rpm-md
  gpgkey=https://download.ceph.com/keys/release.asc

  [ceph-noarch]
  name=Ceph noarch packages
  baseurl=http://download.ceph.com/rpm-jewel/el7/noarch
  enabled=1
  priority=2
  gpgcheck=1
  type=rpm-md
  gpgkey=https://download.ceph.com/keys/release.asc

  [ceph-source]
  name=Ceph source packages
  baseurl=http://download.ceph.com/rpm-jewel/el7/SRPMS
  enabled=0
  priority=2
  gpgcheck=1
  type=rpm-md
  gpgkey=https://download.ceph.com/keys/release.asc
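
Once the repo files exist on every node, it's worth refreshing the yum cache so repo problems surface before the install step:

  [root@admin-node ~]# yum clean all
  [root@admin-node ~]# yum makecache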


  • Disable the firewall and SELinux

  • Configure on every node


  [root@admin-node ~]# systemctl stop firewalld.service
  [root@admin-node ~]# systemctl disable firewalld.service
  [root@admin-node ~]# setenforce 0
  [root@admin-node ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
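
A quick check that both are really off (expect "not running" and "Permissive"; SELinux becomes disabled for good after the next reboot):

  [root@admin-node ~]# firewall-cmd --state
  not running
  [root@admin-node ~]# getenforce
  Permissive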


  • Set up passwordless SSH login between the nodes

  • Configure on every node

  [root@admin-node ~]# ssh-keygen
  [root@admin-node ~]# ssh-copy-id 10.10.10.21
  [root@admin-node ~]# ssh-copy-id 10.10.10.22
  [root@admin-node ~]# ssh-copy-id 10.10.10.23
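
A one-liner to confirm the keys work (it should print each hostname without asking for a password):

  [root@admin-node ~]# for n in node1 node2 node3; do ssh $n hostname; done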


  • Synchronize time with chrony

  • Configure on every node

  [root@admin-node ~]# yum install chrony -y
  [root@admin-node ~]# systemctl restart chronyd
  [root@admin-node ~]# systemctl enable chronyd
  [root@admin-node ~]# chronyc sources -v    (check sync status; a leading * marks a synchronized source)
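
By default chrony syncs against the CentOS pool servers; to keep all nodes agreeing on a single source, you can point every node at the same server instead. A sketch (ntp1.aliyun.com is only an example):

  [root@admin-node ~]# sed -i 's/^server .*/#&/' /etc/chrony.conf    # comment out the default pool
  [root@admin-node ~]# echo 'server ntp1.aliyun.com iburst' >> /etc/chrony.conf
  [root@admin-node ~]# systemctl restart chronyd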


2. Install Ceph (Jewel, per the repo configured above)


  • Install ceph-deploy

  • Only on the admin-node

  [root@admin-node ~]# yum install ceph-deploy -y


  • On the admin node, create a directory to hold the configuration files and key pairs that ceph-deploy generates

  • Only on the admin-node

  [root@admin-node ~]# mkdir /etc/ceph
  [root@admin-node ~]# cd /etc/ceph/


  • Purge the configuration (run these commands if you want to start over with a fresh install)

  • Only on the admin-node

  [root@admin-node ceph]# ceph-deploy purgedata node1 node2 node3
  [root@admin-node ceph]# ceph-deploy forgetkeys


  • Create the cluster

  • Only on the admin-node

  [root@admin-node ceph]# ceph-deploy new node1


  • Edit the Ceph configuration and set the replica count to 2

  • Only on the admin-node

  [root@admin-node ceph]# vi ceph.conf
  [global]
  fsid = 183e441b-c8cd-40fa-9b1a-0387cb8e8735
  mon_initial_members = node1
  mon_host = 10.10.10.21
  auth_cluster_required = cephx
  auth_service_required = cephx
  auth_client_required = cephx
  filestore_xattr_use_omap = true
  osd journal size = 1024
  osd pool default size = 2
  osd pool default min size = 1
  osd pool default pg num = 333
  osd pool default pgp num = 333
  osd crush chooseleaf type = 1
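
The pg num of 333 is carried over from the upstream sample config; a common rule of thumb (an aside, not from the original) is (number of OSDs × 100) ÷ replica count, rounded up to a power of two, so a cluster this small could use a lower value:

  # (2 OSDs * 100) / 2 replicas = 100 -> next power of two = 128
  [root@admin-node ceph]# echo $((2 * 100 / 2))
  100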


  • Install Ceph

  • Only run from the admin-node

  [root@admin-node ceph]# ceph-deploy install admin-node node1 node2 node3
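
A quick way to confirm every node received the same release, using the SSH trust set up earlier:

  [root@admin-node ceph]# for n in admin-node node1 node2 node3; do ssh $n ceph --version; done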


  • Deploy the initial monitor(s) and gather all the keys

  • Only run from the admin-node

  [root@admin-node ceph]# ceph-deploy mon create-initial
  [root@admin-node ceph]# ls
  ceph.bootstrap-mds.keyring  ceph.bootstrap-rgw.keyring  ceph-deploy-ceph.log
  ceph.bootstrap-mgr.keyring  ceph.client.admin.keyring   ceph.mon.keyring
  ceph.bootstrap-osd.keyring  ceph.conf                   rbdmap
  [root@admin-node ceph]# ceph -s    (check the cluster status)
      cluster 8d395c8f-6ac5-4bca-bbb9-2e0120159ed9
       health HEALTH_ERR
              no osds
       monmap e1: 1 mons at {node1=10.10.10.21:6789/0}
              election epoch 3, quorum 0 node1
       osdmap e1: 0 osds: 0 up, 0 in
              flags sortbitwise,require_jewel_osds
        pgmap v2: 64 pgs, 1 pools, 0 bytes data, 0 objects
              0 kB used, 0 kB / 0 kB avail
                    64 creating


  • Create the OSDs (node2 and node3 serve as OSD nodes; the prepare step itself is run from the admin node in a later step, and a note on making the mounts persistent follows this listing)

  [root@node2 ~]# lsblk
  NAME          MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
  fd0             2:0    1    4K  0 disk
  sda             8:0    0   20G  0 disk
  ├─sda1          8:1    0    1G  0 part /boot
  └─sda2          8:2    0   19G  0 part
    ├─cl-root   253:0    0   17G  0 lvm  /
    └─cl-swap   253:1    0    2G  0 lvm  [SWAP]
  sdb             8:16   0   50G  0 disk /var/local/osd0
  sdc             8:32   0    5G  0 disk
  sr0            11:0    1  4.1G  0 rom
  [root@node2 ~]# mkfs.xfs /dev/sdb
  [root@node2 ~]# mkdir /var/local/osd0
  [root@node2 ~]# mount /dev/sdb /var/local/osd0
  [root@node2 ~]# chown ceph:ceph /var/local/osd0
  [root@node3 ~]# mkdir /var/local/osd1
  [root@node3 ~]# mkfs.xfs /dev/sdb
  [root@node3 ~]# mount /dev/sdb /var/local/osd1/
  [root@node3 ~]# chown ceph:ceph /var/local/osd1
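
These mounts do not survive a reboot; persisting them in /etc/fstab is left out of the original walkthrough, so here is a minimal sketch (device names taken from the lsblk output above):

  [root@node2 ~]# echo '/dev/sdb /var/local/osd0 xfs defaults 0 0' >> /etc/fstab
  [root@node3 ~]# echo '/dev/sdb /var/local/osd1 xfs defaults 0 0' >> /etc/fstab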


  • Copy the keys and configuration files from admin-node to each node

  • Only run from the admin-node

  [root@admin-node ceph]# ceph-deploy admin admin-node node1 node2 node3


  • Make sure ceph.client.admin.keyring is readable

  • Run on the OSD nodes

  [root@node2 ~]# chmod +r /etc/ceph/ceph.client.admin.keyring


  • From the admin node, run ceph-deploy to prepare the OSDs

  [root@admin-node ceph]# ceph-deploy osd prepare node2:/var/local/osd0 node3:/var/local/osd1


  • Activate the OSDs

  [root@admin-node ceph]# ceph-deploy osd activate node2:/var/local/osd0 node3:/var/local/osd1
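
After activation the OSDs should come up and join the cluster; ceph osd tree gives a per-OSD view (output omitted here):

  [root@admin-node ceph]# ceph osd tree    # every osd should be listed as up with a non-zero weight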


  • Check the cluster health

  [root@admin-node ceph]# ceph health
  HEALTH_OK
  [root@admin-node ceph]# ceph -s
      cluster 69f64f6d-f084-4b5e-8ba8-7ba3cec9d927
       health HEALTH_OK
       monmap e1: 1 mons at {node1=10.10.10.21:6789/0}
              election epoch 3, quorum 0 node1
       osdmap e14: 3 osds: 3 up, 3 in
              flags sortbitwise,require_jewel_osds
        pgmap v29: 64 pgs, 0 objects
              15459 MB used, 45950 MB / 61410 MB avail
                    64 active+clean
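
As a final smoke test (not part of the original walkthrough), write a single object into a scratch pool and list it back; the pool name rbd-test and the object name are arbitrary:

  [root@admin-node ceph]# ceph osd pool create rbd-test 64
  [root@admin-node ceph]# echo hello > /tmp/obj.txt
  [root@admin-node ceph]# rados put test-object /tmp/obj.txt --pool=rbd-test
  [root@admin-node ceph]# rados -p rbd-test ls
  test-object
  [root@admin-node ceph]# ceph osd pool delete rbd-test rbd-test --yes-i-really-really-mean-it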
