How to bring up OSDs that stay down after being added dynamically to Ceph?

With Ceph set up on CentOS 7, I want to add OSDs dynamically. The added OSDs all report as started:

    # /etc/init.d/ceph -a start
    === mon.0 ===
    Starting Ceph mon.0 on node56...already running
    === mds.0 ===
    Starting Ceph mds.0 on node56...already running
    === osd.0 ===
    Starting Ceph osd.0 on node57...already running
    === osd.1 ===
    Starting Ceph osd.1 on node58...already running
    === osd.2 ===
    Starting Ceph osd.2 on node57...already running
    === osd.3 ===
    Starting Ceph osd.3 on node58...already running
    === osd.4 ===
    Starting Ceph osd.4 on node56...already running

But ceph osd tree shows those OSDs as down. Running /etc/init.d/ceph start osd.0 completes without errors, yet the OSD status stays down. How can I bring these OSDs up?

    # ceph osd tree
    # id  weight  type name         up/down  reweight
    -1    5       root default
    -3    5       rack unknownrack
    -2    1       host node57
    0     1       osd.0             down     0
    -4    1       host node58
    1     1       osd.1             down     0
    -5    3       host node56
    4     1       osd.4             up       1
    2     1       osd.2             up       1
    3     1       osd.3             up       1

    # ceph df
    GLOBAL:
        SIZE       AVAIL      RAW USED     %RAW USED
        61410M     58299M        3110M          5.07
    POOLS:
        NAME         ID     USED     %USED     MAX AVAIL     OBJECTS
        data         0         0         0             0           0
        metadata     1      2082         0             0          20
        rbd          2         0         0             0           0


2 answers

zhuqibs · Software Development Engineer, Adidas

Run ceph -s to check whether the cluster is healthy.

If it is not:
(1) Remove the disk: ceph osd out osdxxx, ceph osd rm osdxxx, ceph auth del osdxxx
(2) Unmount it: umount /dev/sdb1
(3) Prepare it again: ceph-deploy --overwrite-conf osd prepare ceph02:/dev/sdb1
(4) Activate it again: ceph-deploy osd activate ceph01:/dev/sdb1 ceph02:/dev/sdb1
(5) Confirm again: ceph osd tree, ceph -s
A worked example of this sequence is sketched below.
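
A minimal sketch of the whole sequence for one of the down OSDs in the question, assuming osd.0 lives on node57 and is backed by /dev/sdb1 (the device path is carried over from this answer and is an assumption, not something the poster stated):

    # Run on a node with the admin keyring. "osd.0" comes from the
    # poster's ceph osd tree output; node57:/dev/sdb1 is assumed.
    ceph osd out osd.0            # mark the OSD out
    ceph osd rm osd.0             # remove it from the OSD map
    ceph auth del osd.0           # delete its cephx key
    # On some releases the stale CRUSH entry must be removed as well:
    ceph osd crush remove osd.0

    # On node57: unmount the partition that backed the old OSD.
    umount /dev/sdb1

    # Back on the ceph-deploy admin node: prepare and activate again.
    ceph-deploy --overwrite-conf osd prepare node57:/dev/sdb1
    ceph-deploy osd activate node57:/dev/sdb1

    # Verify the OSD now reports up/in.
    ceph osd tree
    ceph -s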

Internet services · 2020-04-17
youki2008 · System Architect, DDT

1. Remove the disk: ceph osd out osdxxx, ceph osd rm osdxxx, ceph auth del osdxxx
2. Unmount it: umount /dev/sdb1
3. Prepare it again: ceph-deploy --overwrite-conf osd prepare ceph02:/dev/sdb1
4. Activate it again: ceph-deploy osd activate ceph01:/dev/sdb1 ceph02:/dev/sdb1
5. Confirm again: ceph osd tree, ceph -s (see also the note below)
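
Worth checking before re-deploying (an observation from the ceph osd tree output in the question, not a confirmed fix): the down OSDs also show reweight 0, meaning they have been marked out. If their daemons really are running, marking them back in and restarting may be enough:

    # osd.0 is an example id taken from the question's output.
    ceph osd in osd.0               # clear the out flag (reweight 0 -> 1)
    /etc/init.d/ceph start osd.0    # restart the daemon (run on node57)
    ceph osd tree                   # check whether it now reports up/in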

Internet services · 2020-04-27
