
Changing the IP Addresses of a Ceph Cluster

Date: 2021-01-29 08:12:48


Fetch the monmap and inspect it

[root@i-03C020FE ~]# ceph mon getmap -o monmap
got monmap epoch 3
[root@i-03C020FE ~]# monmaptool --print monmap
monmaptool: monmap file monmap
epoch 3
fsid 92cc47e8-bd9f-4ec9-a861-6a20784da190
last_changed -06-27 10:31:13.329183
created -06-27 10:31:13.329183
0: 10.202.131.33:6789/0 mon.0
1: 10.202.131.195:6789/0 mon.2
2: 10.202.131.206:6789/0 mon.1
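The old addresses are needed again later when rewriting ceph.conf. A minimal sketch of pulling rank/address/name triples out of the `monmaptool --print` output, assuming the output format shown above (here fed from a pasted heredoc; in practice you would pipe the real output in):

```shell
# Print "rank address name" for each monitor line of
# `monmaptool --print` output (lines starting with "<rank>: ").
parse_monmap() {
  awk '/^[0-9]+: / { sub(":$", "", $1); print $1, $2, $3 }'
}

# Sample input pasted from the printout above; normally:
#   monmaptool --print monmap | parse_monmap
parse_monmap <<'EOF'
epoch 3
fsid 92cc47e8-bd9f-4ec9-a861-6a20784da190
0: 10.202.131.33:6789/0 mon.0
1: 10.202.131.195:6789/0 mon.2
2: 10.202.131.206:6789/0 mon.1
EOF
```

`parse_monmap` is a hypothetical helper name for this sketch, not a Ceph command.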

Remove the old monitor entries from the monmap and add the new ones

[root@i-03C020FE ~]# monmaptool --rm 0 --rm 1 --rm 2 monmap
monmaptool: monmap file monmap
monmaptool: removing 0
monmaptool: removing 1
monmaptool: removing 2
monmaptool: writing epoch 3 to monmap (0 monitors)
[root@i-03C020FE ~]# monmaptool --print monmap
monmaptool: monmap file monmap
epoch 3
fsid 92cc47e8-bd9f-4ec9-a861-6a20784da190
last_changed -06-27 10:31:13.329183
created -06-27 10:31:13.329183
[root@i-03C020FE ~]# monmaptool --add 0 192.168.1.9:6789 --add 1 192.168.1.11 --add 2 192.168.1.10 monmap
monmaptool: monmap file monmap
monmaptool: writing epoch 3 to monmap (3 monitors)
[root@i-03C020FE ~]# monmaptool --print monmap
monmaptool: monmap file monmap
epoch 3
fsid 92cc47e8-bd9f-4ec9-a861-6a20784da190
last_changed -06-27 10:31:13.329183
created -06-27 10:31:13.329183
0: 192.168.1.9:6789/0 mon.0
1: 192.168.1.10:6789/0 mon.2
2: 192.168.1.11:6789/0 mon.1
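Typing the remove/re-add sequence by hand scales poorly once there are more monitors. A sketch that builds the two `monmaptool` invocations from an ID-to-address table and only echoes them (the IDs and addresses are the ones used in this article; `new_addr` is an illustrative variable, and the `echo`s would be dropped to run for real):

```shell
#!/usr/bin/env bash
# Map each monitor ID to its new address.
declare -A new_addr=(
  [0]=192.168.1.9
  [1]=192.168.1.11
  [2]=192.168.1.10
)

rm_args=()   # --rm arguments for every old entry
add_args=()  # --add arguments for every new entry
for id in 0 1 2; do
  rm_args+=(--rm "$id")
  add_args+=(--add "$id" "${new_addr[$id]}:6789")
done

echo monmaptool "${rm_args[@]}" monmap
echo monmaptool "${add_args[@]}" monmap
```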

Copy the monmap to all mon nodes

scp monmap 192.168.1.10:
scp monmap 192.168.1.11:
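With more nodes the copy is easier as a loop. A dry-run sketch: it echoes the `scp` commands instead of running them (the addresses are the two remote mon nodes from above, and the trailing `:` means the remote user's home directory):

```shell
# Push the edited monmap to every other mon node.
for host in 192.168.1.10 192.168.1.11; do
  echo scp monmap "${host}:"   # drop the echo to actually copy
done
```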

Update all IP addresses in ceph.conf to the new ones

[root@i-03C020FE ~]# cat /etc/ceph/ceph.conf
[global]
fsid = 92cc47e8-bd9f-4ec9-a861-6a20784da190
mon_host = 192.168.1.9,192.168.1.10,192.168.1.11
public_addr = 192.168.1.9
auth_cluster_required = none
auth_service_required = none
auth_client_required = none
osd_pool_default_size = 2
osd_pool_default_min_size = 1
mon_clock_drift_allowed = 0.5
max_open_files = 1048576

[mon]
mon_osd_down_out_interval = 300
mon_osd_min_down_reports = 3
mon_osd_report_timeout = 250
osd_mon_report_interval_max = 120
osd_mon_report_interval_min = 5

[osd]
cluster_addr = 192.168.1.9
osd_max_write_size = 256
osd_op_threads = 16
osd_disk_thread_ioprio_class = idle
osd_client_message_size_cap = 1073741824
journal_max_write_bytes = 1073741824
journal_max_write_entries = 10000
journal_queue_max_ops = 50000
journal_queue_max_bytes = 10737418240
journal_force_aio = true
filestore_merge_threshold = 40
filestore_split_multiple = 8
filestore_op_threads = 32
filestore_min_sync_interval = 10
filestore_max_sync_interval = 15
filestore_queue_max_ops = 2500
filestore_queue_max_bytes = 1073741824
filestore_queue_committing_max_ops = 25000
filestore_queue_committing_max_bytes = 10737418240
osd_recovery_max_active = 1
osd_recovery_max_single_start = 1
osd_recovery_op_priority = 50
osd_recovery_threads = 1
osd_max_backfills = 2
osd_backfill_scan_min = 8
osd_backfill_scan_max = 64
osd_max_scrubs = 1
osd_scrub_sleep = 0.1
osd_scrub_chunk_min = 1
osd_scrub_chunk_max = 5
osd_deep_scrub_stride = 1048576
osd_scrub_begin_hour = 0
osd_scrub_end_hour = 24

[client.radosgw]
rgw_frontends = civetweb port=
rgw print continue = false
rgw_enable_apis = s3,admin
rgw_admin_entry = .bcmgr
rgw_user_max_buckets = 100
rgw_override_bucket index max shards = 16
rgw_enable_ops_log = true
rgw_enable_usage_log = true

[client]
rbd_default_format = 2
rbd default features = 1
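Rewriting the addresses in ceph.conf can be scripted with sed. A sketch assuming the old-to-new pairing follows the monitor names in the two monmap printouts (an assumption worth double-checking); `rewrite_ips` is a hypothetical helper name, and you would try it on a copy before touching /etc/ceph/ceph.conf:

```shell
# One sed expression per old->new address pair.
rewrite_ips() {
  sed -e 's/10\.202\.131\.33/192.168.1.9/g' \
      -e 's/10\.202\.131\.206/192.168.1.11/g' \
      -e 's/10\.202\.131\.195/192.168.1.10/g'
}

# Demonstrated on a fragment; for the real file, something like:
#   rewrite_ips < /etc/ceph/ceph.conf > /tmp/ceph.conf.new
rewrite_ips <<'EOF'
mon_host = 10.202.131.33,10.202.131.206,10.202.131.195
public_addr = 10.202.131.33
EOF
```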

Stop the mon service on the mon hosts

systemctl stop ceph-mon@0.service

Inject the new monmap on all mon hosts

ceph-mon -i 0 --inject-monmap monmap
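The stop, inject, and later start steps form one sequence that has to run on every mon host with that host's own monitor ID. A sketch that only echoes the commands for a given ID rather than executing them (`inject_monmap` is an illustrative helper name):

```shell
inject_monmap() {
  local id=$1
  # Echoes only; remove the echoes to run this on a real mon host.
  echo systemctl stop "ceph-mon@${id}.service"
  echo ceph-mon -i "$id" --inject-monmap monmap
  echo systemctl start "ceph-mon@${id}.service"
}

inject_monmap 0
```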

Start the mon service, then restart the osd and mds services and check the cluster state

[root@i-03C020FE ~]# ceph mon dump
dumped monmap epoch 3
epoch 3
fsid 92cc47e8-bd9f-4ec9-a861-6a20784da190
last_changed -06-27 10:31:13.329183
created -06-27 10:31:13.329183
0: 192.168.1.9:6789/0 mon.0
1: 192.168.1.10:6789/0 mon.2
2: 192.168.1.11:6789/0 mon.1
[root@i-03C020FE ~]# ceph osd tree
ID WEIGHT  TYPE NAME           UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.56776 root default
-2 0.19559     host i-8E041728
 0 0.04900         osd.0            up  1.00000          1.00000
 1 0.04900         osd.1            up  1.00000          1.00000
 8 0.04880         osd.8            up  1.00000          1.00000
 9 0.04880         osd.9            up  1.00000          1.00000
-3 0.19559     host i-91A9F186
 2 0.04900         osd.2            up  1.00000          1.00000
 3 0.04900         osd.3            up  0.94998          1.00000
10 0.04880         osd.10           up  1.00000          1.00000
11 0.04880         osd.11           up  1.00000          1.00000
-4 0.17659     host i-03C020FE
 4 0.03000         osd.4            up  1.00000          1.00000
 5 0.04900         osd.5            up  0.56456          1.00000
 6 0.04880         osd.6            up  1.00000          1.00000
 7 0.04880         osd.7            up  1.00000          1.00000
[root@i-03C020FE ~]# ceph -s
    cluster 92cc47e8-bd9f-4ec9-a861-6a20784da190
     health HEALTH_OK
     monmap e3: 3 mons at {0=192.168.1.9:6789/0,1=192.168.1.11:6789/0,2=192.168.1.10:6789/0}
            election epoch 466, quorum 0,1,2 0,2,1
      fsmap e139: 1/1/1 up {0=2=up:active}, 1 up:standby
     osdmap e582: 12 osds: 12 up, 12 in
            flags sortbitwise,require_jewel_osds
      pgmap v664247: 188 pgs, 6 pools, 13999 MB data, 3640 objects
            101 GB used, 498 GB / 599 GB avail
                 188 active+clean
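Before declaring the change done, it can help to poll until the cluster reports HEALTH_OK again. A sketch under the assumption that the `ceph -s` output format matches the one shown above; `wait_healthy` is an illustrative name:

```shell
# Poll `ceph -s` until HEALTH_OK appears, giving up after ~60s.
wait_healthy() {
  for _ in $(seq 12); do
    if ceph -s | grep -q HEALTH_OK; then
      echo "cluster healthy"
      return 0
    fi
    sleep 5
  done
  echo "timed out waiting for HEALTH_OK" >&2
  return 1
}
```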
