ceph-ansible OSD Preparation

This article targets Ceph Luminous and describes how to use Ansible to automate disk partitioning and the creation of PVs, VGs, and LVs on the OSD nodes. Since Luminous no longer requires the data disk to be partitioned, only the DB disk is split into two partitions here.

Notes

The Ansible control node can log in to every OSD node without a password (a quick check is shown below)
/dev/vdb: data disk
/dev/vdc: data disk
/dev/vdd: DB disk, split into two partitions
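
Before anything else it is worth confirming that passwordless login actually works from the control node. A minimal check, assuming the OSD hosts are in an inventory group named osds as in the playbook later in this article:

# ansible osds -m ping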

Prepare the shell script for partitioning

The disk operated on here is /dev/vdd. If the device name is the same on every node, the loop below can be used directly.

cat parted_osd.sh
#!/bin/bash
PATH=/bin:/sbin:/usr/bin:/usr/sbin
export PATH
## Split the DB disk into two partitions; 99+$i maps to the ASCII character 'd', i.e. /dev/vdd
i=1
while [ $i -lt 2 ]
do
    j=`echo $i|awk '{printf "%c",99+$i}'`
    echo "Start build : /dev/vd${j}"
    # Drive parted interactively via expect, answering its confirmation prompts
    /usr/bin/expect<<EOF
spawn parted /dev/vd$j
send "mklabel gpt\r"
expect {
    "*Ignore/Cancel?" { send "Cancel\r"; exp_continue }
    "*Yes/No" { send "Yes\r"; exp_continue }
}
send "mkpart primary 0% 50%\r"
send "mkpart primary 51% 100%\r"
send "quit\r"
expect eof
EOF
    printf $j
    i=$(($i+1))
done
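
After the script has run, the resulting layout can be spot-checked on all nodes from the control node. A quick sketch using an ad-hoc command (again assuming the osds inventory group):

# ansible osds -m shell -a "lsblk /dev/vdd"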

Prepare the pv/vg/lv script

Note: the LV sizes used here must be chosen according to the actual size of each VG (see the check after the script).

cat pv_vg_lv.sh
#!/bin/bash
PATH=/bin:/sbin:/usr/bin:/usr/sbin
export PATH
# One PV and one VG per data disk
pvcreate /dev/vdb
pvcreate /dev/vdc
vgcreate data_vg1 /dev/vdb
vgcreate data_vg2 /dev/vdc
# One VG per DB partition
vgcreate db_vg1 /dev/vdd1
vgcreate db_vg2 /dev/vdd2
# LV sizes must fit inside the corresponding VG; adjust to your disk sizes
lvcreate -n data_lv1 -L 499.99g data_vg1
lvcreate -n data_lv2 -L 499.99g data_vg2
lvcreate -n db_lv1 -L 7.99g db_vg1
lvcreate -n db_lv2 -L 7.83g db_vg2
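
If you prefer not to hard-code the sizes, the free space in each VG can be checked first, or the LV can simply take all free extents. A sketch using standard LVM commands; the -l 100%FREE form is an alternative to the fixed -L sizes above, not what the script itself does:

# vgs data_vg1 data_vg2 db_vg1 db_vg2
# lvcreate -n data_lv1 -l 100%FREE data_vg1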

Prepare the yml file

cat ceph_disk_playbook.yml
---
- hosts: osds
  remote_user: root
  gather_facts: false
  tasks:
    - name: Install dependencies
      yum: name={{ item }} state=present
      with_items:
        - expect
        - lvm2
    - name: Transfer the parted_osd.sh script
      copy: src=parted_osd.sh dest=/root/parted_osd.sh mode=0777
    - name: Execute the parted_osd.sh script
      command: sh /root/parted_osd.sh
      register: command_result
      failed_when: "'Error' in command_result.stderr"
    - name: Transfer the pv_vg_lv.sh script
      copy: src=pv_vg_lv.sh dest=/root/pv_vg_lv.sh mode=0777
    - name: Execute the pv_vg_lv.sh script
      command: sh /root/pv_vg_lv.sh
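
Before running the playbook against the OSD nodes, its YAML can be validated from the control node:

# ansible-playbook --syntax-check ceph_disk_playbook.yml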

Ansible configuration

cat /etc/ansible/hosts
...
[osds]
cephluminous-00[1:3]
...
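
To confirm the group expands to the expected three nodes:

# ansible osds --list-hosts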

Run the playbook

# ansible-playbook ceph_disk_playbook.yml
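
Once the playbook completes, the logical volumes on every node can be verified with an ad-hoc command, for example:

# ansible osds -m shell -a "lvs"

These LVs are what the subsequent ceph-ansible OSD configuration will reference.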