# Common Ceph Commands

### Check the number of PGs on a single OSD

Count the PGs on OSD 0:
ceph pg ls-by-osd 0 | wc -l
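
To get the same count for every OSD at once, a small loop over ceph osd ls works; this is only a sketch, and note that on releases where ceph pg ls-by-osd prints a header line the raw wc -l count is off by one:
for osd in $(ceph osd ls); do
    echo -n "osd.$osd: "
    ceph pg ls-by-osd "$osd" | wc -l
done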

### Locate the objects behind an RBD block device

# rbd info vms/f27113b2-8d0e-401e-8301-78f2c3f4b033_disk    ## show this image's info in the vms pool
rbd image 'f27113b2-8d0e-401e-8301-78f2c3f4b033_disk':
    size 200 GB in 51200 objects
    order 22 (4096 kB objects)
    block_name_prefix: rbd_data.5e5927428e906
    format: 2
    features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
    flags:
    parent: images/63bc46bf-d92e-4fac-92f8-14ded42d12d4@snap
    overlap: 10240 MB
# rados ls -p vms|grep rbd_data.5e5927428e906
rbd_data.5e5927428e906.0000000000008800
rbd_data.5e5927428e906.0000000000000644
rbd_data.5e5927428e906.0000000000009e00
rbd_data.5e5927428e906.000000000000026f
rbd_data.5e5927428e906.0000000000000240
rbd_data.5e5927428e906.0000000000000836
rbd_data.5e5927428e906.0000000000000670
rbd_data.5e5927428e906.0000000000007000
rbd_data.5e5927428e906.000000000000043c
....
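
To see which PG and OSD set each of these objects lands on, the names from rados ls can be fed to ceph osd map; a minimal sketch, reusing the vms pool and the block_name_prefix from the example above:
for obj in $(rados -p vms ls | grep rbd_data.5e5927428e906); do
    ceph osd map vms "$obj"
done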

### Find where an image is stored

# rbd info vms/ff06eff2-ccb0-4af7-b67d-bee196ff3c8f_disk
rbd image 'ff06eff2-ccb0-4af7-b67d-bee196ff3c8f_disk':
    size 1024 MB in 256 objects
    order 22 (4096 kB objects)
    block_name_prefix: rbd_data.7f6dac1a36ade9
    format: 2
    features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
    flags:
    parent: images/159ac013-5005-40e0-8e35-8d289a45716c@snap
    overlap: 40162 kB
# ceph osd map vms rbd_data.7f6dac1a36ade9
osdmap e119306 pool 'vms' (1) object 'rbd_data.7f6dac1a36ade9' -> pg 1.51503ea2 (1.2a2) -> up ([1,21,32], p1) acting ([1,21,32], p1)

From this output, the queried name maps to PG 1.2a2, stored on OSDs 1, 21 and 32, with osd.1 as the primary. Note that ceph osd map simply computes a placement for whatever name it is given, and an RBD image consists of many rbd_data objects that each map to their own PG, so the image as a whole is spread across many PGs and OSDs.
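
To find out which host each of those OSDs runs on, ceph osd find prints the OSD's address and CRUSH location (a sketch; the exact output format depends on the release):
for id in 1 21 32; do
    ceph osd find "$id"
done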

### Check the last deep-scrub time of every PG

ceph pg dump pgs|awk -F "\t" '{print $1,$21}'
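
Column 21 happens to be DEEP_SCRUB_STAMP in the release used here, but the column position moves between Ceph versions, so picking the field by its header name is safer. A sketch that still assumes the tab-separated plain output relied on above:
ceph pg dump pgs 2>/dev/null | awk -F "\t" '
    NR==1 { for (i = 1; i <= NF; i++) if (tolower($i) == "deep_scrub_stamp") col = i }
    NR>1 && col { print $1, $col }'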

### Change Ceph parameters at runtime

ceph tell 'osd.*' injectargs "--osd_deep_scrub_randomize_ratio 0.01"
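
To confirm the new value actually took effect, query the OSD's admin socket on the node that hosts it (osd.0 here is only an example). Values injected this way are not persistent; put them in ceph.conf, or in the central config database on newer releases, to survive a daemon restart.
ceph daemon osd.0 config get osd_deep_scrub_randomize_ratio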

### Benchmark the raw write throughput of a single OSD

# ceph tell osd.0 bench
{
"bytes_written": 1073741824,
"blocksize": 4194304,
"bytes_per_sec": 179300661
}
表示每秒170M的速度
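
The total bytes and block size can be passed as positional arguments when the defaults (1 GiB written in 4 MiB blocks) are not what you want; for example, 256 MiB in 1 MiB blocks (illustrative values only; recent releases put caps on very small block sizes):
ceph tell osd.0 bench 268435456 1048576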

### rados write benchmark

Write data into the rbd pool for 5 seconds; the benchmark objects are removed automatically afterwards.
# rados -p rbd bench 5 write
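
For a read benchmark the write phase has to keep its objects around, so run it with --no-cleanup, follow with a sequential-read pass, and remove the benchmark objects manually at the end (a sketch against the same rbd pool):
rados -p rbd bench 5 write --no-cleanup
rados -p rbd bench 5 seq
rados -p rbd cleanup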

### Query a PG

Show detailed information about PG 0.1:
# ceph pg 0.1 query
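
If only the up/acting OSD set is needed rather than the full query dump, ceph pg map gives a one-line answer:
ceph pg map 0.1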

### Find an object's location

# rados -p rbd ls
rbd_object_map.10482ae8944a
rbd_id.test3
rbd_object_map.10452ae8944a
rbd_object_map.104074b0dc51
benchmarkzp0
# rados -p rbd ls|grep benchmarkzp0
benchmarkzp0
# ceph osd map rbd benchmarkzp0
osdmap e61 pool 'rbd' (0) object 'benchmarkzp0' -> pg 0.744bf467 (0.27) -> up ([2,0,1], p2) acting ([2,0,1], p2)
The object benchmarkzp0 lives in PG 0.27 on OSDs [2,0,1], with osd.2 as the primary.
Run ceph osd tree to see which host osd.2 is on, then log in to that host; on a FileStore OSD the object is kept as a file under /var/lib/ceph/osd/ceph-2/current/0.27_head:
total 8.0M
-rw-r--r-- 1 ceph ceph 8.0M Apr 8 14:58 benchmarkzp0__head_744BF467__0
-rw-r--r-- 1 ceph ceph 0 Sep 1 2017 __head_00000027__0
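
On a FileStore OSD a quick way to locate the file backing a given object is a plain find under the PG directory; this layout does not exist on BlueStore, where objects are not visible as regular files.
find /var/lib/ceph/osd/ceph-2/current/0.27_head -name 'benchmarkzp0*' -ls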

### Get the PG-to-OSD distribution

ceph pg dump pgs | awk '{print $1,$15}' | grep -v pg > pg.txt    # write the result to pg.txt
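
Column 15 presumably held the UP set in the author's release; as with the deep-scrub example, check the header of ceph pg dump pgs first, since positions shift between versions. Assuming pg.txt ends up with lines of the form "1.2a2 [1,21,32]", the per-OSD PG count can be tallied with awk (a sketch):
awk '{ gsub(/[][]/, "", $2); n = split($2, osds, ","); for (i = 1; i <= n; i++) count[osds[i]]++ }
     END { for (o in count) print "osd." o, count[o] }' pg.txt | sort -V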

### Adjust an OSD's reweight

# ceph osd reweight 4 1.0    # 4 is the OSD id, 1.0 is the new REWEIGHT value
ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 2.43996 root default
-2 0.48799 host bj-xg-oam-cephvm-001
4 0.48799 osd.4 up 1.00000 1.00000
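
ceph osd reweight only changes the temporary override weight shown in the REWEIGHT column, which is typically used to push data off an overfull OSD. ceph osd df (or ceph osd df tree for a per-host view) is handy for checking per-OSD utilization before and after:
ceph osd df tree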

### Adjust an OSD's CRUSH weight

# ceph osd crush reweight osd.4 0.48799    # change the WEIGHT (CRUSH weight) value
ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 2.43996 root default
-2 0.48799 host bj-xg-oam-cephvm-001
4 0.48799 osd.4 up 1.00000 1.00000
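
Unlike the reweight value above, the CRUSH weight is persistent and is conventionally set to the device capacity in TiB. A sketch for a hypothetical 4 TB disk (about 3.64 TiB, so an example weight of 3.64):
ceph osd crush reweight osd.4 3.64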