OpenStack Scale-Out: Adding a Compute Node

As the business grows, compute resources are getting tight. To keep up with demand, the existing compute capacity needs to be expanded; this post walks through the steps for adding a compute node.

System environment

Physical hardware

Make sure the CPU, memory, disk, and NIC configuration of the new node matches the existing nodes, otherwise live migration may fail later.
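A quick way to compare hardware is to run the same inspection commands on the new node and on an existing compute node and compare the output (a rough sketch, nothing node-specific assumed):

lscpu          # CPU model and core count
free -g        # memory size in GiB
lsblk          # disk layout and sizes
ip link show   # NIC count and names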

Operating system

CentOS 7.2

OpenStack version

Mitaka

Ceph version

Jewel (10.2.3)

OpenStack networking

The openvswitch plugin is used.

Preparation

  1. Unify the repos: copy the existing repo files to the new node so the installed package versions stay consistent
  2. Unify the hosts file: add the new node's hostname and IP to the hosts file and sync it to all controller and compute nodes
  3. Stop NetworkManager and disable it from starting at boot (a rough sketch of these steps follows this list)
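A minimal sketch of the preparation steps, run from an existing node; the hostname compute08, the IP 192.168.0.52, and the node list in the loop are illustrative:

scp /etc/yum.repos.d/*.repo compute08:/etc/yum.repos.d/                           # unify repos
echo "192.168.0.52 compute08" >> /etc/hosts                                       # add the new node to hosts
for h in controller compute01 compute08; do scp /etc/hosts $h:/etc/hosts; done    # sync hosts to all nodes
ssh compute08 "systemctl stop NetworkManager; systemctl disable NetworkManager"   # stop NetworkManager and disable autostart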

Software installation

yum install openstack-neutron-openvswitch openstack-neutron openstack-nova-compute
yum install ceph-common
systemctl enable neutron-openvswitch-agent openvswitch openstack-nova-compute.service libvirtd.service
systemctl start neutron-openvswitch-agent openvswitch openstack-nova-compute.service libvirtd.service

The services may fail to start at this point because they have not been configured yet.
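To confirm the new node picked up the same package versions as the existing nodes (the point of unifying the repos), a quick comparison, with compute01 standing in for any existing compute node:

rpm -qa | grep -E 'openstack-nova|openstack-neutron|ceph-common' | sort > /tmp/new.txt
ssh compute01 "rpm -qa | grep -E 'openstack-nova|openstack-neutron|ceph-common' | sort" > /tmp/old.txt
diff /tmp/old.txt /tmp/new.txt   # no output means the versions match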

Nova configuration (nova.conf)

[DEFAULT]
my_ip=192.168.0.52
firewall_driver=nova.virt.firewall.NoopFirewallDriver
network_api_class=nova.network.neutronv2.api.API
force_snat_range = 0.0.0.0/0
metadata_host=controller # controller node (the metadata service runs on the controller)
dhcp_domain=tang-lei.com
security_group_api=neutron
debug=true
use_syslog=true
image_service=nova.image.glance.GlanceImageService
notification_topics=notifications
[api_database]
connection=mysql+pymysql://nova_api:NOVA_API_PASSWORD@controller/nova_api # controller host, with the nova_api database user's password
[barbican]
[cache]
[cells]
[cinder]
[conductor]
[cors]
[cors.subdomain]
[database]
connection=mysql+pymysql://nova:NOVA_PASSWORD@controller/nova # controller host, with the nova database user's password
[ephemeral_storage_encryption]
[glance]
api_servers=controller:9292 # glance-api runs on the controller node
[guestfs]
[hyperv]
[image_file_url]
[ironic]
[keymgr]
[keystone_authtoken]
[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c338 # libvirt secret UUID; must match the secret defined later
disk_cachemodes="network=writeback"
inject_password = False
inject_key = False
inject_partition = -2
live_migration_flag="VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST,VIR_MIGRATE_TUNNELLED"
virt_type=kvm
live_migration_uri=qemu+tcp://nova@%s/system
live_migration_progress_timeout=0
cpu_mode=custom
cpu_model=kvm64
vif_driver=nova.virt.libvirt.vif.LibvirtGenericVIFDriver
[matchmaker_redis]
[metrics]
[neutron]
url=http://controller:9696 # controller node IP
region_name=RegionOne
ovs_bridge=br-int
extension_sync_interval=600
auth_url=http://controller:35357/v3 # controller node IP
password=PASSWORD # neutron service user's password
project_domain_name=Default
project_name=services
timeout=30
user_domain_name=Default
username=neutron
auth_plugin=v3password
[osapi_v21]
[oslo_concurrency]
[oslo_messaging_amqp]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
rabbit_host=controller # RabbitMQ IP (deployed on the controller node)
rabbit_port=5672
rabbit_hosts=controller:5672
rabbit_use_ssl=False
rabbit_userid=guest
rabbit_password=guest
rabbit_virtual_host=/
rabbit_ha_queues=True
heartbeat_timeout_threshold=0
[oslo_middleware]
[oslo_policy]
[rdp]
[serial_console]
[spice]
[ssl]
[trusted_computing]
[upgrade_levels]
[vmware]
[vnc]
enabled=True
keymap=en-us
vncserver_listen=0.0.0.0 # listen on all local addresses
vncserver_proxyclient_address=compute08 # this compute node's hostname (or IP)
novncproxy_base_url=http://controller/vnc_auto.html # controller node IP
[workarounds]
[xenserver]

Since every environment differs, your configuration may not match this one exactly.
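In practice it is easiest to copy nova.conf from an existing compute node and adjust only the per-node values; a sketch, assuming compute01 is an existing node and crudini is available (otherwise edit the file by hand):

scp compute01:/etc/nova/nova.conf /etc/nova/nova.conf
crudini --set /etc/nova/nova.conf DEFAULT my_ip 192.168.0.52                    # this node's IP
crudini --set /etc/nova/nova.conf vnc vncserver_proxyclient_address compute08   # this node's hostname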

Neutron configuration (neutron.conf)

[DEFAULT]
bind_host = 0.0.0.0
core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
service_plugins = router
debug = true
verbose = True
log_dir = /var/log/neutron
use_syslog = true
[agent]
[cors]
[cors.subdomain]
[database]
[keystone_authtoken]
[matchmaker_redis]
[nova]
[oslo_concurrency]
[oslo_messaging_amqp]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
rabbit_host = controller # controller node IP (RabbitMQ runs on the controller)
rabbit_port = 5672
rabbit_hosts=controller:5672 # controller node IP
rabbit_use_ssl = False
rabbit_userid = guest
rabbit_password = guest
rabbit_ha_queues=True
heartbeat_timeout_threshold = 0
[oslo_policy]
[quotas]
[ssl]

ML2 configuration (openvswitch_agent.ini)

[DEFAULT]
use_syslog = true
[agent]
extensions=qos
[ovs]
bridge_mappings = provider:br-provider
enable_tunneling=False
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

Libvirtd configuration

vim /etc/libvirt/libvirtd.conf
listen_tls = 0 # uncomment
listen_tcp = 1 # uncomment
auth_tcp = "none" # set to none
vim /etc/sysconfig/libvirtd
LIBVIRTD_ARGS="--listen" # uncomment; without this, libvirtd will not listen on TCP and VM live migration will fail

Restart services

systemctl restart neutron-openvswitch-agent openvswitch openstack-nova-compute.service libvirtd.service
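After the restart it is worth confirming that libvirtd is actually listening on TCP (the default port is 16509) and that the controller already sees the new nova-compute service; nova service-list is run on the controller:

ss -tlnp | grep libvirtd   # should show libvirtd listening on *:16509
nova service-list          # on the controller: nova-compute on the new node should show state "up"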

Add the bridge and port

ovs-vsctl add-br br-provider # must match the bridge_mappings setting in openvswitch_agent.ini
ovs-vsctl add-port br-provider em2 # bind the physical NIC
systemctl restart openvswitch
systemctl restart neutron-openvswitch-agent
Wait for initialization; when ovs-vsctl show lists a Port phy-br-provider, the setup succeeded.

NIC configuration

The br-provider bridge above is bound to the em2 NIC, so configure em2 as follows:
vim /etc/sysconfig/network-scripts/ifcfg-em2:
TYPE=Ethernet
BOOTPROTO=none
NAME=em2
DEVICE=em2
ONBOOT=yes
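A quick check that the interface is up and attached to the bridge, and that the OVS agent registered; em2 and br-provider match the configuration above, and neutron agent-list is run on the controller:

ifup em2                           # or: systemctl restart network
ovs-vsctl list-ports br-provider   # should list em2
neutron agent-list                 # on the controller: the new node's Open vSwitch agent should be alive (:-))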

Copy the Ceph configuration

1. On the Ceph cluster admin node, run (or copy the files from another compute node):
ceph auth get-or-create client.cinder | ssh {your-nova-compute-node} sudo tee /etc/ceph/ceph.client.cinder.keyring
ceph auth get-key client.cinder | ssh {your-nova-compute-node} tee ~/client.cinder.key
2. On an existing compute node, run:
virsh secret-list # get the uuid
3. On the new compute node, run:
cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
<uuid>UUID obtained in step 2</uuid>
<usage type='ceph'>
<name>client.cinder secret</name>
</usage>
</secret>
EOF
4. virsh secret-define --file secret.xml
virsh secret-set-value --secret UUID-obtained-in-step-2 --base64 $(cat client.cinder.key) && rm client.cinder.key secret.xml
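To verify that the libvirt secret and the Ceph client credentials work on the new node (assuming /etc/ceph/ceph.conf has also been copied over):

virsh secret-list           # the cinder secret UUID should now be listed
rbd -p vms --id cinder ls   # should list RBD images in the vms pool without errors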

Nova user login

Edit /etc/passwd and change the nova entry to:
nova:x:162:162:OpenStack Nova Daemons:/var/lib/nova:/bin/bash
su - nova
ssh-keygen # generate a key pair, or copy the keys from the nova user on another compute node; otherwise live migration will fail
vim .ssh/config:
Host *
StrictHostKeyChecking no
UserKnownHostsFile=/dev/null
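Finally, a sanity check that the pieces needed for live migration are in place; compute01 and the instance ID are placeholders, and the check assumes the nova users on all compute nodes share the same key pair and authorized_keys:

su - nova -c "ssh nova@compute01 hostname"    # should print compute01 with no password prompt
nova live-migration <instance-id> compute08   # from an admin client: migrate a test instance to the new node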