Following on from the previous post, this one continues with the remaining parts of the setup.
1 Virtual machines
1.1 Where instances live
Virtual machine instances created by OpenStack are stored under /var/lib/nova/instances.
The instance IDs can be listed as follows:
[root@linux-node2 ~]# nova list
+--------------------------------------+---------------+--------+------------+-------------+--------------------+
| ID                                   | Name          | Status | Task State | Power State | Networks           |
+--------------------------------------+---------------+--------+------------+-------------+--------------------+
| 980fd600-a4e3-43c6-93a6-0f9dec3cc020 | kvm-server001 | ACTIVE | -          | Running     | flat=192.168.1.110 |
| e7e05369-910a-4dcf-8958-ee2b49d06135 | kvm-server002 | ACTIVE | -          | Running     | flat=192.168.1.111 |
| 3640ca6f-67d7-47ac-86e2-11f4a45cb705 | kvm-server003 | ACTIVE | -          | Running     | flat=192.168.1.112 |
| 8591baa5-88d4-401f-a982-d59dc2d14f8c | kvm-server004 | ACTIVE | -          | Running     | flat=192.168.1.113 |
+--------------------------------------+---------------+--------+------------+-------------+--------------------+
[root@linux-node2 ~]# cd /var/lib/nova/instances/
[root@linux-node2 instances]# ll
total 8
drwxr-xr-x. 2 nova nova   85 Aug 30 17:16 3640ca6f-67d7-47ac-86e2-11f4a45cb705    #instance ID
drwxr-xr-x. 2 nova nova   85 Aug 30 17:17 8591baa5-88d4-401f-a982-d59dc2d14f8c
drwxr-xr-x. 2 nova nova   85 Aug 30 17:15 980fd600-a4e3-43c6-93a6-0f9dec3cc020
drwxr-xr-x. 2 nova nova   69 Aug 30 17:15 _base
-rw-r--r--. 1 nova nova   39 Aug 30 17:17 compute_nodes    #compute node information
drwxr-xr-x. 2 nova nova   85 Aug 30 17:15 e7e05369-910a-4dcf-8958-ee2b49d06135
drwxr-xr-x. 2 nova nova 4096 Aug 30 17:15 locks             #lock files
[root@linux-node2 instances]# cd 3640ca6f-67d7-47ac-86e2-11f4a45cb705/
[root@linux-node2 3640ca6f-67d7-47ac-86e2-11f4a45cb705]# ll
total 6380
-rw-rw----. 1 qemu qemu   20856 Aug 30 17:17 console.log    #console output (what VNC shows)
-rw-r--r--. 1 qemu qemu 6356992 Aug 30 17:43 disk           #virtual disk (not the full image; it has a backing file)
-rw-r--r--. 1 nova nova     162 Aug 30 17:16 disk.info      #disk details
-rw-r--r--. 1 qemu qemu  197120 Aug 30 17:16 disk.swap
-rw-r--r--. 1 nova nova    2910 Aug 30 17:16 libvirt.xml    #libvirt XML config; it is regenerated each time the instance starts, so editing it by hand has no effect
[root@linux-node2 3640ca6f-67d7-47ac-86e2-11f4a45cb705]# file disk
disk: QEMU QCOW Image (v3), has backing file (path /var/lib/nova/instances/_base/378396c387dd437ec61d59627fb3fa9a6), 10737418240 bytes    #backing file of disk
[root@openstack-server 3640ca6f-67d7-47ac-86e2-11f4a45cb705]# qemu-img info disk
image: disk
file format: qcow2
virtual size: 10G (10737418240 bytes)
disk size: 6.1M
cluster_size: 65536
backing file: /var/lib/nova/instances/_base/378396c387dd437ec61d59627fb3fa9a67f857de
Format specific information:
    compat: 1.1
    lazy refcounts: false
The disk file is copy-on-write: the backing file never changes, blocks modified by the instance are written to the small disk file (6.1M here), and unchanged data stays in the backing file. This keeps per-instance disk usage small.
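The same copy-on-write layering can be reproduced by hand with qemu-img; this is only a rough sketch, and base.qcow2/overlay.qcow2 are hypothetical file names used for illustration:
# create an overlay image on top of an existing base image
[root@linux-node2 ~]# qemu-img create -f qcow2 -o backing_file=/tmp/base.qcow2 /tmp/overlay.qcow2
# the overlay starts out tiny; writes land in the overlay, reads of untouched blocks
# fall through to base.qcow2, exactly like the instance's disk and its _base file
[root@linux-node2 ~]# qemu-img info /tmp/overlay.qcow2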
2 Installing and configuring Horizon (the dashboard web UI)
This was already configured in http://www.cnblogs.com/kevingrace/p/5707003.html, but the steps are worth repeating here. The dashboard talks to the other services through their APIs.
2.1 Installing and configuring the dashboard
1. Install the package
[root@linux-node1 ~]# yum install -y openstack-dashboard
2. Edit the configuration file
[root@linux-node1 ~]# vim /etc/openstack-dashboard/local_settings
OPENSTACK_HOST = "192.168.1.17"               #change to the address of the keystone host
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"      #default role
ALLOWED_HOSTS = ['*']                         #allow access from any host
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '192.168.1.17:11211',     #connect to memcached
    }
}
#CACHES = {
#    'default': {
#        'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
#    }
#}
TIME_ZONE = "Asia/Shanghai"                   #set the time zone
Restart the httpd service:
[root@linux-node1 ~]# systemctl restart httpd
Log in to the dashboard from a browser:
http://58.68.250.17/dashboard/
Log in as demo or admin (the administrator) with the corresponding password.
3 VM creation workflow (very important)
Phase 1:
1. Through the dashboard or the CLI, the user sends a username and password to Keystone for authentication; on success Keystone returns OS_TOKEN (a token).
2. The dashboard or CLI calls nova-api: "I want to create a virtual machine."
3. nova-api verifies the token with Keystone.
Phase 2: interaction among the nova components
4. nova-api writes the request into the nova database.
5-6. nova-api sends the message to nova-scheduler through the message queue.
7. nova-scheduler receives the message, talks to the database, and makes the scheduling decision.
8. nova-scheduler sends the message to nova-compute through the message queue.
9-11. nova-compute talks to nova-conductor through the message queue, and nova-conductor talks to the database on its behalf to fetch the relevant information (the diagram is slightly off here); nova-conductor is the component dedicated to database access.
Phase 3:
12. nova-compute calls the Glance API to fetch the image.
13. Glance authenticates with Keystone, then hands the image to nova-compute.
14. nova-compute asks Neutron for the network.
15. Neutron authenticates with Keystone, then provides the network to nova-compute.
16-17. The remaining services work the same way.
Phase 4:
nova-compute calls KVM through libvirt to create the virtual machine.
18. nova-compute talks to the underlying hypervisor: when KVM is used, the VM is created through libvirt. During creation nova-api keeps polling the database to check the VM's status.
*************************************************************************************************
Details:
The first VM created on a new compute node is slow, because Glance first has to upload the image to the compute node (into the _base directory); only then is the VM created.
[root@linux-node2 _base]# pwd
/var/lib/nova/instances/_base
[root@openstack-server _base]# ll
total 10485764
-rw-r--r--. 1 nova qemu 10737418240 Aug 30 17:57 378396c387dd437ec61d59627fb3fa9a67f857de
-rw-r--r--. 1 nova qemu  1048576000 Aug 30 17:57 swap_1000
Once the first VM has been created on a node, creating further VMs there is much faster.
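For quick reference, an instance can also be booted from the CLI instead of the dashboard; this is only a rough sketch — the flavor, image and network below are placeholders for whatever actually exists in your environment:
[root@linux-node1 ~]# source admin-openrc.sh
# list the available flavors, images and networks first
[root@linux-node1 ~]# nova flavor-list
[root@linux-node1 ~]# nova image-list
[root@linux-node1 ~]# neutron net-list
# boot an instance (names and <net-id> are placeholders)
[root@linux-node1 ~]# nova boot --flavor m1.small --image cirros --nic net-id=<net-id> kvm-server005
[root@linux-node1 ~]# nova list    #watch the status go from BUILD to ACTIVE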
For the actual VM creation steps, see:
************************************************************************************************
4 Cinder block storage service
4.1 Types of storage
1. Block storage: disks
2. File storage: NFS
3. Object storage
4.2 About Cinder
Cinder provides cloud disks (volumes). Normally cinder-api and cinder-scheduler are installed on the control node, and cinder-volume is installed on the storage node.
4.3 Configuring Cinder on the control node
1. Install the packages
On the control node:
[root@linux-node1 ~]# yum install -y openstack-cinder python-cinderclient
On the compute node:
[root@linux-node2 ~]# yum install -y openstack-cinder python-cinderclient
2. Create the cinder database
This was already done in the previous post.
3. Edit the configuration file
[root@linux-node1 ~]# cat /etc/cinder/cinder.conf|grep -v "^#"|grep -v "^$"
[DEFAULT]
glance_host = 192.168.1.17
auth_strategy = keystone
rpc_backend = rabbit
[BRCD_FABRIC_EXAMPLE]
[CISCO_FABRIC_EXAMPLE]
[cors]
[cors.subdomain]
[database]
connection = mysql://cinder:cinder@192.168.1.17/cinder
[fc-zone-manager]
[keymgr]
[keystone_authtoken]
auth_uri = http://192.168.1.17:5000
auth_url = http://192.168.1.17:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = cinder
[matchmaker_redis]
[matchmaker_ring]
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
[oslo_messaging_amqp]
[oslo_messaging_qpid]
[oslo_messaging_rabbit]
rabbit_host = 192.168.1.17
rabbit_port = 5672
rabbit_userid = openstack
rabbit_password = openstack
[oslo_middleware]
[oslo_policy]
[oslo_reports]
[profiler]
Add the following to the nova configuration file:
[root@linux-node1 ~]# vim /etc/nova/nova.conf
os_region_name=RegionOne      #add this under the [cinder] section
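To be explicit about where the option goes, the relevant part of nova.conf should end up looking roughly like this:
[cinder]
os_region_name = RegionOne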
4. Sync the database
[root@linux-node1 ~]# su -s /bin/sh -c "cinder-manage db sync" cinder
..................
2016-08-30 18:27:20.204 67111 INFO migrate.versioning.api [-] done
2016-08-30 18:27:20.204 67111 INFO migrate.versioning.api [-] 59 -> 60...
2016-08-30 18:27:20.208 67111 INFO migrate.versioning.api [-] done
5. Create the keystone user
[root@linux-node1 ~]# cd /usr/local/src/
[root@linux-node1 src]# source admin-openrc.sh
[root@linux-node1 src]# openstack user create --domain default --password-prompt cinder
User Password:             #I used "cinder" here
Repeat User Password:
+-----------+----------------------------------+
| Field     | Value                            |
+-----------+----------------------------------+
| domain_id | default                          |
| enabled   | True                             |
| id        | 955a2e684bed4617880942acd69e1073 |
| name      | cinder                           |
+-----------+----------------------------------+
[root@openstack-server src]# openstack role add --project service --user cinder admin
6. Start the services
[root@linux-node1 ~]# systemctl restart openstack-nova-api.service
[root@linux-node1 ~]# systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
[root@linux-node1 ~]# systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
7. Create and register the service in keystone
Both v1 and v2 must be registered.
[root@linux-node1 src]# source admin-openrc.sh
[root@linux-node1 src]# openstack service create --name cinder --description "OpenStack Block Storage" volume
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Block Storage          |
| enabled     | True                             |
| id          | 7626bd9be54a444589ae9f8f8d29dc7b |
| name        | cinder                           |
| type        | volume                           |
+-------------+----------------------------------+
[root@linux-node1 src]# openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Block Storage          |
| enabled     | True                             |
| id          | 5680a0ce912b484db88378027b1f6863 |
| name        | cinderv2                         |
| type        | volumev2                         |
+-------------+----------------------------------+
[root@linux-node1 src]# openstack endpoint create --region RegionOne volume public http://192.168.1.17:8776/v1/%\(tenant_id\)s
+--------------+-------------------------------------------+
| Field        | Value                                     |
+--------------+-------------------------------------------+
| enabled      | True                                      |
| id           | 10de5ed237d54452817e19fd65233ae6          |
| interface    | public                                    |
| region       | RegionOne                                 |
| region_id    | RegionOne                                 |
| service_id   | 7626bd9be54a444589ae9f8f8d29dc7b          |
| service_name | cinder                                    |
| service_type | volume                                    |
| url          | http://192.168.1.17:8776/v1/%(tenant_id)s |
+--------------+-------------------------------------------+
[root@linux-node1 src]# openstack endpoint create --region RegionOne volume internal http://192.168.1.17:8776/v1/%\(tenant_id\)s
+--------------+-------------------------------------------+
| Field        | Value                                     |
+--------------+-------------------------------------------+
| enabled      | True                                      |
| id           | f706552cfb40471abf5d16667fc5d629          |
| interface    | internal                                  |
| region       | RegionOne                                 |
| region_id    | RegionOne                                 |
| service_id   | 7626bd9be54a444589ae9f8f8d29dc7b          |
| service_name | cinder                                    |
| service_type | volume                                    |
| url          | http://192.168.1.17:8776/v1/%(tenant_id)s |
+--------------+-------------------------------------------+
[root@linux-node1 src]# openstack endpoint create --region RegionOne volume admin http://192.168.1.17:8776/v1/%\(tenant_id\)s
+--------------+-------------------------------------------+
| Field        | Value                                     |
+--------------+-------------------------------------------+
| enabled      | True                                      |
| id           | c9dfa19aca3c43b5b0cf2fe7d393efce          |
| interface    | admin                                     |
| region       | RegionOne                                 |
| region_id    | RegionOne                                 |
| service_id   | 7626bd9be54a444589ae9f8f8d29dc7b          |
| service_name | cinder                                    |
| service_type | volume                                    |
| url          | http://192.168.1.17:8776/v1/%(tenant_id)s |
+--------------+-------------------------------------------+
[root@linux-node1 src]# openstack endpoint create --region RegionOne volumev2 public http://192.168.1.17:8776/v2/%\(tenant_id\)s
+--------------+-------------------------------------------+
| Field        | Value                                     |
+--------------+-------------------------------------------+
| enabled      | True                                      |
| id           | 9ac83d0fab134f889e972e4e7680b0e6          |
| interface    | public                                    |
| region       | RegionOne                                 |
| region_id    | RegionOne                                 |
| service_id   | 5680a0ce912b484db88378027b1f6863          |
| service_name | cinderv2                                  |
| service_type | volumev2                                  |
| url          | http://192.168.1.17:8776/v2/%(tenant_id)s |
+--------------+-------------------------------------------+
[root@linux-node1 src]# openstack endpoint create --region RegionOne volumev2 internal http://192.168.1.17:8776/v2/%\(tenant_id\)s
+--------------+-------------------------------------------+
| Field        | Value                                     |
+--------------+-------------------------------------------+
| enabled      | True                                      |
| id           | 9d18eac0868b4c49ae8f6198a029d7e0          |
| interface    | internal                                  |
| region       | RegionOne                                 |
| region_id    | RegionOne                                 |
| service_id   | 5680a0ce912b484db88378027b1f6863          |
| service_name | cinderv2                                  |
| service_type | volumev2                                  |
| url          | http://192.168.1.17:8776/v2/%(tenant_id)s |
+--------------+-------------------------------------------+
[root@linux-node1 src]# openstack endpoint create --region RegionOne volumev2 admin http://192.168.1.17:8776/v2/%\(tenant_id\)s
+--------------+-------------------------------------------+
| Field        | Value                                     |
+--------------+-------------------------------------------+
| enabled      | True                                      |
| id           | 68c93bd6cd0f4f5ca6d5a048acbddc91          |
| interface    | admin                                     |
| region       | RegionOne                                 |
| region_id    | RegionOne                                 |
| service_id   | 5680a0ce912b484db88378027b1f6863          |
| service_name | cinderv2                                  |
| service_type | volumev2                                  |
| url          | http://192.168.1.17:8776/v2/%(tenant_id)s |
+--------------+-------------------------------------------+
Check what has been registered:
[root@linux-node1 src]# openstack endpoint list
+----------------------------------+-----------+--------------+--------------+---------+-----------+-------------------------------------------+
| ID                               | Region    | Service Name | Service Type | Enabled | Interface | URL                                       |
+----------------------------------+-----------+--------------+--------------+---------+-----------+-------------------------------------------+
| 02fed35802734518922d0ca2d672f469 | RegionOne | keystone     | identity     | True    | internal  | http://192.168.1.17:5000/v2.0             |
| 10de5ed237d54452817e19fd65233ae6 | RegionOne | cinder       | volume       | True    | public    | http://192.168.1.17:8776/v1/%(tenant_id)s |
| 1a3115941ff54b7499a800c7c43ee92a | RegionOne | nova         | compute      | True    | internal  | http://192.168.1.17:8774/v2/%(tenant_id)s |
| 31fbf72537a14ba7927fe9c7b7d06a65 | RegionOne | glance       | image        | True    | admin     | http://192.168.1.17:9292                  |
| 5278f33a42754c9a8d90937932b8c0b3 | RegionOne | nova         | compute      | True    | admin     | http://192.168.1.17:8774/v2/%(tenant_id)s |
| 52b0a1a700f04773a220ff0e365dea45 | RegionOne | keystone     | identity     | True    | public    | http://192.168.1.17:5000/v2.0             |
| 68c93bd6cd0f4f5ca6d5a048acbddc91 | RegionOne | cinderv2     | volumev2     | True    | admin     | http://192.168.1.17:8776/v2/%(tenant_id)s |
| 88df7df6427d45619df192979219e65c | RegionOne | keystone     | identity     | True    | admin     | http://192.168.1.17:35357/v2.0            |
| 8c4fa7b9a24949c5882949d13d161d36 | RegionOne | nova         | compute      | True    | public    | http://192.168.1.17:8774/v2/%(tenant_id)s |
| 9ac83d0fab134f889e972e4e7680b0e6 | RegionOne | cinderv2     | volumev2     | True    | public    | http://192.168.1.17:8776/v2/%(tenant_id)s |
| 9d18eac0868b4c49ae8f6198a029d7e0 | RegionOne | cinderv2     | volumev2     | True    | internal  | http://192.168.1.17:8776/v2/%(tenant_id)s |
| be788b4aa2ce4251b424a3182d0eea11 | RegionOne | glance       | image        | True    | public    | http://192.168.1.17:9292                  |
| c059a07fa3e141a0a0b7fc2f46ca922c | RegionOne | neutron      | network      | True    | public    | http://192.168.1.17:9696                  |
| c9dfa19aca3c43b5b0cf2fe7d393efce | RegionOne | cinder       | volume       | True    | admin     | http://192.168.1.17:8776/v1/%(tenant_id)s |
| d0052712051a4f04bb59c06e2d5b2a0b | RegionOne | glance       | image        | True    | internal  | http://192.168.1.17:9292                  |
| ea325a8a2e6e4165997b2e24a8948469 | RegionOne | neutron      | network      | True    | internal  | http://192.168.1.17:9696                  |
| f706552cfb40471abf5d16667fc5d629 | RegionOne | cinder       | volume       | True    | internal  | http://192.168.1.17:8776/v1/%(tenant_id)s |
| ffdec11ccf024240931e8ca548876ef0 | RegionOne | neutron      | network      | True    | admin     | http://192.168.1.17:9696                  |
+----------------------------------+-----------+--------------+--------------+---------+-----------+-------------------------------------------+
4.4 Configuring Cinder on the storage node
1. Creating cloud disks over iSCSI
Add a disk to the compute node and create a VG.
[root@linux-node2 ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2       100G   44G   57G  44% /
devtmpfs         10G     0   10G   0% /dev
tmpfs            10G     0   10G   0% /dev/shm
tmpfs            10G   90M   10G   1% /run
tmpfs            10G     0   10G   0% /sys/fs/cgroup
/dev/sda1       197M  127M   71M  65% /boot
tmpfs           6.3G     0  6.3G   0% /run/user/0
/dev/sda5       811G   33M  811G   1% /home
My compute node has no spare disk or free space,
so the plan is to unmount the /home partition above and use it for the cloud disks. Before unmounting it, back up the data under /home.
After /home has been unmounted, recreate the /home directory and copy the backup back into it.
[root@linux-node2 ~]# umount /home
[root@linux-node2 ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2       100G   44G   57G  44% /
devtmpfs         10G     0   10G   0% /dev
tmpfs            10G     0   10G   0% /dev/shm
tmpfs            10G   90M   10G   1% /run
tmpfs            10G     0   10G   0% /sys/fs/cgroup
/dev/sda1       197M  127M   71M  65% /boot
tmpfs           6.3G     0  6.3G   0% /run/user/0
[root@linux-node2 ~]# fdisk -l
Disk /dev/sda: 999.7 GB, 999653638144 bytes, 1952448512 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000b2db8

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048      411647      204800   83  Linux
/dev/sda2          411648   210126847   104857600   83  Linux
/dev/sda3       210126848   252069887    20971520   82  Linux swap / Solaris
/dev/sda4       252069888  1952448511   850189312    5  Extended
/dev/sda5       252071936  1952448511   850188288   83  Linux
So /dev/sda5, freed by unmounting /home, can now be used for LVM.
[root@linux-node2 ~]# vim /etc/lvm/lvm.conf
filter = [ "a/sda5/", "r/.*/"]
Here a means accept the device and r means reject it.
---------------------------------------------------------------------------------------------------------
The /home partition above was not on LVM, so its device name is /dev/sda5 and /etc/lvm/lvm.conf can be set as shown. If /home had been an LVM volume, "df -h" would show a device name such as /dev/mapper/centos-home,
and in that case /etc/lvm/lvm.conf would have to be configured like this instead:
filter = [ "a|^/dev/mapper/centos-home$|", "r|.*/|" ]
--------------------------------------------------------------------------------------------------------
[root@linux-node2 ~]# pvcreate /dev/sda5
WARNING: xfs signature detected on /dev/sda5 at offset 0. Wipe it? [y/n]: y
  Wiping xfs signature on /dev/sda5.
  Physical volume "/dev/sda5" successfully created
[root@linux-node2 ~]# vgcreate cinder-volumes /dev/sda5
  Volume group "cinder-volumes" successfully created
2. Edit the configuration file
[root@linux-node1 ~]# scp /etc/cinder/cinder.conf 192.168.1.8:/etc/cinder/cinder.conf
Then make the following changes:
[root@linux-node2 ~]# vim /etc/cinder/cinder.conf
enabled_backends = lvm      #add this in the [DEFAULT] section
[lvm]                       #add the whole [lvm] section at the bottom of the file
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm
3. Start the services
[root@linux-node2 ~]# systemctl enable openstack-cinder-volume.service target.service
[root@linux-node2 ~]# systemctl start openstack-cinder-volume.service target.service
4.5 Creating a cloud disk
1. Check on the control node. If the clocks are out of sync the services may show up as down, so make sure time is synchronized.
[root@linux-node1 ~]# systemctl restart chronyd
[root@linux-node1 ~]# source admin-openrc.sh
[root@openstack-server ~]# cinder service-list
+------------------+----------------------+------+---------+-------+----------------------------+-----------------+
|      Binary      |         Host         | Zone |  Status | State |         Updated_at         | Disabled Reason |
+------------------+----------------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler |   openstack-server   | nova | enabled |   up  | 2016-08-31T07:50:06.000000 |        -        |
|  cinder-volume   | openstack-server@lvm | nova | enabled |   up  | 2016-08-31T07:50:08.000000 |        -        |
+------------------+----------------------+------+---------+-------+----------------------------+-----------------+
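If either service shows state down, it is worth confirming that chrony really is keeping the nodes in sync; a quick check on each node might look like this:
[root@linux-node1 ~]# systemctl status chronyd
[root@linux-node1 ~]# chronyc sources -v      #the selected time source is marked with ^*
[root@linux-node1 ~]# date; ssh 192.168.1.8 date     #roughly compare the two nodes' clocks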
--------------------------------------------------------
At this point, log out of the OpenStack dashboard and log back in; a "Volumes" entry now appears under "Compute" in the left-hand menu.
--------------------------------------------------------
2. Create a cloud disk from the dashboard.
(Note: you can take a snapshot of an existing VM — once the snapshot is done, the VM that was snapshotted is shut down and has to be started again manually — and then use the snapshot to create/boot new VMs.)
(Note: a VM created from a snapshot has no IP by default and needs some adjustment; see the other post on fixing cloned VMs in webvirtmgr:)
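The dashboard steps above can also be done from the command line; a rough sketch, where the volume name, size and instance are just examples for this environment:
[root@linux-node1 ~]# source admin-openrc.sh
# create a 50G volume
[root@linux-node1 ~]# cinder create --display-name data-vol01 50
[root@linux-node1 ~]# cinder list             #wait until the status is "available"
# attach it to an instance; "auto" lets nova pick the device name (e.g. /dev/vdc)
[root@linux-node1 ~]# nova volume-attach kvm-server001 <volume-id> auto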
At this point the new volume can be seen on the compute node:
[root@linux-node2 ~]# lvdisplay
  --- Logical volume ---
  LV Path                /dev/cinder-volumes/volume-efb1d119-e006-41a8-b695-0af9f8d35063
  LV Name                volume-efb1d119-e006-41a8-b695-0af9f8d35063
  VG Name                cinder-volumes
  LV UUID                aYztLC-jljz-esGh-UTco-KxtG-ipce-Oinx9j
  LV Write Access        read/write
  LV Creation host, time openstack-server, 2016-08-31 15:55:05 +0800
  LV Status              available
  # open                 0
  LV Size                50.00 GiB
  Current LE             12800
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0
Now the new cloud disk can be attached to a VM.
Log in to the VM kvm-server001 and the attached cloud disk is visible; once formatted and mounted it can be used directly.
[root@kvm-server001 ~]# fdisk -l
Disk /dev/vda: 10.7 GB, 10737418240 bytes
16 heads, 63 sectors/track, 20805 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00046e27
..............
Disk /dev/vdc: 53.7 GB, 53687091200 bytes
16 heads, 63 sectors/track, 104025 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Format the attached cloud disk:
[root@kvm-server001 ~]# mkfs.ext4 /dev/vdc
mke2fs 1.41.12 (17-May-2010)
............
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
Create the mount point /data:
[root@kvm-server001 ~]# mkdir /data
Then mount it:
[root@kvm-server001 ~]# mount /dev/vdc /data
[root@kvm-server001 ~]# df -h
Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00  8.2G  737M  7.1G  10% /
tmpfs                            2.9G     0  2.9G   0% /dev/shm
/dev/vda1                        194M   28M  156M  16% /boot
/dev/vdc                          50G  180M   47G   1% /data
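Note that a mount done this way does not survive a reboot of the VM; if the volume should come back automatically, an /etc/fstab entry along these lines can be added (a sketch — using the UUID from blkid is safer than the /dev/vdX name, which can change):
[root@kvm-server001 ~]# blkid /dev/vdc        #get the filesystem UUID
[root@kvm-server001 ~]# vim /etc/fstab
UUID=<uuid-from-blkid>  /data  ext4  defaults  0 0
[root@kvm-server001 ~]# mount -a              #verify the fstab entry mounts cleanly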
----------------------------------A special note----------------------------------------------------------
Since the VM was built with a very small root partition, the attached cloud disk can be turned into an LVM PV and used to grow the root partition (the root partition is itself on LVM). The steps are recorded below:
[root@localhost ~]# fdisk -l
........................
Disk /dev/vdc: 161.1 GB, 161061273600 bytes    #this is the attached cloud disk
16 heads, 63 sectors/track, 312076 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
[root@localhost ~]# df -h
Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00  8.1G  664M  7.0G   9% /     #the VM's root partition, which we can grow by hand via LVM
tmpfs                            2.9G     0  2.9G   0% /dev/shm
/dev/vda1                        190M   37M  143M  21% /boot
First create a new partition on the attached cloud disk:
[root@localhost ~]# fdisk /dev/vdc
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0x3256d3cb.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help): p
Disk /dev/vdc: 161.1 GB, 161061273600 bytes
16 heads, 63 sectors/track, 312076 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x3256d3cb

   Device Boot      Start         End      Blocks   Id  System
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-312076, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-312076, default 312076):
Using default value 312076

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@localhost ~]# fdisk /dev/vdc

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help): p
Disk /dev/vdc: 161.1 GB, 161061273600 bytes
16 heads, 63 sectors/track, 312076 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x3256d3cb

   Device Boot      Start         End      Blocks   Id  System
/dev/vdc1               1      312076   157286272+  83  Linux
Now grow the root partition with LVM:
[root@localhost ~]# pvcreate /dev/vdc1
  Physical volume "/dev/vdc1" successfully created
[root@localhost ~]# lvdisplay
  --- Logical volume ---
  LV Path                /dev/VolGroup00/LogVol01
  LV Name                LogVol01
  VG Name                VolGroup00
  LV UUID                xtykaQ-3ulO-XtF0-BUqB-Pure-LH1n-O2zF1Z
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2016-09-05 22:21:00 -0400
  LV Status              available
  # open                 1
  LV Size                1.50 GiB
  Current LE             48
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0
  --- Logical volume ---
  LV Path                /dev/VolGroup00/LogVol00    #this is the logical volume behind the VM's root partition; this is the one we grow
  LV Name                LogVol00
  VG Name                VolGroup00
  LV UUID                7BW8Wm-4VSt-5GzO-sIew-D1OI-pqLP-eXgM80
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2016-09-05 22:21:00 -0400
  LV Status              available
  # open                 1
  LV Size                8.28 GiB
  Current LE             265
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1
[root@localhost ~]# vgdisplay
  --- Volume group ---
  VG Name               VolGroup00
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  5
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               9.78 GiB
  PE Size               32.00 MiB
  Total PE              313
  Alloc PE / Size       313 / 9.78 GiB
  Free  PE / Size       0 / 0          #VolGroup00 has no free space left, so the VG itself has to be extended first
  VG UUID               tEEreQ-O2HZ-rm9d-vS8Y-VemY-D7uY-qAYdWU
[root@localhost ~]# vgextend VolGroup00 /dev/vdc1      #extend the VG
  Volume group "VolGroup00" successfully extended
[root@localhost ~]# vgdisplay      #check again after extending the VG
  --- Volume group ---
  VG Name               VolGroup00
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  6
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               159.75 GiB
  PE Size               32.00 MiB
  Total PE              5112
  Alloc PE / Size       313 / 9.78 GiB
  Free  PE / Size       4799 / 149.97 GiB    #now there is 149.97G of free space
  VG UUID               tEEreQ-O2HZ-rm9d-vS8Y-VemY-D7uY-qAYdWU
Give all the free space found above to the logical volume /dev/VolGroup00/LogVol00:
[root@localhost ~]# lvextend -l +4799 /dev/VolGroup00/LogVol00
  Size of logical volume VolGroup00/LogVol00 changed from 8.28 GiB (265 extents) to 158.25 GiB (5064 extents).
  Logical volume LogVol00 successfully resized.
After resizing the logical volume, grow the filesystem with resize2fs:
[root@localhost ~]# resize2fs /dev/VolGroup00/LogVol00
resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/VolGroup00/LogVol00 is mounted on /; on-line resizing required
old desc_blocks = 1, new_desc_blocks = 10
Performing an on-line resize of /dev/VolGroup00/LogVol00 to 41484288 (4k) blocks.
The filesystem on /dev/VolGroup00/LogVol00 is now 41484288 blocks long.
Check again: the root partition has been enlarged!
[root@localhost ~]# df -h
Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00  156G  676M  148G   1% /
tmpfs                            2.9G     0  2.9G   0% /dev/shm
/dev/vda1                        190M   37M  143M  21% /boot
--------------------------------------------------------------------------------------------
****************************************************************************************
Attaching a cloud disk is a hot-plug operation.
Note:
A cloud disk discovered inside the VM is formatted and mounted, e.g. under /data. Before deleting a cloud disk it must be detached first — unmounted inside the VM and also detached in the dashboard.
-----------------------------------------------------------------------------------------------------------------------------------
Inside the VM, the attached cloud disk can be set up as an LVM logical volume, so that when it later runs out of space another disk can be added and the LVM grown seamlessly.
As an example, the VM kvm-server001 has a 100G cloud disk attached.
Now partition this 100G disk and set up LVM on it:
[root@kvm-server001 ~]# fdisk -l
Disk /dev/vda: 10.7 GB, 10737418240 bytes
16 heads, 63 sectors/track, 20805 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00046e27
...........................
Disk /dev/vdc: 107.4 GB, 107374182400 bytes
16 heads, 63 sectors/track, 208050 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
First create the partition:
[root@kvm-server001 ~]# fdisk /dev/vdc
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0x4e0d7808.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Command (m for help): p
Disk /dev/vdc: 107.4 GB, 107374182400 bytes
16 heads, 63 sectors/track, 208050 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x4e0d7808

   Device Boot      Start         End      Blocks   Id  System
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-208050, default 1):            #press Enter
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-208050, default 208050):    #press Enter, i.e. use all remaining space for the new partition
Using default value 208050

Command (m for help): p
Disk /dev/vdc: 107.4 GB, 107374182400 bytes
16 heads, 63 sectors/track, 208050 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x4e0d7808

   Device Boot      Start         End      Blocks   Id  System
/dev/vdc1               1      208050   104857168+  83  Linux

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@kvm-server001 ~]# pvcreate /dev/vdc1      #create the PV
  Physical volume "/dev/vdc1" successfully created
[root@kvm-server001 ~]# vgcreate vg0 /dev/vdc1      #create the VG
  Volume group "vg0" successfully created
[root@kvm-server001 ~]# vgdisplay      #check the VG size
  --- Volume group ---
  VG Name               vg0
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               100.00 GiB
  PE Size               4.00 MiB
  Total PE              25599
  Alloc PE / Size       0 / 0
  Free  PE / Size       25599 / 100.00 GiB
  VG UUID               UIsTAe-oUzt-3atO-PVTw-0JUL-7Z8s-XVppIH
[root@kvm-server001 ~]# lvcreate -L +99.99G -n lv0 vg0      #the LV cannot be larger than the VG
  Rounding up size to full physical extent 99.99 GiB
  Logical volume "lv0" created
[root@kvm-server001 ~]# mkfs.ext4 /dev/vg0/lv0      #format the logical volume
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
6553600 inodes, 26212352 blocks
1310617 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
800 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
    4096000, 7962624, 11239424, 20480000, 23887872

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 20 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
[root@kvm-server001 ~]# mkdir /data      #create the mount point
[root@kvm-server001 ~]# mount /dev/vg0/lv0 /data      #mount the LVM volume
[root@kvm-server001 ~]# df -h
Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00  8.2G  842M  7.0G  11% /
tmpfs                            2.9G     0  2.9G   0% /dev/shm
/dev/vda1                        194M   28M  156M  16% /boot
/dev/mapper/vg0-lv0               99G  188M   94G   1% /data
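When /data eventually fills up, the whole point of this LVM layout is that another cloud disk can be attached and the volume grown online. A rough sketch of that future step, assuming the new disk shows up in the VM as /dev/vdd and is partitioned as /dev/vdd1:
[root@kvm-server001 ~]# pvcreate /dev/vdd1
[root@kvm-server001 ~]# vgextend vg0 /dev/vdd1               #add the new PV to vg0
[root@kvm-server001 ~]# lvextend -l +100%FREE /dev/vg0/lv0   #give all free space to lv0
[root@kvm-server001 ~]# resize2fs /dev/vg0/lv0               #grow the ext4 filesystem online
[root@kvm-server001 ~]# df -h /data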
****************************************************************************************
Background:
Because the compute node's internal gateway does not exist, the VMs cannot reach the outside world on their own through the bridged network. To give the VMs network access, some extra manual configuration is needed:
(1) Deploy a squid proxy on the compute node, so outbound requests from the VMs go out through the compute node's squid proxy.
(2) Inbound requests to the VMs are handled with iptables NAT port forwarding on the compute node; web application requests can be forwarded by nginx or haproxy acting as a proxy.
---------------------------------------------------------------------------------------------------------
The following covers an HTTP squid proxy; for HTTPS through squid, see my other post: http://www.cnblogs.com/kevingrace/p/5853199.html
---------------------------------------------------------------------------------------------------------
(1)
1) On the compute node:
Install squid directly with yum:
[root@linux-node2 ~]# yum install squid
After installation, edit squid.conf; back up the file first:
[root@linux-node2 ~]# cd /etc/squid/
[root@linux-node2 squid]# cp squid.conf squid.conf_bak
[root@linux-node2 squid]# vim squid.conf
http_access allow all
http_port 192.168.1.17:3128
cache_dir ufs /var/spool/squid 100 16 256
Then run the following command to test the configuration before starting squid:
[root@linux-node2 squid]# squid -k parse
2016/08/31 16:53:36| Startup: Initializing Authentication Schemes ...
.................
2016/08/31 16:53:36| Initializing https proxy context
Before the first start, or after changing the cache path, the cache directory has to be (re)initialized:
[root@kvm-linux-node2 squid]# squid -z
2016/08/31 16:59:21 kid1| /var/spool/squid exists
2016/08/31 16:59:21 kid1| Making directories in /var/spool/squid/00
................
--------------------------------------------------------------------------------
If you hit the following error:
2016/09/06 15:19:23 kid1| No cache_dir stores are configured.
the fix is:
# vim squid.conf
cache_dir ufs /var/spool/squid 100 16 256      #uncomment this line
# ll /var/spool/squid                          #make sure this directory exists
Then running squid -z again will initialize the cache successfully.
--------------------------------------------------------------------------------
[root@kvm-linux-node2 squid]# systemctl enable squid
Created symlink from /etc/systemd/system/multi-user.target.wants/squid.service to /usr/lib/systemd/system/squid.service.
[root@kvm-server001 squid]# systemctl start squid
[root@kvm-server001 squid]# lsof -i:3128
COMMAND   PID  USER   FD   TYPE  DEVICE SIZE/OFF NODE NAME
squid   62262 squid  16u  IPv4 4275294      0t0  TCP openstack-server:squid (LISTEN)
If the compute node has iptables firewall rules enabled,
(on my CentOS 7.2 system iptables is used, with the default firewalld disabled), then the following line also has to be added to /etc/sysconfig/iptables:
-A INPUT -s 192.168.1.0/24 -p tcp -m state --state NEW -m tcp --dport 3128 -j ACCEPT
My firewall configuration is as follows:
[root@linux-node2 squid]# cat /etc/sysconfig/iptables
# sample configuration for iptables service
# you can edit this manually or use system-config-firewall
# please do not ask us to add additional ports/services to this default configuration
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A INPUT -s 192.168.1.0/24 -p tcp -m state --state NEW -m tcp --dport 3128 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 6080 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 10050 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT
Then restart the iptables service:
[root@linux-node2 ~]# systemctl restart iptables.service      #restart the firewall so the configuration takes effect
[root@linux-node2 ~]# systemctl enable iptables.service       #start the firewall on boot
-----------------------------------------------
2) Squid configuration on the VMs:
The only change needed is to add the following line at the bottom of the system-wide environment file /etc/profile:
[root@kvm-server001 ~]# vim /etc/profile
.......
export http_proxy=http://192.168.1.17:3128
[root@kvm-server001 ~]# source /etc/profile      #make the setting take effect
Test whether the VM can reach the outside world:
[root@kvm-server001 ~]# curl http://www.baidu.com      #outbound access works
[root@kvm-server001 ~]# yum list      #yum works online
[root@kvm-server001 ~]# wget http://my.oschina.net/mingpeng/blog/293744      #online downloads work
With this in place, the VMs' outbound requests are proxied out through squid.
Again, this is an HTTP squid proxy; for HTTPS through squid, see my other post: http://www.cnblogs.com/kevingrace/p/5853199.html
***********************************************
(2)
1) Now for proxying inbound requests to the VMs:
For NAT port forwarding, see my other post:
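The full NAT setup is covered in that article; purely as an illustration, a port-forward on the compute node might look roughly like this (the public port 2222 and the VM address 192.168.1.110 are just examples):
# forward TCP port 2222 on the compute node to SSH (22) on the VM 192.168.1.110
[root@linux-node2 ~]# echo 1 > /proc/sys/net/ipv4/ip_forward      #make sure IP forwarding is enabled
[root@linux-node2 ~]# iptables -t nat -A PREROUTING -p tcp --dport 2222 -j DNAT --to-destination 192.168.1.110:22
[root@linux-node2 ~]# iptables -t nat -A POSTROUTING -d 192.168.1.110 -p tcp --dport 22 -j MASQUERADE
[root@linux-node2 ~]# iptables -t nat -L -n      #verify the rules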
Configure the iptables rules on the compute node (i.e. the VMs' host):
[root@linux-node2 ~]# cat iptables
# sample configuration for iptables service
# you can edit this manually or use system-config-firewall
# please do not ask us to add additional ports/services to this default configuration
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -s 192.168.1.0/24 -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A INPUT -s 192.168.1.0/24 -p tcp -m state --state NEW -m tcp --dport 3128 -j ACCEPT      #open the squid proxy port
-A INPUT -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT        #open the dashboard port
-A INPUT -p tcp -m state --state NEW -m tcp --dport 6080 -j ACCEPT      #open the VNC console port
-A INPUT -p tcp -m state --state NEW -m tcp --dport 15672 -j ACCEPT     #open the RabbitMQ port
-A INPUT -p tcp -m state --state NEW -m tcp --dport 10050 -j ACCEPT
#-A INPUT -j REJECT --reject-with icmp-host-prohibited      #note: these two lines must be commented out! Otherwise the VMs cannot ping each other!
#-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT
--------------------------------------------------------------------------------------------------------------------------------
Explanation:
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
These two rules reject, in the INPUT and FORWARD chains, every packet that does not match any of the rules above them, and send a host prohibited message back to the rejected host. They are part of the default iptables policy; you can delete them and configure a policy that matches your own needs. With them enabled, pings between the host and the VMs are unaffected,
but the VMs can no longer ping each other, because VM-to-VM traffic passes through the host and these two rules block it. Just delete them.
--------------------------------------------------------------------------------------------------------------------------------
Restart the VMs.
With this firewall configuration, the host and the VMs, as well as the VMs among themselves, can all ping each other.
[root@linux-node2 ~]# systemctl restart iptables.service
************************************************************************************************************************
In an OpenStack private cloud, the VMs created on one compute node effectively form a LAN of their own. With the host firewall configured as above, VMs and their host, VMs on the same node, and VMs and other machines on the host's internal subnet can all reach one another, i.e. they can all ping each other.
************************************************************************************************************************
2) Proxying the VMs' web applications
There are two options (deploy nginx or haproxy on the host):
a. Use nginx as a reverse proxy: resolve each domain name to the host's IP, configure a vhost for it in nginx, and forward requests to the VM with proxy_pass.
b. Use haproxy: likewise resolve each domain name to the host's IP, and set up forwarding rules based on the domain name (see the sketch after this list).
Either way, requests for the various domains arriving on the host's port 80 are forwarded to the appropriate VM.
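The nginx variant is shown in full below; for completeness, a minimal haproxy sketch of option b might look roughly like this (the domains and backend VM address/ports are the same examples used in the nginx configs — this is an illustration, not the config actually deployed here):
# /etc/haproxy/haproxy.cfg (fragment)
frontend web_in
    bind *:80
    mode http
    acl host_world hdr(host) -i www.world.com
    acl host_tech  hdr(host) -i www.tech.com
    use_backend world_servers if host_world
    use_backend tech_servers  if host_tech

backend world_servers
    mode http
    server vm01 192.168.1.150:8080 check

backend tech_servers
    mode http
    server vm01 192.168.1.150:8081 check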
For nginx reverse proxying, see these two posts:
*****************************************************************
The idea behind the nginx reverse proxy:
Nginx listens on port 80 on the host and forwards based on the domain name; on the backend VM, the vhosts for the different domains have to listen on different ports. For example:
The proxy configuration for the following two domains on the host (other domains work the same way):
[root@linux-node1 vhosts]# cat www.world.com.conf
upstream 8080 {
    server 192.168.1.150:8080;
}
server {
    listen 80;
    server_name www.world.com;
    location / {
        proxy_store off;
        proxy_redirect off;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;
        proxy_pass http://8080;
    }
}
[root@linux-node1 vhosts]# cat www.tech.com.conf
upstream 8081 {
    server 192.168.1.150:8081;
}
server {
    listen 80;
    server_name www.tech.com;
    location / {
        proxy_store off;
        proxy_redirect off;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;
        proxy_pass http://8081;
    }
}
Both www.world.com and www.tech.com are resolved to the host's public IP; then:
Requests for http://www.world.com are proxied by the host to port 8080 on the backend VM 192.168.1.150, i.e. that domain's vhost on the VM listens on port 8080; requests for http://www.tech.com are proxied to port 8081 on 192.168.1.150, i.e. that domain's vhost listens on port 8081. If the backend VM serves more domains, they are configured in exactly the same way.
Additionally:
It is best to add host mappings on both the proxy server and the backend real server (map each domain to 127.0.0.1 in /etc/hosts); otherwise accessing the domains through the proxy may not work properly.
---------------------------------------------------------------------------------------------
Because the web application proxying on the host needs port 80, and port 80 is already taken by the dashboard, the dashboard has to be moved to another port, e.g. 8080. The changes required are:
1) vim /etc/httpd/conf/httpd.conf      #change port 80 to 8080
Listen 8080
ServerName 192.168.1.8:8080
2) vim /etc/openstack-dashboard/local_settings      #change the port from 80 to 8080 in the following two places
'from_port': '8080',
'to_port': '8080',
3) Add a firewall rule for port 8080:
-A INPUT -p tcp -m state --state NEW -m tcp --dport 8080 -j ACCEPT
Then restart the httpd service:
# systemctl restart httpd
The dashboard URL is then:
http://58.68.250.17:8080/dashboard
---------------------------------------------------------------------------------------------