Thursday, November 16, 2017

Configuring block storage (Cinder) - the storage node part


As we don't have a separate storage node in our OpenStack lab setup, I was forced to use the controller node as the storage node.
  • First, make sure a raw (unpartitioned) disk is available on the node

root@CTRL:~# fdisk -l /dev/sdc

Disk /dev/sdc: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders, total 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/sdc doesn't contain a valid partition table 

  • Create a physical volume on the disk

root@CTRL:~# pvcreate /dev/sdc
  Physical volume "/dev/sdc" successfully created
root@CTRL:~#
  • Create a new volume group using /dev/sdc; this will be used for the Cinder volumes
root@CTRL:~# vgcreate cinder-volumes /dev/sdc
  Volume group "cinder-volumes" successfully created
root@CTRL:~# vgs cinder-volumes
  VG             #PV #LV #SN Attr   VSize  VFree
  cinder-volumes   1   0   0 wz--n- 10.00g 10.00g
root@CTRL:~#


  • Reconfigure LVM to use only the Cinder disk /dev/sdc by applying a device filter. Edit /etc/lvm/lvm.conf as below 


Keep in mind: if your root filesystem is also on LVM, don't forget to accept its device in the filter as well

devices {
...
filter = [ "a/sdc/", "r/.*/" ]
}


  • After modifying, we can verify as below 


root@CTRL:~# grep filter /etc/lvm/lvm.conf |grep -v "#"
    filter = [ "a/sdc/", "r/.*/"] 
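As warned above, a root filesystem on LVM must also be accepted by the filter, or its volume group will not be visible after a reboot. A hedged example, assuming the OS disk is /dev/sda (adjust the device name to your own layout):

devices {
...
filter = [ "a/sda/", "a/sdc/", "r/.*/" ]
}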


  • Now install the storage node components on the controller node (generally this should be a separate node)

root@CTRL:~# apt-get install cinder-volume python-mysqldb
Reading package lists... Done
Building dependency tree
Reading state information... Done
python-mysqldb is already the newest version.
The following extra packages will be installed:
  alembic cinder-common ieee-data libconfig-general-perl libgmp10 libibverbs1
  libjs-jquery libjs-sphinxdoc libjs-underscore librabbitmq1 librdmacm1
  libsgutils2-2 libyaml-0-2 python-alembic python-amqp python-amqplib
  python-anyjson python-babel python-babel-localedata python-barbicanclient
  python-cinder python-concurrent.futures python-crypto python-decorator
  python-dns python-ecdsa python-eventlet python-formencode
  python-glanceclient python-greenlet python-httplib2 python-iso8601
  python-json-patch python-json-pointer python-jsonpatch python-jsonschema
  python-keystoneclient python-keystonemiddleware python-kombu
  python-librabbitmq python-lockfile python-lxml python-mako python-markupsafe
  python-migrate python-mock python-netaddr python-networkx python-novaclient
  python-openid python-oslo.config python-oslo.db python-oslo.i18n


  • Edit the Cinder configuration file (/etc/cinder/cinder.conf) and add the Cinder DB URL in the [database] section 

[database]
connection = mysql://cinder:Onm0bile@CTRL/cinder

  • Configure RabbitMQ 

[DEFAULT]
...
rpc_backend = rabbit
rabbit_host = CTRL
rabbit_password = Onm0bile

  • Configure the identity service 

[DEFAULT]
auth_strategy = keystone
[keystone_authtoken]
auth_uri = http://CTRL:5000/v2.0
identity_uri = http://CTRL:35357
admin_tenant_name = service
admin_user = cinder
admin_password = Onm0bile

  • Set my_ip to the storage node IP (in this case, the controller node)


[DEFAULT]
.....
my_ip = 192.168.24.10
  • Configure the image service

[DEFAULT]
....
glance_host = CTRL

  • Enable verbose mode for troubleshooting purposes 


[DEFAULT]
...
verbose = True
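Putting the fragments above together, the relevant part of /etc/cinder/cinder.conf on this setup ends up looking roughly like this (the passwords and IP are this lab's own values):

[DEFAULT]
rpc_backend = rabbit
rabbit_host = CTRL
rabbit_password = Onm0bile
auth_strategy = keystone
my_ip = 192.168.24.10
glance_host = CTRL
verbose = True

[database]
connection = mysql://cinder:Onm0bile@CTRL/cinder

[keystone_authtoken]
auth_uri = http://CTRL:5000/v2.0
identity_uri = http://CTRL:35357
admin_tenant_name = service
admin_user = cinder
admin_password = Onm0bile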

  • Restart the Cinder volume service and the iSCSI target service 

root@CTRL:~# service tgt restart
tgt stop/waiting
tgt start/running, process 18809
root@CTRL:~# service cinder-volume restart
cinder-volume stop/waiting
cinder-volume start/running, process 14432
root@CTRL:~#

Now we need to verify the cinder services


  • Source the admin.rc to access keystone commands 


root@CTRL:~# cat admin.rc
export OS_USERNAME=admin
export OS_PASSWORD=password
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://CTRL:35357/v2.0
root@CTRL:~# source admin.rc
root@CTRL:~#
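Note that the rc file must be sourced (run in the current shell), not executed as a script, or the exported variables will not persist. A quick sketch using a throwaway copy of the file:

```shell
# Write a throwaway credentials file (same contents as admin.rc above)
cat > /tmp/admin.rc <<'EOF'
export OS_USERNAME=admin
export OS_PASSWORD=password
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://CTRL:35357/v2.0
EOF

# Sourcing loads the exports into the *current* shell
. /tmp/admin.rc
echo "$OS_USERNAME"
# prints: admin
```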

  • Verify the cinder services 

root@CTRL:~# cinder service-list
+------------------+-----------+------+---------+-------+----------------------------+-----------------+
|      Binary      |    Host   | Zone |  Status | State |         Updated_at         | Disabled Reason |
+------------------+-----------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler | CTRL | nova | enabled |   up  | 2017-11-15T18:34:12.000000 |       None      |
|  cinder-volume   |  CTRL | nova | enabled |   up  | 2017-11-15T18:34:17.000000 |       None      |
+------------------+-----------+------+---------+-------+----------------------------+-----------------+
root@CTRL:~#

  • Now source the tuxfixer tenant credentials to create a test volume 

root@CTRL:~# cat tuxfixer.rc
export OS_USERNAME=tuxfixer
export OS_PASSWORD=tux123
export OS_TENANT_NAME=tuxfixer
export OS_AUTH_URL=http://CTRL:35357/v2.0
root@CTRL:~#
root@CTRL:~# source tuxfixer.rc

  • Create 1 GB volume called tux-vol1

root@CTRL:~# cinder create --display-name tux-vol1 1
+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                false                 |
|      created_at     |      2017-11-15T18:36:16.155518      |
| display_description |                 None                 |
|     display_name    |              tux-vol1               |
|      encrypted      |                False                 |
|          id         | 252f87d2-e5b4-326c-889ec-6bbee259bc88 |
|       metadata      |                  {}                 |
|         size        |                  1                   |
|     snapshot_id     |                 None                 |
|     source_volid    |                 None                 |
|        status       |               creating               |
|     volume_type     |                 None                 |
+---------------------+--------------------------------------+
root@CTRL:~#

  • List the newly created volume

root@CTRL:~# cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| 252f87d2-e5b4-326c-889ec-6bbee259bc88 | available |   tux-vol1   |  1   |     None    |  false   |             |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
root@CTRL:~#

  • Check the lvm details using lvs command 

root@CTRL:~# lvs
  LV                                          VG             Attr      LSize Pool Origin Data%  Move Log Copy%  Convert
  volume-252f87d2-e5b4-326c-889ec-6bbee259bc88 cinder-volumes -wi-a---- 1.00g

We can see that the new 1 GB Cinder logical volume is ready for storage use.
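The lvs output above also shows Cinder's LVM naming convention: the backing logical volume is simply volume-<volume id> inside the cinder-volumes VG, which makes it easy to map a Cinder volume back to its LV:

```shell
# The ID reported by `cinder create` / `cinder list` above
VOL_ID=252f87d2-e5b4-326c-889ec-6bbee259bc88

# Cinder names the backing LV "volume-<id>"
LV_NAME="volume-${VOL_ID}"
echo "$LV_NAME"
# prints: volume-252f87d2-e5b4-326c-889ec-6bbee259bc88
```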




