Cinder is the block storage service in an OpenStack deployment. It is designed to consume storage devices, either local storage through LVM or third-party storage arrays, and serve them as volumes to the Nova compute nodes.
In our case we will use the default LVM back end, which will serve volumes to the instances we created earlier. (As we don't have a separate storage node configured, we will use the controller node as the storage node.)
The cinder architecture
Cinder-API - A WSGI-based API that routes and authenticates requests to the block storage service. It exposes the OpenStack Block Storage API, which is consumed through the cinder client (an EC2-compatible API is also supported as an alternative).
Cinder-Scheduler - Schedules incoming requests and routes them to the appropriate volume service according to your configuration.
Cinder-Volume - Manages the back-end storage devices. The typically supported back ends are listed below:
- Ceph RADOS Block Device (RBD)
- Coraid AoE driver configuration
- Dell EqualLogic volume driver
- EMC VMAX iSCSI and FC drivers
- EMC VNX direct driver
- EMC XtremIO OpenStack Block Storage driver guide
- GlusterFS driver
- HDS HNAS iSCSI and NFS driver
- HDS HUS iSCSI driver
- Hitachi storage volume driver
- HP 3PAR Fibre Channel and iSCSI drivers
- HP LeftHand/StoreVirtual driver
- HP MSA Fibre Channel driver
- Huawei storage driver
- IBM GPFS volume driver
- IBM Storwize family and SVC volume driver
- IBM XIV and DS8000 volume driver
- LVM
- NetApp unified driver
- Nexenta drivers
- NFS driver
- ProphetStor Fibre Channel and iSCSI drivers
- Pure Storage volume driver
- Sheepdog driver
- SolidFire
- VMware VMDK driver
- Windows iSCSI volume driver
- XenAPI Storage Manager volume driver
- XenAPINFS
- Zadara
- Oracle ZFSSA iSCSI Driver
Cinder-Backup - Provides backup of cinder volumes to various backup targets.
Cinder workflow
- A volume is created through the cinder create command. This command creates a logical volume (LV) in the volume group (VG) "cinder-volumes" (see the example commands after this list).
- The volume is attached to an instance through the nova volume-attach command. This command creates a unique iSCSI IQN that is exposed to the compute node.
- The compute node, which runs the instance, now has an active iSCSI session and new local storage (usually a /dev/sdX disk).
- Libvirt uses that local storage as storage for the instance. The instance gets a new disk, usually a /dev/vdX disk.
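As an illustration of the first two steps, the commands look roughly like the following sketch; test-vol, the 1 GB size, and the instance and volume IDs are placeholder values.

cinder create --display-name test-vol 1
nova volume-attach INSTANCE_ID VOLUME_ID /dev/vdb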
Moving on to the configuration, there are two parts:
configuration on the controller side and configuration on the storage node side. But as I mentioned earlier, we have no storage node configured separately due to the limitations of my lab, so we will configure both roles on the controller node.
Configuring the controller node for cinder setup
- Log in to the CTRL node as root
- Create the database for the cinder service (sample commands for this and the following keystone steps are shown after the endpoint commands below)
- Grant proper access to the cinder database and set the password for the cinder DB user
- Now we need the admin commands, so source the admin.rc file
- Create the service credentials for cinder using the keystone command. We need to create a cinder user
- Add the admin role to the cinder user
- Create the cinder service entities for both the cinder API v1 and v2
(the Block Storage API currently goes up to version 3; we will only use up to v2)
- Create the Block Storage API endpoints for version 1 and version 2
keystone endpoint-create \
  --service-id $(keystone service-list | awk '/ volume / {print $2}') \
  --publicurl http://CTRL:8776/v1/%\(tenant_id\)s \
  --internalurl http://CTRL:8776/v1/%\(tenant_id\)s \
  --adminurl http://CTRL:8776/v1/%\(tenant_id\)s \
  --region regionOne

keystone endpoint-create \
  --service-id $(keystone service-list | awk '/ volumev2 / {print $2}') \
  --publicurl http://CTRL:8776/v2/%\(tenant_id\)s \
  --internalurl http://CTRL:8776/v2/%\(tenant_id\)s \
  --adminurl http://CTRL:8776/v2/%\(tenant_id\)s \
  --region regionOne
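For reference, here is a minimal sketch of the commands behind the database and keystone steps above, following the Juno-era install guide. CINDER_DBPASS and CINDER_PASS are placeholder passwords, and admin.rc is the admin credentials file mentioned earlier; adjust these to your environment.

# Create the cinder database and grant access (run inside mysql -u root -p):
CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'CINDER_DBPASS';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'CINDER_DBPASS';

# Load the admin credentials and create the cinder user, role and service entities:
source admin.rc
keystone user-create --name cinder --pass CINDER_PASS
keystone user-role-add --user cinder --tenant service --role admin
keystone service-create --name cinder --type volume --description "OpenStack Block Storage"
keystone service-create --name cinderv2 --type volumev2 --description "OpenStack Block Storage"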
- Next we have to install the cinder components
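On an Ubuntu-based controller (an assumption about the distribution; on RHEL/CentOS the equivalent package is openstack-cinder installed with yum), this is typically:

apt-get install cinder-api cinder-scheduler python-cinderclient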
- Edit the /etc/cinder/cinder.conf file and configure as below (a sample of the resulting sections follows this list)
Database section
RabbitMQ configuration
Update the auth_strategy option in the [DEFAULT] section
Update the keystone credentials
Update the my_ip option to the management IP of the controller node
Enable verbose logging
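A minimal sketch of the relevant /etc/cinder/cinder.conf sections, assuming a Juno-style layout; CINDER_DBPASS, RABBIT_PASS, CINDER_PASS, and the management IP 10.0.0.11 are placeholders for your own values.

[DEFAULT]
rpc_backend = rabbit
rabbit_host = CTRL
rabbit_password = RABBIT_PASS
auth_strategy = keystone
my_ip = 10.0.0.11
verbose = True

[database]
connection = mysql://cinder:CINDER_DBPASS@CTRL/cinder

[keystone_authtoken]
auth_uri = http://CTRL:5000/v2.0
identity_uri = http://CTRL:35357
admin_tenant_name = service
admin_user = cinder
admin_password = CINDER_PASS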
Populate the cinder database
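The usual command for this, run as the cinder user:

su -s /bin/sh -c "cinder-manage db sync" cinder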
Restart the cinder services once the database update finishes
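On Ubuntu, for example:

service cinder-scheduler restart
service cinder-api restart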
I will cover the storage part in the next session.