Thursday, October 11, 2018

OpenStack command line administration PART-2 (glance/cinder/manila)



Manage Images (Glance)


The cloud operator assigns roles to users and determines who can upload and manage images; only cloud operators or administrators can upload and manage images.
Images can be uploaded through the glance client or the Image service API.


  • To list the available images 


$ glance image-list
+--------------------------------------+---------------------------------+-------------+------------------+----------+--------+
| ID                                   | Name                            | Disk Format | Container Format | Size     | Status |
+--------------------------------------+---------------------------------+-------------+------------------+----------+--------+
| 397e713c-b95b-4186-ad46-6126863ea0a9 | cirros-0.3.1-x86_64-uec         | ami         | ami              | 25165824 | active |
| df430cc2-3406-4061-b635-a51c16e488ac | cirros-0.3.1-x86_64-uec-kernel  | aki         | aki              | 4955792  | active |
| 3cf852bd-2332-48f4-9ae4-7d926d50945e | cirros-0.3.1-x86_64-uec-ramdisk | ari         | ari              | 3714968  | active |
| 7e5142af-1253-4634-bcc6-89482c5f2e8a | myCirrosImage                   | ami         | ami              | 14221312 | active |
+--------------------------------------+---------------------------------+-------------+------------------+----------+--------+


  • To filter only the cirros images 


$ glance image-list | grep 'cirros'
| 397e713c-b95b-4186-ad46-6126863ea0a9 | cirros-0.3.1-x86_64-uec         | ami         | ami              | 25165824 | active |
| df430cc2-3406-4061-b635-a51c16e488ac | cirros-0.3.1-x86_64-uec-kernel  | aki         | aki              | 4955792  | active |
| 3cf852bd-2332-48f4-9ae4-7d926d50945e | cirros-0.3.1-x86_64-uec-ramdisk | ari         | ari              | 3714968  | active |
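When scripting against listings like this, the ID column can be extracted with awk rather than read by eye. A minimal sketch (not from the original article), with a sample row embedded so it runs without a cloud; in practice you would pipe `glance image-list` directly, and the table layout may differ between client versions:

```shell
# Sample row standing in for `glance image-list` output.
sample_row='| 397e713c-b95b-4186-ad46-6126863ea0a9 | cirros-0.3.1-x86_64-uec         | ami | ami | 25165824 | active |'

# Split on '|': field 2 is the ID, field 3 the name. Strip padding spaces,
# then print the ID whose name matches exactly.
image_id=$(printf '%s\n' "$sample_row" \
  | awk -F'|' '{gsub(/ /, "", $3)} $3 == "cirros-0.3.1-x86_64-uec" {gsub(/ /, "", $2); print $2}')
echo "$image_id"
```

The exact-match comparison avoids accidentally matching the `-kernel` and `-ramdisk` variants that a plain grep would also catch.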


  • To get the image details by name or ID

$ glance image-show myCirrosImage
+---------------------------------------+--------------------------------------+
| Property                              | Value                                |
+---------------------------------------+--------------------------------------+
| Property 'base_image_ref'             | 397e713c-b95b-4186-ad46-6126863ea0a9 |
| Property 'image_location'             | snapshot                             |
| Property 'image_state'                | available                            |
| Property 'image_type'                 | snapshot                             |
| Property 'instance_type_ephemeral_gb' | 0                                    |
| Property 'instance_type_flavorid'     | 2                                    |
| Property 'instance_type_id'           | 5                                    |
| Property 'instance_type_memory_mb'    | 2048                                 |
| Property 'instance_type_name'         | m1.small                             |
| Property 'instance_type_root_gb'      | 20                                   |
| Property 'instance_type_rxtx_factor'  | 1                                    |
| Property 'instance_type_swap'         | 0                                    |
| Property 'instance_type_vcpu_weight'  | None                                 |
| Property 'instance_type_vcpus'        | 1                                    |
| Property 'instance_uuid'              | 84c6e57d-a6b1-44b6-81eb-fcb36afd31b5 |
| Property 'kernel_id'                  | df430cc2-3406-4061-b635-a51c16e488ac |
| Property 'owner_id'                   | 66265572db174a7aa66eba661f58eb9e     |
| Property 'ramdisk_id'                 | 3cf852bd-2332-48f4-9ae4-7d926d50945e |
| Property 'user_id'                    | 376744b5910b4b4da7d8e6cb483b06a8     |
| checksum                              | 8e4838effa1969ad591655d6485c7ba8     |
| container_format                      | ami                                  |
| created_at                            | 2013-07-22T19:45:58                  |
| deleted                               | False                                |
| disk_format                           | ami                                  |
| id                                    | 7e5142af-1253-4634-bcc6-89482c5f2e8a |
| is_public                             | False                                |
| min_ram                               | 0                                    |
| name                                  | myCirrosImage                        |
| owner                                 | 66265572db174a7aa66eba661f58eb9e     |
| protected                             | False                                |
| size                                  | 14221312                             |
| status                                | active                               |
| updated_at                            | 2013-07-22T19:46:42                  |
+---------------------------------------+--------------------------------------+

  • The location metadata of an image can be defined as follows

Edit /etc/glance/glance-api.conf and set:
show_multiple_locations = True
filesystem_store_metadata_file = filePath, where filePath points to a JSON
file that defines the mount point for OpenStack images on your system and a unique ID.
For example:
[{
    "id": "b9d69795-5951-4cb0-bb5c-29491e1e2daf",
    "mountpoint": "/var/lib/glance/images/"
}]
After you restart the Image service, you can use the following syntax to view the image's
location information:

$ glance --os-image-api-version=2 image-show 2d9bb53f-70ea-4066-a68b-67960eaae673
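A malformed metadata file is an easy way to break the Image service on restart, so it is worth validating the JSON first. A small sketch; the /tmp path is illustrative, not from the article:

```shell
# Write the example metadata file (illustrative path) and validate it as JSON
# before pointing filesystem_store_metadata_file at it.
cat > /tmp/filesystem_store_metadata.json <<'EOF'
[{
    "id": "b9d69795-5951-4cb0-bb5c-29491e1e2daf",
    "mountpoint": "/var/lib/glance/images/"
}]
EOF

# json.tool exits non-zero on invalid JSON, so this only prints on success.
if python3 -m json.tool /tmp/filesystem_store_metadata.json > /dev/null; then
    echo "metadata file is valid JSON"
fi
```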

  • To upload a CentOS 6.3 image in qcow2 format and configure public access 
$ glance image-create --name centos63-image --disk-format=qcow2 \
  --container-format=bare --is-public=True --file=./centos63.qcow2
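The same command extends naturally to bulk uploads. A hedged sketch: the directory is illustrative, and `glance` is stubbed with echo here so the loop can be traced without an OpenStack environment; drop the stub for real use.

```shell
# Stub so the loop runs offline; remove this function to call the real client.
glance() { echo "glance $*"; }

# Illustrative staging directory with one sample file.
mkdir -p /tmp/qcow-images && touch /tmp/qcow-images/centos63.qcow2

# Upload every qcow2 file, naming each image after its file name.
for img in /tmp/qcow-images/*.qcow2; do
    name=$(basename "$img" .qcow2)
    glance image-create --name "$name" --disk-format=qcow2 \
        --container-format=bare --is-public=True --file="$img"
done
```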

  • To update the image using its name or ID
Below are the parameters we can update on images:

--name NAME                  The name of the image.
--disk-format DISK_FORMAT    The disk format of the image. Acceptable formats are
                             ami, ari, aki, vhd, vmdk, raw, qcow2, vdi, and iso.
--container-format CONTAINER_FORMAT
                             The container format of the image. Acceptable formats
                             are ami, ari, aki, bare, and ovf.
--owner TENANT_ID            The tenant who should own the image.
--size SIZE                  The size of image data, in bytes.
--min-disk DISK_GB           The minimum size of disk needed to boot the image, in
                             gigabytes.
--min-ram DISK_RAM           The minimum amount of RAM needed to boot the image, in
                             megabytes.
--location IMAGE_URL         The URL where the data for this image resides. For
                             example, if the image data is stored in swift, you
                             could specify
                             swift://account:key@example.com/container/obj.
--file FILE                  Local file that contains the disk image to be uploaded
                             during the update. Alternatively, you can pass images
                             to the client through stdin.
--checksum CHECKSUM          Hash of image data to use for verification.
--copy-from IMAGE_URL        Similar to --location in usage, but indicates that the
                             Glance server should immediately copy the data and
                             store it in its configured image store.
--is-public [True|False]     Makes an image accessible to the public.
--is-protected [True|False]  Prevents an image from being deleted.
--property KEY=VALUE         Arbitrary property to associate with the image. Can be
                             used multiple times.
--purge-props                Deletes all image properties that are not explicitly
                             set in the update request. Otherwise, those properties
                             not referenced are preserved.
--human-readable             Prints the image size in a human-friendly format.

$ glance image-update --property hw_vif_model=e1000 f16-x86_64-openstack-sda

Below are the valid vif models:

libvirt_type setting | Supported model values
qemu or kvm          | virtio, ne2k_pci, pcnet, rtl8139, e1000

Create images using nova

  • To create an image, first we need to list the instances 

$ nova list
+--------------------------------------+----------------+--------+------------+-------------+------------------+
| ID                                   | Name           | Status | Task State | Power State | Networks         |
+--------------------------------------+----------------+--------+------------+-------------+------------------+
| 84c6e57d-a6b1-44b6-81eb-fcb36afd31b5 | myCirrosServer | ACTIVE | None       | Running     | private=10.0.0.3 |
+--------------------------------------+----------------+--------+------------+-------------+------------------+

  • To create a snapshot of the instance 

$ nova image-create myCirrosServer myCirrosImage

  • We can check the details of the created image using the below command 

$ nova image-show myCirrosImage
+-------------------------------------+--------------------------------------+
| Property                            | Value                                |
+-------------------------------------+--------------------------------------+
| metadata owner_id                   | 66265572db174a7aa66eba661f58eb9e     |
| minDisk                             | 0                                    |
| metadata instance_type_name         | m1.small                             |
| metadata instance_type_id           | 5                                    |
| metadata instance_type_memory_mb    | 2048                                 |
| id                                  | 7e5142af-1253-4634-bcc6-89482c5f2e8a |
| metadata instance_type_root_gb      | 20                                   |
| metadata instance_type_rxtx_factor  | 1                                    |
| metadata ramdisk_id                 | 3cf852bd-2332-48f4-9ae4-7d926d50945e |
| metadata image_state                | available                            |
| metadata image_location             | snapshot                             |
| minRam                              | 0                                    |
| metadata instance_type_vcpus        | 1                                    |
| status                              | ACTIVE                               |
| updated                             | 2013-07-22T19:46:42Z                 |
| metadata instance_type_swap         | 0                                    |
| metadata instance_type_vcpu_weight  | None                                 |
| metadata base_image_ref             | 397e713c-b95b-4186-ad46-6126863ea0a9 |
| progress                            | 100                                  |
| metadata instance_type_flavorid     | 2                                    |
| OS-EXT-IMG-SIZE:size                | 14221312                             |
| metadata image_type                 | snapshot                             |
| metadata user_id                    | 376744b5910b4b4da7d8e6cb483b06a8     |
| name                                | myCirrosImage                        |
| created                             | 2013-07-22T19:45:58Z                 |
| metadata instance_uuid              | 84c6e57d-a6b1-44b6-81eb-fcb36afd31b5 |
| server                              | 84c6e57d-a6b1-44b6-81eb-fcb36afd31b5 |
+-------------------------------------+--------------------------------------+

  • To launch an instance from the image 

$ nova boot newServer --image 7e5142af-1253-4634-bcc6-89482c5f2e8a \
  --flavor 3
+-------------------------------------+--------------------------------------+
| Property                            | Value                                |
+-------------------------------------+--------------------------------------+
| OS-EXT-STS:task_state               | scheduling                           |
| image                               | myCirrosImage                        |
| OS-EXT-STS:vm_state                 | building                             |
| OS-EXT-SRV-ATTR:instance_name       | instance-00000007                    |
| flavor                              | m1.medium                            |
| id                                  | d7efd3e4-d375-46d1-9d57-372b6e4bdb7f |
| security_groups                     | [{u'name': u'default'}]              |
| user_id                             | 376744b5910b4b4da7d8e6cb483b06a8     |
| OS-DCF:diskConfig                   | MANUAL                               |
| accessIPv4                          |                                      |
| accessIPv6                          |                                      |
| progress                            | 0                                    |
| OS-EXT-STS:power_state              | 0                                    |
| OS-EXT-AZ:availability_zone         | nova                                 |
| config_drive                        |                                      |
| status                              | BUILD                                |
| updated                             | 2013-07-22T19:58:33Z                 |
| hostId                              |                                      |
| OS-EXT-SRV-ATTR:host                | None                                 |
| key_name                            | None                                 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None                                 |
| name                                | newServer                            |
| adminPass                           | jis88nN46RGP                         |
| tenant_id                           | 66265572db174a7aa66eba661f58eb9e     |
| created                             | 2013-07-22T19:58:33Z                 |
| metadata                            | {}                                   |
+-------------------------------------+--------------------------------------+

(Important: we cannot create a snapshot from an instance that has an attached volume. We need to detach the volume first and then create the snapshot.)


  • Below is the procedure for creating a snapshot of an instance 



Shut down the source VM before you take the snapshot to ensure that all data is flushed to disk. If necessary, list the instances to view the instance name:

$ nova list
+--------------------------------------+------------+--------+------------------------------+
| ID                                   | Name       | Status | Networks                     |
+--------------------------------------+------------+--------+------------------------------+
| c41f3074-c82a-4837-8673-fa7e9fea7e11 | myInstance | ACTIVE | private=10.0.0.3             |
+--------------------------------------+------------+--------+------------------------------+

Use the nova stop command to shut down the instance:

$ nova stop myInstance

Use the nova list command to confirm that the instance shows a SHUTOFF status:
$ nova list
+--------------------------------------+------------+---------+------------------+
| ID                                   | Name       | Status  | Networks         |
+--------------------------------------+------------+---------+------------------+
| c41f3074-c82a-4837-8673-fa7e9fea7e11 | myInstance | SHUTOFF | private=10.0.0.3 |
+--------------------------------------+------------+---------+------------------+

Use the nova image-create command to take a snapshot:

$ nova image-create --poll myInstance myInstanceSnapshot
Instance snapshotting... 50% complete

Use the nova image-list command to check the status until the status is ACTIVE:

$ nova image-list
+--------------------------------------+---------------------------------+--------+--------+
| ID                                   | Name                            | Status | Server |
+--------------------------------------+---------------------------------+--------+--------+
| 657ebb01-6fae-47dc-986a-e49c4dd8c433 | cirros-0.3.2-x86_64-uec         | ACTIVE |        |
| 72074c6d-bf52-4a56-a61c-02a17bf3819b | cirros-0.3.2-x86_64-uec-kernel  | ACTIVE |        |
| 3c5e5f06-637b-413e-90f6-ca7ed015ec9e | cirros-0.3.2-x86_64-uec-ramdisk | ACTIVE |        |
| f30b204e-1ce6-40e7-b8d9-b353d4d84e7d | myInstanceSnapshot              | ACTIVE |        |
+--------------------------------------+---------------------------------+--------+--------+
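Without `--poll`, the wait-for-ACTIVE step above can be scripted as a loop. A hedged sketch: `image_row` stands in for `nova image-list | grep "$1"` and is stubbed with canned output here so the loop runs anywhere; replace the stub to use it against a real cloud.

```shell
# Stub standing in for: nova image-list | grep "$1"
image_row() {
    echo "| f30b204e-1ce6-40e7-b8d9-b353d4d84e7d | myInstanceSnapshot | ACTIVE |        |"
}

# Poll until the snapshot row reports ACTIVE.
until image_row myInstanceSnapshot | grep -q '| ACTIVE |'; do
    sleep 5
done
echo "snapshot is ACTIVE"
```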

Download the snapshot as an image 

$ glance image-download --file snapshot.raw f30b204e-1ce6-40e7-b8d9-b353d4d84e7d

Copy the image to the new machine or environment using scp or HTTP, and import the snapshot

$ glance --os-image-api-version 1 image-create \
  --container-format bare --disk-format qcow2 --copy-from IMAGE_URL

Boot a new instance from the snapshot 

$ nova boot --flavor m1.tiny --image myInstanceSnapshot myNewInstance

Manage volumes (cinder)


  • Migrate a volume


We can migrate a volume in the following cases:
  1. Bring down a physical storage device for maintenance without disrupting workloads.
  2. Modify the properties of a volume.
  3. Free up space in a thinly-provisioned back end.
$ cinder migrate volumeID destinationHost --force-host-copy True|False


  • To create a volume 


List the images and note the ID of the image from which we are creating the volume

$ nova image-list

+-----------------------+---------------------------------+--------+--------------------------+
| ID                    | Name                            | Status | Server                   |
+-----------------------+---------------------------------+--------+--------------------------+
| 397e713c-b95b-4186... | cirros-0.3.2-x86_64-uec         | ACTIVE |                          |
| df430cc2-3406-4061... | cirros-0.3.2-x86_64-uec-kernel  | ACTIVE |                          |
| 3cf852bd-2332-48f4... | cirros-0.3.2-x86_64-uec-ramdisk | ACTIVE |                          |
| 7e5142af-1253-4634... | myCirrosImage                   | ACTIVE | 84c6e57d-a6b1-44b6-81... |
| 89bcd424-9d15-4723... | mysnapshot                      | ACTIVE | f51ebd07-c33d-4951-87... |
+-----------------------+---------------------------------+--------+--------------------------+

List the availability zones and note the name of the AZ

$ cinder availability-zone-list

+------+-----------+
| Name |   Status  |
+------+-----------+
| nova | available |
+------+-----------+

Now create a volume of 8 GB:

$ cinder create 8 --display-name my-new-volume --image-id 397e713c-b95b-4186-ad46-6126863ea0a9 --availability-zone nova


+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                false                 |
|      created_at     |      2013-07-25T17:02:12.472269      |
| display_description |                 None                 |
|     display_name    |            my-new-volume             |
|          id         | 573e024d-5235-49ce-8332-be1576d323f8 |
|       image_id      | 397e713c-b95b-4186-ad46-6126863ea0a9 |
|       metadata      |                  {}                  |
|         size        |                  8                   |
|     snapshot_id     |                 None                 |
|     source_volid    |                 None                 |
|        status       |               creating               |
|     volume_type     |                 None                 |
+---------------------+--------------------------------------+


  • Verify that the cinder volume was created successfully 


$ cinder list

+-----------------+-----------+-----------------+------+-------------+----------+-------------+
|    ID           |   Status  |   Display Name  | Size | Volume Type | Bootable | Attached to |
+-----------------+-----------+-----------------+------+-------------+----------+-------------+
| 573e024d-523... | available |  my-new-volume  |  8   |     None    |   true   |             |
| bd7cf584-45d... | available | my-bootable-vol |  8   |     None    |   true   |             |
+-----------------+-----------+-----------------+------+-------------+----------+-------------+
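Checking a volume's status in a script follows the same pattern as the image listing: split the table on `|` and strip the padding. A sketch with a sample row embedded so it runs stand-alone; column positions assume the layout shown above.

```shell
# Print the Status column for the volume whose Display Name matches $1.
volume_status() {
    awk -F'|' -v name="$1" '{gsub(/ /, "", $4)} $4 == name {gsub(/ /, "", $3); print $3}'
}

# Sample row standing in for `cinder list` output.
status=$(printf '%s\n' \
  '| 573e024d-523... | available |  my-new-volume  |  8   |     None    |   true   |             |' \
  | volume_status my-new-volume)
echo "$status"
```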

(If the status of the volume is error, check the quota details; we may have exceeded the quota.)


  • To attach a volume to an instance 


$ nova volume-attach 84c6e57d-a6b1-44b6-81eb-fcb36afd31b5 573e024d-5235-49ce-8332-be1576d323f8 /dev/vdb

+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdb                             |
| serverId | 84c6e57d-a6b1-44b6-81eb-fcb36afd31b5 |
| id       | 573e024d-5235-49ce-8332-be1576d323f8 |
| volumeId | 573e024d-5235-49ce-8332-be1576d323f8 |
+----------+--------------------------------------+

To get information about the attached volume

$ cinder show 573e024d-5235-49ce-8332-be1576d323f8
+------------------------------+------------------------------------------+
|           Property           |                Value                     |
+------------------------------+------------------------------------------+
|         attachments          |         [{u'device': u'/dev/vdb',        |
|                              |        u'server_id': u'84c6e57d-a        |
|                              |           u'id': u'573e024d-...          |
|                              |        u'volume_id': u'573e024d...       |
|      availability_zone       |                  nova                    |
|           bootable           |                  true                    |
|          created_at          |       2013-07-25T17:02:12.000000         |
|     display_description      |                  None                    |
|         display_name         |             my-new-volume                |
|              id              |   573e024d-5235-49ce-8332-be1576d323f8   |
|           metadata           |                   {}                     |
|    os-vol-host-attr:host     |                devstack                  |
| os-vol-tenant-attr:tenant_id |     66265572db174a7aa66eba661f58eb9e     |
|             size             |                   8                      |
|         snapshot_id          |                  None                    |
|         source_volid         |                  None                    |
|            status            |                 in-use                   |
|    volume_image_metadata     |       {u'kernel_id': u'df430cc2...,      |
|                              |        u'image_id': u'397e713c...,       |
|                              |        u'ramdisk_id': u'3cf852bd...,     |
|                              |u'image_name': u'cirros-0.3.2-x86_64-uec'}|
|         volume_type          |                  None                    |
+------------------------------+------------------------------------------+



  • To resize the volume we need to follow the steps below 



  1. First, detach the volume using the below command 


$ nova volume-detach 84c6e57d-a6b1-44b6-81eb-fcb36afd31b5   573e024d-5235-49ce-8332-be1576d323f8

      2. Now list the volumes using the below command (we can see the volume is in available status)

$ cinder list
+----------------+-----------+-----------------+------+-------------+----------+-------------+
|       ID       |   Status  |   Display Name  | Size | Volume Type | Bootable | Attached to |
+----------------+-----------+-----------------+------+-------------+----------+-------------+
| 573e024d-52... | available |  my-new-volume  |  8   |     None    |   true   |             |
| bd7cf584-45... | available | my-bootable-vol |  8   |     None    |   true   |             |
+----------------+-----------+-----------------+------+-------------+----------+-------------+

     3. Extend the volume to 10GB

$ cinder extend 573e024d-5235-49ce-8332-be1576d323f8 10
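The detach/extend steps above can be sketched as one script. Here `nova` and `cinder` are stubbed with echo so the flow can be traced without an OpenStack environment; remove the stubs to run it for real. The re-attach at the end is an assumption about what you typically do next, not part of the steps above.

```shell
# Stubs so the sequence runs offline; remove to call the real clients.
nova()   { echo "nova $*"; }
cinder() { echo "cinder $*"; }

SERVER=84c6e57d-a6b1-44b6-81eb-fcb36afd31b5
VOLUME=573e024d-5235-49ce-8332-be1576d323f8

nova volume-detach "$SERVER" "$VOLUME"          # step 1: detach
cinder extend "$VOLUME" 10                      # step 3: grow to 10 GB
nova volume-attach "$SERVER" "$VOLUME" /dev/vdb # re-attach once extended (assumption)
```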


  • To delete a volume 


  1. Detach the volume from the instance first, as explained above, then delete the volume using the below command 


$ cinder delete my-new-volume

      2. We can see the status of the volume as deleting

$ cinder list
+-----------------+-----------+-----------------+------+-------------+----------+-------------+
|        ID       |   Status  |   Display Name  | Size | Volume Type | Bootable | Attached to |
+-----------------+-----------+-----------------+------+-------------+----------+-------------+
| 573e024d-523... |  deleting |  my-new-volume  |  8   |     None    |   true   |             |
| bd7cf584-45d... | available | my-bootable-vol |  8   |     None    |   true   |             |
+-----------------+-----------+-----------------+------+-------------+----------+-------------+


       3. Finally, it is removed from the cinder list output

$ cinder list
+-----------------+-----------+-----------------+------+-------------+----------+-------------+
|       ID        |   Status  |   Display Name  | Size | Volume Type | Bootable | Attached to |
+-----------------+-----------+-----------------+------+-------------+----------+-------------+
| bd7cf584-45d... | available | my-bootable-vol |  8   |     None    |   true   |             |
+-----------------+-----------+-----------------+------+-------------+----------+-------------+


  • To transfer a volume 

We can create a request to transfer a volume from one user to another by sharing the transfer ID and authorisation key with the recipient.

This is useful for bulk imports to the cloud or when we need to create a custom bootable volume.

     1. Log in to the controller node and identify the volume that needs to be transferred

$ cinder list
+-----------------+-----------+--------------+------+-------------+----------+-------------+
|        ID       |   Status  | Display Name | Size | Volume Type | Bootable | Attached to |
+-----------------+-----------+--------------+------+-------------+----------+-------------+
| 72bfce9f-cac... |   error   |     None     |  1   |     None    |  false   |             |
| a1cdace0-08e... | available |     None     |  1   |     None    |  false   |             |
+-----------------+-----------+--------------+------+-------------+----------+-------------+

      2. Request the volume authorisation key and ID

$ cinder transfer-create a1cdace0-08e4-4dc7-b9dc-457e9bcfe25f

+------------+--------------------------------------+
|  Property  |                Value                 |
+------------+--------------------------------------+
|  auth_key  |           b2c8e585cbc68a80           |
| created_at |      2013-10-14T15:20:10.121458      |
|     id     | 6e4e9aa4-bed5-4f94-8f76-df43232f44dc |
|    name    |                 None                 |
| volume_id  | a1cdace0-08e4-4dc7-b9dc-457e9bcfe25f |
+------------+--------------------------------------+
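When automating the transfer, the ID and auth key can be captured from the `cinder transfer-create` output instead of being copied by hand. A sketch with the sample output embedded so it runs stand-alone:

```shell
# Sample output standing in for `cinder transfer-create <volume-id>`.
transfer_output='|  auth_key  |           b2c8e585cbc68a80           |
|     id     | 6e4e9aa4-bed5-4f94-8f76-df43232f44dc |'

# Field 2 is the property name, field 3 its value; strip padding and match.
auth_key=$(printf '%s\n' "$transfer_output" \
  | awk -F'|' '{gsub(/ /, "", $2)} $2 == "auth_key" {gsub(/ /, "", $3); print $3}')
transfer_id=$(printf '%s\n' "$transfer_output" \
  | awk -F'|' '{gsub(/ /, "", $2)} $2 == "id" {gsub(/ /, "", $3); print $3}')
echo "$transfer_id $auth_key"
```

Both values can then be sent to the recipient for `cinder transfer-accept`.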

     3. Send the transfer ID and authorisation key to the recipient. When we check the transfer list we can see the pending transfer

$ cinder transfer-list
+--------------------------------------+--------------------------------------+------+
|               ID                     |             VolumeID                 | Name |
+--------------------------------------+--------------------------------------+------+
| 6e4e9aa4-bed5-4f94-8f76-df43232f44dc | a1cdace0-08e4-4dc7-b9dc-457e9bcfe25f | None |
+--------------------------------------+--------------------------------------+------+

   4. Once the recipient accepts the volume transfer, the above command will give an empty output


  • Accept a volume transfer 

   1. To accept a request

$ cinder transfer-accept 6e4e9aa4-bed5-4f94-8f76-df43232f44dc b2c8e585cbc68a80
+-----------+--------------------------------------+
|  Property |                Value                 |
+-----------+--------------------------------------+
|     id    | 6e4e9aa4-bed5-4f94-8f76-df43232f44dc |
|    name   |                 None                 |
| volume_id | a1cdace0-08e4-4dc7-b9dc-457e9bcfe25f |
+-----------+--------------------------------------+


   2. To check the status of the transfer we can execute the cinder transfer-list command, which will give an empty output


   

      Create and manage file shares using manila client


  • To create a share network (we need to pass the neutron net ID and subnet ID along with this to configure the network properties of the share)

$ manila share-network-create \
    --name mysharenetwork \
    --description "My Manila network" \
    --neutron-net-id dca0efc7-523d-43ef-9ded-af404a02b055 \
    --neutron-subnet-id 29ecfbd5-a9be-467e-8b4a-3415d1f82888
+-------------------+--------------------------------------+
| Property          | Value                                |
+-------------------+--------------------------------------+
| name              | mysharenetwork                       |
| segmentation_id   | None                                 |
| created_at        | 2016-03-24T14:13:02.888816           |
| neutron_subnet_id | 29ecfbd5-a9be-467e-8b4a-3415d1f82888 |
| updated_at        | None                                 |
| network_type      | None                                 |
| neutron_net_id    | dca0efc7-523d-43ef-9ded-af404a02b055 |
| ip_version        | None                                 |
| nova_net_id       | None                                 |
| cidr              | None                                 |
| project_id        | 907004508ef4447397ce6741a8f037c1     |
| id                | c895fe26-92be-4152-9e6c-f2ad230efb13 |
| description       | My Manila network                    |
+-------------------+--------------------------------------+

        2. List the share networks

$ manila share-network-list
+--------------------------------------+----------------+
| id                                   | name           |
+--------------------------------------+----------------+
| c895fe26-92be-4152-9e6c-f2ad230efb13 | mysharenetwork |
+--------------------------------------+----------------+


       3. To create a share 

$ manila create NFS 1 \
    --name myshare \
    --description "My Manila share" \
    --share-network mysharenetwork \
    --share-type default
+-----------------------------+--------------------------------------+
| Property                    | Value                                |
+-----------------------------+--------------------------------------+
| status                      | creating                             |
| share_type_name             | default                              |
| description                 | My Manila share                      |
| availability_zone           | None                                 |
| share_network_id            | c895fe26-92be-4152-9e6c-f2ad230efb13 |
| share_server_id             | None                                 |
| host                        |                                      |
| access_rules_status         | active                               |
| snapshot_id                 | None                                 |
| is_public                   | False                                |
| task_state                  | None                                 |
| snapshot_support            | True                                 |
| id                          | 8d8b854b-ec32-43f1-acc0-1b2efa7c3400 |
| size                        | 1                                    |
| name                        | myshare                              |
| share_type                  | bf6ada49-990a-47c3-88bc-c0cb31d5c9bf |
| has_replicas                | False                                |
| replication_type            | None                                 |
| created_at                  | 2016-03-24T14:15:34.000000           |
| share_proto                 | NFS                                  |
| consistency_group_id        | None                                 |
| source_cgsnapshot_member_id | None                                 |
| project_id                  | 907004508ef4447397ce6741a8f037c1     |
| metadata                    | {}                                   |
+-----------------------------+--------------------------------------+

         4. To show the share

$ manila show myshare
+-----------------------------+---------------------------------------------------------------+
| Property                    | Value                                                         |
+-----------------------------+---------------------------------------------------------------+
| status                      | available                                                     |
| share_type_name             | default                                                       |
| description                 | My Manila share                                               |
| availability_zone           | nova                                                          |
| share_network_id            | c895fe26-92be-4152-9e6c-f2ad230efb13                          |
| export_locations            |                                                               |
|                             | path = 10.254.0.3:/share-e1c2d35e-fe67-4028-ad7a-45f668732b1d |
|                             | preferred = False                                             |
|                             | is_admin_only = False                                         |
|                             | id = b6bd76ce-12a2-42a9-a30a-8a43b503867d                     |
|                             | share_instance_id = e1c2d35e-fe67-4028-ad7a-45f668732b1d      |
|                             | path = 10.0.0.3:/share-e1c2d35e-fe67-4028-ad7a-45f668732b1d   |
|                             | preferred = False                                             |
|                             | is_admin_only = True                                          |
|                             | id = 6921e862-88bc-49a5-a2df-efeed9acd583                     |
|                             | share_instance_id = e1c2d35e-fe67-4028-ad7a-45f668732b1d      |
| share_server_id             | 2e9d2d02-883f-47b5-bb98-e053b8d1e683                          |
| host                        | nosb-devstack@london#LONDON                                   |
| access_rules_status         | active                                                        |
| snapshot_id                 | None                                                          |
| is_public                   | False                                                         |
| task_state                  | None                                                          |
| snapshot_support            | True                                                          |
| id                          | 8d8b854b-ec32-43f1-acc0-1b2efa7c3400                          |
| size                        | 1                                                             |
| name                        | myshare                                                       |
| share_type                  | bf6ada49-990a-47c3-88bc-c0cb31d5c9bf                          |
| has_replicas                | False                                                         |
| replication_type            | None                                                          |
| created_at                  | 2016-03-24T14:15:34.000000                                    |
| share_proto                 | NFS                                                           |
| consistency_group_id        | None                                                          |
| source_cgsnapshot_member_id | None                                                          |
| project_id                  | 907004508ef4447397ce6741a8f037c1                              |
| metadata                    | {}                                                            |
+-----------------------------+---------------------------------------------------------------+

          5. To list the share

$ manila list
+--------------------------------------+---------+------+-------------+-----------+-----------+-----------------+-----------------------------+-------------------+
| ID                                   | Name    | Size | Share Proto | Status    | Is Public | Share Type Name | Host                        | Availability Zone |
+--------------------------------------+---------+------+-------------+-----------+-----------+-----------------+-----------------------------+-------------------+
| 8d8b854b-ec32-43f1-acc0-1b2efa7c3400 | myshare | 1    | NFS         | available | False     | default         | nosb-devstack@london#LONDON | nova              |
+--------------------------------------+---------+------+-------------+-----------+-----------+-----------------+-----------------------------+-------------------+


        6. To list the export locations

$ manila share-export-location-list myshare
+--------------------------------------+--------------------------------------------------------+-----------+
| ID                                   | Path                                                   | Preferred |
+--------------------------------------+--------------------------------------------------------+-----------+
| 6921e862-88bc-49a5-a2df-efeed9acd583 | 10.0.0.3:/share-e1c2d35e-fe67-4028-ad7a-45f668732b1d   | False     |
| b6bd76ce-12a2-42a9-a30a-8a43b503867d | 10.254.0.3:/share-e1c2d35e-fe67-4028-ad7a-45f668732b1d | False     |
+--------------------------------------+--------------------------------------------------------+-----------+
      
       7. To allow read-write access to the share

$ manila access-allow myshare ip 10.0.0.0/24
+--------------+--------------------------------------+
| Property     | Value                                |
+--------------+--------------------------------------+
| share_id     | 8d8b854b-ec32-43f1-acc0-1b2efa7c3400 |
| access_type  | ip                                   |
| access_to    | 10.0.0.0/24                          |
| access_level | rw                                   |
| state        | new                                  |
| id           | 0c8470ca-0d77-490c-9e71-29e1f453bf97 |
+--------------+--------------------------------------+

       We can check the access details using the command below

$ manila access-list myshare
+--------------------------------------+-------------+-------------+--------------+--------+
| id                                   | access_type | access_to   | access_level | state  |
+--------------------------------------+-------------+-------------+--------------+--------+
| 0c8470ca-0d77-490c-9e71-29e1f453bf97 | ip          | 10.0.0.0/24 | rw           | active |
+--------------------------------------+-------------+-------------+--------------+--------+
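With the rw rule in place, a client inside 10.0.0.0/24 can mount the share over NFS. A minimal sketch that only builds and prints the mount command for review (the export path is the second location listed earlier; /mnt/myshare is an assumed, pre-created mount point):

```shell
# Build the NFS mount command for the share (printed rather than executed,
# so it can be reviewed first; adjust the export and mount point for your
# environment).
EXPORT="10.254.0.3:/share-e1c2d35e-fe67-4028-ad7a-45f668732b1d"
MOUNTPOINT="/mnt/myshare"
CMD="sudo mount -t nfs ${EXPORT} ${MOUNTPOINT}"
echo "${CMD}"
```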

         8. To give read-only access to the share

$ manila access-allow myshare ip 20.0.0.0/24 --access-level ro
+--------------+--------------------------------------+
| Property     | Value                                |
+--------------+--------------------------------------+
| share_id     | 8d8b854b-ec32-43f1-acc0-1b2efa7c3400 |
| access_type  | ip                                   |
| access_to    | 20.0.0.0/24                          |
| access_level | ro                                   |
| state        | new                                  |
| id           | f151ad17-654d-40ce-ba5d-98a5df67aadc |
+--------------+--------------------------------------+

      We can check the access list using the command below

$ manila access-list myshare
+--------------------------------------+-------------+-------------+--------------+--------+
| id                                   | access_type | access_to   | access_level | state  |
+--------------------------------------+-------------+-------------+--------------+--------+
| 0c8470ca-0d77-490c-9e71-29e1f453bf97 | ip          | 10.0.0.0/24 | rw           | active |
| f151ad17-654d-40ce-ba5d-98a5df67aadc | ip          | 20.0.0.0/24 | ro           | active |
+--------------------------------------+-------------+-------------+--------------+--------+

                 9. To create a snapshot of a share

$ manila snapshot-create --name mysnapshot --description "My Manila snapshot" myshare
+-------------------+--------------------------------------+
| Property          | Value                                |
+-------------------+--------------------------------------+
| status            | creating                             |
| share_id          | 8d8b854b-ec32-43f1-acc0-1b2efa7c3400 |
| description       | My Manila snapshot                   |
| created_at        | 2016-03-24T14:39:58.232844           |
| share_proto       | NFS                                  |
| provider_location | None                                 |
| id                | e744ca47-0931-4e81-9d9f-2ead7d7c1640 |
| size              | 1                                    |
| share_size        | 1                                    |
| name              | mysnapshot                           |
+-------------------+--------------------------------------+

We can check the snapshot details with the snapshot list command

$ manila snapshot-list
+--------------------------------------+--------------------------------------+-----------+------------+------------+
| ID                                   | Share ID                             | Status    | Name       | Share Size |
+--------------------------------------+--------------------------------------+-----------+------------+------------+
| e744ca47-0931-4e81-9d9f-2ead7d7c1640 | 8d8b854b-ec32-43f1-acc0-1b2efa7c3400 | available | mysnapshot | 1          |
+--------------------------------------+--------------------------------------+-----------+------------+------------+
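When scripting against these tables, the snapshot ID can be extracted by name with awk. A sketch using the sample row above fed in through a here-doc for illustration (in practice, pipe `manila snapshot-list` directly):

```shell
# Pull the ID column (field 2 when splitting on '|') of the row whose Name
# column matches mysnapshot; the sample output row is inlined here.
snapshot_id=$(awk -F'|' '$5 ~ /mysnapshot/ {gsub(/ /, "", $2); print $2}' <<'EOF'
| e744ca47-0931-4e81-9d9f-2ead7d7c1640 | 8d8b854b-ec32-43f1-acc0-1b2efa7c3400 | available | mysnapshot | 1          |
EOF
)
echo "${snapshot_id}"
```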

      10. To create a share from the snapshot

$ manila create NFS 1 \
    --snapshot-id e744ca47-0931-4e81-9d9f-2ead7d7c1640 \
    --share-network mysharenetwork \
    --name mysharefromsnap
+-----------------------------+--------------------------------------+
| Property                    | Value                                |
+-----------------------------+--------------------------------------+
| status                      | creating                             |
| share_type_name             | default                              |
| description                 | None                                 |
| availability_zone           | nova                                 |
| share_network_id            | c895fe26-92be-4152-9e6c-f2ad230efb13 |
| share_server_id             | None                                 |
| host                        | nosb-devstack@london#LONDON          |
| access_rules_status         | active                               |
| snapshot_id                 | e744ca47-0931-4e81-9d9f-2ead7d7c1640 |
| is_public                   | False                                |
| task_state                  | None                                 |
| snapshot_support            | True                                 |
| id                          | e73ebcd3-4764-44f0-9b42-fab5cf34a58b |
| size                        | 1                                    |
| name                        | mysharefromsnap                      |
| share_type                  | bf6ada49-990a-47c3-88bc-c0cb31d5c9bf |
| has_replicas                | False                                |
| replication_type            | None                                 |
| created_at                  | 2016-03-24T14:41:36.000000           |
| share_proto                 | NFS                                  |
| consistency_group_id        | None                                 |
| source_cgsnapshot_member_id | None                                 |
| project_id                  | 907004508ef4447397ce6741a8f037c1     |
| metadata                    | {}                                   |
+-----------------------------+--------------------------------------+

  List shares

$ manila list
+--------------------------------------+-----------------+------+-------------+-----------+-----------+-----------------+-----------------------------+-------------------+
| ID                                   | Name            | Size | Share Proto | Status    | Is Public | Share Type Name | Host                        | Availability Zone |
+--------------------------------------+-----------------+------+-------------+-----------+-----------+-----------------+-----------------------------+-------------------+
| 8d8b854b-ec32-43f1-acc0-1b2efa7c3400 | myshare         | 1    | NFS         | available | False     | default         | nosb-devstack@london#LONDON | nova              |
| e73ebcd3-4764-44f0-9b42-fab5cf34a58b | mysharefromsnap | 1    | NFS         | available | False     | default         | nosb-devstack@london#LONDON | nova              |
+--------------------------------------+-----------------+------+-------------+-----------+-----------+-----------------+-----------------------------+-------------------+
   


     More commands will be covered in the next session

Thank you for reading

 






Tuesday, October 2, 2018

openstack command line administration - PART1





Here we will go through some OpenStack administration commands used in day-to-day operations. Even though we have the Horizon graphical user interface, many of us prefer the command line for better visibility. Let's start.

Each OpenStack service has its own command-line client. For some commands, we can add the --debug parameter to see the API requests the command makes.

The following command-line clients are available for the respective APIs:

  • ceilometer (python-ceilometerclient). Client for the Telemetry API that lets you create and collect measurements across OpenStack.
  • cinder (python-cinderclient). Client for the Block Storage Service API that lets you create and manage volumes.
  • glance (python-glanceclient). Client for the Image Service API that lets you create and manage images.
  • heat (python-heatclient). Client for the Orchestration API that lets you launch stacks from templates, view details of running stacks including events and resources, and update and delete stacks.
  • keystone (python-keystoneclient). Client for the Identity Service API that lets you create and manage users, tenants, roles, endpoints, and credentials.
  • neutron (python-neutronclient). Client for the Networking API that lets you configure networks for guest servers. This client was previously known as quantum.
  • nova (python-novaclient). Client for the Compute API and its extensions. Use to create and manage images, instances, and flavors.
  • swift (python-swiftclient). Client for the Object Storage API that lets you gather statistics, list items, update metadata, upload, download and delete files stored by the Object Storage service. Provides access to a swift installation for ad hoc processing.

  • To get the version of the above mentioned clients 
$ nova --version
2.14.1.17

$ keystone --version
0.3.1.73
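A quick way to see which of these clients are installed on the current machine (a sketch; it only probes the PATH and prints one line per client):

```shell
# Probe the PATH for each client binary named above; prints
# "<client>: installed" or "<client>: not installed" for each.
report=$(for c in ceilometer cinder glance heat keystone neutron nova swift; do
  if command -v "$c" >/dev/null 2>&1; then
    echo "$c: installed"
  else
    echo "$c: not installed"
  fi
done)
echo "$report"
```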

  • To get the help for client commands 
$ swift help

Usage: swift [--version] [--help] [--snet] [--verbose]
[--debug] [--quiet] [--auth <auth_url>]
[--auth-version <auth_version>] [--user <username>]
[--key <api_key>] [--retries <num_retries>]
[--os-username <auth-user-name>] [--os-password <auth-password>]
[--os-tenant-id <auth-tenant-id>]
[--os-tenant-name <auth-tenant-name>]
[--os-auth-url <auth-url>] [--os-auth-token <auth-token>]
[--os-storage-url <storage-url>] [--os-region-name <region-name>]
[--os-service-type <service-type>]
[--os-endpoint-type <endpoint-type>]
[--os-cacert <ca-certificate>] [--insecure]
[--no-ssl-compression]
<subcommand> ...
Command-line interface to the OpenStack Swift API.
Positional arguments:
<subcommand>
delete Delete a container or objects within a container
download Download objects from containers
list Lists the containers for the account or the objects
for a container
post Updates meta information for the account, container,
or object
stat Displays information for the account, container,
or object
upload Uploads files or directories to the given container
Examples:
swift -A https://auth.api.rackspacecloud.com/v1.0 -U user -K api_key stat -v
swift --os-auth-url https://api.example.com/v2.0 --os-tenant-name tenant \
--os-username user --os-password password list
swift --os-auth-token 6ee5eb33efad4e45ab46806eac010566 \
--os-storage-url https://10.1.5.2:8080/v1/AUTH_ced809b6a4baea7aeab61a \
list
swift list --lh

Users/Tenants/Roles - keystone commands 
  • To list the services 
$keystone service-list

  • To get the details of a service 
$ keystone service-get 08741d8ed88242ca88d1f61484a0fe3b

  • To delete a service 
$ keystone service-delete 08741d8ed88242ca88d1f61484a0fe3b


  • To list the tenants (projects) with their id, name and status 
$ keystone tenant-list
+----------------------------------+--------------------+---------+
| id                               | name               | enabled |
+----------------------------------+--------------------+---------+
| f7ac731cc11f40efbc03a9f9e1d1d21f | admin              | True    |
| c150ab41f0d9443f8874e32e725a4cc8 | alt_demo           | True    |
| a9debfe41a6d4d09a677da737b907d5e | demo               | True    |
| 9208739195a34c628c58c95d157917d7 | invisible_to_admin | True    |
| 3943a53dc92a49b2827fae94363851e1 | service            | True    |
| 80cab5e1f02045abad92a2864cfd76cb | test_project       | True    |
+----------------------------------+--------------------+---------+

  • To create a new project 
$ keystone tenant-create --name new-project --description 'my new project'

+-------------+----------------------------------+
| Property    | Value                            |
+-------------+----------------------------------+
| description | my new project                   |
| enabled     | True                             |
| id          | 1a4a0618b306462c9830f876b0bd6af2 |
| name        | new-project                      |
+-------------+----------------------------------+

  • To temporarily disable a project 
$ keystone tenant-update PROJECT_ID --enabled false
  • To enable a disabled project 
$ keystone tenant-update PROJECT_ID --enabled true
  • To update the name of the project 
$ keystone tenant-update PROJECT_ID --name project-new
  • To delete a project 
$ keystone tenant-delete PROJECT_ID
  • To verify the changes above 
$ keystone tenant-get 1a4a0618b306462c9830f876b0bd6af2

+-------------+----------------------------------+
| Property    | Value                            |
+-------------+----------------------------------+
| description | my new project                   |
| enabled     | True                             |
| id          | 1a4a0618b306462c9830f876b0bd6af2 |
| name        | project-new                      |
+-------------+----------------------------------+

  • To list all users 
$keystone user-list

+----------------------------------+--------+---------+--------------------+
| id                               | name   | enabled | email              |
+----------------------------------+--------+---------+--------------------+
| 352b37f5c89144d4ad0534139266d51f | admin  | True    | admin@example.com  |
| 86c0de739bcb4802b8dc786921355813 | demo   | True    | demo@example.com   |
| 32ec34aae8ea432e8af560a1cec0e881 | glance | True    | glance@example.com |
| 7047fcb7908e420cb36e13bbd72c972c | nova   | True    | nova@example.com   |
+----------------------------------+--------+---------+--------------------+

  • To create a user 
$ keystone user-create --name USER_NAME --tenant_id TENANT_ID --pass PASSWORD

$ keystone user-create --name new-user --tenant_id 1a4a0618b306462c9830f876b0bd6af2 --pass myPASS

+----------+----------------------------------+
| Property | Value                            |
+----------+----------------------------------+
| email    |                                  |
| enabled  | True                             |
| id       | 6e5140962b424cb9814fb172889d3be2 |
| name     | new-user                         |
| tenantId | 1a4a0618b306462c9830f876b0bd6af2 |
+----------+----------------------------------+
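Creating several users for the same tenant is easy to script. A sketch that only prints the commands for review (the user names and password here are hypothetical; the tenant ID is the one created earlier):

```shell
# Generate keystone user-create commands for a batch of hypothetical users
# (alice, bob, carol) against a single tenant; printed, not executed.
TENANT_ID=1a4a0618b306462c9830f876b0bd6af2
cmds=$(for u in alice bob carol; do
  echo "keystone user-create --name $u --tenant_id $TENANT_ID --pass changeme"
done)
echo "$cmds"
```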

  • To temporarily disable a user account
$ keystone user-update USER_ID --enabled false

  • To enable a disabled user account 
$ keystone user-update USER_ID --enabled true

Users can be part of multiple projects with the help of assigned roles.


  • To list the available roles 

$ keystone role-list

+----------------------------------+---------------+
| id                               | name          |
+----------------------------------+---------------+
| 71ccc37d41c8491c975ae72676db687f | Member        |
| 149f50a1fe684bfa88dae76a48d26ef7 | ResellerAdmin |
| 9fe2ff9ee4384b1894a90878d3e92bab | _member_      |
| 6ecf391421604da985db2f141e46a7c8 | admin         |
| deb4fffd123c4d02a907c2c74559dccf | anotherrole   |
+----------------------------------+---------------+

  • To create a new role
$ keystone role-create --name new-role


+----------+----------------------------------+
| Property | Value                            |
+----------+----------------------------------+
| id       | bef1f95537914b1295da6aa038ef4de6 |
| name     | new-role                         |
+----------+----------------------------------+


  • To assign a role to a user-tenant pair


$ keystone user-role-add --user demo --role new-role --tenant test-project
(here we add the new-role role to the demo user / test-project tenant pair)

$ keystone user-role-list --user demo --tenant test-project

+----------------------------------+----------+----------------------------------+----------------------------------+
| id                               | name     | user_id                          | tenant_id                        |
+----------------------------------+----------+----------------------------------+----------------------------------+
| bef1f95537914b1295da6aa038ef4de6 | new-role | 86c0de739bcb4802b8dc786921355813 | 80cab5e1f02045abad92a2864cfd76cb |
+----------------------------------+----------+----------------------------------+----------------------------------+

Manage Project Security & Networking (Nova/Neutron commands) 

Security groups are sets of IP filter rules that control inbound and outbound traffic for a project. Whether the rules apply to all projects sharing a network or only to individual projects depends on the allow_same_net_traffic option in /etc/nova/nova.conf.


  • True (default): hosts on the same subnet are not filtered and are allowed to pass all types of traffic between them. On a flat network, this allows unfiltered communication between all instances from all projects. With VLAN networking, this allows access between instances within the same project. You can also simulate this setting by configuring the default security group to allow all traffic from the subnet.
  • False: security groups are enforced for all connections.
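To enforce security groups even between instances on the same subnet, the option can be set in /etc/nova/nova.conf (a fragment; section placement follows the usual nova.conf layout):

```ini
[DEFAULT]
# enforce security group rules for all connections,
# including instances on the same subnet
allow_same_net_traffic = False
```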

Before running the security group commands, please make sure the environment variables are set for the user and tenant whose security rules we will manage:

export OS_USERNAME=demo00
export OS_TENANT_NAME=tenant01


  • To list security groups


$nova secgroup-list
+---------+-------------+
| Name    | Description |
+---------+-------------+
| default | default     |
| open    | all ports   |
+---------+-------------+


  • To view the details of a security group

$ nova secgroup-list-rules open
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| icmp        | -1        | 255     | 0.0.0.0/0 |              |
| tcp         | 1         | 65535   | 0.0.0.0/0 |              |
| udp         | 1         | 65535   | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+


  • To create a security group
nova secgroup-create [securitygroupname] [description]

$ nova secgroup-create global_http "Allows Web traffic anywhere on the Internet."
+--------------------------------------+-------------+----------------------------------------------+
| Id                                   | Name        | Description                                  |
+--------------------------------------+-------------+----------------------------------------------+
| 1578a08c-5139-4f3e-9012-86bd9dd9f23b | global_http | Allows Web traffic anywhere on the Internet. |
+--------------------------------------+-------------+----------------------------------------------+


  • To create a security group rule

nova secgroup-add-rule [securitygroup] [ip-protocol] [from-port] [to-port] [cidr]

$ nova secgroup-add-rule global_http tcp 80 80 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| tcp         | 80        | 80      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+

$ nova secgroup-add-rule global_http tcp 443 443 0.0.0.0/0

$ nova secgroup-list-rules global_http
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| tcp         | 80        | 80      | 0.0.0.0/0 |              |
| tcp         | 443       | 443     | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+

  • To delete a security group

nova secgroup-delete [securitygroupID/name]

$ nova secgroup-delete global_http


  • To delete a security group rule

nova secgroup-delete-rule [securitygroup] [ip-protocol] [from-port] [to-port] [cidr]

$ nova secgroup-delete-rule global_http tcp 443 443 0.0.0.0/0

$ nova secgroup-list-rules global_http
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| tcp         | 80        | 80      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+


  •     To update a security group rule

There is no in-place rule update; delete the old rule and add the new one:

$ nova secgroup-delete-rule global_http tcp 80 80 0.0.0.0/0
$ nova secgroup-add-rule global_http tcp 443 80 0.0.0.0/0

$ nova secgroup-list-rules global_http
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| tcp         | 443       | 80      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+
  • To list the routers 

$neutron router-list

+--------------------------------------+-----------------+
| id                                   | name            |
+--------------------------------------+-----------------+
| 1e4g48d3-a9d0-3567-3f1c-29cd8b83345d | tuxfixer-router |
+--------------------------------------+-----------------+
  • To create a router 
$neutron router-create tuxfixer-router
Created a new router:
+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| admin_state_up        | True                                 |
| external_gateway_info |                                      |
| id                    | 1e4g48d3-a9d0-3567-3f1c-29cd8b83345d |
| name                  | tuxfixer-router                       |
| routes                |                                      |
| status                | ACTIVE                               |
| tenant_id             | dbe3cf30f46b446fcfe84b205459780d     |
+-----------------------+--------------------------------------+

  • To add an interface to a router 
$neutron router-interface-add tuxfixer-router tuxfixer-subnet
Added interface 445d79cb-3dcf-5f88-963c-aa054f7ce758 to router tuxfixer-router
  • To set the gateway for the router 
$neutron router-gateway-set tuxfixer-router ext-net
Set gateway for router tuxfixer-router
  • To view the ports of the newly created router 
$neutron router-port-list tuxfixer-router
+--------------------------------------+------+-------------------+------------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                          |
+--------------------------------------+------+-------------------+------------------------------------------------------------------------------------+
| 1e4g48d3-a9d0-3567-3f1c-29cd8b83345d |      | fc:16:4d:13:32:21 | {"subnet_id": "f6523637-7162-449d-b12c-e1f0eda6196d", "ip_address": "192.168.5.1"} |
+--------------------------------------+------+-------------------+------------------------------------------------------------------------------------+

  • To create the network 
$ neutron net-create tuxfixer-net
Created a new network:
+-----------------+--------------------------------------+
| Field           | Value                                |
+-----------------+--------------------------------------+
| admin_state_up  | True                                 |
| id              | 2c0dh763-3fd4-2f8c-743f-7h0j35cv6cde |
| name            | tuxfixer-net                          |
| router:external | False                                |
| shared          | False                                |
| status          | ACTIVE                               |
| subnets         |                                      |
| tenant_id       | dbe3cf30f46b446fcfe84b205459780d     |
+-----------------+--------------------------------------+

  • To create a subnet for the network
neutron subnet-create <NET-NAME> --name <SUBNET-NAME> --gateway <GATEWAY-IP> <SUBNET-CIDR>

$neutron subnet-create tuxfixer-net --name tuxfixer-subnet --gateway 192.168.5.1 192.168.5.0/24
Created a new subnet:
+-------------------+--------------------------------------------------+
| Field             | Value                                            |
+-------------------+--------------------------------------------------+
| allocation_pools  | {"start": "192.168.5.2", "end": "192.168.5.254"} |
| cidr              | 192.168.5.0/24                                   |
| dns_nameservers   |                                                  |
| enable_dhcp       | True                                             |
| gateway_ip        | 192.168.5.1                                      |
| host_routes       |                                                  |
| id                | ac05bc74-eade-4811-8e7b-8de021abe0c1             |
| ip_version        | 4                                                |
| ipv6_address_mode |                                                  |
| ipv6_ra_mode      |                                                  |
| name              | tuxfixer-subnet                                  |
| network_id        | 2c0dh763-3fd4-2f8c-743f-7h0j35cv6cde             |
| tenant_id         | dbe3cf30f46b446fcfe84b205459780d                 |
+-------------------+--------------------------------------------------+

Network quotas 

Quota-related details for resources like networks, subnets and ports are defined in the OpenStack Networking configuration file neutron.conf. If we need to remove any particular item from the quota, we have to remove it from quota_items in neutron.conf.

**********************************************************
[quotas]
# resource name(s) that are supported in quota features
quota_items = network,subnet,port
# number of networks allowed per tenant, and minus means unlimited
quota_network = 10
# number of subnets allowed per tenant, and minus means unlimited
quota_subnet = 10
# number of ports allowed per tenant, and minus means unlimited
quota_port = 50
# default driver to use for quota checks
quota_driver = neutron.quota.ConfDriver
***********************************************************
For L3 quotas, such as router quotas, we need to define them as below in neutron.conf

************************************************************
[quotas]
# number of routers allowed per tenant, and minus means unlimited
quota_router = 10
# number of floating IPs allowed per tenant, and minus means unlimited
quota_floatingip = 50
************************************************************
For security group quotas
*************************************************
[quotas]
# number of security groups per tenant, and minus means unlimited
quota_security_group = 10
# number of security rules allowed per tenant, and minus means unlimited
quota_security_group_rule = 100
***********************************************


  • To show the quota details per tenant 

$ neutron quota-list
+------------+---------+------+--------+--------+----------------------------------+
| floatingip | network | port | router | subnet | tenant_id                        |
+------------+---------+------+--------+--------+----------------------------------+
| 20         | 5       | 20   | 10     | 5      | 6f88036c45344d9999a1f971e4882723 |
| 25         | 10      | 30   | 10     | 10     | bff5c9455ee24231b5bc713c1b96d422 |
+------------+---------+------+--------+--------+----------------------------------+


  • To get the quota detail of a particular tenant id


$ neutron quota-show --tenant_id 6f88036c45344d9999a1f971e4882723
+------------+-------+
| Field      | Value |
+------------+-------+
| floatingip | 20    |
| network    | 5     |
| port       | 20    |
| router     | 10    |
| subnet     | 5     |
+------------+-------+
  • To update the quota of a particular tenant

$ neutron quota-update --tenant_id 6f88036c45344d9999a1f971e4882723 --network 5
+------------+-------+
| Field      | Value |
+------------+-------+
| floatingip | 50    |
| network    | 5     |
| port       | 50    |
| router     | 10    |
| subnet     | 10    |
+------------+-------+


  • To update multiple quota values at once

$ neutron quota-update --tenant_id 6f88036c45344d9999a1f971e4882723 --network 3 --subnet 3 --port 3 --floatingip 3 --router 3
+------------+-------+
| Field      | Value |
+------------+-------+
| floatingip | 3     |
| network    | 3     |
| port       | 3     |
| router     | 3     |
| subnet     | 3     |
+------------+-------+

Floating IP details


  • To list floating IPs for this tenant

$nova floating-ip-list

+--------------+--------------------------------------+----------+--------+
| Ip           | Instance Id                          | Fixed Ip | Pool   |
+--------------+--------------------------------------+----------+--------+
| 172.24.4.225 | 4a60ff6a-7a3c-49d7-9515-86ae501044c6 | 10.0.0.2 | public |
| 172.24.4.226 | None                                 | None     | public |
+--------------+--------------------------------------+----------+--------+


  • To list all floating IP pools

$ nova floating-ip-pool-list
+--------+
| name   |
+--------+
| public |
| test   |
+--------+
  • To allocate a new floating IP
$ nova floating-ip-create public
+--------------+-------------+----------+--------+
| IP           | Instance Id | Fixed IP | Pool   |
+--------------+-------------+----------+--------+
| 172.24.4.225 | None        | None     | public |
+--------------+-------------+----------+--------+

  • To de-allocate (release) a floating IP 

$nova floating-ip-delete 172.24.4.225

  • To attach a floating ip to an instance 
$nova floating-ip-associate VM1 172.24.4.225
 $nova list
+------------------+------+--------+------------+-------------+-------------------------------+
| ID               | Name | Status | Task State | Power State | Networks                      |
+------------------+------+--------+------------+-------------+-------------------------------+
| d5c854f9-d3e5... | VM1  | ACTIVE | -          | Running     | private=10.0.0.3, 172.24.4.225|
| 42290b01-0968... | VM2  | SHUTOFF| -          | Shutdown    | private=10.0.0.4              |
+------------------+------+--------+------------+-------------+-------------------------------+
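For scripting, the floating IP attached to an instance can be picked out of the `nova list` Networks column with awk. A sketch using the sample VM1 row above via a here-doc (pipe the real command in practice):

```shell
# Split the Networks field (field 7 when splitting on '|') on ", " and take
# the second address, which is the floating IP in the sample row.
floating_ip=$(awk -F'|' '$3 ~ /VM1/ {split($7, a, ", "); gsub(/ /, "", a[2]); print a[2]}' <<'EOF'
| d5c854f9-d3e5... | VM1  | ACTIVE | -          | Running     | private=10.0.0.3, 172.24.4.225|
EOF
)
echo "${floating_ip}"
```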

  • To detach a floating ip from an instance 
$ nova floating-ip-disassociate INSTANCE_NAME FLOATING_IP_ADDRESS

$nova floating-ip-disassociate VM1 172.24.4.225
$nova list
+------------------+------+--------+------------+-------------+-------------------------------+
| ID               | Name | Status | Task State | Power State | Networks                      |
+------------------+------+--------+------------+-------------+-------------------------------+
| d5c854f9-d3e5... | VM1  | ACTIVE | -          | Running     | private=10.0.0.3              |
| 42290b01-0968... | VM2  | SHUTOFF| -          | Shutdown    | private=10.0.0.4              |
+------------------+------+--------+------------+-------------+-------------------------------+

More commands will be explained in another session

Thank you for reading