Thursday, November 16, 2017

Configuring the block storage (cinder) in openstack - storage part


As we don't have a separate storage node in our OpenStack lab setup, I was forced to use the controller node as the storage node.
  • First make sure we have a raw disk available in the node 

root@CTRL:~# fdisk -l /dev/sdc

Disk /dev/sdc: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders, total 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/sdc doesn't contain a valid partition table 

  •  Create a physical volume in the disk

root@CTRL:~# pvcreate /dev/sdc
  Physical volume "/dev/sdc" successfully created
root@CTRL:~#
  • Create a new volume group using /dev/sdc; this will be used as the cinder volume group
root@CTRL:~# vgcreate cinder-volumes /dev/sdc
  Volume group "cinder-volumes" successfully created
root@CTRL:~# vgs cinder-volumes
  VG             #PV #LV #SN Attr   VSize  VFree
  cinder-volumes   1   0   0 wz--n- 10.00g 10.00g
root@CTRL:~#


  • Reconfigure LVM to scan only the cinder volume /dev/sdc, so we need to apply a filter there. Edit /etc/lvm/lvm.conf as below 


Always keep in mind: if your root file system is also on LVM, don't forget to add its device to the filter as well (see the example after the verification step below).

devices {
...
filter = [ "a/sdc/", "r/.*/"]
}


  • After modifying, we can verify as below 


root@CTRL:~# grep filter /etc/lvm/lvm.conf |grep -v "#"
    filter = [ "a/sdc/", "r/.*/"] 


  • Now install the storage node components on the controller node (generally this should be a separate node)

root@CTRL:~# apt-get install cinder-volume python-mysqldb
Reading package lists... Done
Building dependency tree
Reading state information... Done
python-mysqldb is already the newest version.
The following extra packages will be installed:
  alembic cinder-common ieee-data libconfig-general-perl libgmp10 libibverbs1
  libjs-jquery libjs-sphinxdoc libjs-underscore librabbitmq1 librdmacm1
  libsgutils2-2 libyaml-0-2 python-alembic python-amqp python-amqplib
  python-anyjson python-babel python-babel-localedata python-barbicanclient
  python-cinder python-concurrent.futures python-crypto python-decorator
  python-dns python-ecdsa python-eventlet python-formencode
  python-glanceclient python-greenlet python-httplib2 python-iso8601
  python-json-patch python-json-pointer python-jsonpatch python-jsonschema
  python-keystoneclient python-keystonemiddleware python-kombu
  python-librabbitmq python-lockfile python-lxml python-mako python-markupsafe
  python-migrate python-mock python-netaddr python-networkx python-novaclient
  python-openid python-oslo.config python-oslo.db python-oslo.i18n


  • Edit the cinder configuration file (/etc/cinder/cinder.conf) and add the cinder DB URL in the [database] section (a consolidated sketch of the finished file follows the restart step below) 

[database]
connection = mysql://cinder:Onm0bile@CTRL/cinder

  • Configure RabbitMQ 

[DEFAULT]
...
rpc_backend = rabbit
rabbit_host = CTRL
rabbit_password = Onm0bile

  •     Configure the identity services 

[DEFAULT]
auth_strategy = keystone
[keystone_authtoken]
auth_uri = http://CTRL:5000/v2.0
identity_uri = http://CTRL:35357
admin_tenant_name = service
admin_user = cinder
admin_password = Onm0bile

  • Update my_ip with the storage node IP (in this case, the controller node)


[DEFAULT]
.....
my_ip = 192.168.24.10
  • Configure the image service

[DEFAULT]
....
glance_host = CTRL

  • Enable verbose mode for troubleshooting purposes 


[DEFAULT]
...
verbose = True

  • Restart the iSCSI target (tgt) service and the cinder volume service 

root@CTRL:~# service tgt restart
tgt stop/waiting
tgt start/running, process 18809
root@CTRL:~# service cinder-volume restart
cinder-volume stop/waiting
cinder-volume start/running, process 14432
root@CTRL:~#
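
With all of the above edits in place, the relevant sections of /etc/cinder/cinder.conf on the storage node should look roughly like the sketch below. This is only a summary of the values used in this lab (the passwords, the hostname CTRL and the IP 192.168.24.10); replace them with your own.

[DEFAULT]
rpc_backend = rabbit
rabbit_host = CTRL
rabbit_password = Onm0bile
auth_strategy = keystone
my_ip = 192.168.24.10
glance_host = CTRL
verbose = True

[database]
connection = mysql://cinder:Onm0bile@CTRL/cinder

[keystone_authtoken]
auth_uri = http://CTRL:5000/v2.0
identity_uri = http://CTRL:35357
admin_tenant_name = service
admin_user = cinder
admin_password = Onm0bile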

Now we need to verify the cinder services


  • Source the admin.rc file to access the keystone commands 


root@CTRL:~# cat admin.rc
export OS_USERNAME=admin
export OS_PASSWORD=password
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://CTRL:35357/v2.0
root@CTRL:~# source admin.rc
root@CTRL:~#

  • Verify the cinder services 

root@CTRL:~# cinder service-list
+------------------+-----------+------+---------+-------+----------------------------+-----------------+
|      Binary      |    Host   | Zone |  Status | State |         Updated_at         | Disabled Reason |
+------------------+-----------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler | CTRL | nova | enabled |   up  | 2017-11-15T18:34:12.000000 |       None      |
|  cinder-volume   |  CTRL | nova | enabled |   up  | 2017-11-15T18:34:17.000000 |       None      |
+------------------+-----------+------+---------+-------+----------------------------+-----------------+
root@CTRL:~#

  • Now switch to the tuxfixer tenant credentials to create a test volume 

root@CTRL:~# cat tuxfixer.rc
export OS_USERNAME=tuxfixer
export OS_PASSWORD=tux123
export OS_TENANT_NAME=tuxfixer
export OS_AUTH_URL=http://CTRL:35357/v2.0
root@CTRL:~#
root@CTRL:~# source tuxfixer.rc

  • Create a 1 GB volume called tux-vol1

root@CTRL:~# cinder create --display-name tux-vol1 1
+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                false                 |
|      created_at     |      2017-11-15T18:36:16.155518      |
| display_description |                 None                 |
|     display_name    |              tux-vol1               |
|      encrypted      |                False                 |
|          id         | 252f87d2-e5b4-326c-889ec-6bbee259bc88 |
|       metadata      |                  {}                 |
|         size        |                  1                   |
|     snapshot_id     |                 None                 |
|     source_volid    |                 None                 |
|        status       |               creating               |
|     volume_type     |                 None                 |
+---------------------+--------------------------------------+
root@CTRL:~#

  • List the newly created volume

root@CTRL:~# cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| 252f87d2-e5b4-326c-889ec-6bbee259bc88| available |   tux-vol1   |  1   |     None    |  false   |             |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
root@CTRL:~#

  • Check the lvm details using lvs command 

root@CTRL:~# lvs
  LV                                          VG             Attr      LSize Pool Origin Data%  Move Log Copy%  Convert
  volume-252f87d2-e5b4-326c-889ec-6bbee259bc88 cinder-volumes -wi-a---- 1.00g

We can see that the new 1 GB cinder LVM volume is ready for use as block storage.
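
To actually consume the volume, it can be attached to a running instance with nova volume-attach. The sketch below assumes a hypothetical instance named tux-vm1; the volume ID is the one created above and /dev/vdb is only a suggested device name:

source tuxfixer.rc
nova volume-attach tux-vm1 252f87d2-e5b4-326c-889ec-6bbee259bc88 /dev/vdb

Inside the guest, the new disk should then appear, usually as /dev/vdb.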





Tuesday, November 7, 2017

Configuring the block storage (cinder) in openstack - controller part

Cinder is the block storage service in an OpenStack deployment. It is designed to consume storage either from local storage through LVM or from third-party storage devices, and to serve it to the compute node (Nova).

In our case we will use the default LVM back end as the storage device, which will be shared with the instances we created earlier. (As we don't have a separate storage node configured, we will use the controller node as the storage node.)

The cinder architecture

Cinder-API - This is a WSGI-based API that routes and authenticates requests to the block storage service. It implements the OpenStack Block Storage API, which is what the cinder client calls (it can also be consumed through the Nova EC2 compatibility API as an alternative); a sample client call is sketched after the component descriptions below.

Cinder-Scheduler - This schedules the requests and routes them to the appropriate volume service as per your configuration.

Cinder-Volume - This manages the back-end storage devices. The typically supported back ends are given below:

  • Ceph RADOS Block Device (RBD)
  • Coraid AoE driver configuration
  • Dell EqualLogic volume driver
  • EMC VMAX iSCSI and FC drivers
  • EMC VNX direct driver
  • EMC XtremIO OpenStack Block Storage driver guide
  • GlusterFS driver
  • HDS HNAS iSCSI and NFS driver
  • HDS HUS iSCSI driver
  • Hitachi storage volume driver
  • HP 3PAR Fibre Channel and iSCSI drivers
  • HP LeftHand/StoreVirtual driver
  • HP MSA Fibre Channel driver
  • Huawei storage driver
  • IBM GPFS volume driver
  • IBM Storwize family and SVC volume driver
  • IBM XIV and DS8000 volume driver
  • LVM
  • NetApp unified driver
  • Nexenta drivers
  • NFS driver
  • ProphetStor Fibre Channel and iSCSI drivers
  • Pure Storage volume driver
  • Sheepdog driver
  • SolidFire
  • VMware VMDK driver
  • Windows iSCSI volume driver
  • XenAPI Storage Manager volume driver
  • XenAPINFS
  • Zadara
  • Oracle ZFSSA iSCSI Driver
Cinder-Backup - Provides backup of cinder volumes to various targets.
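
To see the Cinder-API component from the client side, the cinder CLI can be run with the --debug flag, which prints the token request sent to keystone and the REST call made against cinder-api on port 8776 (a sketch only; the output is omitted here):

source tuxfixer.rc
cinder --debug list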

Cinder workflow

  • A volume is created through the cinder create command. This command creates an LV in the volume group (VG) "cinder-volumes".
  • The volume is attached to an instance through the nova volume-attach command. This command creates a unique iSCSI IQN that is exposed to the compute node.
  • The compute node, which runs the instance, now has an active iSCSI session and new local storage (usually a /dev/sdX disk); a way to inspect this is sketched after this list.
  • Libvirt uses that local storage as storage for the instance. The instance gets a new disk, usually a /dev/vdX disk.
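
As a quick way to observe the iSCSI part of this flow (not part of the original walkthrough, just standard tgt/open-iscsi commands), the target created for an attached volume can be listed on the storage node and the session checked on the compute node:

# on the storage node: show the targets, one IQN per attached volume
tgtadm --lld iscsi --mode target --op show
# on the compute node: show the active iSCSI sessions
iscsiadm -m session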

Coming to the configuration side, there are two parts: configuration of the controller side and configuration of the storage node side. But as I mentioned earlier, we have no separately configured storage node due to the limitations of my lab, so we will configure both on the controller node.

Configuring the controller node for cinder setup

  •  Login to the CTRL node as root
  • Create the database for the cinder service 
root@CTRL:~# mysql -u root -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 27
Server version: 5.5.44-0ubuntu0.14.04.1 (Ubuntu)

Copyright (c) 2000, 2015, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> CREATE DATABASE cinder;
Query OK, 1 row affected (0.00 sec)
  • Provide the proper access to the cinder database and set the password for the cinder DB (a quick connection check is sketched after the grants).
mysql> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'Onm0bile';
Query OK, 0 rows affected (0.00 sec)

mysql> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%'  IDENTIFIED BY 'Onm0bile';
Query OK, 0 rows affected (0.00 sec)

mysql> exit
Bye
root@CTRL:~#
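
As an optional sanity check (not part of the original steps), we can confirm that the grants work by connecting to the new database as the cinder user. An empty table list is expected at this point, since the schema is only created later by cinder-manage db sync:

mysql -u cinder -pOnm0bile cinder -e "SHOW TABLES;"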

  • Now source the admin.rc file to get access to the admin commands 
root@CTRL:~# cat admin.rc
export OS_USERNAME=admin
export OS_PASSWORD=password
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://CTRL:35357/v2.0
root@CTRL:~# source admin.rc
root@CTRL:~#
  • Create the service credentials for cinder using the keystone command. We need to create the cinder user
root@CTRL:~# keystone user-create --name cinder --pass Onm0bile
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|  email   |                                  |
| enabled  |               True               |
|    id    | c1791460385745f79015a4ee40f94db8 |
|   name   |              cinder              |
| username |              cinder              |
+----------+----------------------------------+
root@CTRL:~#

  • Add admin role to the cinder user 
root@CTRL:~# keystone user-role-add --user cinder --tenant service --role admin
root@CTRL:~#
  • Create the cinder service entities for both the cinder API v1 and v2 
(the Block Storage API currently goes up to version 3, but we will only use up to version 2)

root@CTRL:~# keystone service-create --name cinder --type volume --description "OpenStack Block Storage"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |     OpenStack Block Storage      |
|   enabled   |               True               |
|      id     | 6c91s86b3acb23d2b1294171c14fed68 |
|     name    |              cinder              |
|     type    |              volume              |
+-------------+----------------------------------+
root@CTRL:~#
root@CTRL:~# keystone service-create --name cinderv2 --type volumev2 --description "OpenStack Block Storage"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |     OpenStack Block Storage      |
|   enabled   |               True               |
|      id     | 414d7125e8e44314ce58beb8fc4ca781 |
|     name    |             cinderv2             |
|     type    |             volumev2             |
+-------------+----------------------------------+
root@CTRL:~#

  • Create the block storage API endpoints for version 1 and version 2
keystone endpoint-create --service-id $(keystone service-list | awk '/ volume / {print $2}') --publicurl http://CTRL:8776/v1/%\(tenant_id\)s --internalurl http://CTRL:8776/v1/%\(tenant_id\)s --adminurl http://CTRL:8776/v1/%\(tenant_id\)s --region regionOne


keystone endpoint-create --service-id $(keystone service-list | awk '/ volume / {print $2}') --publicurl http://CTRL:8776/v2/%\(tenant_id\)s --internalurl http://CTRL:8776/v2/%\(tenant_id\)s --adminurl http://CTRL:8776/v2/%\(tenant_id\)s --region regionOne


root@CTRL:~# keystone endpoint-create --service-id $(keystone service-list | awk '/ volume / {print $2}') --publicurl http://CTRL:8776/v1/%\(tenant_id\)s --internalurl http://CTRL:8776/v1/%\(tenant_id\)s --adminurl http://CTRL:8776/v1/%\(tenant_id\)s --region regionOne
+-------------+----------------------------------------+
|   Property  |                 Value                  |
+-------------+----------------------------------------+
|   adminurl  | http://CTRL:8776/v1/%(tenant_id)s |
|      id     |    6c91s86b3acb23d2b1294171c14fed68   |
| internalurl | http://CTRL:8776/v1/%(tenant_id)s |
|  publicurl  | http://CTRL:8776/v1/%(tenant_id)s |
|    region   |               regionOne                |
|  service_id |    7a90b86b3aab43d2b1194172a14fed79    |
+-------------+----------------------------------------+
root@CTRL:~#
root@CTRL:~# keystone endpoint-create --service-id $(keystone service-list | awk '/ volumev2 / {print $2}') --publicurl http://CTRL:8776/v2/%\(tenant_id\)s --internalurl http://CTRL:8776/v2/%\(tenant_id\)s --adminurl http://CTRL:8776/v2/%\(tenant_id\)s --region regionOne
+-------------+----------------------------------------+
|   Property  |                 Value                  |
+-------------+----------------------------------------+
|   adminurl  |   http://CTRL:8776/v2/%(tenant_id)s    |
|      id     |    414d7125e8e44314ce58beb8fc4ca781    |
| internalurl |   http://CTRL:8776/v2/%(tenant_id)s    |
|  publicurl  |   http://CTRL:8776/v2/%(tenant_id)s    |
|    region   |               regionOne                |
|  service_id |    716e7125e8e44414ad58deb9fc4ca682    |
+-------------+----------------------------------------+
root@CTRL:~#

  • Next we have to install the cinder components 
root@CTRL:~# apt-get install cinder-api cinder-scheduler python-cinderclient
Reading package lists... Done
Building dependency tree
Reading state information... Done
python-cinderclient is already the newest version.
python-cinderclient set to manually installed.
The following extra packages will be installed:
  cinder-common python-barbicanclient python-cinder python-networkx
  python-taskflow
Suggested packages:
  python-ceph python-hp3parclient python-scipy python-pydot
The following NEW packages will be installed:
  cinder-api cinder-common cinder-scheduler python-barbicanclient
  python-cinder python-networkx python-taskflow
0 upgraded, 7 newly installed, 0 to remove and 37 not upgraded.
Need to get 1,746 kB of archives.
After this operation, 14.0 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
  • Edit the /etc/cinder/cinder.conf file and configure as below 
Database section 

[database]
connection = mysql://cinder:Onm0bile@CTRL/cinder
RabbitMQ configuration 

[DEFAULT]
...
rpc_backend = rabbit
rabbit_host = CTRL
rabbit_password = Onm0bile
Update the auth_strategy in the [DEFAULT] section 

[DEFAULT]
auth_strategy = keystone
Update the keystone credentials 

[keystone_authtoken]
auth_uri = http://CTRL:5000/v2.0
identity_uri = http://CTRL:35357
admin_tenant_name = service
admin_user = cinder
admin_password = Onm0bile
Set the my_ip option to the management IP of the controller node 

[DEFAULT]
.....
my_ip = 192.168.24.10
Enable verbose logging 

[DEFAULT]
.....
verbose = True
Populate the cinder database 

root@CTRL:~# su -s /bin/sh -c "cinder-manage db sync" cinder
2017-11-07 04:37:00.143 9423 INFO migrate.versioning.api [-] 0 -> 1...
2017-11-07 04:37:00.311 9423 INFO migrate.versioning.api [-] done
2017-11-07 04:37:00.312 9423 INFO migrate.versioning.api [-] 1 -> 2...
2017-11-07 04:37:00.424 9423 INFO migrate.versioning.api [-] done
....output is omitted....
Restart the cinder services once the database update finishes 

root@CTRL:~# service cinder-scheduler restart
cinder-scheduler stop/waiting
cinder-scheduler start/running, process 9444
root@CTRL:~# service cinder-api restart
cinder-api stop/waiting
cinder-api start/running, process 9466
root@CTRL:~#

I will cover the storage part in the next post 


Thursday, November 2, 2017

Configuring the neutron in openstack

Neutron is the networking component of the OpenStack setup; it manages the inbound and outbound traffic between the instances and the external network. The benefit of neutron is that it acts as a network connectivity service and supports many L2 and L3 technologies. It is easy to manage, as it can be deployed in a centralised setup or as a distributed setup. The advanced features included in neutron are load balancing, VPN, firewall, etc.

The basic neutron process is given below 
  1. Boot VM start.
  2. Create a port and notify the DHCP of the new port.
  3. Create a new device (virtualization library – libvirt).
  4. Wire port (connect the VM to the new port).
  5. Complete boot.
The neutron server mainly contains three components 

  • REST API - This is an HTTP-based API service which defines the methods, URLs, media types, responses, etc. It exposes logical resources like subnets and ports (a sample API call is sketched after this list). 
  • Queue - This handles bidirectional communication between the agents and the neutron server. 
  • Plugin - This component communicates with the plugin agents (running on the compute and network nodes) to manage the vSwitch configuration, and it persists the network state in the database; the communication with the agents goes over the queue using the AMQP protocol.
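
As an illustration of the REST API component, the logical resources can be queried directly over HTTP against the neutron server (9696 is the default neutron API port). The token placeholder below would come from keystone, e.g. the id field printed by keystone token-get; this is only a sketch, not part of the setup steps:

keystone token-get
curl -s -H "X-Auth-Token: <TOKEN-ID>" http://CTRL:9696/v2.0/networks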
Basic architecture of a neutron server is below 

Detailed architecture of neutron setup is below

Steps to configure the neutron 

  • First we need to create an external network, also called the provider network, on the controller node
command format is below 

neutron net-create <NET-NAME> --provider:physical_network=<LABEL-PHYSICAL-INTERFACE> --provider:network_type=<flat or vlan> --shared --router:external=True


root@CTRL:~# neutron net-create ext-net --router:external --provider:physical_network external --provider:network_type flat
Created a new network:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | f26chf4c-5c46-2881-c0h0-0845918d6536 |
| name                      | ext-net                              |
| provider:network_type     | flat                                 |
| provider:physical_network | external                             |
| provider:segmentation_id  |                                      |
| router:external           | True                                 |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | c15d8b07h462481348c3f3c4e8d581c7    |
+---------------------------+--------------------------------------+
root@CTRL:~#

  • The next step is to assign the IP pool for the external network's router and interfaces to avoid IP conflicts. In our case I am assigning the IP pool from 192.168.24.10 to 192.168.24.30, and the default gateway is 192.168.24.2 
command format 


neutron subnet-create --name <SUBNET-NAME> <NET-NAME> <SUBNET-CIDR> --gateway <GATEWAY-IP> --allocation-pool start=<STARTING-IP>,end=<ENDING-IP> --dns-nameservers list=true <DNS-1 DNS-2>

root@CTRL:~# neutron subnet-create ext-net --name ext-subnet --allocation-pool start=192.168.24.10,end=192.168.24.30 --disable-dhcp --gateway 192.168.24.2 192.168.24.0/24
Created a new subnet:
+-------------------+--------------------------------------------------------+
| Field             | Value                                                  |
+-------------------+--------------------------------------------------------+
| allocation_pools  | {"start": "192.168.24.10", "end": "192.168.24.30"} |
| cidr              | 192.168.24.0/24                                       |
| dns_nameservers   |                                                        |
| enable_dhcp       | False                                                  |
| gateway_ip        | 192.168.24.2                                          |
| host_routes       |                                                        |
| id                | f26chf4c-5c46-2881-c0h0-0845918d6536                   |
| ip_version        | 4                                                      |
| ipv6_address_mode |                                                        |
| ipv6_ra_mode      |                                                        |
| name              | ext-subnet                                             |
| network_id        | 2d188736-5877-77df-bc8c-eb1964c4a74a                   |
| tenant_id         | c15d8b07h462481348c3f3c4e8d581c7                      |
+-------------------+--------------------------------------------------------+
root@CTRL:~# 

  • The next step is to create a tenant network. We created a tenant earlier called tuxfixer, so we have to source that tuxfixer.rc file 

root@CTRL:~# cat tuxfixer.rc
export OS_USERNAME=tuxfixer
export OS_PASSWORD=tux123
export OS_TENANT_NAME=tuxfixer
export OS_AUTH_URL=http://CTRL:35357/v2.0
root@CTRL:~#
root@CTRL:~# source tuxfixer.rc
command format

neutron net-create <NET-NAME>
neutron subnet-create --name <SUBNET-NAME> <NET-NAME> <SUBNET-CIDR>

root@CTRL:~# neutron net-create tuxfixer-net
Created a new network:
+-----------------+--------------------------------------+
| Field           | Value                                |
+-----------------+--------------------------------------+
| admin_state_up  | True                                 |
| id              | 2c0dh763-3fd4-2f8c-743f-7h0j35cv6cde |
| name            | tuxfixer-net                          |
| router:external | False                                |
| shared          | False                                |
| status          | ACTIVE                               |
| subnets         |                                      |
| tenant_id       | dbe3cf30f46b446fcfe84b205459780d     |
+-----------------+--------------------------------------+
Now create a subnet for the tenant tuxfixer

The tuxfixer tenant can use the IPs from 192.168.5.2 to 192.168.5.254

root@CTRL:~# neutron subnet-create tuxfixer-net --name tuxfixer-subnet --gateway 192.168.5.1 192.168.5.0/24
Created a new subnet:
+-------------------+--------------------------------------------------+
| Field             | Value                                            |
+-------------------+--------------------------------------------------+
| allocation_pools  | {"start": "192.168.5.2", "end": "192.168.5.254"} |
| cidr              | 192.168.5.0/24                                   |
| dns_nameservers   |                                                  |
| enable_dhcp       | True                                             |
| gateway_ip        | 192.168.5.1                                      |
| host_routes       |                                                  |
| id                | ac05bc74-eade-4811-8e7b-8de021abe0c1             |
| ip_version        | 4                                                |
| ipv6_address_mode |                                                  |
| ipv6_ra_mode      |                                                  |
| name              | tuxfixer-subnet                                   |
| network_id        | 2c0dh763-3fd4-2f8c-743f-7h0j35cv6cde            |
| tenant_id         | dbe3cf30f46b446fcfe84b205459780d                |
+-------------------+--------------------------------------------------+
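
Optionally, before wiring up the router, we can confirm that both the external and the tenant networks and their subnets are now known to neutron (output omitted here):

neutron net-list
neutron subnet-list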

  • We have to create a tenant router and add the internal and external interfaces to it.


command details

neutron router-create <ROUTER-NAME>
neutron router-interface-add <ROUTER-NAME> <SUBNET-NAME>
neutron router-gateway-set <ROUTER-NAME> <NET-NAME>

root@CTRL:~# neutron router-create tuxfixer-router
Created a new router:
+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| admin_state_up        | True                                 |
| external_gateway_info |                                      |
| id                    | 1e4g48d3-a9d0-3567-3f1c-29cd8b83345d |
| name                  | tuxfixer-router                       |
| routes                |                                      |
| status                | ACTIVE                               |
| tenant_id             | dbe3cf30f46b446fcfe84b205459780d     |
root@CTRL:~# neutron router-interface-add tuxfixer-router tuxfixer-subnet
Added interface 445d79cb-3dcf-5f88-963c-aa054f7ce758 to router tuxfixer-router.

root@CTRL:~# neutron router-gateway-set tuxfixer-router ext-net
Set gateway for router tuxfixer-router
Now we need to list the newly created router's ports. We have two subnets configured, where one is used for the tenant network and the other for the external network.

root@CTRL:~# neutron router-port-list tuxfixer-router
+--------------------------------------+------+-------------------+------------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                          |
+--------------------------------------+------+-------------------+------------------------------------------------------------------------------------+
| 1e4g48d3-a9d0-3567-3f1c-29cd8b83345d |      | fc:16:4d:13:32:21 | {"subnet_id": "f6523637-7162-449d-b12c-e1f0eda6196d", "ip_address": "192.168.5.1"} |
We can verify the setup by pinging the external and tenant IPs from the controller node 

First we need to check the network namespaces which were created for the tuxfixer network and router 

root@CTRL:/var/log/neutron# ip netns
qdhcp-49ff7852-07c4-30d2-82cb-e6f7daf673a4
qrouter-43681237-d673-5e1b-ca04-7e4672274992
Now ping the external IP using the command below 

root@CTRL:~# ip netns exec qrouter-43681237-d673-5e1b-ca04-7e4672274992 ping 192.168.24.30
PING 192.168.24.30 (192.168.24.30) 56(84) bytes of data.
64 bytes from 192.168.24.30: icmp_seq=1 ttl=64 time=0.165 ms
64 bytes from 192.168.24.30: icmp_seq=2 ttl=64 time=0.126 ms
64 bytes from 192.168.24.30: icmp_seq=3 ttl=64 time=0.082 ms
^C
Ping the tenant IP 

root@CTRL:~# ip netns exec qrouter-43681237-d673-5e1b-ca04-7e4672274992 ping 192.168.5.1
PING 192.168.5.1 (192.168.5.1) 56(84) bytes of data.
64 bytes from 192.168.5.1: icmp_seq=1 ttl=64 time=0.165 ms
64 bytes from 192.168.5.1: icmp_seq=2 ttl=64 time=0.126 ms
64 bytes from 192.168.5.1: icmp_seq=3 ttl=64 time=0.082 ms
^C
The basic neutron configuration is now complete, except for the security groups, which I will discuss separately