Monday, December 5, 2016

AWS: sample Linux instance creation and SSH configuration

In this session I am providing the steps to create a simple EC2 Linux instance and its PuTTY configuration.

1. Log in to https://aws.amazon.com and sign in using your credentials as below.

2. You will get a console as below, which contains the multiple services available in AWS.

3. Click on EC2 and you will get the current status of your instances. In this case no instances are running.

4. Click Launch Instance and select Red Hat (select only free tier eligible options to avoid charges).

5. In the Choose Instance Type tab, select the one which is free tier eligible.

6. Configure the instance details as below:

Number of instances: 1
Purchasing option: don't enable Request Spot Instances
Network/subnet/auto-assign IP: all these details will be filled in automatically
Shutdown behaviour: select Stop, and tick Enable termination protection

7. Add storage details

(for the free tier, the default storage size available is 30GB)

8. We can tag the instance as below; here I am giving the instance the name "webserver".

9. Then we have an option to configure the security groups (i.e., the firewall rules for the specific ports that need to be opened).

10. Click on Review and Launch (all the configuration options mentioned above are available in this tab as well, and it allows you to edit them before launching the instance). There will also be an option to configure a key pair for security purposes, which consists of a public and a private key. Mention any key name; in this example I have used unixchips1. This key pair will be saved to your local system as a .pem file for future use.


11. You will get the below screen while launching the instance.


12. Finally our instance, named webserver, is ready.



13. Once you click the Connect option you will get the below prompt, which describes the connection methods. There are 2 methods to access a Linux instance: the Java-enabled SSH method and the standard PuTTY method. Here I am using the standard PuTTY connection.

14. You also need to download PuTTYgen (http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html) before connecting using PuTTY, to configure the public & private key for the instance.



Launch PuTTYgen and click on the "Load" option, select All Files from the drop-down list, and locate the .pem file (unixchips1.pem) which we saved earlier. If the correct file is loaded you will get a message as below.

15. There is an option to save the private key in .ppk format; save it to your desktop. Here I have saved it as unixchips1.ppk on my desktop.


16. Launch putty.exe and, in the Host Name field, enter the details as below in the form ec2-user@<public DNS name>, where ec2-user is the default username created with each instance and the second part is the public DNS name of the server.




17. Go to the Connection tab, expand SSH, and select Auth. There is an option to browse and select the private key (.ppk) file which we saved earlier; then click Open.



It will connect to the instance using the SSH client.
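
If you are connecting from a Linux or macOS machine instead of Windows, you can skip PuTTY and use the stock OpenSSH client with the .pem file directly (a minimal sketch; replace the placeholder with your instance's public DNS name):

chmod 400 unixchips1.pem
ssh -i unixchips1.pem ec2-user@<public-DNS-name>

The chmod is needed because ssh refuses to use a private key file with open permissions.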

Thursday, March 31, 2016

XFS file system

XFS is the default file system in RHEL 7. It supports file systems up to 500TB in size with individual files up to 50TB, whereas ext4 in RHEL 6 was supported only up to 16TB.

XFS brings the following benefits

  • ability to manage up to 500TB file systems with files up to 50TB in size,
  • best performance for most workloads (especially with high-speed storage and a large number of cores),
  • less CPU usage than most other file systems,
  • the most robust at large scale (has been run at hundred-plus TB sizes for many years),
  • the most common file system in multiple key upstream communities: the most common base for Ceph, Gluster and OpenStack more broadly,
  • pioneered most of the performance techniques now in ext4 (like delayed allocation, where block allocation is deferred until the data is written out to disk),
  • no file system check at boot time,
  • CRC checksum on all metadata blocks.
One more important thing: an XFS file system cannot be shrunk; it can only be extended.

XFS basic management 

In our case we have a 2GB iSCSI disk in the server, which needs to be configured with XFS on top of LVM.

1. Check the disk 
[root@redhat7-test ~]# fdisk -l

...........................................................output is omitted .............

Disk /dev/sdb: 2153 MB, 2153398272 bytes, 4205856 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

2. Partition the disk as usual (follow the same procedure as for a normal disk in Linux)

[root@redhat7-test ~]# fdisk  /dev/sdb
Welcome to fdisk (util-linux 2.23.2).
...................................................................................

[root@redhat7-test ~]# fdisk -l

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048     4205855     2101904   8e  Linux LVM

3. Create the physical volume 

[root@redhat7-test ~]# pvcreate /dev/sdb1
  Physical volume "/dev/sdb1" successfully created

4. Extend the volume group

[root@redhat7-test ~]# vgextend rhel /dev/sdb1
  Volume group "rhel" successfully extended

5. Create the logical volume

[root@redhat7-test ~]# lvcreate -L 1G -n xfslv rhel
  Logical volume "xfslv" created.

6. Now we have to create the XFS file system using the below command

[root@redhat7-test ~]# mkfs.xfs /dev/rhel/xfslv
meta-data=/dev/rhel/xfslv        isize=256    agcount=4, agsize=65536 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=262144, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

7. Mount the xfs file system 

[root@redhat7-test ~]# mount /dev/rhel/xfslv /xfs/

[root@redhat7-test ~]# df -h
Filesystem              Size  Used Avail Use% Mounted on
/dev/mapper/rhel-root    14G  802M   14G   6% /
devtmpfs                488M     0  488M   0% /dev
tmpfs                   497M     0  497M   0% /dev/shm
tmpfs                   497M  6.6M  491M   2% /run
tmpfs                   497M     0  497M   0% /sys/fs/cgroup
/dev/sda1               4.7G  125M  4.6G   3% /boot
/dev/mapper/rhel-xfslv 1014M   33M  982M   4% /xfs
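
To make the mount persistent across reboots, an entry can be added to /etc/fstab. A minimal sketch, assuming the same device and mount point as above:

[root@redhat7-test ~]# echo '/dev/rhel/xfslv /xfs xfs defaults 0 0' >> /etc/fstab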

8. To grow the XFS file system

[root@redhat7-test ~]# lvextend --size +50M /dev/rhel/xfslv
  Rounding size to boundary between physical extents: 52.00 MiB
  Size of logical volume rhel/xfslv changed from 1.00 GiB (256 extents) to 1.05 GiB (269 extents).
  Logical volume xfslv successfully resized


[root@redhat7-test ~]# xfs_growfs /xfs/
meta-data=/dev/mapper/rhel-xfslv isize=256    agcount=4, agsize=65536 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=262144, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 262144 to 275456
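
As a side note, xfs_growfs takes the mount point rather than the device, and the two steps above can be combined: lvextend's -r (--resizefs) option grows the file system along with the logical volume. A sketch, assuming the same LV as above:

[root@redhat7-test ~]# lvextend -r --size +50M /dev/rhel/xfslv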

XFS Advanced Management 

Repairing XFS file system 

Suppose we observe some issue in the file system and it needs to be repaired; we can use the below command (the file system should be unmounted before running xfs_repair).

[root@redhat7-test ~]# xfs_repair /dev/rhel/xfslv
Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
Phase 5 - rebuild AG headers and trees...
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...
done

The other option is called "force log zeroing", which should be used only as a last resort if nothing else works. There is no guarantee with this option and there is a chance of data corruption.
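
A minimal sketch of that last resort (xfs_repair's -L flag zeroes the log, so recent metadata updates may be lost):

[root@redhat7-test ~]# umount /xfs/
[root@redhat7-test ~]# xfs_repair -L /dev/rhel/xfslv

Separately, a label can be set on the file system with xfs_admin. Before setting the label, don't forget to unmount the file system: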

[root@redhat7-test ~]# umount /xfs/
[root@redhat7-test ~]# xfs_admin -L "testlabel" /dev/mapper/rhel-xfslv
writing all SBs
new label = "testlabel"

To read the label use below command

[root@redhat7-test ~]# xfs_admin -l /dev/mapper/rhel-xfslv
label = "testlabel"


XFS Backup Management

We can take a backup of the XFS file system using the xfsdump command. This is not available in RHEL by default and has to be installed manually.
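
On RHEL 7 the tool is provided by the xfsdump package, which can be installed with yum (assuming the system has access to a package repository):

[root@redhat7-test ~]# yum install -y xfsdump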

[root@redhat7-test ~]# xfsdump -F -f /root/dump.xfs /xfs
xfsdump: using file dump (drive_simple) strategy
xfsdump: version 3.1.4 (dump format 3.0)
xfsdump: WARNING: no session label specified
xfsdump: level 0 dump of redhat7-test:/xfs
xfsdump: dump date: Fri Apr  1 02:27:51 2016
xfsdump: session id: 204db9c4-c7b6-456b-88c4-a428509c340b
xfsdump: session label: ""
xfsdump: ino map phase 1: constructing initial dump list
xfsdump: ino map phase 2: skipping (no pruning necessary)
xfsdump: ino map phase 3: skipping (only one dump stream)
xfsdump: ino map construction complete
xfsdump: estimated dump size: 20800 bytes
xfsdump: /var/lib/xfsdump/inventory created
xfsdump: WARNING: no media label specified
xfsdump: creating dump session media file 0 (media 0, file 0)
xfsdump: dumping ino map
xfsdump: dumping directories
xfsdump: dumping non-directory files
xfsdump: ending media file
xfsdump: media file size 21016 bytes
xfsdump: dump size (non-dir files) : 0 bytes
xfsdump: dump complete: 0 seconds elapsed
xfsdump: Dump Summary:
xfsdump:   stream 0 /root/dump.xfs OK (success)
xfsdump: Dump Status: SUCCESS
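
The two warnings above appear because no session or media label was given. The -L and -M options supply them (a sketch with arbitrary label names):

[root@redhat7-test ~]# xfsdump -L "xfsbackup" -M "media0" -f /root/dump.xfs /xfs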


We can see the dump file inside the /root directory 

[root@redhat7-test ~]# ls -lrt
total 28
-rw-------. 1 root root  1428 Jan 20  2015 anaconda-ks.cfg
drwxr-xr-x. 2 root root    20 Jul 14  2015 python
-rw-r--r--. 1 root root 21016 Apr  1 02:27 dump.xfs


Below is the command to restore from the dump.xfs file

[root@redhat7-test ~]# xfsrestore -f /root/dump.xfs /xfs/
xfsrestore: using file dump (drive_simple) strategy
xfsrestore: version 3.1.4 (dump format 3.0) - type ^C for status and control
xfsrestore: searching media for dump
xfsrestore: examining media file 0
xfsrestore: dump description:
xfsrestore: hostname: redhat7-test
xfsrestore: mount point: /xfs
xfsrestore: volume: /dev/mapper/rhel-xfslv
xfsrestore: session time: Fri Apr  1 02:27:51 2016
xfsrestore: level: 0
xfsrestore: session label: ""
xfsrestore: media label: ""
xfsrestore: file system id: af5e3fab-bab6-416a-8d3e-02c64fdba19a
xfsrestore: session id: 204db9c4-c7b6-456b-88c4-a428509c340b
xfsrestore: media id: 3385959f-0ce5-4bd0-a16d-d883460556d0
xfsrestore: using online session inventory
xfsrestore: searching media for directory dump
xfsrestore: reading directories
xfsrestore: 1 directories and 0 entries processed
xfsrestore: directory post-processing
xfsrestore: restore complete: 0 seconds elapsed
xfsrestore: Restore Summary:
xfsrestore:   stream 0 /root/dump.xfs OK (success)
xfsrestore: Restore Status: SUCCESS


So these are the details of the XFS file system.

Thank you for reading this blog, and please post your valuable comments below.

Saturday, March 19, 2016

VCS ( Veritas Cluster Server)

Veritas Cluster Server is a high availability cluster solution for open platforms, including Unix and Microsoft Windows. It can be used as a high availability cluster or a high performance cluster depending on the requirement: a high availability cluster (HAC) improves application availability by failing applications over, or switching them over, within a group of systems, while a high performance cluster (HPC) improves application performance by running applications on multiple systems simultaneously.

VCS consists of multiple systems connected to shared storage in different formations. VCS monitors and controls applications (resources) running on the systems and restarts them, switching to different nodes if needed, to ensure minimum or zero downtime.

I am mentioning different VCS terms which will be used throughout this blog for your understanding.

Resources - Resources include hardware and software entities such as IP addresses, disks, scripts, network interfaces etc. Controlling these resources means starting, stopping, and monitoring them as per the requirement. Resources are governed by resource dependencies, which define the specific order in which resources are brought online or offline.

  Resources have 3 categories:
on-off - These resources are brought online or taken offline within the cluster as per the requirement. Example: scripts.
on-only - These resources are always on and VCS cannot stop them. Example: NFS file systems.
persistent - Persistent resources cannot be brought online or taken offline by VCS. Example: a network interface.

Service groups - Service groups are logical groups of resources and their dependencies, for better management. Each node in VCS might host multiple service groups, which can be monitored separately. If any resource in a service group fails, VCS will restart the service group on the same node or move it to another node.

There are 3 types of service groups:

Failover service group - runs on one node at a time

Parallel service group - runs on multiple nodes at a time

Hybrid service group - a combination of failover and parallel: within a system zone it acts as failover, and across zones it acts as parallel.

There is also one more service group, called the cluster service group, which contains the main cluster resources like the cluster manager, notification, wide area connectors (WAC) etc. This service group can switch to any node, and it is the first service group brought online.

High Availability Daemon (HAD)

HAD is the main daemon running on each VCS system and is responsible for running the cluster according to the cluster configuration. It distributes information when nodes join or leave the cluster, responds to operator input, and takes corrective action when something fails. This daemon is generally known as the VCS engine. HAD uses agents to learn the status of resources, and the daemon running on one system replicates the details to all other nodes in the VCS system.

Low Latency Transport (LLT)

LLT provides fast, kernel-to-kernel communication between the cluster nodes. LLT has two functions:

Traffic distribution - LLT load balances the traffic between the nodes, and it supports up to eight links. If one link goes down, communication shifts automatically to another link.

Heartbeat - LLT is responsible for sending and receiving heartbeat traffic between the nodes to verify system health. This heartbeat is analysed by GAB to identify system status.

Main configuration files - /etc/llthosts & /etc/llttab

1. /etc/llthosts - This file contains host details with their respective node IDs

ex: bash-3.00# cat /etc/llthosts
0 solaris-test2
1 solaris-test3


2. /etc/llttab - This file contains the LLT network links for each node in the system

ex:
bash-3.00# cat /etc/llttab
set-node solaris-test2
set-cluster 100
link e1000g0 /dev/e1000g:0 - ether - -
link e1000g1 /dev/e1000g:1 - ether - -


Group Membership Services/Atomic Broadcast (GAB)

GAB provides a global message order to maintain a synchronized system state. It also monitors disk communications, which are required for the VCS heartbeat utility. The main functions of GAB are to maintain cluster membership (by receiving input on the status of the servers through LLT) and to provide reliable cluster communication.

Main configuration file /etc/gabtab

This file contains the information needed by the GAB driver and is accessed by the gabconfig utility
ex:
bash-3.00# cat /etc/gabtab
/sbin/gabconfig -c -n2


where 2 is the number of nodes in the cluster system (the cluster seeds once this many nodes are communicating)

Configuring disk heartbeat 

This is similar to the qdisk configuration in a RHEL cluster. We require at least 64KB of disk space for this configuration.

The below commands initialize the disk regions (-s: start block, -S: signature)

#/sbin/gabdiskconf -i /dev/dsk/c1t2d0s2 -s 16 -S 1123

#/sbin/gabdiskconf -i /dev/dsk/c1t2d0s2 -s 144 -S 1124

Adding the disk heartbeat configuration

#/sbin/gabdiskhb -a /dev/dsk/c1t2d0s2 -s 16 -p a -S 1123

#/sbin/gabdiskhb -a /dev/dsk/c1t2d0s2 -s 144 -p h -S 1124

Configure the disks to use the driver and initialize GAB

#/sbin/gabconfig -c -n2

LLT commands

The lltstat -n command will show the LLT link status on each node

in server1-
******************************************************
bash-3.00# /sbin/lltstat -n
LLT node information:
    Node                 State    Links
   * 0 solaris-test2     OPEN        2
     1 solaris-test3     OPEN        2

**************************************************************
in server2
*****************************************************
bash-3.00# /sbin/lltstat -n
LLT node information:
    Node                 State    Links
     0 solaris-test2     OPEN        2
   * 1 solaris-test3     OPEN        2

***********************************************************
The lltstat -nvv | more command will show the output in verbose format
***************************************************
bash-3.00# /sbin/lltstat -nvv
LLT node information:
    Node                 State    Link  Status  Address
   * 0 solaris-test2     OPEN
                                  e1000g0   UP      08:00:27:F2:FD:1B
                                  e1000g1   UP      08:00:27:A4:52:AF
     1 solaris-test3     OPEN
                                  e1000g0   UP      08:00:27:19:48:CB
                                  e1000g1   UP      08:00:27:76:37:96
     2                   CONNWAIT
                                  e1000g0   DOWN
                                  e1000g1   DOWN
     3                   CONNWAIT
                                  e1000g0   DOWN
                                  e1000g1   DOWN

***************************************************
To start & stop LLT use the below commands
***************************************************
#lltconfig -c

#lltconfig -U  (keep in mind that GAB has to be stopped before running this command)
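
Putting the ordering together, a sketch of the full stop sequence (gabconfig -U unconfigures GAB, and it must run before LLT is unconfigured):

#/sbin/gabconfig -U
#/sbin/lltconfig -U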

GAB commands 

1. Define the group membership and atomic broadcast (GAB) configuration

# more /etc/gabtab
/sbin/gabconfig -c -n2

2. start GAB

# sh /etc/gabtab
starting GAB done.

3. Display the GAB details (port a is the GAB membership port and port h is the VCS engine, HAD)

bash-3.00# /sbin/gabconfig -a
GAB Port Memberships
===============================================================
Port a gen    f2503 membership 01
Port h gen    f2505 membership 01


VCS configuration 

Configuring VCS means telling the VCS engine the name and definition of the cluster, the service groups, and the resource dependencies. The VCS configuration files are located in /etc/VRTSvcs/conf/config and are named main.cf and types.cf. The main.cf file defines the entire cluster, while types.cf defines the resource types. In VCS, the first system to come online reads the configuration file and keeps the entire configuration in memory; systems that come online later derive the information from it.

sample main.cf file

********************************************************************************
bash-3.00# cat /etc/VRTSvcs/conf/config/main.cf
include "OracleASMTypes.cf"
include "types.cf"
include "Db2udbTypes.cf"
include "OracleTypes.cf"
include "SybaseTypes.cf"

cluster unixchips (
        UserNames = { admin = dqrJqlQnrMrrPzrLqo }
        Administrators = { admin }
        )

system solaris-test2 (
        )

system solaris-test3 (
        )

group unixchipssg (
        SystemList = { solaris-test2 = 0, solaris-test3 = 1 }
        AutoStartList = { solaris-test2 }
        )

        DiskGroup unixchipsdg (
                Critical = 0
                DiskGroup = unixchipsdg
                )

        Mount unixchipsmount (
                Critical = 0
                MountPoint = "/vcvol1/"
                BlockDevice = "/dev/vx/dsk/unixchipsdg/vcvol1"
                FSType = vxfs
                FsckOpt = "-y"
                )

        Volume unixchipsvol (
                Critical = 0
                Volume = vcvol1
                DiskGroup = unixchipsdg
                )

        unixchipsmount requires unixchipsvol
        unixchipsvol requires unixchipsdg


        // resource dependency tree
        //
        //      group unixchipssg
        //      {
        //      Mount unixchipsmount
        //          {
        //          Volume unixchipsvol
        //              {
        //              DiskGroup unixchipsdg
        //              }
        //          }
        //      }

*********************************************************************************
types.cf file
****************

bash-3.00# cat /etc/VRTSvcs/conf/config/types.cf
type AlternateIO (
        static str AgentFile = "bin/Script51Agent"
        static str ArgList[] = { StorageSG, NetworkSG }
        str StorageSG{}
        str NetworkSG{}
)

type Apache (
        static boolean IntentionalOffline = 0
        static keylist SupportedActions = { "checkconffile.vfd" }
        static str ArgList[] = { ResLogLevel, State, IState, httpdDir, SharedObjDir, EnvFile, PidFile, HostName, Port, User, SecondLevelMonitor, SecondLevelTimeout, ConfigFile, EnableSSL, DirectiveAfter, DirectiveBefore }
        static int ContainerOpts{} = { RunInContainer=1, PassCInfo=0 }
        str ResLogLevel = INFO
        str httpdDir
        str SharedObjDir
        str EnvFile
        str PidFile
        str HostName
        int Port = 80
        str User
        boolean SecondLevelMonitor = 0
        int SecondLevelTimeout = 30
        str ConfigFile
        boolean EnableSSL = 0
        str DirectiveAfter{}
        str DirectiveBefore{}
)
................. output is omitted 
*********************************************************************************

The setup



As per the above architecture diagram, we have 2 Solaris servers and Openfiler storage for the sample setup. I am not covering the installation part here; that will be explained in a separate post soon. The sample service will be a VxFS mount point configured as a service in the VCS setup.

1. Once the HA installation is completed, we get the status showing both nodes in running mode, as given below

bash-3.00# hastatus -sum

-- SYSTEM STATE
-- System               State                Frozen

A  solaris-test2        RUNNING              0
A  solaris-test3        RUNNING              0

Now we need to create the disk group and the VxFS file system.


1. First check the available disks on the server using the below command (the output is identical on both nodes)

bash-3.00# echo |format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c0d0 <DEFAULT cyl 2085 alt 2 hd 255 sec 63>
          /pci@0,0/pci-ide@1,1/ide@0/cmdk@0,0
       1. c0d1 <VBOX HAR-34e30776-506a21a-0001-1.01GB>
          /pci@0,0/pci-ide@1,1/ide@0/cmdk@1,0
       2. c1d1 <VBOX HAR-b00f936a-669c3f0-0001-1.01GB>
          /pci@0,0/pci-ide@1,1/ide@1/cmdk@1,0
       3. c2t0d0 <VBOX-HARDDISK-1.0-1.17GB>
          /pci@0,0/pci1000,8000@14/sd@0,0
       4. c2t1d0 <VBOX-HARDDISK-1.0-1.06GB>
          /pci@0,0/pci1000,8000@14/sd@1,0
       5. c3t3d0 <OPNFILER-VIRTUAL-DISK-0 cyl 44598 alt 2 hd 16 sec 9>
          /iscsi/disk@0000iqn.2006-01.com.openfiler%3Atsn.7d10272607c20001,0
       6. c3t4d0 <OPNFILER-VIRTUAL-DISK-0 cyl 63258 alt 2 hd 16 sec 9>
          /iscsi/disk@0000iqn.2006-01.com.openfiler%3Atsn.0a610808dede0001,0


bash-3.00# vxdisk list
DEVICE       TYPE            DISK         GROUP        STATUS
c0d0s2       auto:none       -            -            online invalid
c0d1         auto:ZFS        -            -            ZFS
c1t1d0s2     auto            -            -            error
c2t0d0       auto:ZFS        -            -            ZFS
c2t1d0       auto:ZFS        -            -            ZFS
disk_0       auto:cdsdisk    iscsi1       ----------------  online invalid 
disk_1       auto:cdsdisk    iscsi2       ----------------  online invalid 

As per the above output we can see 2 iSCSI disks from the storage, which are shown as online but invalid. We will configure these disks for the disk group.

2. Bring open filer disks under vxvm

bash-3.00 # /etc/vx/bin/vxdisksetup -i disk_0
bash-3.00# /etc/vx/bin/vxdisksetup -i disk_1

Now we can see both disks are in online status

3. Check the disk status again

bash-3.00# vxdisk list
DEVICE       TYPE            DISK         GROUP        STATUS
c0d0s2       auto:none       -            -            online invalid
c0d1         auto:ZFS        -            -            ZFS
c1t1d0s2     auto            -            -            error
c2t0d0       auto:ZFS        -            -            ZFS
c2t1d0       auto:ZFS        -            -            ZFS
disk_0       auto:cdsdisk    iscsi1       ----------------  online 
disk_1       auto:cdsdisk    iscsi2       ----------------  online

4. Now we have to create the disk group with these online disks

bash-3.00#vxdg init unixchipsdg iscsi1=disk_0 iscsi2=disk_1

We can see the disk group (unixchipsdg) has been created with these iSCSI disks

bash-3.00# vxdisk list
DEVICE       TYPE            DISK         GROUP        STATUS
c0d0s2       auto:none       -            -            online invalid
c0d1         auto:ZFS        -            -            ZFS
c1t1d0s2     auto            -            -            error
c2t0d0       auto:ZFS        -            -            ZFS
c2t1d0       auto:ZFS        -            -            ZFS
disk_0       auto:cdsdisk    iscsi1              unixchipsdg  online 
disk_1       auto:cdsdisk    iscsi2              unixchipsdg   online


5. Next we create the volume in this disk group

bash-3.00#vxassist -g unixchipsdg make vcvol1 2g

6. Format the volume to create the file system

bash-3.00#mkfs -F vxfs /dev/vx/rdsk/unixchipsdg/vcvol1

version 7 layout 6291456 sectors, 
3145728 blocks of size 1024, log size 16384 blocks 
largefiles supported

7. Now, after creating the mount point, you can mount the file system for confirmation

bash-3.00#mkdir /vcvol1

bash-3.00#mount -F vxfs /dev/vx/dsk/unixchipsdg/vcvol1 /vcvol1

bash-3.00#df -h /vcvol1
Filesystem                      size  used  avail  capacity  Mounted on
/dev/vx/dsk/unixchipsdg/vcvol1  2.0G  18M   1.8G   1%        /vcvol1

8. The next step is to create the service group and its resources. The order of creation will be: service group, disk group, volume, mount.

Creating the servicegroup
**********************
# haconf -makerw  (this puts the configuration file, i.e. main.cf, into read-write mode)
#hagrp -add unixchipssg
# hagrp -modify unixchipssg SystemList solaris-test2 0 solaris-test3 1
# hagrp -modify unixchipssg AutoStartList solaris-test2
# hagrp -display unixchipssg
#Group       Attribute             System        Value
unixchipssg  AdministratorGroups   global
unixchipssg  Administrators        global
unixchipssg  Authority             global        0
unixchipssg  AutoFailOver          global        1
unixchipssg  AutoRestart           global        1
unixchipssg  AutoStart             global        1
unixchipssg  AutoStartIfPartial    global        1
unixchipssg  AutoStartList         global        solaris-test2
unixchipssg  AutoStartPolicy       global        Order
unixchipssg  ClusterFailOverPolicy global        Manual
unixchipssg  ClusterList           global
unixchipssg  ContainerInfo         global
unixchipssg  DisableFaultMessages  global        0
unixchipssg  Evacuate              global        1
unixchipssg  ExtMonApp             global
unixchipssg  ExtMonArgs            global
unixchipssg  FailOverPolicy        global        Priority
unixchipssg  FaultPropagation      global        1
unixchipssg  Frozen                global        0
unixchipssg  GroupOwner            global

.........output will be omitted 

#haconf -dump ( to update the configuration in main.cf file)
#view /etc/VRTSvcs/conf/config/main.cf

# cat /etc/VRTSvcs/conf/config/main.cf
include "OracleASMTypes.cf"
include "types.cf"
include "Db2udbTypes.cf"
include "OracleTypes.cf"
include "SybaseTypes.cf"

cluster unixchips (
        UserNames = { admin = dqrJqlQnrMrrPzrLqo }
        Administrators = { admin }
        )

system solaris-test2 (
        )

system solaris-test3 (
        )

group unixchipssg (
        SystemList = { solaris-test2 = 0, solaris-test3 = 1 }
        AutoStartList = { solaris-test2 }
        )

Adding the Diskgroup
********************
# hares -add unixchipsdg DiskGroup unixchipssg
# hares -modify unixchipsdg Critical 0
# hares -modify unixchipsdg DiskGroup unixchipsdg
# hares -modify unixchipsdg  Enabled 1
# hares -online unixchipsdg -sys solaris-test2
# hares -state unixchipsdg
#Resource    Attribute             System        Value
unixchipsdg  State                 solaris-test2 ONLINE
unixchipsdg  State                 solaris-test3 OFFLINE

#haconf -dump

Adding the volume resource
***********************
# hares -add unixchipsvol Volume unixchipssg
# hares -modify unixchipsvol Critical 0
# hares -modify unixchipsvol Volume vcvol1  (this is the VxVM volume we created earlier)
# hares -modify unixchipsvol DiskGroup unixchipsdg
# hares -modify unixchipsvol Enabled 1
# hares -display unixchipsvol (display the volume status with attributes) 
#Resource    Attribute             System        Value
unixchipsvol Group                 global        unixchipssg
unixchipsvol Type                  global        Volume
unixchipsvol AutoStart             global        1
unixchipsvol Critical              global        0
unixchipsvol Enabled               global        1
unixchipsvol LastOnline            global        solaris-test3
unixchipsvol MonitorOnly           global        0
unixchipsvol ResourceOwner         global
unixchipsvol TriggerEvent          global        0
unixchipsvol ArgListValues         solaris-test2 Volume 1       vcvol1  DiskGroup       1       unixchipsdg
unixchipsvol ArgListValues         solaris-test3 Volume 1       vcvol1  DiskGroup       1       unixchipsdg
unixchipsvol ConfidenceLevel       solaris-test2 0
unixchipsvol ConfidenceLevel       solaris-test3 100
unixchipsvol ConfidenceMsg         solaris-test2
unixchipsvol ConfidenceMsg         solaris-test3
unixchipsvol Flags                 solaris-test2
unixchipsvol Flags                 solaris-test3
unixchipsvol IState                solaris-test2 not waiting
unixchipsvol IState                solaris-test3 not waiting
unixchipsvol MonitorMethod         solaris-test2 Traditional
unixchipsvol MonitorMethod         solaris-test3 Traditional

................................ output is omitted

# hares -online unixchipsvol -sys solaris-test2
# hares -state unixchipsvol
#Resource    Attribute             System        Value
unixchipsvol State                 solaris-test2 ONLINE
unixchipsvol State                 solaris-test3 OFFLINE
#haconf -dump 

Adding the mount point resource
*******************************
# hares -add unixchipsmount Mount unixchipssg
# hares -modify unixchipsmount Critical 0
# hares -modify unixchipsmount MountPoint /vcvol1
# hares -modify unixchipsmount BlockDevice /dev/vx/dsk/unixchipsdg/vcvol1
# hares -modify unixchipsmount FSType vxfs
# hares -modify unixchipsmount FsckOpt %-y  (the % escapes the leading dash)
# hares -modify unixchipsmount Enabled 1
# hares -display unixchipsmount
#Resource      Attribute             System        Value
unixchipsmount Group                 global        unixchipssg
unixchipsmount Type                  global        Mount
unixchipsmount AutoStart             global        1
unixchipsmount Critical              global        0
unixchipsmount Enabled               global        1
unixchipsmount LastOnline            global        solaris-test2
unixchipsmount MonitorOnly           global        0
unixchipsmount ResourceOwner         global
unixchipsmount TriggerEvent          global        0
unixchipsmount ArgListValues         solaris-test2 MountPoint   1       /vcvol1/        BlockDevice     1       /dev/vx/dsk/unixchipsdg/vcvol1  FSType  1       vxfs    MountOpt        1       ""      FsckOpt 1       -y      SnapUmount      1       0       CkptUmount      1       1       OptCheck        1       0       CreateMntPt     1       0       MntPtPermission 1       ""      MntPtOwner      1       ""      MntPtGroup      1       ""      AccessPermissionChk     1       0       RecursiveMnt    1       0       VxFSMountLock   1       1
unixchipsmount ArgListValues         solaris-test3 MountPoint   1       /vcvol1/        BlockDevice     1       /dev/vx/dsk/unixchipsdg/vcvol1  FSType  1       vxfs    MountOpt        1       ""      FsckOpt 1       -y      SnapUmount      1       0       CkptUmount      1       1       OptCheck        1       0       CreateMntPt     1       0       MntPtPermission 1       ""      MntPtOwner      1       ""      MntPtGroup      1       ""      AccessPermissionChk     1       0       RecursiveMnt    1       0       VxFSMountLock   1       1
...............................................output will be omitted

# hares -online unixchipsmount -sys solaris-test2
# hares -state unixchipsmount
#Resource      Attribute             System        Value
unixchipsmount State                 solaris-test2 ONLINE
unixchipsmount State                 solaris-test3 OFFLINE

Linking the resources to the service group
********************************
# hares -link unixchipsvol unixchipsdg
# hares -link unixchipsmount  unixchipsvol
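
Once the resources are linked, save the configuration and put main.cf back into read-only mode:

# haconf -dump -makero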


9. Testing the resources 

checking the status

# hastatus -sum

-- SYSTEM STATE
-- System               State                Frozen

A  solaris-test2        RUNNING              0
A  solaris-test3        RUNNING              0

-- GROUP STATE
-- Group           System               Probed     AutoDisabled    State

B  unixchipssg     solaris-test2        Y          N               ONLINE
B  unixchipssg     solaris-test3        Y          N               OFFLINE

Checking the service status (in this case the mount point)

# df -h |grep vcvol1
/dev/vx/dsk/unixchipsdg/vcvol1   2.0G    19M   1.9G     1%    /vcvol1

Switching the service group to another node

# hagrp -switch unixchipssg -to solaris-test3


# hastatus -sum

-- SYSTEM STATE
-- System               State                Frozen

A  solaris-test2        RUNNING              0
A  solaris-test3        RUNNING              0

-- GROUP STATE
-- Group           System               Probed     AutoDisabled    State

B  unixchipssg     solaris-test2        Y          N               OFFLINE
B  unixchipssg     solaris-test3        Y          N               ONLINE


(Important: the default location of the VCS commands is /opt/VRTSvcs/bin/, and we need to export the path in /etc/profile to avoid typing the full path during execution.)

PATH=$PATH:/usr/sbin:/opt/VRTS/bin
export PATH

Thank you for reading the article. Please feel free to post your comments and suggestions.

Monday, February 1, 2016

solaris ZFS

ZFS is a combined file system and logical volume manager designed by Sun Microsystems. The features of ZFS include high scalability, maximum integrity, drive pooling and multiple RAID levels. ZFS uses the concept of storage pools to manage physical storage: instead of a volume manager, ZFS aggregates devices into a storage pool. The storage pool describes the physical characteristics of the storage and acts as an arbitrary data store from which file systems can be created. Also, you don't need to predefine the size of a file system; file systems inside ZFS pools grow automatically within the disk space allocated to the storage pool.

High scalability 

ZFS is a 128-bit file system capable of storing zettabytes of data (a zettabyte is 1 billion terabytes); no matter how many hard drives you have, ZFS is capable of managing them.

Maximum Integrity 

All data inside ZFS is accompanied by a checksum which ensures its integrity. You can be confident that your data will not encounter silent data corruption.

Drive pooling

ZFS works just like system RAM: whenever you need more space, you only need to insert a new HDD and it can be added to the pool. No headaches with formatting, initializing, partitioning etc.

Capability of different RAID levels 

We can configure multiple RAID levels using ZFS, and performance-wise it is on par with hardware RAID.

Configuration Part

                                                                        Striped pool

In this case data is striped across multiple HDDs and no redundancy is available. Data access speed is high in this case.

1. In our server we have below HDD's attached 

bash-3.00# format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
       0. c0d0 <DEFAULT cyl 2085 alt 2 hd 255 sec 63>
          /pci@0,0/pci-ide@1,1/ide@0/cmdk@0,0
       1. c0d1 <VBOX HAR-34e30776-506a21a-0001-1.01GB>
          /pci@0,0/pci-ide@1,1/ide@0/cmdk@1,0
       2. c1d1 <DEFAULT cyl 513 alt 2 hd 128 sec 32>
          /pci@0,0/pci-ide@1,1/ide@1/cmdk@1,0

2. Create the striped pool

bash-3.00# zpool create unixpool c0d1 c1d1

3. Check the pool status

bash-3.00# zpool status unixpool
  pool: unixpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        unixpool    ONLINE       0     0     0
          c0d1         ONLINE       0     0     0
          c1d1         ONLINE       0     0     0

errors: No known data errors

bash-3.00# zpool list unixpool
NAME       SIZE  ALLOC   FREE    CAP  HEALTH  ALTROOT
unixpool  1.98G    78K  1.98G     0%  ONLINE  -
bash-3.00# zfs list unixpool
NAME       USED  AVAIL  REFER  MOUNTPOINT
unixpool  73.5K  1.95G    21K  /unixpool
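
File systems (datasets) can then be created inside the pool without pre-allocating sizes. A minimal sketch, using a hypothetical dataset name:

bash-3.00# zfs create unixpool/data
bash-3.00# zfs list -r unixpool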

                                                                  Mirrored Pool

As the name indicates, this has a mirrored configuration and provides redundancy. Read speed is high in this case, but write speed is slower.

1. Creating the mirrored pool

bash-3.00# zpool create unixpool mirror c0d1 c1d1

2. Verifying the pool status 

bash-3.00# zpool status unixpool
  pool: unixpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        unixpool    ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c0d1    ONLINE       0     0     0
            c1d1    ONLINE       0     0     0

errors: No known data errors

bash-3.00# zpool list unixpool
NAME       SIZE  ALLOC   FREE    CAP  HEALTH  ALTROOT
unixpool  1016M   108K  1016M     0%  ONLINE  -

bash-3.00# zfs list unixpool
NAME       USED  AVAIL  REFER  MOUNTPOINT
unixpool  73.5K   984M    21K  /unixpool

Mirroring multiple disks 
*****************
1. Creating the mirrored pool

bash-3.00# zpool create unixpool2m mirror c0d1 c1d1 mirror c2t0d0 c2t1d0

2. verifying the pool status 

bash-3.00# zpool status unixpool2m
  pool: unixpool2m
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        unixpool2m  ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c0d1    ONLINE       0     0     0
            c1d1    ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            c2t0d0  ONLINE       0     0     0
            c2t1d0  ONLINE       0     0     0

errors: No known data errors

bash-3.00# zpool list unixpool2m
NAME         SIZE  ALLOC   FREE    CAP  HEALTH  ALTROOT
unixpool2m  2.04G    81K  2.04G     0%  ONLINE  -

bash-3.00# zfs list unixpool2m
NAME         USED  AVAIL  REFER  MOUNTPOINT
unixpool2m  76.5K  2.01G    21K  /unixpool2m 
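
An existing pool can also be grown later by adding another mirrored pair with zpool add (a sketch; the device names below are hypothetical):

bash-3.00# zpool add unixpool2m mirror c3t0d0 c3t1d0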


                                                           Raid 5 (RaidZ) pool

This needs a minimum of 3 HDDs and is able to sustain a single HDD failure. If the disks are of different sizes we need to use the -f option while creating the pool.

1. Creating the raidz

bash-3.00# zpool create -f  unixpoolraidz raidz c0d1 c1d1 c2t0d0

2. Checking the status

bash-3.00# zpool status unixpoolraidz
  pool: unixpoolraidz
 state: ONLINE
 scrub: none requested
config:

        NAME           STATE     READ WRITE CKSUM
        unixpoolraidz  ONLINE       0     0     0
          raidz1-0     ONLINE       0     0     0
            c0d1       ONLINE       0     0     0
            c1d1       ONLINE       0     0     0
            c2t0d0     ONLINE       0     0     0

errors: No known data errors

bash-3.00# zpool list unixpoolraidz
NAME            SIZE  ALLOC   FREE    CAP  HEALTH  ALTROOT
unixpoolraidz  2.98G   150K  2.98G     0%  ONLINE  -

bash-3.00# zfs list unixpoolraidz
NAME            USED  AVAIL  REFER  MOUNTPOINT
unixpoolraidz  93.9K  1.96G  28.0K  /unixpoolraidz


                                                     Raid 6 (RaidZ2)

This needs a minimum of 4 HDDs and can sustain 2 HDD failures.

1. Creating the raidz2

bash-3.00# zpool create -f unixpoolraidz2 raidz2 c0d1 c1d1 c2t0d0 c2t1d0

2. Checking the status 

bash-3.00# zpool status unixpoolraidz2
  pool: unixpoolraidz2
 state: ONLINE
 scrub: none requested
config:

        NAME            STATE     READ WRITE CKSUM
        unixpoolraidz2  ONLINE       0     0     0
          raidz2-0      ONLINE       0     0     0
            c0d1        ONLINE       0     0     0
            c1d1        ONLINE       0     0     0
            c2t0d0      ONLINE       0     0     0
            c2t1d0      ONLINE       0     0     0

errors: No known data errors

bash-3.00# zpool list unixpoolraidz2
NAME             SIZE  ALLOC   FREE    CAP  HEALTH  ALTROOT
unixpoolraidz2  3.97G   338K  3.97G     0%  ONLINE  -

bash-3.00# zfs list unixpoolraidz2
NAME             USED  AVAIL  REFER  MOUNTPOINT
unixpoolraidz2   101K  1.95G  31.4K  /unixpoolraidz2


                                                         Destroying a zpool 

We can destroy a zpool even when it is in mounted status.

bash-3.00# zpool destroy unixpoolraidz2

bash-3.00# zpool list unixpoolraidz2
cannot open 'unixpoolraidz2': no such pool

                                                   

                                         Importing and exporting the pool


As a Unix administrator you might face situations requiring storage migration. In that case we have an option called storage pool migration, to move a pool from one storage system to another.

1. We have a pool called unixpool now

bash-3.00# zpool status unixpool
  pool: unixpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        unixpool    ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c0d1    ONLINE       0     0     0
            c1d1    ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            c2t0d0  ONLINE       0     0     0
            c2t1d0  ONLINE       0     0     0

errors: No known data errors
2. Exporting the unixpool

bash-3.00# zpool export unixpool
3. Checking the status 
bash-3.00# zpool status unixpool
cannot open 'unixpool': no such pool

Once you have exported the pool, you can see it is no longer available. Now we need to import the pool on the new system.

4. Check the pools available for import using the zpool import command

 bash-3.00# zpool import
  pool: unixpool
    id: 13667943168172491796
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        unixpool    ONLINE
          mirror-0  ONLINE
            c0d1    ONLINE
            c1d1    ONLINE
          mirror-1  ONLINE
            c2t0d0  ONLINE
            c2t1d0  ONLINE

5. Import the pool

bash-3.00# zpool import unixpool

bash-3.00# zpool status unixpool
  pool: unixpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        unixpool    ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c0d1    ONLINE       0     0     0
            c1d1    ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            c2t0d0  ONLINE       0     0     0
            c2t1d0  ONLINE       0     0     0

errors: No known data errors

We have successfully imported the unixpool now.

             Zpool scrub option for integrity check and repair

We have an option for zpool integrity checking and repair, just like fsck in a conventional Unix file system. We can use zpool scrub to achieve this. From the below output you can see the scrub completion status and timing.

bash-3.00# zpool scrub unixpool
bash-3.00# zpool status unixpool
  pool: unixpool
 state: ONLINE
 scrub: scrub completed after 0h0m with 0 errors on Mon Feb  1 18:30:31 2016
config:

        NAME        STATE     READ WRITE CKSUM
        unixpool    ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c0d1    ONLINE       0     0     0
            c1d1    ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            c2t0d0  ONLINE       0     0     0
            c2t1d0  ONLINE       0     0     0

errors: No known data errors
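
For a quick health check across all pools, zpool status -x reports only the pools with problems; on a healthy system it prints a single line:

bash-3.00# zpool status -x
all pools are healthy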



Thank you. Please post your comments and suggestions.