LVM Partition
----------------------------------------------------
Storage Allocation: add a new LUN using EMC PowerPath
----------------------------------------------------
- Process: raise a new storage request for additional disks, then proceed with the steps below once the storage team allocates the LUNs.
----------------------------------------------------
1.Pre-Backup
----------------------------------------------------
Take pre-change backups of the fdisk, powermt, LVM and fstab configuration (run these from /home/nyada01 so the pre and post files end up in the same directory):
- fdisk -l 2>/dev/null | egrep ^Disk | egrep -v 'dm-' >/home/nyada01/fdisk.pre
- powermt display dev=all >/home/nyada01/powermt.pre
- df -Ph > df.Ph.pre
- fdisk -l > fdisk.l.pre
- pvs > pvs.pre
- vgs > vgs.pre
- lvs > lvs.pre
- cat /etc/fstab >fstab.pre
--------------------------------------------
2.Scan the server for newly allocated disks
--------------------------------------------
A.)Find out how many SCSI controllers are configured
- ls /sys/class/scsi_host/
B.)Use the systool command to see the active HBA links
- systool -c fc_host -v
- tail -f /var/log/messages in a second window to watch for any newly detected disks while scanning.
C.)Scan the SCSI bus for new disks
- echo "- - -" > /sys/class/scsi_host/host*/scan
Use the SCSI controller number in place of * above and run the command separately for each controller (or use the loop shown below).
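If you would rather rescan every controller in one go, a shell loop like the one below should work (a sketch; confirm the host entries under /sys/class/scsi_host first).
- for h in /sys/class/scsi_host/host*; do echo "- - -" > "$h/scan"; done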
D.)Take the fdisk & powermt post backups
- fdisk -l 2>/dev/null | egrep ^Disk | egrep -v 'dm-' >/home/nyada01/fdisk.post
- powermt display dev=all >/home/nyada01/powermt.post
- powermt config
- powermt save
E.)Check the difference between the pre & post fdisk and powermt outputs
- cd /home/nyada01/
- diff fdisk.pre fdisk.post
- diff powermt.pre powermt.post
Note: If you are still not able to find the new disk, force a LIP (loop initialization) on each HBA:
- echo "1" > /sys/class/fc_host/host*/issue_lip
Use the HBA host number in place of * above and run it for each HBA, then issue the fdisk -l command again to see if any new disks are detected.
--------------------------------------------
3.LVM Creation
--------------------------------------------
A.)Now create a physical volume
- pvcreate /dev/emcpowere
B.)Create a volume group
- vgcreate ny01vg /dev/emcpowere
Use 'vgextend' to add an initialized physical volume to an existing volume group.
# vgextend my_volume_group /dev/hdc1
C.)Logical Volume Creation
- lvcreate -L 980G -n ny01vol ny01vg
D.)Create a GFS2 file system on the logical volume (-t takes ClusterName:FSName and -j sets the number of journals, one per node that will mount it)
- mkfs.gfs2 -p lock_dlm -t nylinux-clus1:nyata -j 8 /dev/ny01vg/ny01vol
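Before mounting, it is worth confirming that each LVM layer looks as expected. A quick verification sketch, using the names created above:
- pvdisplay /dev/emcpowere
- vgdisplay ny01vg
- lvdisplay /dev/ny01vg/ny01vol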
--------------------------------------------
4.Mount the file system
--------------------------------------------
A.)Mount the gfs2 file system
- mkdir -p /oracle/ny01/
- mount /dev/ny01vg/ny01vol /oracle/ny01/
B.)After mounting, check /etc/mtab (or the output of the mount command).
You should see a mount entry like the one below; copy it and add it to /etc/fstab so the file system is mounted at boot.
/dev/mapper/ny01vg-ny01vol /oracle/ny01 gfs2 rw,relatime,hostdata=jid=0 0 0
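To confirm the /etc/fstab entry is correct before relying on it at boot, one option (a sketch) is to unmount and remount from fstab:
- umount /oracle/ny01
- mount -a
- df -Ph /oracle/ny01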
C.)On the ny02 node, scan for the newly created volume group and logical volume
- vgscan
- lvscan
Run df -h and the newly created gfs2 file system should be visible.
----------------------------------------------------
5.Post-Backup
----------------------------------------------------
Take the same set of backups post-change (from the same directory as the pre files):
- df -Ph > df.Ph.post
- fdisk -l > fdisk.l.post
- pvs > pvs.post
- vgs > vgs.post
- lvs > lvs.post
- cat /etc/fstab >fstab.post
----------------------------------------------------------------
1.What are LVM1 and LVM2?
LVM1 and LVM2 are the two major versions of LVM.
LVM2 uses the device-mapper driver included in the 2.6 kernel series.
LVM1 was included in the 2.4 series kernels.
2.What is the maximum size of a single LV?
For 2.4 based kernels, the maximum LV size is 2TB.
For 32-bit CPUs on 2.6 kernels, the maximum LV size is 16TB.
For 64-bit CPUs on 2.6 kernels, the maximum LV size is 8EB.
3.What are the important LVM-related files and directories?
## Directories
/etc/lvm - default lvm directory location
/etc/lvm/backup - where the automatic backups go
/etc/lvm/cache - persistent filter cache
/etc/lvm/archive - where automatic archives go after a volume group change
/var/lock/lvm - lock files to prevent metadata corruption
# Files
/etc/lvm/lvm.conf - main lvm configuration file
$HOME/.lvm - lvm history
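As an illustration of how the backup and archive directories are used, vgcfgbackup and vgcfgrestore read and write the metadata copies kept there (the volume group name VG0 below is only an example):
#vgcfgbackup VG0        (writes the current metadata to /etc/lvm/backup/VG0)
#vgcfgrestore -l VG0    (lists the restorable archives for VG0 from /etc/lvm/archive)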
4.What are the steps to create an LVM setup in Linux?
Create a physical volume using the pvcreate command;
here we assume the disk is local.
#fdisk -l
#fdisk /dev/sda
Press "n" to create new partition. And mention the size / allocate whole disk to single partition. and assign the partition number also.
#press "t" to change the partition as LVM partition.
#enter "8e" ( 8e - is Hex decimal code for LVM )
#Enter "w" to write tghe information on Disk.
#fdisk -l ( Now you will get newly created disk numbers)
#pvcreate /dev/sda2
Add the physical volume to a volume group with the "vgcreate" command
#vgcreate VG0 /dev/sda2
Create a logical volume from the volume group with the "lvcreate" command.
#lvcreate -L 1G -n LVM1 VG0
Now create a file system on the logical volume with the "mke2fs" or "mkfs.ext3" command.
#mke2fs -j /dev/VG0/LVM1
or
#mkfs.ext3 /dev/VG0/LVM1
Mount the new file system:
#mkdir /test
#mount /dev/VG0/LVM1 /test
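To make the mount persistent across reboots, an /etc/fstab entry along these lines can be added (the mount options shown are just the defaults):
/dev/VG0/LVM1   /test   ext3    defaults        0 0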
5.How to extend a File system in Linux?
Check the free space on vg
#vgdisplay -v VG1
Now extend the FS
# lvextend -L+1G /dev/VG1/lvol1
# resize2fs /dev/VG1/lvol1
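On LVM versions that support the -r/--resizefs option, the two steps can usually be combined; lvextend then calls the file system resize tool itself (a sketch):
# lvextend -r -L +1G /dev/VG1/lvol1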
6.How to reduce the File system size in Linux?
1.First reduce the file system size using "resize2fs" (the file system must be unmounted and checked with e2fsck -f first, and must end up no larger than the new LV size)
2.Then reduce the logical volume size using "lvreduce"
#resize2fs -f /dev/VG1/lvol1 3G
#lvreduce -L 5G /dev/VG1/lvol1
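Putting it together, a safer shrink sequence on an ext file system might look like the sketch below (the mount point /data is illustrative):
#umount /data
#e2fsck -f /dev/VG1/lvol1
#resize2fs /dev/VG1/lvol1 3G
#lvreduce -L 5G /dev/VG1/lvol1
#resize2fs /dev/VG1/lvol1      (optional: grow the file system back out to fill the 5G LV)
#mount /dev/VG1/lvol1 /data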
7.How to add new LUN from storage to Linux server?
Step 1: Get the list of HBAs and existing disk details.
#ls /sys/class/fc_host
#fdisk -l 2>/dev/null | egrep '^Disk' | egrep -v 'dm-' | wc -l
Step 2: Scan the HBA ports (you need to scan every HBA port)
#echo "1" > /sys/class/fc_host/host??/issue_lip
# echo "- - -" > /sys/class/scsi_host/host??/scan
Repeat the above steps for every HBA port, substituting the host number for the '??'; a loop such as the one shown below can help.
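A sketch, assuming the HBA entries live under /sys/class/fc_host and /sys/class/scsi_host:
#for h in /sys/class/fc_host/host*; do echo "1" > "$h/issue_lip"; done
#for h in /sys/class/scsi_host/host*; do echo "- - -" > "$h/scan"; done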
Step 3: Check for the newly added LUN
# cat /proc/scsi/scsi | egrep -i 'Host:' | wc -l
# fdisk -l 2>/dev/null | egrep '^Disk' | egrep -v 'dm-' | wc -l
Once the new disk is found, run the steps below to add it to a volume group
#pvcreate /dev/diskpath
#vgextend vg1 /dev/diskpath
#vgs   or   #vgdisplay vg1
8.How to resize root file system on RHEL 6?
Here is the list of steps to reduce the root file system (lv_root) on a RHEL 6 Linux server:
Boot the system into rescue mode. Do not mount the file systems (select the option to 'Skip' in the rescue mode and start a shell)
Bring the Volume Group online
#lvm vgchange -a y
Run fsck on the FS
#e2fsck -f /dev/vg00/lv_root
Resize the file system with new size
#resize2fs -f /dev/vg00/lv_root 20G
Reduce the Logical Volume of the FS with the new size
#lvreduce -L20G /dev/vg00/lv_root
Run fsck to make sure the FS is still ok
#e2fsck -f /dev/vg00/lv_root
Optionally mount the file system in the rescue mode
#mkdir -p /mnt/sysimage/root
#mount -t ext4 /dev/mapper/vg00-lv_root /mnt/sysimage/root
#cd /mnt/sysimage/root
Unmount the FS
#cd
#umount /mnt/sysimage/root
Exit rescue mode and boot the system from the hard disk
#exit
Select the reboot option from the rescue mode
9.How to find out whether a server is configured with LVM RAID?
1.Check the Linux software RAID status
Check the RAID status in /proc/mdstat
#cat /proc/mdstat
or
# mdadm --detail /dev/mdx
or
# lsraid -a /dev/mdx
2.Check the Volume group disks
#vgdisplay -v vg01
If the disk list shows device names like /dev/md1 and /dev/md2, LVM RAID (md) disks are configured and have been added to the volume group; a quick check is shown below.
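A quick way to list which physical volumes back each volume group (and spot any md devices) is pvs, for example:
#pvs -o pv_name,vg_name | grep '/dev/md'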
10.How to check whether a Linux server is configured with PowerPath disks?
1.Check whether PowerPath is installed on the server
#rpm -qa |grep -i emc
2.Check the PowerPath service status on the server
#/etc/init.d/PowerPath status
#chkconfig --list PowerPath
# lsmod |grep -i emc
3.Check the Volume group disks
#vgdisplay -v vg01
If the disk list shows device names like /dev/emcpowera and /dev/emcpowerb, PowerPath disks are configured and have been added to the volume group.
4.Check the PowerPath disk path status using the command below
#powermt display dev=all
11.How to check whether a server is configured with multipath disks?
1.Using device-mapper commands
# ls -lrt /dev/mapper    (to view the mapper disk paths and lvols)
#dmsetup table
#dmsetup ls
#dmsetup status
2.Using the multipathd command (daemon)
#echo 'show paths' |multipathd -k
#echo 'show maps' |multipathd -k
3.Check whether the multipath daemon is running
#ps -eaf |grep -i multipathd
4.Check the VG disk paths
#vgs or vgdisplay -v vg01
If multipath disks are added to the VG, the disk paths will look like /dev/mpath0 and /dev/mpath1.
5.To check the disk path status you can also use the interactive multipathd shell:
#multipathd -k
#multipathd> show multipaths status
#multipathd> show topology
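The multipath command itself can also print the full topology with path states, which is often the quickest check:
#multipath -ll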
How to Configure Linux Cluster with 2 Nodes on RedHat and CentOS
In an active-standby Linux cluster configuration, all the critical services including IP, filesystem will failover from one node to another node in the cluster.
The following are the high-level steps involved in configuring Linux cluster on Redhat or CentOS:
- Install and start RICCI cluster service
- Create cluster on active node
- Add a node to cluster
- Add fencing to cluster
- Configure failover domain
- Add resources to cluster
- Sync cluster configuration across nodes
- Start the cluster
- Verify failover by shutting down an active node
1. Required Cluster Packages
First make sure the following cluster packages are installed. If you don’t have these packages, install them using the yum command.
[root@rh1 ~]# rpm -qa | egrep -i "ricci|luci|cluster|ccs|cman"
modcluster-0.16.2-28.el6.x86_64
luci-0.26.0-48.el6.x86_64
ccs-0.16.2-69.el6.x86_64
ricci-0.16.2-69.el6.x86_64
cman-3.0.12.1-59.el6.x86_64
clusterlib-3.0.12.1-59.el6.x86_64
2. Start RICCI service and Assign Password
Next, start ricci service on both the nodes.
[root@rh1 ~]# service ricci start
Starting oddjobd: [ OK ]
generating SSL certificates... done
Generating NSS database... done
Starting ricci: [ OK ]
You also need to assign a password for the RICCI on both the nodes.
[root@rh1 ~]# passwd ricci
Changing password for user ricci.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
Also, if you are running the iptables firewall, keep in mind that you need appropriate rules on both nodes so that they can talk to each other; a sample set of rules is shown below.
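As an illustration only (port numbers per the RHEL 6 cluster documentation; adjust to your environment), rules along these lines would open the usual cluster ports: ricci on TCP 11111, luci on TCP 8084, dlm on TCP 21064, modclusterd on TCP 16851 and corosync/cman on UDP 5404-5405.
[root@rh1 ~]# iptables -I INPUT -p tcp -m multiport --dports 11111,8084,21064,16851 -j ACCEPT
[root@rh1 ~]# iptables -I INPUT -p udp -m multiport --dports 5404,5405 -j ACCEPT
[root@rh1 ~]# service iptables save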
3. Create Cluster on Active Node
From the active node, please run the below command to create a new cluster.
The following command will create the cluster configuration file /etc/cluster/cluster.conf. If the file already exists, it will replace the existing cluster.conf with the newly created cluster.conf.
[root@rh1 ~]# ccs -h rh1.mydomain.net --createcluster mycluster
rh1.mydomain.net password:
[root@rh1 ~]# ls -l /etc/cluster/cluster.conf
-rw-r-----. 1 root root 188 Sep 26 17:40 /etc/cluster/cluster.conf
Also keep in mind that we are running these commands only from one node on the cluster and we are not yet ready to propagate the changes to the other node on the cluster.
4. Initial Plain cluster.conf File
After creating the cluster, the cluster.conf file will look like the following:
[root@rh1 ~]# cat /etc/cluster/cluster.conf
<?xml version="1.0"?>
<cluster config_version="1" name="mycluster">
<fence_daemon/>
<clusternodes/>
<cman/>
<fencedevices/>
<rm>
<failoverdomains/>
<resources/>
</rm>
</cluster>
5. Add a Node to the Cluster
Once the cluster is created, we need to add the participating nodes to the cluster using the ccs command as shown below.
First, add the first node rh1 to the cluster as shown below.
[root@rh1 ~]# ccs -h rh1.mydomain.net --addnode rh1.mydomain.net
Node rh1.mydomain.net added.
Next, add the second node rh2 to the cluster as shown below.
[root@rh1 ~]# ccs -h rh1.mydomain.net --addnode rh2.mydomain.net
Node rh2.mydomain.net added.
Once the nodes are created, you can use the following command to view all the available nodes in the cluster. This will also display the node id for the corresponding node.
[root@rh1 ~]# ccs -h rh1 --lsnodes
rh1.mydomain.net: nodeid=1
rh2.mydomain.net: nodeid=2
6. cluster.conf File After Adding Nodes
The above commands also add the nodes to the cluster.conf file, as shown below.
[root@rh1 ~]# cat /etc/cluster/cluster.conf
<?xml version="1.0"?>
<cluster config_version="3" name="mycluster">
<fence_daemon/>
<clusternodes>
<clusternode name="rh1.mydomain.net" nodeid="1"/>
<clusternode name="rh2.mydomain.net" nodeid="2"/>
</clusternodes>
<cman/>
<fencedevices/>
<rm>
<failoverdomains/>
<resources/>
</rm>
</cluster>
7. Add Fencing to Cluster
Fencing is the disconnection of a node from shared storage. Fencing cuts off I/O from shared storage, thus ensuring data integrity.
A fence device is a hardware device that can be used to cut a node off from shared storage.
This can be accomplished in a variety of ways: powering off the node via a remote power switch, disabling a Fibre Channel switch port, or revoking a host’s SCSI 3 reservations.
A fence agent is a software program that connects to a fence device in order to ask the fence device to cut off access to a node’s shared storage (via powering off the node or removing access to the shared storage by other means).
Execute the following command to enable fencing.
[root@rh1 ~]# ccs -h rh1 --setfencedaemon post_fail_delay=0
[root@rh1 ~]# ccs -h rh1 --setfencedaemon post_join_delay=25
Next, add a fence device. There are different types of fencing devices available. If you are using virtual machines to build the cluster, use the fence_virt device as shown below.
[root@rh1 ~]# ccs -h rh1 --addfencedev myfence agent=fence_virt
Next, add a fencing method. After creating the fencing device, you need to create a fencing method and add the hosts to it.
[root@rh1 ~]# ccs -h rh1 --addmethod mthd1 rh1.mydomain.net
Method mthd1 added to rh1.mydomain.net.
[root@rh1 ~]# ccs -h rh1 --addmethod mthd1 rh2.mydomain.net
Method mthd1 added to rh2.mydomain.net.
Finally, associate the fence device with the method created above, as shown below:
[root@rh1 ~]# ccs -h rh1 --addfenceinst myfence rh1.mydomain.net mthd1
[root@rh1 ~]# ccs -h rh1 --addfenceinst myfence rh2.mydomain.net mthd1
8. cluster.conf File after Fencing
Your cluster.conf will look like the following after the fencing devices and methods are added.
[root@rh1 ~]# cat /etc/cluster/cluster.conf
<?xml version="1.0"?>
<cluster config_version="10" name="mycluster">
<fence_daemon post_join_delay="25"/>
<clusternodes>
<clusternode name="rh1.mydomain.net" nodeid="1">
<fence>
<method name="mthd1">
<device name="myfence"/>
</method>
</fence>
</clusternode>
<clusternode name="rh2.mydomain.net" nodeid="2">
<fence>
<method name="mthd1">
<device name="myfence"/>
</method>
</fence>
</clusternode>
</clusternodes>
<cman/>
<fencedevices>
<fencedevice agent="fence_virt" name="myfence"/>
</fencedevices>
<rm>
<failoverdomains/>
<resources/>
</rm>
</cluster>
9. Types of Failover Domain
A failover domain is an ordered subset of cluster members to which a resource group or service may be bound.
The following are the different types of failover domains:
- Restricted failover-domain: Resource groups or services bound to the domain may only run on cluster members that are also members of the failover domain. If no member of the failover domain is available, the resource group or service is placed in the stopped state.
- Unrestricted failover-domain: Resource groups bound to this domain may run on all cluster members, but will run on a member of the domain whenever one is available. This means that if a resource group is running outside of the domain and a member of the domain comes online, the resource group or service will migrate to that cluster member.
- Ordered domain: Nodes in the ordered domain are assigned a priority level from 1 to 100, with 1 the highest and 100 the lowest. The node with the highest priority runs the resource group; for example, a resource group running on node 2 will migrate to node 1 when node 1 comes online.
- Unordered domain: Members of the domain have no order of preference; any member may run the resource group. Resource groups will always migrate to members of their failover domain whenever possible.
10. Add a Failover Domain
To add a failover domain, execute the following command. In this example, I created an ordered domain named “webserverdomain”.
[root@rh1 ~]# ccs -h rh1 --addfailoverdomain webserverdomain ordered
Once the failover domain is created, add both the nodes to the failover domain as shown below:
[root@rh1 ~]# ccs -h rh1 --addfailoverdomainnode webserverdomain rh1.mydomain.net priority=1
[root@rh1 ~]# ccs -h rh1 --addfailoverdomainnode webserverdomain rh2.mydomain.net priority=2
You can view all the nodes in the failover domain using the following command.
[root@rh1 ~]# ccs -h rh1 --lsfailoverdomain
webserverdomain: restricted=0, ordered=1, nofailback=0
rh1.mydomain.net: 1
rh2.mydomain.net: 2
11. Add Resources to Cluster
Now it is time to add resources. These are the services that should fail over along with the IP and filesystem when a node fails. For example, the Apache webserver can be part of the failover in the Red Hat Linux cluster.
When you are ready to add resources, there are two ways to do this:
you can add them as global resources, or add a resource directly to a resource group or service.
The advantage of adding a resource as a global resource is that if you want to use it in more than one service group, you can simply reference the global resource from each service or resource group.
In this example, we add the filesystem on shared storage as a global resource and reference it from the service.
[root@rh1 ~]# ccs -h rh1 --addresource fs name=web_fs device=/dev/cluster_vg/vol01 mountpoint=/var/www fstype=ext4
To add a service to the cluster, create a service and add the resource to the service.
[root@rh1 ~]# ccs -h rh1 --addservice webservice1 domain=webserverdomain recovery=relocate autostart=1
Now add the following lines to cluster.conf to add the resource references to the service. In this example, we also added a failover IP to our service.
<fs ref="web_fs"/>
<ip address="192.168.1.12" monitor_link="yes" sleeptime="10"/>
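After the resource reference and IP are added, the service section of cluster.conf should look roughly like this (a sketch; the exact attribute order may differ):
<service autostart="1" domain="webserverdomain" name="webservice1" recovery="relocate">
     <fs ref="web_fs"/>
     <ip address="192.168.1.12" monitor_link="yes" sleeptime="10"/>
</service>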
In the second part of this tutorial, we’ll explain how to sync the configuration across all the nodes in the cluster, and how to verify the failover scenario in a cluster setup.