Application Pre-Deployment Requirements
1.1. Application Pre-Deployment Requirements
· Root access to the OS
· Access to a SAN disk that will use PowerPath for fault tolerance
1.2. Check if EMC PowerPath is installed
# powermt check_registration
Key ABCD-EFGH-1234-IJKL-5678-MNOP
Product: PowerPath
Capabilities: All
If the command is not found or no license key is listed, submit a TEM ticket to have PowerPath installed and configured.
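If PowerPath is installed but no key is registered, and a license key has already been issued to you, it can usually be added with the emcpreg utility before re-checking (the key below is the same placeholder used above, not a real license):
# emcpreg -add ABCD-EFGH-1234-IJKL-5678-MNOP
# powermt check_registration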
1.3. Configure PowerPath
1. Log into the target host with a domain account, then su to root.
# sudo su -
2. Check the PowerPath devices.
# powermt display dev=all
Pseudo name=emcpower0a
CLARiiON ID=APM00063103821 [solaris]
Logical device ID=600601601C0XXXXXXXXXXXXXXXXXXX11 [LUN 356]
state=alive; policy=CLAROpt; priority=0; queued-IOs=0;
Owner: default=SP A, current=SP A Array failover mode: 1
==============================================================================
--------------- Host --------------- - Stor - -- I/O Path -- -- Stats ---
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==============================================================================
3074 pci@7c0/pci@0/pci@9/SUNW,emlxs@0/fp@0,0   c3t500601234567890Bd0s0 SP A1   active  alive  0  1
3074 pci@7c0/pci@0/pci@9/SUNW,emlxs@0/fp@0,0   c3t500601234567891Bd0s0 SP B1   active  alive  0  0
3075 pci@7c0/pci@0/pci@9/SUNW,emlxs@0,1/fp@0,0 c3t500601234567892Bd0s0 SP A3   active  alive  0  1
3075 pci@7c0/pci@0/pci@9/SUNW,emlxs@0,1/fp@0,0 c3t500601234567893Bd0s0 SP B3   active  alive  0  0
If no devices are listed, reset the PowerPath configuration:
# powercf -q
# powermt config
# powermt set policy=co dev=all
# powermt save
# powermt display dev=all
If you still don't see anything, submit a TEM ticket.
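Before opening that ticket, it is worth confirming that the host actually sees the SAN LUNs at the OS level. On Solaris, a quick sanity check (assuming fibre-channel attached storage) is:
# cfgadm -al        (FC controllers should show "connected configured")
# devfsadm          (rebuild any missing device nodes)
# powermt config    (then re-check with powermt display dev=all)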
3. Run the format command and make sure the emcpower pseudo devices are listed.
# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c1t0d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>
/pci@8,600000/SUNW,qlc@2/fp@0,0/ssd@w500000e011f7a661,0
1. c1t1d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>
/pci@8,600000/SUNW,qlc@2/fp@0,0/ssd@w500000e011f7f4a1,0
2. c1t2d0 <FUJITSU-MAX3147FCSUN146G-1103 cyl 14087 alt 2 hd 24 sec 848>
/pci@8,600000/SUNW,qlc@2/fp@0,0/ssd@w500000e011ac9601,0
3. c1t3d0 <FUJITSU-MAX3147FCSUN146G-1103 cyl 14087 alt 2 hd 24 sec 848>
/pci@8,600000/SUNW,qlc@2/fp@0,0/ssd@w500000e011ac7251,0
4. c6t5006016A44600750d0 <DGC-RAID5-0428 cyl 61438 alt 2 hd 256 sec 40>
/pci@9,600000/SUNW,emlxs@1/fp@0,0/ssd@w5006016a44600750,0
5. c6t5006016A44600750d1 <DGC-RAID5-0428 cyl 43518 alt 2 hd 256 sec 32>
/pci@9,600000/SUNW,emlxs@1/fp@0,0/ssd@w5006016a44600750,1
6. c6t5006016A44600750d2 <DGC-VRAID-0428 cyl 58878 alt 2 hd 256 sec 32>
/pci@9,600000/SUNW,emlxs@1/fp@0,0/ssd@w5006016a44600750,2
7. c6t5006016344600750d0 <DGC-RAID5-0428 cyl 61438 alt 2 hd 256 sec 40>
/pci@9,600000/SUNW,emlxs@1/fp@0,0/ssd@w5006016344600750,0
8. c6t5006016344600750d1 <DGC-RAID5-0428 cyl 43518 alt 2 hd 256 sec 32>
/pci@9,600000/SUNW,emlxs@1/fp@0,0/ssd@w5006016344600750,1
9. c6t5006016344600750d2 <DGC-VRAID-0428 cyl 58878 alt 2 hd 256 sec 32>
/pci@9,600000/SUNW,emlxs@1/fp@0,0/ssd@w5006016344600750,2
10. c7t5006016B44600750d0 <DGC-RAID5-0428 cyl 61438 alt 2 hd 256 sec 40>
/pci@9,600000/SUNW,emlxs@2/fp@0,0/ssd@w5006016b44600750,0
11. c7t5006016B44600750d1 <DGC-RAID5-0428 cyl 43518 alt 2 hd 256 sec 32>
/pci@9,600000/SUNW,emlxs@2/fp@0,0/ssd@w5006016b44600750,1
12. c7t5006016B44600750d2 <DGC-VRAID-0428 cyl 58878 alt 2 hd 256 sec 32>
/pci@9,600000/SUNW,emlxs@2/fp@0,0/ssd@w5006016b44600750,2
13. c7t5006016244600750d0 <DGC-RAID5-0428 cyl 61438 alt 2 hd 256 sec 40>
/pci@9,600000/SUNW,emlxs@2/fp@0,0/ssd@w5006016244600750,0
14. c7t5006016244600750d1 <DGC-RAID5-0428 cyl 43518 alt 2 hd 256 sec 32>
/pci@9,600000/SUNW,emlxs@2/fp@0,0/ssd@w5006016244600750,1
15. c7t5006016244600750d2 <DGC-VRAID-0428 cyl 58878 alt 2 hd 256 sec 32>
/pci@9,600000/SUNW,emlxs@2/fp@0,0/ssd@w5006016244600750,2
16. emcpower0a <DGC-RAID5-0428 cyl 61438 alt 2 hd 256 sec 40>
/pseudo/emcp@0
17. emcpower1a <DGC-RAID5-0428 cyl 43518 alt 2 hd 256 sec 32>
/pseudo/emcp@1
18. emcpower2a <DGC-VRAID-0428 cyl 58878 alt 2 hd 256 sec 32>
/pseudo/emcp@2
Specify disk (enter its number):
Press Ctrl-C to exit format without selecting a disk.
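If you prefer not to step through the interactive format menus, the label on a pseudo device can be inspected non-interactively with prtvtoc (slice 2, the "c" slice, spans the whole disk):
# prtvtoc /dev/rdsk/emcpower0c
A common pattern when several LUNs need identical layouts is to clone the VTOC; the sketch below assumes emcpower1 should receive the same partition table as emcpower0:
# prtvtoc /dev/rdsk/emcpower0c | fmthard -s - /dev/rdsk/emcpower1c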
4. Check if the disks are already configured under Veritas Volume Manager.
# vxdisk list
DEVICE TYPE DISK GROUP STATUS
disk_0 auto:cdsdisk - - online
disk_1 auto:cdsdisk - - online
disk_2 auto:SVM - - SVM
disk_3 auto:SVM - - SVM
emcpower0s2 auto:cdsdisk FCSQAgr01 FCSQAgr online nohotuse
emcpower1s2 auto:cdsdisk FCSQAgr02 FCSQAgr online nohotuse
emcpower2s2 auto:cdsdisk FCSQAgr03 FCSQAgr online nohotuse
If you see a listing similar to the above, the disks are configured. SVM is Solaris Volume Manager, and FCSQAgr is the Veritas disk group containing the three SAN (emcpower) disks.
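If a new emcpower disk appears in vxdisk list without a disk group, the usual VxVM sequence to initialize it and add it to the existing group is sketched below (the device and disk-media names are illustrative):
# /etc/vx/bin/vxdisksetup -i emcpower3s2
# vxdg -g FCSQAgr adddisk FCSQAgr04=emcpower3s2
# vxdisk list       (the new disk should now show group FCSQAgr)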
5. Check if the filesystems are configured and mounted.
# df -k
Filesystem kbytes used avail capacity Mounted on
/dev/md/dsk/d0 15493250 4742037 10596281 31% /
/devices 0 0 0 0% /devices
ctfs 0 0 0 0% /system/contract
proc 0 0 0 0% /proc
mnttab 0 0 0 0% /etc/mnttab
swap 21779944 1336 21778608 1% /etc/svc/volatile
objfs 0 0 0 0% /system/object
sharefs 0 0 0 0% /etc/dfs/sharetab
/platform/sun4u-us3/lib/libc_psr/libc_psr_hwcap2.so.1
15493250 4742037 10596281 31% /platform/sun4u-us3/lib/libc_psr.so.1
/platform/sun4u-us3/lib/sparcv9/libc_psr/libc_psr_hwcap2.so.1
15493250 4742037 10596281 31% /platform/sun4u-us3/lib/sparcv9/libc_psr.so.1
fd 0 0 0 0% /dev/fd
/dev/md/dsk/d3 12396483 5211506 7061013 43% /var
swap 524288 0 524288 0% /tmp
swap 21778616 8 21778608 1% /var/run
swap 21778608 0 21778608 0% /dev/vx/dmp
swap 21778608 0 21778608 0% /dev/vx/rdmp
/dev/vx/dsk/FCSQAgr/muhiltemp
103218991 40823849 61362953 40% /MuhilTemp
The /dev/md devices are Solaris Volume Manager metadevices (software RAID). As you can see, /MuhilTemp is carved from the PowerPath VG FCSQAgr.
If the filesystems are not built on the PowerPath VG, you must back up the data, delete the existing filesystems, and re-create them on the PowerPath VG:
# format
The syntax is straightforward, but if you are unsure how to configure the disk, ask a team member or submit a TEM ticket to have it configured.
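As a sketch of that create step, carving a new VxFS filesystem out of the PowerPath disk group typically looks like the following; the volume name, size, and mount point are illustrative, and the mount should also be added to /etc/vfstab to persist across reboots:
# vxassist -g FCSQAgr make appvol 50g
# mkfs -F vxfs /dev/vx/rdsk/FCSQAgr/appvol
# mkdir /AppData
# mount -F vxfs /dev/vx/dsk/FCSQAgr/appvol /AppData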
1.4. Installation Verification
1. Once the installation is completed, issue the following to verify that PowerPath is active.
# powermt display
Symmetrix logical device count=0
CLARiiON logical device count=1
Hitachi logical device count=0
HP xp logical device count=0
Ess logical device count=0
Invista logical device count=0
==============================================================================
----- Host Bus Adapters --------- ------ I/O Paths ----- ------ Stats ------
### HW Path Summary Total Dead IO/Sec Q-IOs Errors
==============================================================================
1 qla2xxx optimal 2 0 - 0 0
3 qla2xxx optimal 2 0 - 0 0
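If you want to watch the counters while generating I/O, powermt can repeat the display on a fixed interval (every two seconds here); press Ctrl-C to stop:
# powermt display every=2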
2. Execute the following to verify paths are working.
# powermt display dev=all
Pseudo name=emcpowera
CLARiiON ID=APM00111103781 [FCMIBCCLPFDB01]
Logical device ID=60060160C5302E00D4B3082F6868E111 [LUN 1388]
state=alive; policy=BasicFailover; priority=0; queued-IOs=0;
Owner: default=SP A, current=SP A Array failover mode: 4
==============================================================================
--------------- Host --------------- - Stor - -- I/O Path -- -- Stats ---
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==============================================================================
1 qla2xxx sda SP A7 active alive 0 0
1 qla2xxx sdb SP B5 active alive 0 0
3 qla2xxx sdc SP A5 unlic alive 0 0
3 qla2xxx sdd SP B7 unlic alive 0 0
If you see only one active path, please contact TEM. (Paths shown in unlic mode, as in the last two lines above, indicate an unlicensed PowerPath installation; see the license check in section 1.2.) If no paths are listed, or you see the following error, PowerPath is not working:
Bad dev value emcpowera, or not under Powerpath control.
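If paths are listed but show as dead, it is worth attempting a path restore before escalating; powermt restore retests failed paths and returns them to service if the underlying fault has cleared:
# powermt restore dev=all
# powermt display dev=all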
1.5. Back-out/Roll-back Procedures
Remove the PowerPath package and reboot.
# yum remove EMCpower.LINUX
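On RPM-based hosts you can confirm the package is gone before rebooting; rpm prints the following for an uninstalled package:
# rpm -q EMCpower.LINUX
package EMCpower.LINUX is not installed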