Author: whyexe · 2017-11-27 14:40
Storage Architect · 中国金融电子化公司

HACMP startup errors caused by ghost disks: reconfiguring third-party disk recognition


The hacmp.out log looks similar to the following:
Res1:cl_disk_available[113] : customizations
+Res1:cl_disk_available[154] disktype=FSCSI
+Res1:cl_disk_available[200] : Check to see if the device is available. We can tell this by
+Res1:cl_disk_available[201] : calling lsdev with the logical name of the device -l and
+Res1:cl_disk_available[202] : asking to see the devices that have a status of Available '-S A'
+Res1:cl_disk_available[203] : we limit the output to the status field '-F status.'
+Res1:cl_disk_available[205] lsdev -Cc disk -l hdisk29 -S A -F status
+Res1:cl_disk_available[205] [[ -n Available ]]
+Res1:cl_disk_available[208] : Break the reserve on an available disk. Where supported, do so in
+Res1:cl_disk_available[209] : a background task that will run asynchronously to this one.
+Res1:cl_disk_available[243] : Collect the fibre SCSI disks by owning adapter, for
+Res1:cl_disk_available[244] : later processing in parallel.
+Res1:cl_disk_available[246] fscarray[fscsi4]='hdisk8 hdisk9 hdisk10 hdisk17 hdisk18 hdisk19 hdisk26 hdisk27 hdisk28 hdisk29'
+Res1:cl_disk_available[1743] : Restore resource group list
+Res1:cl_disk_available[1745] HDISKS=hdisk8,hdisk9,hdisk10,hdisk17,hdisk18,hdisk19,hdisk26,hdisk27,hdisk28,hdisk29
+Res1:cl_disk_available[1746] VOLUME_GROUPS=datavg1,datavg2
+Res1:cl_disk_available[1747] GROUPNAME=Res1
+Res1:cl_disk_available[1747] export GROUPNAME
+Res1:cl_disk_available[1779] : Take care of all the fibre scsi disks, doing all those associated
+Res1:cl_disk_available[1780] : with a specific adapter at one time
+Res1:cl_disk_available[1783] make_disktypes_available fscsi4 FSCSI hdisk8 hdisk9 hdisk10 hdisk17 hdisk18 hdisk19 hdisk26 hdisk27 hdisk28 hdisk29
+Res1:cl_disk_available[5] parent=fscsi4
+Res1:cl_disk_available[5] typeset parent
+Res1:cl_disk_available[6] disktype=FSCSI
+Res1:cl_disk_available[6] typeset disktype
+Res1:cl_disk_available[7] shift 2
+Res1:cl_disk_available[8] disks='hdisk8 hdisk9 hdisk10 hdisk17 hdisk18 hdisk19 hdisk26 hdisk27 hdisk28 hdisk29'
+Res1:cl_disk_available[8] typeset disks
+Res1:cl_disk_available[11] : Scan through the list of given disks, checking each one
+Res1:cl_disk_available[12] : for possible ghost disks, which must be later removed
+Res1:cl_disk_available[17] : The disk is not available
+Res1:cl_disk_available[19] lsdev -Cc disk -l hdisk8 -S A -F status
+Res1:cl_disk_available[19] [[ -z Available ]]
+Res1:cl_disk_available[17] : The disk is not available
+Res1:cl_disk_available[19] lsdev -Cc disk -l hdisk9 -S A -F status
+Res1:cl_disk_available[19] [[ -z Available ]]
+Res1:cl_disk_available[17] : The disk is not available
+Res1:cl_disk_available[19] lsdev -Cc disk -l hdisk10 -S A -F status
+Res1:cl_disk_available[19] [[ -z Available ]]
+Res1:cl_disk_available[17] : The disk is not available
+Res1:cl_disk_available[19] lsdev -Cc disk -l hdisk17 -S A -F status
+Res1:cl_disk_available[19] [[ -z Available ]]
+Res1:cl_disk_available[17] : The disk is not available
+Res1:cl_disk_available[19] lsdev -Cc disk -l hdisk18 -S A -F status
+Res1:cl_disk_available[19] [[ -z Available ]]
+Res1:cl_disk_available[17] : The disk is not available
+Res1:cl_disk_available[19] lsdev -Cc disk -l hdisk19 -S A -F status
+Res1:cl_disk_available[19] [[ -z Available ]]
+Res1:cl_disk_available[17] : The disk is not available
+Res1:cl_disk_available[19] lsdev -Cc disk -l hdisk26 -S A -F status
+Res1:cl_disk_available[19] [[ -z Available ]]
+Res1:cl_disk_available[17] : The disk is not available
+Res1:cl_disk_available[19] lsdev -Cc disk -l hdisk27 -S A -F status
+Res1:cl_disk_available[19] [[ -z Available ]]
+Res1:cl_disk_available[17] : The disk is not available
+Res1:cl_disk_available[19] lsdev -Cc disk -l hdisk28 -S A -F status
+Res1:cl_disk_available[19] [[ -z Available ]]
+Res1:cl_disk_available[17] : The disk is not available
+Res1:cl_disk_available[19] lsdev -Cc disk -l hdisk29 -S A -F status
+Res1:cl_disk_available[19] [[ -z Available ]]
+Res1:cl_disk_available[51] : Break the reserve - pass the given list of disks having the given parent
+Res1:cl_disk_available[56] : Perform a LUN reset on a SCSI-3 device
+Res1:cl_disk_available[58] cl_fscsilunreset fscsi4 hdisk8 hdisk9 hdisk10 hdisk17 hdisk18 hdisk19 hdisk26 hdisk27 hdisk28 hdisk29
cl_fscsilunreset[969]: ioctl SCIOLSTART id=0X10000 lun=0X1000000000000: Invalid argument
cl_fscsilunreset[969]: ioctl SCIOLSTART id=0X10000 lun=0X2000000000000: Invalid argument
cl_fscsilunreset[969]: ioctl SCIOLSTART id=0X10000 lun=0X3000000000000: Invalid argument
cl_fscsilunreset[969]: ioctl SCIOLSTART id=0X10000 lun=0XA000000000000: Invalid argument
cl_fscsilunreset[969]: ioctl SCIOLSTART id=0X10000 lun=0XB000000000000: Invalid argument
cl_fscsilunreset[969]: ioctl SCIOLSTART id=0X10000 lun=0XC000000000000: Invalid argument
cl_fscsilunreset[143]: version 1.12
cl_fscsilunreset[182]: open(/dev/fscsi4, O_RDWR)
cl_fscsilunreset[230]: get_scsi_version(fscsi4)
cl_fscsilunreset[563]: odm_get_first name=fscsi4 AND attribute=dyntrk AND value=yes
cl_fscsilunreset[613]: fscsi4 dyntrk=1
cl_fscsilunreset[258]: get_sid_lun(hdisk8)
cl_fscsilunreset[676]: odm_get_first name=hdisk8 AND attribute=scsi_id
cl_fscsilunreset[727]: odm_get_first name=hdisk8 AND attribute=lun_id
cl_fscsilunreset[783]: odm_get_first name=hdisk8 AND attribute=ww_name
cl_fscsilunreset[833]: odm_get_first name=hdisk8 AND attribute=node_name
cl_fscsilunreset[905]: hdisk8 id=0X10000 lun=0X1000000000000 ww_name=0X200888CF98B64DFF node_name=0X210088CF98B64DFF
cl_fscsilunreset[308]: device_connect(hdisk8)
cl_fscsilunreset[431]: openx(hdisk8, O_RDWR, 0, SC_RETAIN_RESERVATION + SC_FORCED_OPEN_LUN)
cl_fscsilunreset[475]: close(/dev/hdisk8)
cl_fscsilunreset[258]: get_sid_lun(hdisk9)
cl_fscsilunreset[676]: odm_get_first name=hdisk9 AND attribute=scsi_id
cl_fscsilunreset[727]: odm_get_first name=hdisk9 AND attribute=lun_id
cl_fscsilunreset[783]: odm_get_first name=hdisk9 AND attribute=ww_name
cl_fscsilunreset[833]: odm_get_first name=hdisk9 AND attribute=node_name
cl_fscsilunreset[905]: hdisk9 id=0X10000 lun=0X2000000000000 ww_name=0X200888CF98B64DFF node_name=0X210088CF98B64DFF
cl_fscsilunreset[308]: device_connect(hdisk9)
cl_fscsilunreset[431]: openx(hdisk9, O_RDWR, 0, SC_RETAIN_RESERVATION + SC_FORCED_OPEN_LUN)
cl_fscsilunreset[475]: close(/dev/hdisk9)
cl_fscsilunreset[258]: get_sid_lun(hdisk10)
cl_fscsilunreset[676]: odm_get_first name=hdisk10 AND attribute=scsi_id
cl_fscsilunreset[727]: odm_get_first name=hdisk10 AND attribute=lun_id
cl_fscsilunreset[783]: odm_get_first name=hdisk10 AND attribute=ww_name
cl_fscsilunreset[833]: odm_get_first name=hdisk10 AND attribute=node_name
cl_fscsilunreset[905]: hdisk10 id=0X10000 lun=0X3000000000000 ww_name=0X200888CF98B64DFF node_name=0X210088CF98B64DFF
cl_fscsilunreset[308]: device_connect(hdisk10)
cl_fscsilunreset[431]: openx(hdisk10, O_RDWR, 0, SC_RETAIN_RESERVATION + SC_FORCED_OPEN_LUN)
cl_fscsilunreset[475]: close(/dev/hdisk10)
cl_fscsilunreset[258]: get_sid_lun(hdisk17)
cl_fscsilunreset[676]: odm_get_first name=hdisk17 AND attribute=scsi_id
cl_fscsilunreset[727]: odm_get_first name=hdisk17 AND attribute=lun_id
cl_fscsilunreset[783]: odm_get_first name=hdisk17 AND attribute=ww_name
cl_fscsilunreset[833]: odm_get_first name=hdisk17 AND attribute=node_name
cl_fscsilunreset[905]: hdisk17 id=0X10000 lun=0XA000000000000 ww_name=0X200888CF98B64DFF node_name=0X210088CF98B64DFF
cl_fscsilunreset[308]: device_connect(hdisk17)
cl_fscsilunreset[431]: openx(hdisk17, O_RDWR, 0, SC_RETAIN_RESERVATION + SC_FORCED_OPEN_LUN)
cl_fscsilunreset[475]: close(/dev/hdisk17)
cl_fscsilunreset[258]: get_sid_lun(hdisk18)
cl_fscsilunreset[676]: odm_get_first name=hdisk18 AND attribute=scsi_id
cl_fscsilunreset[727]: odm_get_first name=hdisk18 AND attribute=lun_id
cl_fscsilunreset[783]: odm_get_first name=hdisk18 AND attribute=ww_name
cl_fscsilunreset[833]: odm_get_first name=hdisk18 AND attribute=node_name
cl_fscsilunreset[905]: hdisk18 id=0X10000 lun=0XB000000000000 ww_name=0X200888CF98B64DFF node_name=0X210088CF98B64DFF
cl_fscsilunreset[308]: device_connect(hdisk18)
cl_fscsilunreset[431]: openx(hdisk18, O_RDWR, 0, SC_RETAIN_RESERVATION + SC_FORCED_OPEN_LUN)
cl_fscsilunreset[475]: close(/dev/hdisk18)
cl_fscsilunreset[258]: get_sid_lun(hdisk19)
cl_fscsilunreset[676]: odm_get_first name=hdisk19 AND attribute=scsi_id
cl_fscsilunreset[727]: odm_get_first name=hdisk19 AND attribute=lun_id
cl_fscsilunreset[783]: odm_get_first name=hdisk19 AND attribute=ww_name
cl_fscsilunreset[833]: odm_get_first name=hdisk19 AND attribute=node_name
cl_fscsilunreset[905]: hdisk19 id=0X10000 lun=0XC000000000000 ww_name=0X200888CF98B64DFF node_name=0X210088CF98B64DFF
cl_fscsilunreset[308]: device_connect(hdisk19)
cl_fscsilunreset[431]: openx(hdisk19, O_RDWR, 0, SC_RETAIN_RESERVATION + SC_FORCED_OPEN_LUN)
cl_fscsilunreset[475]: close(/dev/hdisk19)
cl_fscsilunreset[258]: get_sid_lun(hdisk26)
cl_fscsilunreset[969]: ioctl SCIOLSTART id=0X10000 lun=0X13000000000000: Invalid argument
cl_fscsilunreset[969]: ioctl SCIOLSTART id=0X10000 lun=0X14000000000000: Invalid argument
cl_fscsilunreset[969]: ioctl SCIOLSTART id=0X10000 lun=0X15000000000000: Invalid argument
cl_fscsilunreset[969]: ioctl SCIOLSTART id=0X10000 lun=0X16000000000000: Invalid argument
cl_fscsilunreset[676]: odm_get_first name=hdisk26 AND attribute=scsi_id
cl_fscsilunreset[727]: odm_get_first name=hdisk26 AND attribute=lun_id
cl_fscsilunreset[783]: odm_get_first name=hdisk26 AND attribute=ww_name
cl_fscsilunreset[833]: odm_get_first name=hdisk26 AND attribute=node_name
cl_fscsilunreset[905]: hdisk26 id=0X10000 lun=0X13000000000000 ww_name=0X200888CF98B64DFF node_name=0X210088CF98B64DFF
cl_fscsilunreset[308]: device_connect(hdisk26)
cl_fscsilunreset[431]: openx(hdisk26, O_RDWR, 0, SC_RETAIN_RESERVATION + SC_FORCED_OPEN_LUN)
cl_fscsilunreset[475]: close(/dev/hdisk26)
cl_fscsilunreset[258]: get_sid_lun(hdisk27)
cl_fscsilunreset[676]: odm_get_first name=hdisk27 AND attribute=scsi_id
cl_fscsilunreset[727]: odm_get_first name=hdisk27 AND attribute=lun_id
cl_fscsilunreset[783]: odm_get_first name=hdisk27 AND attribute=ww_name
cl_fscsilunreset[833]: odm_get_first name=hdisk27 AND attribute=node_name
cl_fscsilunreset[905]: hdisk27 id=0X10000 lun=0X14000000000000 ww_name=0X200888CF98B64DFF node_name=0X210088CF98B64DFF
cl_fscsilunreset[308]: device_connect(hdisk27)
cl_fscsilunreset[431]: openx(hdisk27, O_RDWR, 0, SC_RETAIN_RESERVATION + SC_FORCED_OPEN_LUN)
cl_fscsilunreset[475]: close(/dev/hdisk27)
cl_fscsilunreset[258]: get_sid_lun(hdisk28)
cl_fscsilunreset[676]: odm_get_first name=hdisk28 AND attribute=scsi_id
cl_fscsilunreset[727]: odm_get_first name=hdisk28 AND attribute=lun_id
cl_fscsilunreset[783]: odm_get_first name=hdisk28 AND attribute=ww_name
cl_fscsilunreset[833]: odm_get_first name=hdisk28 AND attribute=node_name
cl_fscsilunreset[905]: hdisk28 id=0X10000 lun=0X15000000000000 ww_name=0X200888CF98B64DFF node_name=0X210088CF98B64DFF
cl_fscsilunreset[308]: device_connect(hdisk28)
cl_fscsilunreset[431]: openx(hdisk28, O_RDWR, 0, SC_RETAIN_RESERVATION + SC_FORCED_OPEN_LUN)
cl_fscsilunreset[475]: close(/dev/hdisk28)
cl_fscsilunreset[258]: get_sid_lun(hdisk29)
cl_fscsilunreset[676]: odm_get_first name=hdisk29 AND attribute=scsi_id
cl_fscsilunreset[727]: odm_get_first name=hdisk29 AND attribute=lun_id
cl_fscsilunreset[783]: odm_get_first name=hdisk29 AND attribute=ww_name
cl_fscsilunreset[833]: odm_get_first name=hdisk29 AND attribute=node_name
cl_fscsilunreset[905]: hdisk29 id=0X10000 lun=0X16000000000000 ww_name=0X200888CF98B64DFF node_name=0X210088CF98B64DFF
cl_fscsilunreset[308]: device_connect(hdisk29)
cl_fscsilunreset[431]: openx(hdisk29, O_RDWR, 0, SC_RETAIN_RESERVATION + SC_FORCED_OPEN_LUN)
cl_fscsilunreset[475]: close(/dev/hdisk29)
cl_fscsilunreset[499]: close(/dev/fscsi4)
+Res1:cl_disk_available[1787] : Take care of all the parallel scsi disks, doing all those associated
+Res1:cl_disk_available[1788] : with a specific adapter at one time
+Res1:cl_disk_available[1795] : Take care of all the iscsi disks, doing all those associated
+Res1:cl_disk_available[1796] : with a specific adapter at one time
+Res1:cl_disk_available[1803] : Take care of all the RAID scsi disks, doing all those associated
+Res1:cl_disk_available[1804] : with a specific adapter at one time
+Res1:cl_disk_available[1811] : Take care of all SAS disks, doing all those associated with a
+Res1:cl_disk_available[1812] : specific adapter at one time
+Res1:cl_disk_available[1819] : wait to sync any background processes still breaking reserves
+Res1:cl_disk_available[1821] [[ -n '' ]]
+Res1:cl_disk_available[1827] : If there were ghost disk profiles queued up to be removed, do so now.
+Res1:cl_disk_available[1836] : If there were disks to make available, do so now
+Res1:cl_disk_available[1849] : Having finally cleaned up any reserves or ghost disks on the underlying
+Res1:cl_disk_available[1850] : hdisks, break any persistent reserve on vpath devices.
+Res1:cl_disk_available[1857] : Go back and check to see if the various disks came on line, and update
+Res1:cl_disk_available[1858] : the cluster manager status as appropriate
+Res1:cl_disk_available[1860] [[ true == true ]]
+Res1:cl_disk_available[1862] export GROUPNAME
+Res1:cl_disk_available[1864] : Extract the disk list and the corresponding volume group list
+Res1:cl_disk_available[1865] : for the disks in this resource group
+Res1:cl_disk_available[1867] print hdisk8,hdisk9,hdisk10,hdisk17,hdisk18,hdisk19,hdisk26,hdisk27
+Res1:cl_disk_available[1867] read LIST_OF_HDISKS_FOR_RG HDISKS
+Res1:cl_disk_available[1867] IFS=:
+Res1:cl_disk_available[1868] print datavg1 datavg2
+Res1:cl_disk_available[1868] read LIST_OF_VOLUME_GROUPS_FOR_RG VOLUME_GROUPS
+Res1:cl_disk_available[1868] IFS=:
+Res1:cl_disk_available[1871] : Do each of the disks in the list for this resource group
+Res1:cl_disk_available[1871] IFS=', '
+Res1:cl_disk_available[1871] set -- hdisk8,hdisk9,hdisk10,hdisk17,hdisk18,hdisk19,
+Res1:cl_disk_available[1871] print hdisk8 hdisk9 hdisk10
+Res1:cl_disk_available[1876] : Extract the name of the volume group that owns this disk
+Res1:cl_disk_available[1878] print datavg1,datavg2
+Res1:cl_disk_available[1878] read vg LIST_OF_VOLUME_GROUPS_FOR_RG
+Res1:cl_disk_available[1878] IFS=', '
+Res1:cl_disk_available[1881] : Those disks which are in a volume group which is already varyd on
+Res1:cl_disk_available[1882] : do not need to be checked
+Res1:cl_disk_available[1884] [[ -z vgmgdb ]]
+Res1:cl_disk_available[1884] [[ 'softvg rootvg' != ?(\ )vgmgdb?(\ ) ]]
+Res1:cl_disk_available[1885] verify_disk_availability hdisk8
+Res1:cl_disk_available[4] disk=hdisk8
+Res1:cl_disk_available[4] typeset disk
+Res1:cl_disk_available[7] : Do not do this check for a Remote Physical Volume Client
+Res1:cl_disk_available[8] : disk. The makedev for a Remote Physical Volume Client disk
+Res1:cl_disk_available[9] : is done in the predisk_available Replicated Resource Method.
+Res1:cl_disk_available[11] lsdev -Cc disk -l hdisk8 -F PdDvLn
+Res1:cl_disk_available[11] PdDvLn=disk/fcp/HWXSG1
+Res1:cl_disk_available[12] [[ disk/fcp/HWXSG1 == disk/remote_disk/rpvclient ]]
+Res1:cl_disk_available[17] lsdev -Cc disk -l hdisk8 -S A -F status
+Res1:cl_disk_available[17] LC_ALL=C
+Res1:cl_disk_available[17] [[ -z Available ]]
+Res1:cl_disk_available[25] : Note that the resource manager is not updated with the status of the
+Res1:cl_disk_available[26] : individual disks. This is because loss of a disk is not necessarily
+Res1:cl_disk_available[27] : severe enough to stop the event - varyonvg may still work.
+Res1:cl_disk_available[1876] : Extract the name of the volume group that owns this disk
+Res1:cl_disk_available[1878] print vgptdb,vgptdb,vgptdb,vgptdb,vgptdb,vgptdb,vgptdb,vgptdb,vgptdb
+Res1:cl_disk_available[1878] read vg LIST_OF_VOLUME_GROUPS_FOR_RG
+Res1:cl_disk_available[1878] IFS=', '
+Res1:cl_disk_available[1881] : Those disks which are in a volume group which is already varyd on
+Res1:cl_disk_available[1882] : do not need to be checked
+Res1:cl_disk_available[1884] [[ -z vgptdb ]]
+Res1:cl_disk_available[1884] [[ 'softvg rootvg' != ?(\ )vgptdb?(\ ) ]]
+Res1:cl_disk_available[1885] verify_disk_availability hdisk9
+Res1:cl_disk_available[4] disk=hdisk9
+Res1:cl_disk_available[4] typeset disk
+Res1:cl_disk_available[7] : Do not do this check for a Remote Physical Volume Client
+Res1:cl_disk_available[8] : disk. The makedev for a Remote Physical Volume Client disk
+Res1:cl_disk_available[9] : is done in the predisk_available Replicated Resource Method.
+Res1:cl_disk_available[11] lsdev -Cc disk -l hdisk9 -F PdDvLn
+Res1:cl_disk_available[11] PdDvLn=disk/fcp/HWXSG1
+Res1:cl_disk_available[12] [[ disk/fcp/HWXSG1 == disk/remote_disk/rpvclient ]]
+Res1:cl_disk_available[17] lsdev -Cc disk -l hdisk9 -S A -F status
+Res1:cl_disk_available[17] LC_ALL=C
+Res1:cl_disk_available[17] [[ -z Available ]]
+Res1:cl_disk_available[25] : Note that the resource manager is not updated with the status of the
+Res1:cl_disk_available[26] : individual disks. This is because loss of a disk is not necessarily
+Res1:cl_disk_available[27] : severe enough to stop the event - varyonvg may still work.
+Res1:cl_disk_available[1876] : Extract the name of the volume group that owns this disk
+Res1:cl_disk_available[1878] print vgname1,vgname2
+Res1:cl_disk_available[1878] read vg LIST_OF_VOLUME_GROUPS_FOR_RG
+Res1:cl_disk_available[1878] IFS=', '
+Res1:cl_disk_available[1881] : Those disks which are in a volume group which is already varyd on
+Res1:cl_disk_available[1882] : do not need to be checked
+Res1:cl_disk_available[1884] [[ -z vgptdb ]]
+Res1:cl_disk_available[1884] [[ 'softvg rootvg' != ?(\ )vgptdb?(\ ) ]]
+Res1:cl_disk_available[1885] verify_disk_availability hdisk10
+Res1:cl_disk_available[4] disk=hdisk10
+Res1:cl_disk_available[4] typeset disk
+Res1:cl_disk_available[7] : Do not do this check for a Remote Physical Volume Client
+Res1:cl_disk_available[8] : disk. The makedev for a Remote Physical Volume Client disk
+Res1:cl_disk_available[9] : is done in the predisk_available Replicated Resource Method.
+Res1:cl_disk_available[11] lsdev -Cc disk -l hdisk10 -F PdDvLn
+Res1:cl_disk_available[11] PdDvLn=disk/fcp/HWXSG1
+Res1:cl_disk_available[12] [[ disk/fcp/HWXSG1 == disk/remote_disk/rpvclient ]]

IBM HACMP verifies every storage device it does not recognize as its own. For third-party storage, you therefore need to modify the custom disk methods.

To configure the ReserveCheck and ReserveBreak functions with the odmget -q name=xxxx CuDv and smit hacmp commands, perform the following steps:

  1. Run odmget -q name=xxxx CuDv to obtain the disk type information.
  2. Run smit hacmp.
  3. In the "HACMP for AIX" configuration menu, select "Extended Configuration" > "Extended Resource Configuration" > "HACMP Extended Resources Configuration" > "Configure Custom Disk Methods" > "Add Custom Disk Methods".
  4. On the panel shown below, enter the disk type obtained in step 1 into the field after "New Disk Type":
     [Entry Fields]
     New Disk Type []
     Method to identify ghost disks [SCSI3] +
     Method to determine if a reserve is held [/usr/sbin/ReserveCheck] +
     Method to break a reserve [/usr/sbin/ReserveBreak] +
     Break reserves in parallel false +
     Method to make the disk available [MKDEV]
  5. Press Enter to complete the ReserveCheck and ReserveBreak configuration.
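
As an illustration of step 1, the disk type string that goes into "New Disk Type" can be read from the ODM. This is only a sketch; hdisk8 is the sample disk from the log above, so substitute one of your own shared disks:

# Read the PdDvLn (predefined device link) of one shared disk; this string is the disk type to enter
odmget -q "name=hdisk8" CuDv | grep PdDvLn
#   PdDvLn = "disk/fcp/HWXSG1"
# The same value can be read with lsdev, exactly as the cluster script does in the log above
lsdev -Cc disk -l hdisk8 -F PdDvLn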

lspv (list the physical volumes and their volume group membership)

lsattr -El hdiskX (X is the disk number)
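
For example, the attributes that the following steps change can be checked first (hdisk8 is again the sample disk from the log; the attribute names are those reported by lsattr for AIX MPIO disks):

# Show only the attributes relevant to the changes below
lsattr -El hdisk8 | grep -E 'queue_depth|reserve_policy|PR_key_value'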

Modify the queue depth

chdev -l hdiskX -a queue_depth=32 -P (every disk must be modified)
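
A minimal sketch of applying this to all of the shared disks at once, assuming the disk list from the log above (adjust the list to your own environment):

for d in hdisk8 hdisk9 hdisk10 hdisk17 hdisk18 hdisk19 hdisk26 hdisk27 hdisk28 hdisk29
do
    # -P only records the change in the ODM; it takes effect the next time the disk is configured
    chdev -l $d -a queue_depth=32 -P
done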

Modify reserve_policy

chdev -a PR_key_value=1 -l hdiskX (on the primary node)

chdev -a PR_key_value=2 -l hdiskX (on the standby node; every disk must be modified)

chdev -a reserve_policy=PR_exclusive -l hdiskX (every disk must be modified)
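
A hedged sketch of running both changes over all of the shared disks on one node (PR_KEY is 1 on the primary node and 2 on the standby node, per the steps above; the disk list is the one from the log):

PR_KEY=2    # use 1 on the primary node, 2 on the standby node
for d in hdisk8 hdisk9 hdisk10 hdisk17 hdisk18 hdisk19 hdisk26 hdisk27 hdisk28 hdisk29
do
    # add -P and defer to the next reboot if chdev reports that the disk is busy
    chdev -l $d -a PR_key_value=$PR_KEY -a reserve_policy=PR_exclusive
done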

Configure the ReserveCheck and ReserveBreak functions
The odmget -q name=xxxx CuDv and smit hacmp commands are used to configure the ReserveCheck and ReserveBreak functions. Once they are configured, when one node fails, the other node automatically invokes ReserveCheck and ReserveBreak to take over the failed node's resources.
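
As a final sanity check (this assumes the vendor-supplied scripts were installed under /usr/sbin, as referenced in the SMIT panel above), confirm that the two methods actually exist and are executable before restarting cluster services:

ls -l /usr/sbin/ReserveCheck /usr/sbin/ReserveBreak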
