HACMP at a government agency site

HACMP stop and resource group takeover failures

1. After stopping HA on node A, the resource group was not varied off and the service IP stayed configured on the node;
2. After restarting HA on both nodes, we planned to move the resource group from node A to node B, but during the takeover the resource group never moved. Could anyone help analyze this?
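
To help with the analysis, here are the standard read-only state checks (a minimal sketch, assuming a stock PowerHA/HACMP ES install with utilities under /usr/es/sbin/cluster/utilities, and the default hacmp.out location that appears in the logs below; run on both nodes):

    export PATH=$PATH:/usr/es/sbin/cluster/utilities
    clRGinfo                     # where the cluster believes each resource group is online
    clshowres                    # RG definition: service IP, volume groups, filesystems, app servers
    lsvg -o                      # volume groups still varied on after the stop on node A
    netstat -in                  # is the service address still aliased onto an interface?
    grep EVENT /tmp/hacmp.out | tail -20    # recent event sequence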



HA log (hacmp.out) from node A:

May 14 05:59:12 EVENT START: node_down xindaia graceful

:node_down[63] [[ high == high ]]
:node_down[63] version=1.59.1.2
:node_down[64] cl_get_path
:node_down[64] HA_DIR=es
:node_down[66] NODENAME=xindaia
:node_down[66] export NODENAME
:node_down[67] PARAM=graceful
:node_down[67] export PARAM
:node_down[69] UPDATESTATDFILE=/usr/es/sbin/cluster/etc/updatestatd
:node_down[72] : This will be the exit status seen by the Cluster Manager.
:node_down[73] : If STATUS is not 0, the Cluster Manager will enter reconfiguration
:node_down[74] : All lower level scripts should pass status back to the caller.
:node_down[75] : This will allow a Resource Groups to be processed individaully,
:node_down[76] : independent of the status of another resource group.
:node_down[78] STATUS=0
:node_down[78] typeset -i STATUS
:node_down[80] EMULATE=REAL
:node_down[82] set -u
:node_down[84] (( 2 < 1 ))
:node_down[89] rm -f /tmp/.RPCLOCKDSTOPPED
:node_down[90] rm -f /usr/es/sbin/cluster/etc/updatestatd
:node_down[92] [[ graceful == forced ]]
:node_down[114] UPDATESTATD=0
:node_down[115] export UPDATESTATD
:node_down[119] : If RG_DEPENDENCIES was set to true by the cluster manager,
:node_down[120] : then all resource group actions are taken via rg_move events.
:node_down[122] [[ FALSE == FALSE ]]
:node_down[125] : Set the RESOURCE_GROUPS environment variable with the names
:node_down[126] : of all Resource Groups participating in this event, and export
:node_down[127] : them to all successive scripts.
:node_down[129] set -a
:node_down[130] clsetenvgrp xindaia node_down graceful
:clsetenvgrp[+50] [[ high = high ]]
:clsetenvgrp[+50] version=1.16
:clsetenvgrp[+52] usingVer=clSetenvgrp
:clsetenvgrp[+57] clSetenvgrp xindaia node_down graceful
executing clSetenvgrp
clSetenvgrp completed successfully
:clsetenvgrp[+58] exit 0
:node_down[130] eval FORCEDOWN_GROUPS='""' RESOURCE_GROUPS='""' HOMELESS_GROUPS='""' HOMELESS_FOLLOWER_GROUPS='""' ERRSTATE_GROUPS='""' PRINCIPAL_ACTIONS='""' ASSOCIATE_ACTIONS='""' AUXILLIARY_ACTIONS='""'
:node_down[1] FORCEDOWN_GROUPS=''
:node_down[1] RESOURCE_GROUPS=''
:node_down[1] HOMELESS_GROUPS=''
:node_down[1] HOMELESS_FOLLOWER_GROUPS=''
:node_down[1] ERRSTATE_GROUPS=''
:node_down[1] PRINCIPAL_ACTIONS=''
:node_down[1] ASSOCIATE_ACTIONS=''
:node_down[1] AUXILLIARY_ACTIONS=''
:node_down[131] RC=0
:node_down[132] set +a
:node_down[133] (( 0 != 0 ))
:node_down[138] : Process_Resources for parallel-processed resource groups
:node_down[139] : If RG_DEPENDENCIES is true, then this call is responsible for
:node_down[140] : starting the necessary rg_move events.
:node_down[142] process_resources
:process_resources[+2596] [[ high = high ]]
:process_resources[+2596] version=1.115.1.7
:process_resources[+2597] :process_resources[+2597] cl_get_path
HA_DIR=es
:process_resources[+2599] STATUS=0
:process_resources[+2600] sddsrv_off=FALSE
:process_resources[+2602] [[ -z  ]]
:process_resources[+2604] EMULATE=REAL
:process_resources[+2607] true
:process_resources[+2609] set -a
:process_resources[+2612] clRGPA
:clRGPA[+49] [[ high = high ]]
:clRGPA[+49] version=1.16
:clRGPA[+51] usingVer=clrgpa
:clRGPA[+56] clrgpa
:clRGPA[+57] exit 0
:process_resources[+2612] eval JOB_TYPE=RELEASE RESOURCE_GROUPS="data" PRINCIPAL_ACTION="RELEASE" AUXILLIARY_ACTION="NONE"
:process_resources[+2612] JOB_TYPE=RELEASE RESOURCE_GROUPS=data PRINCIPAL_ACTION=RELEASE AUXILLIARY_ACTION=NONE
:process_resources[+2614] RC=0
:process_resources[+2615] set +a
:process_resources[+2617] [ 0 -ne 0 ]
:process_resources[+2623] RESOURCE_GROUPS=data
:process_resources[+2846] set_resource_group_state RELEASING
:process_resources[set_resource_group_state+4] STAT=0
+data:process_resources[set_resource_group_state+7] export GROUPNAME
+data:process_resources[set_resource_group_state+8] [ RELEASING != DOWN ]
+data:process_resources[set_resource_group_state+10] [ REAL = EMUL ]
+data:process_resources[set_resource_group_state+15] clchdaemons -d clstrmgr_scripts -t resource_locator -n xindaia -o data -v RELEASING
+data:process_resources[set_resource_group_state+16] [ 0 -ne 0 ]
+data:process_resources[set_resource_group_state+27] [ RELEASING = ACQUIRING ]
+data:process_resources[set_resource_group_state+32] [ RELEASING = RELEASING ]
+data:process_resources[set_resource_group_state+34] cl_RMupdate releasing data process_resources
Reference string: Thu.May.14.05:59:12.GMT.2015.process_resources.data.ref
+data:process_resources[set_resource_group_state+35] continue
+data:process_resources[set_resource_group_state+81] return 0
+data:process_resources[+2847] RC=0
+data:process_resources[+2847] [[ 0 != 0 ]]
+data:process_resources[+2607] true
+data:process_resources[+2609] set -a
+data:process_resources[+2612] clRGPA
+data:clRGPA[+49] [[ high = high ]]
+data:clRGPA[+49] version=1.16
+data:clRGPA[+51] usingVer=clrgpa
+data:clRGPA[+56] clrgpa
+data:clRGPA[+57] exit 0
+data:process_resources[+2612] eval JOB_TYPE=APPLICATIONS ACTION=RELEASE ALL_APPLICATIONS="data" RESOURCE_GROUPS="data " MISCDATA=""
+data:process_resources[+2612] JOB_TYPE=APPLICATIONS ACTION=RELEASE ALL_APPLICATIONS=data RESOURCE_GROUPS=data  MISCDATA=
+data:process_resources[+2614] RC=0
+data:process_resources[+2615] set +a
+data:process_resources[+2617] [ 0 -ne 0 ]
+data:process_resources[+2623] RESOURCE_GROUPS=data
+data:process_resources[+2747] export GROUPNAME=data
+data:process_resources[+2748] process_applications RELEASE
+data:process_resources[process_applications+9] export TMP_FILE=/var/hacmp/log/.process_resources_applications.369122
+data:process_resources[process_applications+11] rm -f /var/hacmp/log/.process_resources_applications.369122
+data:process_resources[process_applications+13] [ RELEASE = ACQUIRE ]
+data:process_resources[process_applications+17] cmd_to_execute=stop_server
+data:process_resources[process_applications+21] NOERRORAPPS=
+data:process_resources[process_applications+22] WAITPIDS=
+data:process_resources[process_applications+23] LPAR_ACQUIRE_FAILED=0
+data:process_resources[process_applications+24] LPAR_RELEASE_FAILED=0
+data:process_resources[process_applications+25] START_STOP_FAILED=0
+data:process_resources[process_applications+33] export GROUPNAME
+data:process_resources[process_applications+39] read LIST_OF_APPLICATIONS_FOR_RG
+data:process_resources[process_applications+39] get_list_head data
+data:process_resources[get_list_head+4] read listhead listtail
+data:process_resources[get_list_head+4] +data:process_resources[get_list_head+4] echo data
IFS=:
+data:process_resources[get_list_head+5] tr ,  
+data:process_resources[get_list_head+5] echo data
+data:process_resources[process_applications+40] read ALL_APPLICATIONS
+data:process_resources[process_applications+40] get_list_tail data
+data:process_resources[get_list_tail+4] read listhead listtail
+data:process_resources[get_list_tail+4] IFS=:
+data:process_resources[get_list_tail+4] echo data
+data:process_resources[get_list_tail+5] echo
+data:process_resources[process_applications+42] read MISCDATA_FOR_RG
+data:process_resources[process_applications+42] get_list_head
+data:process_resources[get_list_head+4] read listhead listtail
+data:process_resources[get_list_head+4] IFS=:
+data:process_resources[get_list_head+4] echo
+data:process_resources[get_list_head+5] tr ,  
+data:process_resources[get_list_head+5] echo
+data:process_resources[process_applications+43] read MISCDATA
+data:process_resources[process_applications+43] get_list_tail
+data:process_resources[get_list_tail+4] read listhead listtail
+data:process_resources[get_list_tail+4] IFS=:
+data:process_resources[get_list_tail+4] echo
+data:process_resources[get_list_tail+5] echo
+data:process_resources[process_applications+46] [ RELEASE = RELEASE ]
+data:process_resources[process_applications+52] TMPLIST=
+data:process_resources[process_applications+53] let cnt=0
+data:process_resources[process_applications+54] print data
+data:process_resources[process_applications+54] set -A appnames data
+data:process_resources[process_applications+56] ((   cnt < 1  ))
+data:process_resources[process_applications+57] TMPLIST=data
+data:process_resources[process_applications+58] LIST_OF_APPLICATIONS_FOR_RG=data
+data:process_resources[process_applications+59] let cnt=cnt+1
+data:process_resources[process_applications+56] ((   cnt < 1  ))
+data:process_resources[process_applications+62] LIST_OF_APPLICATIONS_FOR_RG=data
+data:process_resources[process_applications+66] export APPLICATIONS=data
+data:process_resources[process_applications+67] export MISC_DATA=
+data:process_resources[process_applications+72] [ RELEASE = ACQUIRE ]
+data:process_resources[process_applications+97] ALLAPPLICATIONS=data  
+data:process_resources[process_applications+103] WAITPIDS= 348450
+data:process_resources[process_applications+109] wait 348450
+data:process_resources[process_applications+98] start_or_stop_applications_for_rg RELEASE /var/hacmp/log/.process_resources_applications.369122.data
+data:process_resources[start_or_stop_applications_for_rg+5] [ RELEASE = ACQUIRE ]
+data:process_resources[start_or_stop_applications_for_rg+9] cmd_to_execute=stop_server
+data:process_resources[start_or_stop_applications_for_rg+16] STATUS_FILE=/var/hacmp/log/.process_resources_applications.369122.data
+data:process_resources[start_or_stop_applications_for_rg+22] clcallev stop_server data

May 14 05:59:13 EVENT START: stop_server data

+data:stop_server[+48] [[ high = high ]]
+data:stop_server[+48] version=1.4.1.13
+data:stop_server[+49] +data:stop_server[+49] cl_get_path
HA_DIR=es
+data:stop_server[+51] STATUS=0
+data:stop_server[+55] [ ! -n  ]
+data:stop_server[+57] EMULATE=REAL
+data:stop_server[+60] PROC_RES=false
+data:stop_server[+64] [[ APPLICATIONS != 0 ]]
+data:stop_server[+64] [[ APPLICATIONS != GROUP ]]
+data:stop_server[+65] PROC_RES=true
+data:stop_server[+68] typeset WPARNAME WPARDIR EXEC
+data:stop_server[+69] WPARDIR=
+data:stop_server[+70] EXEC=
+data:stop_server[+72] +data:stop_server[+72] clwparname data
+data:clwparname[35] [[ high == high ]]
+data:clwparname[35] version=1.3
+data:clwparname[37] . /usr/es/sbin/cluster/wpar/wpar_utils
+data:clwparname[+20] ERRNO=0
+data:clwparname[+22] [[ high == high ]]
+data:clwparname[+22] set -x
+data:clwparname[+23] [[ high == high ]]
+data:clwparname[+23] version=1.7
+data:clwparname[+25] . /usr/es/sbin/cluster/wpar/wpar_common_funcs
+data:clwparname[+20] [[ high == high ]]
+data:clwparname[+20] set -x
+data:clwparname[+21] [[ high == high ]]
+data:clwparname[+21] version=1.2
+data:clwparname[+23] PATH=/usr/es/sbin/cluster:/usr/es/sbin/cluster/utilities:/usr/es/sbin/cluster/events:/usr/es/sbin/cluster/events/utils:/usr/es/sbin/cluster/events/cmd:/usr/es/sbin/cluster/diag:/usr/es/sbin/cluster/etc:/usr/es/sbin/cluster/sbin:/usr/es/sbin/cluster/cspoc:/usr/es/sbin/cluster/conversion:/usr/es/sbin/cluster/events/emulate:/usr/es/sbin/cluster/events/emulate/driver:/usr/es/sbin/cluster/events/emulate/utils:/usr/es/sbin/cluster/tguides/bin:/usr/es/sbin/cluster/tguides/classes:/usr/es/sbin/cluster/tguides/images:/usr/es/sbin/cluster/tguides/scripts:/usr/es/sbin/cluster/glvm/utils:/usr/es/sbin/cluster/wpar:/usr/bin:/etc:/usr/sbin:/usr/ucb:/usr/bin/X11:/sbin:/bin:/usr/bin:/usr/sbin
+data:clwparname[+24] export PATH
+data:clwparname[+26] typeset usageErr invalArgErr internalErr
+data:clwparname[+28] usageErr=10
+data:clwparname[+29] invalArgErr=11
+data:clwparname[+30] internalErr=12
+data:clwparname[+39] rgName=data
+data:clwparname[+42] uname
+data:clwparname[+42] OSNAME=AIX
+data:clwparname[+51] [[ AIX == *AIX* ]]
+data:clwparname[+54] lslpp -l bos.wpars
+data:clwparname[+54] 1> /dev/null 2>& 1
+data:clwparname[+56] loadWparName data
+data:clwparname[loadWparName+5] usage='Usage: loadWparName '
+data:clwparname[loadWparName+5] typeset -r usage
+data:clwparname[loadWparName+6] typeset rgName wparName wparDir rc
+data:clwparname[loadWparName+8] [[ 1 < 1 ]]
+data:clwparname[loadWparName+13] rgName=data
+data:clwparname[loadWparName+14] wparDir=/var/hacmp/adm/wpar
+data:clwparname[loadWparName+15] [[ ! -d /var/hacmp/adm/wpar ]]
+data:clwparname[loadWparName+22] [[ -f /var/hacmp/adm/wpar/data ]]
+data:clwparname[loadWparName+23] cat /var/hacmp/adm/wpar/data
+data:clwparname[loadWparName+23] wparName=''
+data:clwparname[loadWparName+24] [[ -n '' ]]
+data:clwparname[loadWparName+36] return 0
+data:clwparname[+56] wparName=''
+data:clwparname[+57] rc=0
+data:clwparname[+58] (( 0 != 0 ))
+data:clwparname[+64] printf %s
+data:clwparname[+65] exit 0
WPARNAME=
+data:stop_server[+74] set -u
+data:stop_server[+77] ALLSERVERS=All_servers
+data:stop_server[+78] [ REAL = EMUL ]
+data:stop_server[+83] cl_RMupdate resource_releasing All_servers stop_server
Reference string: Thu.May.14.05:59:13.GMT.2015.stop_server.All_servers.data.ref
+data:stop_server[+88] [[ -n  ]]
+data:stop_server[+107] +data:stop_server[+107] cut -d: -f3
+data:stop_server[+107] cllsserv -cn data
STOP=/app/dbstop.sh
+data:stop_server[+108] +data:stop_server[+108] cut -d  -f1
+data:stop_server[+108] echo /app/dbstop.sh
STOP_SCRIPT=/app/dbstop.sh
+data:stop_server[+110] PATTERN=xindaia data
+data:stop_server[+110] [[ -n  ]]
+data:stop_server[+110] [[ -z  ]]
+data:stop_server[+110] [[ -x /app/dbstop.sh ]]
+data:stop_server[+120] [ REAL = EMUL ]
+data:stop_server[+125] /app/dbstop.sh
+data:stop_server[+125] ODMDIR=/etc/objrepos
######  ######   #####
#     # #     # #     #
#     # #     #       #
#     # ######   #####
#     # #     # #
#     # #     # #
######  ######  #######

 #####  ####### ####### ######
#     #    #    #     # #     #
#          #    #     # #     #
 #####     #    #     # ######
      #    #    #     # #
#     #    #    #     # #
 #####     #    ####### #

05/14/2015 20:59:13     0   0   SQL1032N  No start database manager command was issued.
SQL1032N  No start database manager command was issued.  SQLSTATE=57019
+data:stop_server[+127] [ 0 -ne 0 ]
+data:stop_server[+155] ALLNOERRSERV=All_nonerror_servers
+data:stop_server[+156] [ REAL = EMUL ]
+data:stop_server[+161] cl_RMupdate resource_down All_nonerror_servers stop_server
Reference string: Thu.May.14.05:59:13.GMT.2015.stop_server.All_nonerror_servers.data.ref
+data:stop_server[+164] exit 0
May 14 05:59:13 EVENT COMPLETED: stop_server data  0

+data:process_resources[start_or_stop_applications_for_rg+23] RC=0
+data:process_resources[start_or_stop_applications_for_rg+23] [[ 0 != 0 ]]
+data:process_resources[start_or_stop_applications_for_rg+23] [[ 0 != 0 ]]
+data:process_resources[start_or_stop_applications_for_rg+47] print data 0
+data:process_resources[start_or_stop_applications_for_rg+47] 1> /var/hacmp/log/.process_resources_applications.369122.data
+data:process_resources[process_applications+116] [ RELEASE = RELEASE ]
+data:process_resources[process_applications+117] server_release_lpar_resources data
+data:server_release_lpar_resources[606] [[ high == high ]]
+data:server_release_lpar_resources[606] version=1.14.4.5
+data:server_release_lpar_resources[608] typeset HOSTNAME
+data:server_release_lpar_resources[609] typeset MANAGED_SYSTEM
+data:server_release_lpar_resources[610] typeset HMC_IP
+data:server_release_lpar_resources[611] added_apps=''
+data:server_release_lpar_resources[611] typeset added_apps
+data:server_release_lpar_resources[612] APPLICATIONS=''
+data:server_release_lpar_resources[612] typeset APPLICATIONS
+data:server_release_lpar_resources[613] mem_release_type=''
+data:server_release_lpar_resources[613] typeset mem_release_type
+data:server_release_lpar_resources[615] mem_resource=0
+data:server_release_lpar_resources[615] typeset mem_resource
+data:server_release_lpar_resources[616] cpu_resource=0
+data:server_release_lpar_resources[616] typeset cpu_resource
+data:server_release_lpar_resources[617] cuod_mem_resource=0
+data:server_release_lpar_resources[617] typeset cuod_mem_resource
+data:server_release_lpar_resources[618] cuod_cpu_resource=0
+data:server_release_lpar_resources[618] typeset cuod_cpu_resource
+data:server_release_lpar_resources[620] display_event_summary=false
+data:server_release_lpar_resources[620] typeset display_event_summary
+data:server_release_lpar_resources[622] lmb_size=0
+data:server_release_lpar_resources[622] typeset lmb_size
+data:server_release_lpar_resources[624] typeset -i check_cuod
+data:server_release_lpar_resources[625] RC=0
+data:server_release_lpar_resources[625] typeset -i RC
+data:server_release_lpar_resources[628] : Look for any added application servers, beyond those running at the moment
+data:server_release_lpar_resources[630] getopts :g: opt
+data:server_release_lpar_resources[638] shift 0
+data:server_release_lpar_resources[640] APPLICATIONS=data
+data:server_release_lpar_resources[643] : Set up values we are going to need to talk to the HMC, if they have not
+data:server_release_lpar_resources[644] : been set up before.
+data:server_release_lpar_resources[646] [[ -z '' ]]
+data:server_release_lpar_resources[647] hostname
+data:server_release_lpar_resources[647] HOSTNAME=xindaia
+data:server_release_lpar_resources[650] [[ -z xindaia ]]
+data:server_release_lpar_resources[654] [[ -z '' ]]
+data:server_release_lpar_resources[657] sed -n '/value =/s/^.*"\(.*\)".*/\1/p'
+data:server_release_lpar_resources[656] odmget -q name='xindaia and object=HMC_IP' HACMPnode
+data:server_release_lpar_resources[656] HMC_IP=''
+data:server_release_lpar_resources[658] [[ -z '' ]]
+data:server_release_lpar_resources[660] : Node is not an LPAR node
+data:server_release_lpar_resources[662] exit 0
+data:process_resources[process_applications+123] cat /var/hacmp/log/.process_resources_applications.369122.data
+data:process_resources[process_applications+123] read SUCCESS
+data:process_resources[process_applications+123] awk { print $2 }
+data:process_resources[process_applications+123] [[ 0 != 0 ]]
+data:process_resources[process_applications+133] rm -f /var/hacmp/log/.process_resources_applications.369122.data
+data:process_resources[process_applications+133] [[ 0 != 0 ]]
+data:process_resources[process_applications+133] [[ 0 != 0 ]]
+data:process_resources[process_applications+133] [[ 0 != 0 ]]
+data:process_resources[process_applications+147] return 0
+data:process_resources[+2749] RC=0
+data:process_resources[+2749] [[ RELEASE = RELEASE ]]
+data:process_resources[+2749] [[ 0 != 0 ]]
+data:process_resources[+2607] true
+data:process_resources[+2609] set -a
+data:process_resources[+2612] clRGPA
+data:clRGPA[+49] [[ high = high ]]
+data:clRGPA[+49] version=1.16
+data:clRGPA[+51] usingVer=clrgpa
+data:clRGPA[+56] clrgpa
+data:clRGPA[+57] exit 0
+data:process_resources[+2612] eval JOB_TYPE=FILESYSTEMS ACTION=RELEASE FILE_SYSTEMS="/als6tj" RESOURCE_GROUPS="data " FSCHECK_TOOLS="" RECOVERY_METHODS="sequential"
+data:process_resources[+2612] JOB_TYPE=FILESYSTEMS ACTION=RELEASE FILE_SYSTEMS=/als6tj RESOURCE_GROUPS=data  FSCHECK_TOOLS= RECOVERY_METHODS=sequential
+data:process_resources[+2614] RC=0
+data:process_resources[+2615] set +a
+data:process_resources[+2617] [ 0 -ne 0 ]
+data:process_resources[+2623] RESOURCE_GROUPS=data
+data:process_resources[+2681] export GROUPNAME=data
+data:process_resources[+2683] process_file_systems RELEASE
+data:process_resources[process_file_systems+5] STAT=0
+data:process_resources[process_file_systems+7] [ RELEASE = ACQUIRE ]
+data:process_resources[process_file_systems+28] cl_deactivate_fs
+data:cl_deactivate_fs[399] [[ high == high ]]
+data:cl_deactivate_fs[399] version=1.1.11.4
+data:cl_deactivate_fs[401] STATUS=0
+data:cl_deactivate_fs[402] SLEEP=2
+data:cl_deactivate_fs[402] typeset -i SLEEP
+data:cl_deactivate_fs[403] LIMIT=60
+data:cl_deactivate_fs[403] typeset -i LIMIT
+data:cl_deactivate_fs[404] export SLEEP
+data:cl_deactivate_fs[405] export LIMIT
+data:cl_deactivate_fs[406] TMP_FILENAME=_deactivate_fs.tmp
+data:cl_deactivate_fs[408] [[ 0 != 0 ]]
+data:cl_deactivate_fs[413] OEM_CALL=false
+data:cl_deactivate_fs[416] EMULATE=REAL
+data:cl_deactivate_fs[419] : Check here to see if the forced unmount option can be used
+data:cl_deactivate_fs[421] FORCE_OK=''
+data:cl_deactivate_fs[422] export FORCE_OK
+data:cl_deactivate_fs[424] : Each of the V, R, M and F fields are padded to fixed length,
+data:cl_deactivate_fs[425] : to allow reliable comparisons. E.g., maximum VRMF is
+data:cl_deactivate_fs[426] : 99.99.999.999
+data:cl_deactivate_fs[428] typeset -i V R M F
+data:cl_deactivate_fs[429] typeset -Z2 V
+data:cl_deactivate_fs[430] typeset -Z2 R
+data:cl_deactivate_fs[431] typeset -Z3 M
+data:cl_deactivate_fs[432] typeset -Z3 F
+data:cl_deactivate_fs[433] jfs2_lvl=0601002000
+data:cl_deactivate_fs[433] typeset -i jfs2_lvl
+data:cl_deactivate_fs[434] VRMF=0
+data:cl_deactivate_fs[434] typeset -i VRMF
+data:cl_deactivate_fs[437] : Here try and figure out what level of JFS2 is installed
+data:cl_deactivate_fs[439] lslpp -lcqOr bos.rte.filesystem
+data:cl_deactivate_fs[439] cut -f3 -d:
+data:cl_deactivate_fs[439] read V R M F
+data:cl_deactivate_fs[439] IFS=.
+data:cl_deactivate_fs[440] VRMF=0601003000
+data:cl_deactivate_fs[442] (( 601003000 >= 601002000 ))
+data:cl_deactivate_fs[444] FORCE_OK=true
+data:cl_deactivate_fs[448] : if JOB_TYPE is set, and it does not equal to GROUP, then
+data:cl_deactivate_fs[449] : we are processing for process_resources
+data:cl_deactivate_fs[451] [[ FILESYSTEMS != 0 ]]
+data:cl_deactivate_fs[451] [[ FILESYSTEMS != GROUP ]]
+data:cl_deactivate_fs[452] deactivate_fs_process_resources
+data:cl_deactivate_fs[5] STATUS=0
+data:cl_deactivate_fs[8] : for the temp file, just take the first rg name
+data:cl_deactivate_fs[10] echo data
+data:cl_deactivate_fs[10] cut -f 1 -d ' '
+data:cl_deactivate_fs[10] read RES_GRP
+data:cl_deactivate_fs[11] TMP_FILENAME=data_deactivate_fs.tmp
+data:cl_deactivate_fs[14] : Remove the status file if already exists
+data:cl_deactivate_fs[16] rm -f /tmp/data_deactivate_fs.tmp
+data:cl_deactivate_fs[19] : go through all resource groups
+data:cl_deactivate_fs[23] export GROUPNAME
+data:cl_deactivate_fs[24] export RECOVERY_METHOD
+data:cl_deactivate_fs[27] : Get a reverse sorted list of the filesystems in this resource group,
+data:cl_deactivate_fs[28] : so that they can be released in the opposite order they were mounted.
+data:cl_deactivate_fs[29] : This is needed for nested mounts
+data:cl_deactivate_fs[31] print /als6tj
+data:cl_deactivate_fs[31] read LIST_OF_FILE_SYSTEMS_FOR_RG FILE_SYSTEMS
+data:cl_deactivate_fs[31] IFS=:
+data:cl_deactivate_fs[32] print /als6tj
+data:cl_deactivate_fs[32] sort -ru
+data:cl_deactivate_fs[32] tr , '\n'
+data:cl_deactivate_fs[32] LIST_OF_FILE_SYSTEMS_FOR_RG=/als6tj
+data:cl_deactivate_fs[35] : Get the recovery method used for all filesystems in this resource group
+data:cl_deactivate_fs[37] print sequential
+data:cl_deactivate_fs[37] read RECOVERY_METHOD RECOVERY_METHODS
+data:cl_deactivate_fs[37] IFS=:
+data:cl_deactivate_fs[38] print sequential
+data:cl_deactivate_fs[38] cut -f 1 -d ,
+data:cl_deactivate_fs[38] RECOVERY_METHOD=sequential
+data:cl_deactivate_fs[41] : verify the recovery method
+data:cl_deactivate_fs[43] RECOVERY_METHOD=sequential
+data:cl_deactivate_fs[44] RECOVERY_METHOD=sequential
+data:cl_deactivate_fs[46] [[ sequential != sequential ]]
+data:cl_deactivate_fs[53] : update resource manager - file systems being released
+data:cl_deactivate_fs[55] ALLFS=All_filesystems
+data:cl_deactivate_fs[56] [[ REAL == EMUL ]]
+data:cl_deactivate_fs[61] cl_RMupdate resource_releasing All_filesystems cl_deactivate_fs
Reference string: Thu.May.14.05:59:13.GMT.2015.cl_deactivate_fs.All_filesystems.data.ref
+data:cl_deactivate_fs[63] pid_list=''
+data:cl_deactivate_fs[66] : now that all variables are set, perform the umounts
+data:cl_deactivate_fs:/als6tj[70] PS4_LOOP=/als6tj
+data:cl_deactivate_fs:/als6tj[71] [[ sequential == parallel ]]
+data:cl_deactivate_fs:/als6tj[82] [[ REAL == EMUL ]]
+data:cl_deactivate_fs:/als6tj[87] fs_umount /als6tj cl_deactivate_fs data_deactivate_fs.tmp
+data:cl_deactivate_fs(.090)[fs_umount+5] FS=/als6tj
+data:cl_deactivate_fs(.090)[fs_umount+5] typeset FS
+data:cl_deactivate_fs(.090)[fs_umount+6] PROGNAME=cl_deactivate_fs
+data:cl_deactivate_fs(.090)[fs_umount+6] typeset PROGNAME
+data:cl_deactivate_fs(.090)[fs_umount+7] TMP_FILENAME=data_deactivate_fs.tmp
+data:cl_deactivate_fs(.090)[fs_umount+7] typeset TMP_FILENAME
+data:cl_deactivate_fs(.090)[fs_umount+8] clwparroot data
+data:clwparroot[35] [[ high == high ]]
+data:clwparroot[35] version=1.1
+data:clwparroot[37] . /usr/es/sbin/cluster/wpar/wpar_utils
+data:clwparroot[+20] ERRNO=0
+data:clwparroot[+22] [[ high == high ]]
+data:clwparroot[+22] set -x
+data:clwparroot[+23] [[ high == high ]]
+data:clwparroot[+23] version=1.7
+data:clwparroot[+25] . /usr/es/sbin/cluster/wpar/wpar_common_funcs
+data:clwparroot[+20] [[ high == high ]]
+data:clwparroot[+20] set -x
+data:clwparroot[+21] [[ high == high ]]
+data:clwparroot[+21] version=1.2
+data:clwparroot[+23] PATH=/usr/es/sbin/cluster:/usr/es/sbin/cluster/utilities:/usr/es/sbin/cluster/events:/usr/es/sbin/cluster/events/utils:/usr/es/sbin/cluster/events/cmd:/usr/es/sbin/cluster/diag:/usr/es/sbin/cluster/etc:/usr/es/sbin/cluster/sbin:/usr/es/sbin/cluster/cspoc:/usr/es/sbin/cluster/conversion:/usr/es/sbin/cluster/events/emulate:/usr/es/sbin/cluster/events/emulate/driver:/usr/es/sbin/cluster/events/emulate/utils:/usr/es/sbin/cluster/tguides/bin:/usr/es/sbin/cluster/tguides/classes:/usr/es/sbin/cluster/tguides/images:/usr/es/sbin/cluster/tguides/scripts:/usr/es/sbin/cluster/glvm/utils:/usr/es/sbin/cluster/wpar:/usr/bin:/etc:/usr/sbin:/usr/ucb:/usr/bin/X11:/sbin:/bin:/usr/bin:/usr/sbin
+data:clwparroot[+24] export PATH
+data:clwparroot[+26] typeset usageErr invalArgErr internalErr
+data:clwparroot[+28] usageErr=10
+data:clwparroot[+29] invalArgErr=11
+data:clwparroot[+30] internalErr=12
+data:clwparroot[+39] rgName=data
+data:clwparroot[+42] uname
+data:clwparroot[+42] OSNAME=AIX
+data:clwparroot[+44] [[ AIX == *AIX* ]]
+data:clwparroot[+45] lslpp -l bos.wpars
+data:clwparroot[+45] 1> /dev/null 2>& 1
+data:clwparroot[+47] loadWparName data
+data:clwparroot[loadWparName+5] usage='Usage: loadWparName '
+data:clwparroot[loadWparName+5] typeset -r usage
+data:clwparroot[loadWparName+6] typeset rgName wparName wparDir rc
+data:clwparroot[loadWparName+8] [[ 1 < 1 ]]
+data:clwparroot[loadWparName+13] rgName=data
+data:clwparroot[loadWparName+14] wparDir=/var/hacmp/adm/wpar
+data:clwparroot[loadWparName+15] [[ ! -d /var/hacmp/adm/wpar ]]
+data:clwparroot[loadWparName+22] [[ -f /var/hacmp/adm/wpar/data ]]
+data:clwparroot[loadWparName+23] cat /var/hacmp/adm/wpar/data
+data:clwparroot[loadWparName+23] wparName=''
+data:clwparroot[loadWparName+24] [[ -n '' ]]
+data:clwparroot[loadWparName+36] return 0
+data:clwparroot[+47] wparName=''
+data:clwparroot[+48] [[ -z '' ]]
+data:clwparroot[+48] exit 0
+data:cl_deactivate_fs(.150)[fs_umount+8] WPAR_ROOT=''
+data:cl_deactivate_fs(.150)[fs_umount+8] typeset WPAR_ROOT
+data:cl_deactivate_fs(.150)[fs_umount+9] STATUS=0
+data:cl_deactivate_fs(.150)[fs_umount+9] typeset STATUS
+data:cl_deactivate_fs(.150)[fs_umount+10] typeset lv
+data:cl_deactivate_fs(.150)[fs_umount+11] typeset lv_lsfs
+data:cl_deactivate_fs(.150)[fs_umount+14] : Get the logical volume associated with the filesystem
+data:cl_deactivate_fs(.160)[fs_umount+16] lsfs -c /als6tj
+data:cl_deactivate_fs(.160)[fs_umount+16] lv_lsfs=$'#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/als6tj:/dev/fslvnew00:jfs2:::398458880:rw:yes:no'
+data:cl_deactivate_fs(.160)[fs_umount+28] : Get the logical volume name and filesystem type
+data:cl_deactivate_fs(.170)[fs_umount+30] print $'#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/als6tj:/dev/fslvnew00:jfs2:::398458880:rw:yes:no'
+data:cl_deactivate_fs(.170)[fs_umount+30] cut -d: -f2
+data:cl_deactivate_fs(.170)[fs_umount+30] tail -1
+data:cl_deactivate_fs(.180)[fs_umount+30] lv=/dev/fslvnew00
+data:cl_deactivate_fs(.190)[fs_umount+31] print $'#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/als6tj:/dev/fslvnew00:jfs2:::398458880:rw:yes:no'
+data:cl_deactivate_fs(.190)[fs_umount+31] cut -d: -f3
+data:cl_deactivate_fs(.190)[fs_umount+31] tail -1
+data:cl_deactivate_fs(.190)[fs_umount+31] fs_type=jfs2
+data:cl_deactivate_fs(.190)[fs_umount+34] : For WPARs, find the real file system name
+data:cl_deactivate_fs(.190)[fs_umount+36] [[ -n '' ]]
+data:cl_deactivate_fs(.190)[fs_umount+39] : Check to see if filesystem is mounted.
+data:cl_deactivate_fs(.190)[fs_umount+41] grep -qw /dev/fslvnew00
+data:cl_deactivate_fs(.190)[fs_umount+41] mount
+data:cl_deactivate_fs(.190)[fs_umount+43] (( count=0))
+data:cl_deactivate_fs(.190)[fs_umount+43] (( count <= 60))
+data:cl_deactivate_fs(.190)[fs_umount+46] : Try to unmount the file system
+data:cl_deactivate_fs(.190)[fs_umount+47] date '+%h %d %H:%M:%S.000'
+data:cl_deactivate_fs(.200)[fs_umount+47] : Attempt 1 of 61 to unmount at May 14 05:59:13.000
+data:cl_deactivate_fs(.200)[fs_umount+49] umount /als6tj
+data:cl_deactivate_fs(7.470)[fs_umount+52] : Unmount of /als6tj worked.
+data:cl_deactivate_fs(7.470)[fs_umount+54] break
+data:cl_deactivate_fs(7.470)[fs_umount+130] echo 0
+data:cl_deactivate_fs(7.470)[fs_umount+130] 1>> /tmp/data_deactivate_fs.tmp
+data:cl_deactivate_fs(7.470)[fs_umount+131] return 0
+data:cl_deactivate_fs:/als6tj[91] unset PS4_LOOP
+data:cl_deactivate_fs[96] : wait to sync all the processes.
+data:cl_deactivate_fs[98] [[ -z '' ]]
+data:cl_deactivate_fs[100] wait
+data:cl_deactivate_fs[104] ALLNOERROR=All_non_error_filesystems
+data:cl_deactivate_fs[106] : update resource manager
+data:cl_deactivate_fs[108] [[ REAL == EMUL ]]
+data:cl_deactivate_fs[113] cl_RMupdate resource_down All_non_error_filesystems cl_deactivate_fs
Reference string: Thu.May.14.05:59:21.GMT.2015.cl_deactivate_fs.All_non_error_filesystems.data.ref
+data:cl_deactivate_fs[118] : Check to see how the unmounts went
+data:cl_deactivate_fs[120] [[ -f /tmp/data_deactivate_fs.tmp ]]
+data:cl_deactivate_fs[122] grep -q 1 /tmp/data_deactivate_fs.tmp
+data:cl_deactivate_fs[130] rm -f /tmp/data_deactivate_fs.tmp
+data:cl_deactivate_fs[133] return 0
+data:cl_deactivate_fs[453] exit 0
+data:process_resources[process_file_systems+29] RC=0
+data:process_resources[process_file_systems+29] [[ 0 != 0 ]]
+data:process_resources[process_file_systems+29] [[ 0 != 0 ]]
+data:process_resources[process_file_systems+46] return 0
+data:process_resources[+2684] RC=0
+data:process_resources[+2684] [[ RELEASE == RELEASE ]]
+data:process_resources[+2687] ((  0 != 0  ))
+data:process_resources[+2607] true
+data:process_resources[+2609] set -a
+data:process_resources[+2612] clRGPA
+data:clRGPA[+49] [[ high = high ]]
+data:clRGPA[+49] version=1.16
+data:clRGPA[+51] usingVer=clrgpa
+data:clRGPA[+56] clrgpa
+data:clRGPA[+57] exit 0
+data:process_resources[+2612] eval JOB_TYPE=VGS ACTION=RELEASE VOLUME_GROUPS="datavg,rootvg" RESOURCE_GROUPS="data " EXPORT_FILESYSTEM=""
+data:process_resources[+2612] JOB_TYPE=VGS ACTION=RELEASE VOLUME_GROUPS=datavg,rootvg RESOURCE_GROUPS=data  EXPORT_FILESYSTEM=
+data:process_resources[+2614] RC=0
+data:process_resources[+2615] set +a
+data:process_resources[+2617] [ 0 -ne 0 ]
+data:process_resources[+2623] RESOURCE_GROUPS=data
+data:process_resources[+2758] export GROUPNAME=data
+data:process_resources[+2759] process_volume_groups_main RELEASE
+data:process_resources[process_volume_groups_main+3] typeset -i DEF_VARYON_ACTION=0
+data:process_resources[process_volume_groups_main+4] typeset -i FAILURE_IN_METHOD=0
+data:process_resources[process_volume_groups_main+6] STAT=0
+data:process_resources[process_volume_groups_main+8] +data:process_resources[process_volume_groups_main+8] getReplicatedResources
REPLICATED_RESOURCES=false
+data:process_resources[process_volume_groups_main+8] [[ false = true ]]
+data:process_resources[process_volume_groups_main+186] process_volume_groups RELEASE
+data:process_resources[process_volume_groups+4] STAT=0
+data:process_resources[process_volume_groups+6] export GROUPNAME=data
+data:process_resources[process_volume_groups+12] [ RELEASE = ACQUIRE ]
+data:process_resources[process_volume_groups+33] cl_deactivate_vgs -n
+data:cl_deactivate_vgs[439] [[ high == high ]]
+data:cl_deactivate_vgs[439] version=1.1.10.6
+data:cl_deactivate_vgs[440] cl_get_path
+data:cl_deactivate_vgs[440] HA_DIR=es
+data:cl_deactivate_vgs[442] STATUS=0
+data:cl_deactivate_vgs[442] typeset -i STATUS
+data:cl_deactivate_vgs[443] TMP_VARYOFF_STATUS=/tmp/_deactivate_vgs.tmp
+data:cl_deactivate_vgs[444] sddsrv_off=FALSE
+data:cl_deactivate_vgs[445] ALLVGS=All_volume_groups
+data:cl_deactivate_vgs[446] OEM_CALL=false
+data:cl_deactivate_vgs[448] [[ -z '' ]]
+data:cl_deactivate_vgs[450] EMULATE=REAL
+data:cl_deactivate_vgs[453] (( 1 != 0 ))
+data:cl_deactivate_vgs[453] [[ -n == -c ]]
+data:cl_deactivate_vgs[462] EVENT_TYPE=not_set
+data:cl_deactivate_vgs[463] EVENT_TYPE=not_set
+data:cl_deactivate_vgs[466] : if JOB_TYPE is set, and it does not equal to GROUP, then
+data:cl_deactivate_vgs[467] : we are processing for process_resources
+data:cl_deactivate_vgs[469] [[ VGS != 0 ]]
+data:cl_deactivate_vgs[469] [[ VGS != GROUP ]]
+data:cl_deactivate_vgs[470] PROC_RES=true
+data:cl_deactivate_vgs[484] : set -u will report an error if any variable used in the script is not set
+data:cl_deactivate_vgs[486] set -u
+data:cl_deactivate_vgs[489] : Remove the status file if it currently exists
+data:cl_deactivate_vgs[491] rm -f /tmp/_deactivate_vgs.tmp
+data:cl_deactivate_vgs[494] : If the sddsrv daemon is running - vpath dead path detection and
+data:cl_deactivate_vgs[495] : recovery - turn it off, since interactions with the fibre channel
+data:cl_deactivate_vgs[496] : device driver will, in the case where there actually is a dead path,
+data:cl_deactivate_vgs[497] : slow down every vpath operation.
+data:cl_deactivate_vgs[499] ls '/dev/vpath*'
+data:cl_deactivate_vgs[499] 1> /dev/null 2>& 1
+data:cl_deactivate_vgs[526] : Setup for the hats_adapter calls
+data:cl_deactivate_vgs[528] cldomain
+data:cl_deactivate_vgs[528] HA_DOMAIN_NAME=xindai_cluster
+data:cl_deactivate_vgs[528] export HA_DOMAIN_NAME
+data:cl_deactivate_vgs[529] HB_SERVER_SOCKET=/var/ha/soc/topsvcs/server_socket
+data:cl_deactivate_vgs[529] export HB_SERVER_SOCKET
+data:cl_deactivate_vgs[532] : Special processing is required for 2 node clusters. determine the number
+data:cl_deactivate_vgs[533] : of nodes and AIX level
+data:cl_deactivate_vgs[535] TWO_NODE_CLUSTER=FALSE
+data:cl_deactivate_vgs[535] export TWO_NODE_CLUSTER
+data:cl_deactivate_vgs[536] FS_TYPES=''
+data:cl_deactivate_vgs[536] export FS_TYPES
+data:cl_deactivate_vgs[537] grep 'name ='
+data:cl_deactivate_vgs[537] sort
+data:cl_deactivate_vgs[537] wc -l
+data:cl_deactivate_vgs[537] uniq
+data:cl_deactivate_vgs[537] odmget HACMPnode
+data:cl_deactivate_vgs[537] (( 2 ==        2 ))
+data:cl_deactivate_vgs[538] [[ -n '' ]]
+data:cl_deactivate_vgs[588] : Pick up a list of currently varyd on volume groups
+data:cl_deactivate_vgs[590] lsvg -L -o
+data:cl_deactivate_vgs[590] 2> /tmp/lsvg.err
+data:cl_deactivate_vgs[590] VG_ON_LIST=$'datavg\nrootvg'
+data:cl_deactivate_vgs[593] : if we are not called from process_resources, we have the old-style
+data:cl_deactivate_vgs[594] : environment and parameters
+data:cl_deactivate_vgs[596] [[ true == false ]]
+data:cl_deactivate_vgs[668] : Called from process_resources
+data:cl_deactivate_vgs[670] LIST_OF_VOLUME_GROUPS_FOR_RG=''
+data:cl_deactivate_vgs[671] SAVE_VOLUME_GROUPS=datavg,rootvg
+data:cl_deactivate_vgs[676] : Pick up the volume groups for this resource group.
+data:cl_deactivate_vgs[677] : The format of the list is
+data:cl_deactivate_vgs[678] : rg1vg1,rg1vg2,rg1vg3:rg2vg1,rg2vg2,rg2vg3:rg3vg1...
+data:cl_deactivate_vgs[679] : The expression below picks up everything to the first :
+data:cl_deactivate_vgs[681] echo datavg,rootvg
+data:cl_deactivate_vgs[681] read LIST_OF_VOLUME_GROUPS_FOR_RG VOLUME_GROUPS
+data:cl_deactivate_vgs[681] IFS=:
+data:cl_deactivate_vgs[684] : Process any mndhb for these volume groups
+data:cl_deactivate_vgs[686] IFS=,
+data:cl_deactivate_vgs[686] set -- datavg,rootvg
+data:cl_deactivate_vgs[686] print datavg rootvg
+data:cl_deactivate_vgs[686] vgs_process_mndhb 'datavg rootvg'
+data:cl_deactivate_vgs[65] [[ high == high ]]
+data:cl_deactivate_vgs[65] set -x
+data:cl_deactivate_vgs[67] VG_LIST='datavg rootvg'
+data:cl_deactivate_vgs[67] typeset VG_LIST
+data:cl_deactivate_vgs[68] typeset lv_list
+data:cl_deactivate_vgs[69] lv_base=''
+data:cl_deactivate_vgs[69] typeset lv_base
+data:cl_deactivate_vgs[71] STATUS=0
+data:cl_deactivate_vgs[71] typeset -i STATUS
+data:cl_deactivate_vgs[72] RC=0
+data:cl_deactivate_vgs[72] typeset -i RC
+data:cl_deactivate_vgs[77] : If this vg contains lvs that are part of a mndhb network, tell
+data:cl_deactivate_vgs[78] : topsvcs to stop monitoring the network.
+data:cl_deactivate_vgs[79] : Note that we use clrsctinfo/cllsif because it will do the raw device
+data:cl_deactivate_vgs[80] : name mapping for us.
+data:cl_deactivate_vgs[82] clrsctinfo -p cllsif -c
+data:cl_deactivate_vgs[82] grep :datavg:
+data:cl_deactivate_vgs[82] sort -u
+data:cl_deactivate_vgs[82] cut -f 7 -d:
+data:cl_deactivate_vgs[82] lv_list=''
+data:cl_deactivate_vgs[77] : If this vg contains lvs that are part of a mndhb network, tell
+data:cl_deactivate_vgs[78] : topsvcs to stop monitoring the network.
+data:cl_deactivate_vgs[79] : Note that we use clrsctinfo/cllsif because it will do the raw device
+data:cl_deactivate_vgs[80] : name mapping for us.
+data:cl_deactivate_vgs[82] sort -u
+data:cl_deactivate_vgs[82] grep :rootvg:
+data:cl_deactivate_vgs[82] cut -f 7 -d:
+data:cl_deactivate_vgs[82] clrsctinfo -p cllsif -c
+data:cl_deactivate_vgs[82] lv_list=''
+data:cl_deactivate_vgs[103] : if there were any calls to hats_adapter give topsvcs a bit to catch up
+data:cl_deactivate_vgs[105] [[ -n '' ]]
+data:cl_deactivate_vgs[106] return 0
+data:cl_deactivate_vgs[691] : Finish varyoff procesing
+data:cl_deactivate_vgs[693] LIST_OF_VOLUME_GROUPS_FOR_RG=''
+data:cl_deactivate_vgs[694] VOLUME_GROUPS=datavg,rootvg
+data:cl_deactivate_vgs[698] export GROUPNAME
+data:cl_deactivate_vgs[700] : Pick up the volume groups for this resource group.
+data:cl_deactivate_vgs[701] : The format of the list is
+data:cl_deactivate_vgs[702] : rg1vg1,rg1vg2,rg1vg3:rg2vg1,rg2vg2,rg2vg3:rg3vg1...
+data:cl_deactivate_vgs[703] : The expression below picks up everything to the first :
+data:cl_deactivate_vgs[705] echo datavg,rootvg
+data:cl_deactivate_vgs[705] read LIST_OF_VOLUME_GROUPS_FOR_RG VOLUME_GROUPS
+data:cl_deactivate_vgs[705] IFS=:
+data:cl_deactivate_vgs[708] : Reverse the order, so that volume groups are released in the
+data:cl_deactivate_vgs[709] : opposite order from which they were acquired
+data:cl_deactivate_vgs[713] echo datavg,rootvg
+data:cl_deactivate_vgs[715] sort -ru
+data:cl_deactivate_vgs[714] tr , '\n'
+data:cl_deactivate_vgs[713] LIST_OF_VOLUME_GROUPS_FOR_RG=$'rootvg\ndatavg'
+data:cl_deactivate_vgs[715] : Update Resource Manager - let it know that we are varying off the
+data:cl_deactivate_vgs[716] : volume groups for the resource group
+data:cl_deactivate_vgs[718] [[ REAL == EMUL ]]
+data:cl_deactivate_vgs[723] cl_RMupdate resource_releasing All_volume_groups cl_deactivate_vgs
Reference string: Thu.May.14.05:59:21.GMT.2015.cl_deactivate_vgs.All_volume_groups.data.ref
+data:cl_deactivate_vgs[727] : Process the volume groups for this resource group
+data:cl_deactivate_vgs:rootvg[731] PS4_LOOP=rootvg
+data:cl_deactivate_vgs:rootvg[733] : Find out if this volume group is varyd on
+data:cl_deactivate_vgs:rootvg[735] print datavg rootvg
+data:cl_deactivate_vgs:rootvg[735] grep -qw rootvg
+data:cl_deactivate_vgs:rootvg[742] MODE=9999
+data:cl_deactivate_vgs:rootvg[743] /usr/sbin/getlvodm -v rootvg
+data:cl_deactivate_vgs:rootvg[743] VGID=00ce3e1400004c0000000127d2ac9ac4
+data:cl_deactivate_vgs:rootvg[744] lqueryvg -g 00ce3e1400004c0000000127d2ac9ac4 -X
+data:cl_deactivate_vgs:rootvg[744] MODE=0
+data:cl_deactivate_vgs:rootvg[745] (( 0 != 0 ))
+data:cl_deactivate_vgs:rootvg[747] [[ REAL == EMUL ]]
+data:cl_deactivate_vgs:rootvg[753] : Yes, it is varied on, so go vary it off
+data:cl_deactivate_vgs:rootvg[755] vgs_varyoff rootvg 0
+data:cl_deactivate_vgs:rootvg[126] PS4_FUNC=vgs_varyoff
+data:cl_deactivate_vgs(.600):rootvg[126] PS4_TIMER=true
+data:cl_deactivate_vgs(.600):rootvg[vgs_varyoff+126] ERRNO=0
+data:cl_deactivate_vgs(.600):rootvg[vgs_varyoff+126] typeset PS4_FUNC PS4_TIMER ERRNO
+data:cl_deactivate_vgs(.600):rootvg[vgs_varyoff+127] [[ high == high ]]
+data:cl_deactivate_vgs(.600):rootvg[vgs_varyoff+127] set -x
+data:cl_deactivate_vgs(.600):rootvg[vgs_varyoff+129] VG=rootvg
+data:cl_deactivate_vgs(.600):rootvg[vgs_varyoff+129] typeset VG
+data:cl_deactivate_vgs(.600):rootvg[vgs_varyoff+130] MODE=0
+data:cl_deactivate_vgs(.600):rootvg[vgs_varyoff+130] typeset MODE
+data:cl_deactivate_vgs(.600):rootvg[vgs_varyoff+132] typeset OPEN_FSs
+data:cl_deactivate_vgs(.600):rootvg[vgs_varyoff+133] typeset OPEN_LVs
+data:cl_deactivate_vgs(.600):rootvg[vgs_varyoff+134] typeset TMP_VG_LIST
+data:cl_deactivate_vgs(.600):rootvg[vgs_varyoff+136] STATUS=0
+data:cl_deactivate_vgs(.600):rootvg[vgs_varyoff+136] typeset -i STATUS
+data:cl_deactivate_vgs(.600):rootvg[vgs_varyoff+137] RC=0
+data:cl_deactivate_vgs(.600):rootvg[vgs_varyoff+137] typeset -i RC
+data:cl_deactivate_vgs(.600):rootvg[vgs_varyoff+138] SELECTIVE_FAILOVER=false
+data:cl_deactivate_vgs(.600):rootvg[vgs_varyoff+138] typeset SELECTIVE_FAILOVER
+data:cl_deactivate_vgs(.600):rootvg[vgs_varyoff+139] typeset LV
+data:cl_deactivate_vgs(.600):rootvg[vgs_varyoff+140] lv_list=''
+data:cl_deactivate_vgs(.600):rootvg[vgs_varyoff+140] typeset lv_list
+data:cl_deactivate_vgs(.600):rootvg[vgs_varyoff+143] : Check to see if this is a DARE event, whilst we have open file systems
+data:cl_deactivate_vgs(.700):rootvg[vgs_varyoff+145] lsvg -l -L rootvg
+data:cl_deactivate_vgs(.700):rootvg[vgs_varyoff+145] LANG=C
+data:cl_deactivate_vgs(.800):rootvg[vgs_varyoff+145] TMP_VG_LIST=$'rootvg:\nLV NAME             TYPE       LPs     PPs     PVs  LV STATE      MOUNT POINT\nhd5                 boot       1       2       2    closed/syncd  N/A\nhd6                 paging     2       4       2    open/syncd    N/A\nhd8                 jfs2log    1       2       2    open/syncd    N/A\nhd4                 jfs2       4       8       2    open/syncd    /\nhd2                 jfs2       16      32      2    open/syncd    /usr\nhd9var              jfs2       8       16      2    open/syncd    /var\nhd3                 jfs2       20      40      2    open/syncd    /tmp\nhd1                 jfs2       8       16      2    open/syncd    /home\nhd10opt             jfs2       48      96      2    open/syncd    /opt\nhd11admin           jfs2       1       2       2    open/syncd    /admin\nlg_dumplv           sysdump    8       8       1    open/syncd    N/A\nlivedump            jfs2       1       2       2    open/syncd    /var/adm/ras/livedump\nfslv00              jfs2       400     800     2    closed/syncd  N/A\npaging00            paging     14      14      1    open/syncd    N/A\nfslv02              jfs2       4       4       1    open/syncd    /bmc\nfslv03              jfs2       8       8       1    open/syncd    /precise'
+data:cl_deactivate_vgs(.800):rootvg[vgs_varyoff+147] [[ not_set == reconfig* ]]
+data:cl_deactivate_vgs(.800):rootvg[vgs_varyoff+165] : Get list of open logical volumes corresponding to file systems
+data:cl_deactivate_vgs(1.000):rootvg[vgs_varyoff+167] print $'rootvg:\nLV NAME             TYPE       LPs     PPs     PVs  LV STATE      MOUNT POINT\nhd5                 boot       1       2       2    closed/syncd  N/A\nhd6                 paging     2       4       2    open/syncd    N/A\nhd8                 jfs2log    1       2       2    open/syncd    N/A\nhd4                 jfs2       4       8       2    open/syncd    /\nhd2                 jfs2       16      32      2    open/syncd    /usr\nhd9var              jfs2       8       16      2    open/syncd    /var\nhd3                 jfs2       20      40      2    open/syncd    /tmp\nhd1                 jfs2       8       16      2    open/syncd    /home\nhd10opt             jfs2       48      96      2    open/syncd    /opt\nhd11admin           jfs2       1       2       2    open/syncd    /admin\nlg_dumplv           sysdump    8       8       1    open/syncd    N/A\nlivedump            jfs2       1       2       2    open/syncd    /var/adm/ras/livedu+data:cl_deactivate_vgs(1.000):rootvg[vgs_varyoff+167] awk '$2 ~ /jfs2?$/ && $6 ~ /open/ {print $1}'
mp\nfslv00              jfs2       400     800     2    closed/syncd  N/A\npaging00            paging     14      14      1    open/syncd    N/A\nfslv02              jfs2       4       4       1    open/syncd    /bmc\nfslv03              jfs2       8       8       1    open/syncd    /precise'
+data:cl_deactivate_vgs(1.000):rootvg[vgs_varyoff+167] OPEN_LVs=$'hd4\nhd2\nhd9var\nhd3\nhd1\nhd10opt\nhd11admin\nlivedump\nfslv02\nfslv03'
+data:cl_deactivate_vgs(1.000):rootvg[vgs_varyoff+170] : If this is an rg_move on selective fallover, lsvg -l might not work
+data:cl_deactivate_vgs(1.000):rootvg[vgs_varyoff+171] : so try looking up the LVs in the ODM if the VG is online
+data:cl_deactivate_vgs(1.000):rootvg[vgs_varyoff+173] [[ -z $'hd4\nhd2\nhd9var\nhd3\nhd1\nhd10opt\nhd11admin\nlivedump\nfslv02\nfslv03' ]]
+data:cl_deactivate_vgs(1.000):rootvg[vgs_varyoff+191] : Attempt to kill off any processes using the logical volume, so that
+data:cl_deactivate_vgs(1.000):rootvg[vgs_varyoff+192] : varyoff will hopefully work. Varyoff is guaranteed to fail if there
+data:cl_deactivate_vgs(1.000):rootvg[vgs_varyoff+193] : are open connections to any logical volume.
+data:cl_deactivate_vgs(1.000):rootvg[vgs_varyoff+195] [[ -n $'hd4\nhd2\nhd9var\nhd3\nhd1\nhd10opt\nhd11admin\nlivedump\nfslv02\nfslv03' ]]
+data:cl_deactivate_vgs(1.000):rootvg[vgs_varyoff+199] fuser -k -u -x /dev/hd4
/dev/hd4:        0c(root)       1c(root)   65698c(root)   82018c(root)   90280c(root)   94428(root)   98452c(root)  118962c(pconsole)  127096c(root)  131192c(root)  135316c(root)  139420(root)  143494c(root)  147594c(root)  151688c(root)  159898c(root)  163902(root)  168060(root)  172246c  180402(root)  184526c(root)  188608(root)  192704c(pconsole)  196748(root)  209088(root)  217326(root)  229376(root)  233538(patrol)  250014c  274598c(root)  290866(root)  303262c(root)  311380c(root)  319688(root)  323596(root)  364590c(root)  368726c(root)  381076c(root)  397554c(dasusr1)  405758c(root)  409694c(root)  413702c  417944c(root)   53696c(root)   66044c(root)   82290(root)   90522c(root)   94712(dasusr1)   98746(root)  115114c(root)  119186c(root)  127394c(root)  131502(root)  139710(pconsole)  143672(patrol)  156068c(root)  160242c(root)  172472c(root)  176562(root)  184608(patrol)  201154(root)  229780c  319744c(root)  324012c(root)  327972c(root)  336326c(root)  340396c(root)  344452c(root)  360790c(root)  393660c  401854c(root)  409954c(root)  414034c  430546c(root)

May 14 06:05:12 EVENT START: config_too_long 360 TE_FAIL_NODE

:config_too_long[+65] [[ high = high ]]
:config_too_long[+65] version=1.16
:config_too_long[+66] :config_too_long[+66] cl_get_path
HA_DIR=es
:config_too_long[+68] NUM_SECS=360
:config_too_long[+69] EVENT=TE_FAIL_NODE
:config_too_long[+71] HOUR=3600
:config_too_long[+72] THRESHOLD=5
:config_too_long[+73] SLEEP_INTERVAL=1
:config_too_long[+79] PERIOD=30
:config_too_long[+82] set -u
:config_too_long[+87] LOOPCNT=0
:config_too_long[+88] MESSAGECNT=0
:config_too_long[+89] :config_too_long[+89] cllsclstr -c
:config_too_long[+89] cut -d : -f2
:config_too_long[+89] grep -v cname
CLUSTER=xindai_cluster
:config_too_long[+90] TIME=360
:config_too_long[+91] sleep_cntr=0
:config_too_long[+96] [ -x /usr/lpp/ssp/bin/spget_syspar ]
:cl_sel[46] FFDC_DIRECTORY=/tmp/ibmsupt/hacmp
:cl_sel[49] uname -m
:cl_sel[49] UNAME=00CE3E144C00
:cl_sel[55] PLATFORM=__AIX__
:cl_sel[55] export PLATFORM
:cl_sel[62] [[ __AIX__ == __AIX__ ]]
:cl_sel[63] awk '{ print $3 }'
:cl_sel[63] tail -1
:cl_sel[63] df -k /tmp
:cl_sel[63] FREE_SPACE=4662504
:cl_sel[71] REQUIRED_SPACE=20000
:cl_sel[72] [ 4662504 -lt 20000 ]
:cl_sel[82] test -d /tmp/ibmsupt/hacmp
:cl_sel[83] [ 0 -ne 0 ]
:cl_sel[94] date +%Y.%m.%d.%H.%M
:cl_sel[94] TIMESTAMP=2015.05.14.06.05
:cl_sel[95] [[ -z 2015.05.14.06.05 ]]
:cl_sel[105] awk -F '"' '{print $2}'
:cl_sel[105] grep 'value ='
:cl_sel[105] odmget -qname=hacmp.out HACMPlogs
:cl_sel[105] LOG_DIRECTORY=/tmp
:cl_sel[107] [[ -z /tmp ]]
:cl_sel[117] tar cvf /tmp/ibmsupt/hacmp/eventlogs.2015.05.14.06.05 /tmp/hacmp.out /tmp/hacmp.out.1 /tmp/hacmp.out.3 /tmp/hacmp.out.5 /tmp/hacmp.out.6 /tmp/hacmp.out.7
:cl_sel[117] 1> /dev/null 2>& 1
:cl_sel[118] [ 0 -ne 0 ]
:cl_sel[125] compress /tmp/ibmsupt/hacmp/eventlogs.2015.05.14.06.05
:cl_sel[125] 1> /dev/null 2>& 1
:cl_sel[130] ls -1 /tmp/ibmsupt/hacmp/eventlogs.2010.05.29.17.19.Z /tmp/ibmsupt/hacmp/eventlogs.2010.06.05.21.17.Z /tmp/ibmsupt/hacmp/eventlogs.2010.08.22.10.47.Z /tmp/ibmsupt/hacmp/eventlogs.2011.03.01.00.12.Z /tmp/ibmsupt/hacmp/eventlogs.2011.03.01.00.18.Z /tmp/ibmsupt/hacmp/eventlogs.2015.05.14.06.05.Z
:cl_sel[130] wc -l
:cl_sel[130] 2> /dev/null
:cl_sel[130] FFDC_COUNT='       6'
:cl_sel[131] [ '       6' -gt 5 ]
:cl_sel[133] expr 6 - 5
:cl_sel[133] number_to_rm=1
:cl_sel[134] head -1
:cl_sel[134] ls -rt1 /tmp/ibmsupt/hacmp/eventlogs.2010.05.29.17.19.Z /tmp/ibmsupt/hacmp/eventlogs.2010.06.05.21.17.Z /tmp/ibmsupt/hacmp/eventlogs.
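
The trace above already shows where this run goes wrong. clRGPA hands process_resources the job VOLUME_GROUPS="datavg,rootvg" for resource group data, so cl_deactivate_vgs varies off rootvg first (release order is reversed) and runs fuser -k -u -x /dev/hd4, killing processes that hold the root filesystem open. rootvg cannot be varied off on a running node, so the event never completes, config_too_long fires six minutes later, and the varyoff of datavg and the release of the service IP are never reached, which matches symptom 1. A sketch of how to confirm the resource group definition (the odmget query is read-only; the smitty menu path may differ slightly between HACMP levels):

    # List the volume groups actually stored in the RG definition:
    odmget -q "group=data and name=VOLUME_GROUP" HACMPresource

    # If rootvg appears in the output, remove it from the resource group
    # via smitty hacmp (Extended Configuration -> Resource Configuration ->
    # Change/Show Resources for a Resource Group), then run cluster
    # Verification and Synchronization before the next takeover test.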




2. Log from the HA takeover attempt:


May 14 22:34:10 EVENT START: fail_interface xindaia 192.168.1.4

:fail_interface[+58] [[ high = high ]]
:fail_interface[+58] version=1.10
:fail_interface[+59] :fail_interface[+59] cl_get_path
HA_DIR=es
:fail_interface[+61] [ 2 -ne 2 ]
:fail_interface[+67] NODENAME=xindaia
:fail_interface[+68] ADDR=192.168.1.4
:fail_interface[+70] set -u
:fail_interface[+72] :fail_interface[+72] dspmsg scripts.cat 8062 Interface 192.168.1.4 has failed on node xindaia.\n 192.168.1.4 xindaia
MSG=Interface 192.168.1.4 has failed on node xindaia.
:fail_interface[+73] echo Interface 192.168.1.4 has failed on node xindaia.
:fail_interface[+73] 1> /dev/console
:fail_interface[+75] [[ xindaia = xindaia ]]
:fail_interface[+79] :fail_interface[+79] cut -d: -f3
:fail_interface[+79] cllsif -Scn 192.168.1.4
NETWORK=net_ether_01
:fail_interface[+82] :fail_interface[+82] odmget -qname=net_ether_01 HACMPnetwork
:fail_interface[+82] sed s/"//g
:fail_interface[+82] awk $1 == "alias" {print $3}
ALIASING=1
:fail_interface[+82] [[ 1 = 1 ]]
:fail_interface[+85] saveNSORDER=UNDEFINED
:fail_interface[+86] NSORDER=local
:fail_interface[+86] export NSORDER
:fail_interface[+87] netstat -in
Name  Mtu   Network     Address           ZoneID    Ipkts Ierrs    Opkts Oerrs  Coll
en0*  1500  link#2      0.14.5e.76.18.9c       -    69417     0    30943     3     0
en0*  1500  192.168.1   192.168.1.4            -    69417     0    30943     3     0
en2   1500  link#3      0.14.5e.76.18.60       -    41661     0     8651     3     0
en2   1500  192.168.2   192.168.2.4            -    41661     0     8651     3     0
en2   1500  10.1.200    10.1.200.136           -    41661     0     8651     3     0
en3   1500  link#4      0.14.5e.76.18.61       -        0     0       84    84     0
en3   1500  192.168.8   192.168.8.3            -        0     0       84    84     0
lo0   16896 link#1                             -    22306     0    22324     0     0
lo0   16896 127         127.0.0.1              -    22306     0    22324     0     0
lo0   16896 ::1                                1    22306     0    22324     0     0
:fail_interface[+88] netstat -rnC
Routing tables
Destination        Gateway           Flags     Wt  Policy  If   Cost Config_Cost

Route tree for Protocol Family 2 (Internet):
default            10.1.200.254      UG         1    -   en2       0    0
10.1.200.0         10.1.200.136      UHSb       1    -   en2       0    0 =>
10.1.200/24        10.1.200.136      U          1    -   en2       0    0
10.1.200.136       127.0.0.1         UGHS       1    -   lo0       0    0
10.1.200.255       10.1.200.136      UHSb       1    -   en2       0    0
127/8              127.0.0.1         U          1    -   lo0       0    0
192.168.1.0        192.168.1.4       UHSb       1    -   en0       0    0 =>
192.168.1/24       192.168.1.4       U          1    -   en0       0    0
192.168.1.4        127.0.0.1         UGHS       1    -   lo0       0    0
192.168.1.255      192.168.1.4       UHSb       1    -   en0       0    0
192.168.2.0        192.168.2.4       UHSb       1    -   en2       0    0 =>
192.168.2/24       192.168.2.4       U          1    -   en2       0    0
192.168.2.4        127.0.0.1         UGHS       1    -   lo0       0    0
192.168.2.255      192.168.2.4       UHSb       1    -   en2       0    0
192.168.8.0        192.168.8.3       UHSb       1    -   en3       0    0 =>
192.168.8/24       192.168.8.3       U          1    -   en3       0    0
192.168.8.3        127.0.0.1         UGHS       1    -   lo0       0    0
192.168.8.255      192.168.8.3       UHSb       1    -   en3       0    0

Route tree for Protocol Family 24 (Internet v6):
::1                ::1               UH         1    -   lo0       0    0
:fail_interface[+89] cl_configure_persistent_address fail_boot -i 192.168.1.4 -n net_ether_01
:cl_configure_persistent_address[+903] [[ high = high ]]
:cl_configure_persistent_address[+903] version=1.37.1.4
:cl_configure_persistent_address[+904] :cl_configure_persistent_address[+904] cl_get_path
HA_DIR=es
:cl_configure_persistent_address[+907] :cl_configure_persistent_address[+907] uname -m
UNAME=00CE3E144C00
:cl_configure_persistent_address[+913] export PLATFORM=__AIX__
:cl_configure_persistent_address[+917] :cl_configure_persistent_address[+917] get_local_nodename
:get_local_nodename[+42] [[ high = high ]]
:get_local_nodename[+42] version=1.2.1.19
:get_local_nodename[+43] :get_local_nodename[+43] cl_get_path
HA_DIR=es
:get_local_nodename[+45] AIXODMDIR=/etc/objrepos
:get_local_nodename[+46] HAODMDIR=/etc/es/objrepos
:get_local_nodename[+48] :get_local_nodename[+48] uname -m
UNAME=00CE3E144C00
:get_local_nodename[+54] export PLATFORM=__AIX__
:get_local_nodename[+60] export ODMDIR=/etc/es/objrepos
:get_local_nodename[+62] :get_local_nodename[+62] /usr/es/sbin/cluster/utilities/cllsclstr -N
nodename=xindaia
:get_local_nodename[+64] :get_local_nodename[+64] cut -d: -f1
:get_local_nodename[+64] cllsnode -cS
NODENAME=xindaia
xindaib
:get_local_nodename[+68] [[ xindaia = xindaia ]]
:get_local_nodename[+71] print xindaia
:get_local_nodename[+72] exit 0
LOCALNODENAME=xindaia
:cl_configure_persistent_address[+919] NETWORK=
:cl_configure_persistent_address[+920] ALIVE_IF=
:cl_configure_persistent_address[+921] FAILED_IF=
:cl_configure_persistent_address[+922] FAILED_ADDRESS=
:cl_configure_persistent_address[+923] CHECK_HA_ALIVE=1
:cl_configure_persistent_address[+924] UPDATE_CLSTRMGR=1
:cl_configure_persistent_address[+926] RESTORE_ROUTES=/usr/es/sbin/cluster/.pers_restore_routes
:cl_configure_persistent_address[+928] ACTION=fail_boot
:cl_configure_persistent_address[+929] shift
:cl_configure_persistent_address[+931] getopt n:a:f:i:dP -i 192.168.1.4 -n net_ether_01
:cl_configure_persistent_address[+931] set -- -i 192.168.1.4 -n net_ether_01 --
:cl_configure_persistent_address[+933] [[ 0 != 0 ]]
:cl_configure_persistent_address[+933] [[ fail_boot =  ]]
:cl_configure_persistent_address[+938] [[ -i != -- ]]
:cl_configure_persistent_address[+954] FAILED_ADDRESS=192.168.1.4
:cl_configure_persistent_address[+955] shift
:cl_configure_persistent_address[+955] shift
:cl_configure_persistent_address[+955] [[ -n != -- ]]
:cl_configure_persistent_address[+941] NETWORK=net_ether_01
:cl_configure_persistent_address[+942] shift
:cl_configure_persistent_address[+942] shift
:cl_configure_persistent_address[+942] [[ -- != -- ]]
:cl_configure_persistent_address[+975] shift
:cl_configure_persistent_address[+977] set -u
:cl_configure_persistent_address[+981] [[ fail_boot = up ]]
:cl_configure_persistent_address[+981] [[ fail_boot = swap ]]
:cl_configure_persistent_address[+981] [[ fail_boot = fail_boot ]]
:cl_configure_persistent_address[+981] [[ 192.168.1.4 =  ]]
:cl_configure_persistent_address[+981] [[ net_ether_01 =  ]]
:cl_configure_persistent_address[+1163] :cl_configure_persistent_address[+1163] awk {print $1}
:cl_configure_persistent_address[+1163] clgetif -a 192.168.1.4
:cl_configure_persistent_address[+1163] 2> /dev/null
IF=en0
:cl_configure_persistent_address[+1164] :cl_configure_persistent_address[+1164] cut -d: -f3
:cl_configure_persistent_address[+1164] cllsif -Scn 192.168.1.4
NETWORK=net_ether_01
:cl_configure_persistent_address[+1164] isAliasingNetwork net_ether_01
:cl_configure_persistent_address[isAliasingNetwork+4] NETWORK=net_ether_01
:cl_configure_persistent_address[isAliasingNetwork+6] odmget -qname=net_ether_01 HACMPnetwork
:cl_configure_persistent_address[isAliasingNetwork+6] sed s/"//g
:cl_configure_persistent_address[isAliasingNetwork+6] awk $1 == "alias" {print $3}
:cl_configure_persistent_address[isAliasingNetwork+6] print 1
:cl_configure_persistent_address[+1164] [[ 1 != 1 ]]
:cl_configure_persistent_address[+1174] :cl_configure_persistent_address[+1174] awk -F: $2 == "persistent" && $3 == "net_ether_01" {print $1}
:cl_configure_persistent_address[+1174] cllsif -Scpi xindaia
PERSISTENT=
:cl_configure_persistent_address[+1174] [[  =  ]]
:cl_configure_persistent_address[+1177] exit 0
:fail_interface[+93] :fail_interface[+93] clgetif -n 192.168.1.4
:fail_interface[+93] LANG=C
NETMASK=255.255.255.0
:fail_interface[+94] :fail_interface[+94] clgetif -a 192.168.1.4
:fail_interface[+94] LANG=C
IF1=en0
:fail_interface[+95] BOOT1=192.168.1.4
:fail_interface[+97] :fail_interface[+97] awk -F: -v net=net_ether_01 -v if1=en0 ($2=="boot" && $3==net && $9!=if1) {printf("%s\n",$7)}
:fail_interface[+97] cllsif -cSi xindaia
BOOT2=192.168.2.4
:fail_interface[+97] [[ -n 192.168.2.4 ]]
:fail_interface[+97] [[ __AIX__ = __AIX__ ]]
:fail_interface[+104] :fail_interface[+104] awk -v boot1=192.168.1.4 (NR > 4 && $1!="default" && $2==boot1 && $3=="U")
                    {printf("%s %s",$1,$2)}
:fail_interface[+104] netstat -rn
BROUTE=192.168.1/24 192.168.1.4
:fail_interface[+104] [[ -n 192.168.1/24 192.168.1.4 ]]
:fail_interface[+104] clgetnet 192.168.1.4 255.255.255.0
:fail_interface[+104] clgetnet 192.168.2.4 255.255.255.0
:fail_interface[+104] [[ 192.168.1.0 = 192.168.2.0 ]]
:fail_interface[+104] [[ UNDEFINED != UNDEFINED ]]
:fail_interface[+136] export NSORDER=
:fail_interface[+141] exit 0
May 14 22:34:11 EVENT COMPLETED: fail_interface xindaia 192.168.1.4 0

                        HACMP Event Summary
Event: TE_FAIL_ADAPTER
Start time: Thu May 14 22:34:10 2015

End time: Thu May 14 22:34:11 2015

Action:                Resource:                        Script Name:
----------------------------------------------------------------------------
No resources changed as a result of this event
----------------------------------------------------------------------------
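A note on the TE_FAIL_ADAPTER event above: cl_configure_persistent_address looked for a persistent label on net_ether_01 and found none (PERSISTENT comes back empty at [+1174]), so the failure of boot address 192.168.1.4 left nothing to move, and the summary correctly reports no resource changes. For anyone following along, the cluster's address layout can be dumped with the same utilities the event scripts themselves call:

# All interfaces/labels the cluster knows on node xindaia, colon-separated
/usr/es/sbin/cluster/utilities/cllsif -cSi xindaia

# Persistent labels defined on node xindaia (field 2 is the label type)
/usr/es/sbin/cluster/utilities/cllsif -Scpi xindaia | awk -F: '$2 == "persistent" {print $1}'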

May 14 22:34:51 EVENT START: network_down xindaia net_ether_01

:network_down[+62] [[ high = high ]]
:network_down[+62] version=1.23
:network_down[+63] :network_down[+63] cl_get_path
HA_DIR=es
:network_down[+65] [ 2 -ne 2 ]
:network_down[+77] :network_down[+77] cl_rrmethods2call net_cleanup
:cl_rrmethods2call[+49] [[ high = high ]]
:cl_rrmethods2call[+49] version=1.14
:cl_rrmethods2call[+50] :cl_rrmethods2call[+50] cl_get_path
HA_DIR=es
:cl_rrmethods2call[+76] RRMETHODS=
:cl_rrmethods2call[+77] NEED_RR_ENV_VARS=no
:cl_rrmethods2call[+83] :cl_rrmethods2call[+83] odmget -qname=net_ether_01 HACMPnetwork
:cl_rrmethods2call[+83] egrep nimname
:cl_rrmethods2call[+83] sed s/"//g
:cl_rrmethods2call[+83] awk {print $3}
RRNET=ether
:cl_rrmethods2call[+83] [[ ether = XD_data ]]
:cl_rrmethods2call[+89] exit 0
METHODS=
:network_down[+91] set -u
:network_down[+104] exit 0
May 14 22:34:51 EVENT COMPLETED: network_down xindaia net_ether_01 0


May 14 22:34:52 EVENT START: network_down_complete xindaia net_ether_01

:network_down_complete[+61] [[ high = high ]]
:network_down_complete[+61] version=1.1.1.13
:network_down_complete[+62] :network_down_complete[+62] cl_get_path
HA_DIR=es
:network_down_complete[+64] [ ! -n  ]
:network_down_complete[+66] EMULATE=REAL
:network_down_complete[+69] [ 2 -ne 2 ]
:network_down_complete[+75] set -u
:network_down_complete[+81] STATUS=0
:network_down_complete[+85] odmget HACMPnode
:network_down_complete[+85] grep name =
:network_down_complete[+85] sort
:network_down_complete[+85] uniq
:network_down_complete[+85] wc -l
:network_down_complete[+85] [ 2 -eq 2 ]
:network_down_complete[+87] :network_down_complete[+87] odmget HACMPgroup
:network_down_complete[+87] grep group =
:network_down_complete[+87] sed s/"//g
:network_down_complete[+87] awk {print $3}
RESOURCE_GROUPS=xindai
data
:network_down_complete[+91] :network_down_complete[+91] odmget -q group=xindai AND name=EXPORT_FILESYSTEM HACMPresource
:network_down_complete[+91] grep value
:network_down_complete[+91] sed s/"//g
:network_down_complete[+91] awk {print $3}
EXPORTLIST=
:network_down_complete[+92] [ -n  ]
:network_down_complete[+91] :network_down_complete[+91] odmget -q group=data AND name=EXPORT_FILESYSTEM HACMPresource
:network_down_complete[+91] grep value
:network_down_complete[+91] sed s/"//g
:network_down_complete[+91] awk {print $3}
EXPORTLIST=
:network_down_complete[+92] [ -n  ]
:network_down_complete[+114] cl_hb_alias_network net_ether_01 add
:cl_hb_alias_network[+57] [[ high = high ]]
:cl_hb_alias_network[+57] version=1.4
:cl_hb_alias_network[+58] :cl_hb_alias_network[+58] cl_get_path
HA_DIR=es
:cl_hb_alias_network[+60] NETWORK=net_ether_01
:cl_hb_alias_network[+61] ACTION=add
:cl_hb_alias_network[+64] [[ 2 != 2 ]]
:cl_hb_alias_network[+70] [[ add != add ]]
:cl_hb_alias_network[+76] set -u
:cl_hb_alias_network[+78] cl_echo 33 Starting execution of /usr/es/sbin/cluster/utilities/cl_hb_alias_network with parameters net_ether_01 add\n /usr/es/sbin/cluster/utilities/cl_hb_alias_network net_ether_01 add
:cl_echo[+35] version=1.16
:cl_echo[+84] HACMP_OUT_FILE=/tmp/hacmp.out
:cl_echo[+89] set +u
:cl_echo[+89] [[ -n  ]]
:cl_echo[+92] set -u
:cl_echo[+95] print -n -u2 May 14 2015 22:34:52
May 14 2015 22:34:52 :cl_echo[+96] MSG_ID=33
:cl_echo[+97] shift
:cl_echo[+98] dspmsg scripts.cat 33 Starting execution of /usr/es/sbin/cluster/utilities/cl_hb_alias_network with parameters net_ether_01 add\n /usr/es/sbin/cluster/utilities/cl_hb_alias_network net_ether_01 add
:cl_echo[+98] 1>& 2
Starting execution of /usr/es/sbin/cluster/utilities/cl_hb_alias_network with parameters net_ether_01 add
:cl_echo[+101] clsynclog /tmp/hacmp.out
:cl_echo[+101] 1> /dev/null 2>& 1
:cl_echo[+105] return 0
:cl_hb_alias_network[+79] date
Thu May 14 22:34:52 GMT+08:00 2015
:cl_hb_alias_network[+81] :cl_hb_alias_network[+81] get_local_nodename
:get_local_nodename[+42] [[ high = high ]]
:get_local_nodename[+42] version=1.2.1.19
:get_local_nodename[+43] :get_local_nodename[+43] cl_get_path
HA_DIR=es
:get_local_nodename[+45] AIXODMDIR=/etc/objrepos
:get_local_nodename[+46] HAODMDIR=/etc/es/objrepos
:get_local_nodename[+48] :get_local_nodename[+48] uname -m
UNAME=00CE3E144C00
:get_local_nodename[+54] export PLATFORM=__AIX__
:get_local_nodename[+60] export ODMDIR=/etc/es/objrepos
:get_local_nodename[+62] :get_local_nodename[+62] /usr/es/sbin/cluster/utilities/cllsclstr -N
nodename=xindaia
:get_local_nodename[+64] :get_local_nodename[+64] cut -d: -f1
:get_local_nodename[+64] cllsnode -cS
NODENAME=xindaia
xindaib
:get_local_nodename[+68] [[ xindaia = xindaia ]]
:get_local_nodename[+71] print xindaia
:get_local_nodename[+72] exit 0
LOCALNODENAME=xindaia
:cl_hb_alias_network[+82] STATUS=0
:cl_hb_alias_network[+85] cllsnw -Scn net_ether_01
:cl_hb_alias_network[+85] grep -q hb_over_alias
:cl_hb_alias_network[+85] cut -d: -f4
:cl_hb_alias_network[+85] exit 0
:network_down_complete[+120] exit 0
May 14 22:34:52 EVENT COMPLETED: network_down_complete xindaia net_ether_01 0

                        HACMP Event Summary
Event: TE_FAIL_NETWORK
Start time: Thu May 14 22:34:51 2015

End time: Thu May 14 22:34:52 2015

Action:                Resource:                        Script Name:
----------------------------------------------------------------------------
No resources changed as a result of this event
----------------------------------------------------------------------------
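Forty seconds after the adapter failure, the whole of net_ether_01 was declared down, and that network_down is what triggers the rg_move_release below: a selective fallover of resource group id 1 (the trace later confirms SELECTIVE_FAILOVER=true). Two quick ways to see what the cluster manager thinks at this point, using only standard utilities:

# Map the RG id in the event name (rg_move_release xindaia 1) to its group name;
# this shows group = "data", matching RGNAME=data in the trace below
odmget -q id=1 HACMPgroup

# Current location and state of each resource group
/usr/es/sbin/cluster/utilities/clRGinfo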

May 14 22:35:02 EVENT START: rg_move_release xindaia 1

:rg_move_release[+54] [[ high = high ]]
:rg_move_release[+54] version=1.6
:rg_move_release[+56] set -u
:rg_move_release[+58] [ 2 != 2 ]
:rg_move_release[+64] set +u
:rg_move_release[+66] clcallev rg_move xindaia 1 RELEASE

May 14 22:35:02 EVENT START: rg_move xindaia 1 RELEASE

:get_local_nodename[+42] [[ high = high ]]
:get_local_nodename[+42] version=1.2.1.19
:get_local_nodename[+43] :get_local_nodename[+43] cl_get_path
HA_DIR=es
:get_local_nodename[+45] AIXODMDIR=/etc/objrepos
:get_local_nodename[+46] HAODMDIR=/etc/es/objrepos
:get_local_nodename[+48] :get_local_nodename[+48] uname -m
UNAME=00CE3E144C00
:get_local_nodename[+54] export PLATFORM=__AIX__
:get_local_nodename[+60] export ODMDIR=/etc/es/objrepos
:get_local_nodename[+62] :get_local_nodename[+62] /usr/es/sbin/cluster/utilities/cllsclstr -N
nodename=xindaia
:get_local_nodename[+64] :get_local_nodename[+64] cut -d: -f1
:get_local_nodename[+64] cllsnode -cS
NODENAME=xindaia
xindaib
:get_local_nodename[+68] [[ xindaia = xindaia ]]
:get_local_nodename[+71] print xindaia
:get_local_nodename[+72] exit 0
:rg_move[+71] version=1.43
:rg_move[+81] STATUS=0
:rg_move[+83] [ ! -n  ]
:rg_move[+85] EMULATE=REAL
:rg_move[+91] set -u
:rg_move[+93] export NODENAME=xindaia
:rg_move[+94] RGID=1
:rg_move[+95] [[ 3 = 3 ]]
:rg_move[+97] ACTION=RELEASE
:rg_move[+104] odmget -qid=1 HACMPgroup
:rg_move[+104] awk {print $3}
:rg_move[+104] egrep group =
:rg_move[+104] eval RGNAME="data"
:rg_move[+104] RGNAME=data
:rg_move[+106] UPDATESTATD=0
:rg_move[+107] export UPDATESTATD
:rg_move[+111] export RG_MOVE_EVENT=true
:rg_move[+116] group_state=$RESGRP_data_xindaia
:rg_move[+117] set +u
:rg_move[+118] eval print $RESGRP_data_xindaia
:rg_move[+118] print ERROR
:rg_move[+118] export RG_MOVE_ONLINE=ERROR
:rg_move[+119] set -u
:rg_move[+120] RG_MOVE_ONLINE=ERROR
:rg_move[+127] rm -f /tmp/.NFSSTOPPED
:rg_move[+128] rm -f /tmp/.RPCLOCKDSTOPPED
:rg_move[+135] set -a
:rg_move[+136] clsetenvgrp xindaia rg_move data
:clsetenvgrp[+50] [[ high = high ]]
:clsetenvgrp[+50] version=1.16
:clsetenvgrp[+52] usingVer=clSetenvgrp
:clsetenvgrp[+57] clSetenvgrp xindaia rg_move data
executing clSetenvgrp
clSetenvgrp completed successfully
:clsetenvgrp[+58] exit 0
:rg_move[+136] eval NFS_data="TRANS" NFSNODE_data="xindaib" FORCEDOWN_GROUPS="" RESOURCE_GROUPS="data" HOMELESS_GROUPS="" HOMELESS_FOLLOWER_GROUPS="" ERRSTATE_GROUPS="" PRINCIPAL_ACTIONS="R" ASSOCIATE_ACTIONS="MO" AUXILLIARY_ACTIONS="N"
:rg_move[+136] NFS_data=TRANS NFSNODE_data=xindaib FORCEDOWN_GROUPS= RESOURCE_GROUPS=data HOMELESS_GROUPS= HOMELESS_FOLLOWER_GROUPS= ERRSTATE_GROUPS= PRINCIPAL_ACTIONS=R ASSOCIATE_ACTIONS=MO AUXILLIARY_ACTIONS=N
:rg_move[+137] RC=0
:rg_move[+138] set +a
:rg_move[+142] [[ 0 -ne 0 ]]
:rg_move[+142] [[ -z data ]]
:rg_move[+151] [[ -z FALSE ]]
:rg_move[+197] set -a
:rg_move[+198] clsetenvres data rg_move
:rg_move[+198] eval PRINCIPAL_ACTION="RELEASE" ASSOCIATE_ACTION="MOUNT" AUXILLIARY_ACTION="NONE" VG_RR_ACTION="RELEASE" SIBLING_NODES= SIBLING_NONOWNER_NODES= SIBLING_ACQUIRING_NODES= SIBLING_RELEASING_NODES= FOLLOWER_ACTION="NONE" NFS_HOST=xindaib DISK= CONCURRENT_VOLUME_GROUP= EXPORT_FILESYSTEM= EXPORT_FILESYSTEM_V4= STABLE_STORAGE_PATH= STABLE_STORAGE_COOKIE= AIX_CONNECTIONS_SERVICES= AIX_FAST_CONNECT_SERVICES= SNA_CONNECTIONS= COMMUNICATION_LINKS= SHARED_TAPE_RESOURCES= MOUNT_FILESYSTEM= INACTIVE_TAKEOVER= TAKEOVER_LABEL= NFSMOUNT_LABEL= MISC_DATA= NFS_NETWORK= SHARED_TAPE_RESOURCES= PPRC_REP_RESOURCE= ERCMF_REP_RESOURCE= SVCPPRC_REP_RESOURCE= GMD_REP_RESOURCE= GMVG_REP_RESOURCE= OEM_VOLUME_GROUP= OEM_FILESYSTEM= APPLICATIONS="data" FILESYSTEM="ALL" FORCED_VARYON="false" FSCHECK_TOOL="fsck" FS_BEFORE_IPADDR="false" RECOVERY_METHOD="sequential" SERVICE_LABEL="xindaiservice" SSA_DISK_FENCING="false" VG_AUTO_IMPORT="false" VOLUME_GROUP="datavg"
:rg_move[+198] PRINCIPAL_ACTION=RELEASE ASSOCIATE_ACTION=MOUNT AUXILLIARY_ACTION=NONE VG_RR_ACTION=RELEASE SIBLING_NODES= SIBLING_NONOWNER_NODES= SIBLING_ACQUIRING_NODES= SIBLING_RELEASING_NODES= FOLLOWER_ACTION=NONE NFS_HOST=xindaib DISK= CONCURRENT_VOLUME_GROUP= EXPORT_FILESYSTEM= EXPORT_FILESYSTEM_V4= STABLE_STORAGE_PATH= STABLE_STORAGE_COOKIE= AIX_CONNECTIONS_SERVICES= AIX_FAST_CONNECT_SERVICES= SNA_CONNECTIONS= COMMUNICATION_LINKS= SHARED_TAPE_RESOURCES= MOUNT_FILESYSTEM= INACTIVE_TAKEOVER= TAKEOVER_LABEL= NFSMOUNT_LABEL= MISC_DATA= NFS_NETWORK= SHARED_TAPE_RESOURCES= PPRC_REP_RESOURCE= ERCMF_REP_RESOURCE= SVCPPRC_REP_RESOURCE= GMD_REP_RESOURCE= GMVG_REP_RESOURCE= OEM_VOLUME_GROUP= OEM_FILESYSTEM= APPLICATIONS=data FILESYSTEM=ALL FORCED_VARYON=false FSCHECK_TOOL=fsck FS_BEFORE_IPADDR=false RECOVERY_METHOD=sequential SERVICE_LABEL=xindaiservice SSA_DISK_FENCING=false VG_AUTO_IMPORT=false VOLUME_GROUP=datavg
:rg_move[+199] set +a
:rg_move[+200] export GROUPNAME=data
+data:rg_move[+200] [[ RELEASE =  ]]
+data:rg_move[+224] [ RELEASE = RELEASE ]
+data:rg_move[+226] [ RELEASE = RELEASE ]
+data:rg_move[+228] clcallev node_down_local

May 14 22:35:02 EVENT START: node_down_local

+data:node_down_local[152] [[ high == high ]]
+data:node_down_local[152] version=1.2.14.1
+data:node_down_local[154] NOT_RELEASE=''
+data:node_down_local[155] STATUS=0
+data:node_down_local[155] typeset -i STATUS
+data:node_down_local[157] EMULATE=REAL
+data:node_down_local[159] (( 0 !=0  ))
+data:node_down_local[167] set +u
+data:node_down_local[168] RESOURCES_CLEANUP=''
+data:node_down_local[169] set -u
+data:node_down_local[172] set +u
+data:node_down_local[173] OEM_FILESYSTEM=''
+data:node_down_local[174] OEM_VOLUME_GROUP=''
+data:node_down_local[177] eval echo '${RESGRP_data_xindaia}'
+data:node_down_local[1] echo ERROR
+data:node_down_local[177] read group_state
+data:node_down_local[182] [[ '' != CLEANUP ]]
+data:node_down_local[189] eval 'echo ${RESGRP_data_xindaia}'
+data:node_down_local[1] echo ERROR
+data:node_down_local[189] read state
+data:node_down_local[190] [[ ERROR != TMP_ERROR ]]
+data:node_down_local[191] [[ ERROR != ONLINE ]]
+data:node_down_local[192] [[ ERROR != ERROR ]]
+data:node_down_local[202] set_resource_status RELEASING
+data:node_down_local[4] set +u
+data:node_down_local[5] NOT_DOIT=''
+data:node_down_local[6] set -u
+data:node_down_local[8] [[ '' == CLEANUP ]]
+data:node_down_local[12] [[ '' != TRUE ]]
+data:node_down_local[14] [[ REAL == EMUL ]]
+data:node_down_local[19] clchdaemons -d clstrmgr_scripts -t resource_locator -n xindaia -o data -v RELEASING
+data:node_down_local[28] [[ RELEASING == RELEASING ]]
+data:node_down_local[30] [[ NONE == RELEASE_SECONDARY ]]
+data:node_down_local[31] [[ NONE == SECONDARY_BECOMES_PRIMARY ]]
+data:node_down_local[35] cl_RMupdate releasing data node_down_local
Reference string: Thu.May.14.22:35:02.GMT+08:00.2015.node_down_local.data.ref
+data:node_down_local[212] [[ -z ERROR ]]
+data:node_down_local[219] set -u
+data:node_down_local[224] [[ -n data ]]
+data:node_down_local[231] TMPLIST=''
+data:node_down_local[232] [[ -n data ]]
+data:node_down_local[233] print data
+data:node_down_local[233] read first_one APPLICATIONS
+data:node_down_local[234] TMPLIST=data
+data:node_down_local[232] [[ -n '' ]]
+data:node_down_local[237] APPLICATIONS=data
+data:node_down_local[240] [[ REAL == EMUL ]]
+data:node_down_local[245] [[ '' != TRUE ]]
+data:node_down_local[246] clcallev stop_server data

May 14 22:35:02 EVENT START: stop_server data

+data:stop_server[+48] [[ high = high ]]
+data:stop_server[+48] version=1.4.1.15
+data:stop_server[+49] +data:stop_server[+49] cl_get_path
HA_DIR=es
+data:stop_server[+51] STATUS=0
+data:stop_server[+55] [ ! -n  ]
+data:stop_server[+57] EMULATE=REAL
+data:stop_server[+60] PROC_RES=false
+data:stop_server[+64] [[ 0 != 0 ]]
+data:stop_server[+68] typeset WPARNAME WPARDIR EXEC
+data:stop_server[+69] WPARDIR=
+data:stop_server[+70] EXEC=
+data:stop_server[+72] +data:stop_server[+72] clwparname data
+data:clwparname[35] [[ high == high ]]
+data:clwparname[35] version=1.3
+data:clwparname[37] . /usr/es/sbin/cluster/wpar/wpar_utils
+data:clwparname[+20] ERRNO=0
+data:clwparname[+22] [[ high == high ]]
+data:clwparname[+22] set -x
+data:clwparname[+23] [[ high == high ]]
+data:clwparname[+23] version=1.7.1.2
+data:clwparname[+25] . /usr/es/sbin/cluster/wpar/wpar_common_funcs
+data:clwparname[+20] [[ high == high ]]
+data:clwparname[+20] set -x
+data:clwparname[+21] [[ high == high ]]
+data:clwparname[+21] version=1.2.1.2
+data:clwparname[+23] PATH=/usr/es/sbin/cluster:/usr/es/sbin/cluster/utilities:/usr/es/sbin/cluster/events:/usr/es/sbin/cluster/events/utils:/usr/es/sbin/cluster/events/cmd:/usr/es/sbin/cluster/diag:/usr/es/sbin/cluster/etc:/usr/es/sbin/cluster/sbin:/usr/es/sbin/cluster/cspoc:/usr/es/sbin/cluster/conversion:/usr/es/sbin/cluster/events/emulate:/usr/es/sbin/cluster/events/emulate/driver:/usr/es/sbin/cluster/events/emulate/utils:/usr/es/sbin/cluster/tguides/bin:/usr/es/sbin/cluster/tguides/classes:/usr/es/sbin/cluster/tguides/images:/usr/es/sbin/cluster/tguides/scripts:/usr/es/sbin/cluster/glvm/utils:/usr/es/sbin/cluster/wpar:/usr/bin:/etc:/usr/sbin:/usr/ucb:/usr/bin/X11:/sbin:/bin:/usr/bin:/usr/sbin
+data:clwparname[+24] export PATH
+data:clwparname[+26] typeset usageErr invalArgErr internalErr
+data:clwparname[+28] usageErr=10
+data:clwparname[+29] invalArgErr=11
+data:clwparname[+30] internalErr=12
+data:clwparname[+39] rgName=data
+data:clwparname[+42] uname
+data:clwparname[+42] OSNAME=AIX
+data:clwparname[+51] [[ AIX == *AIX* ]]
+data:clwparname[+54] lslpp -l bos.wpars
+data:clwparname[+54] 1> /dev/null 2>& 1
+data:clwparname[+56] loadWparName data
+data:clwparname[loadWparName+5] usage='Usage: loadWparName '
+data:clwparname[loadWparName+5] typeset -r usage
+data:clwparname[loadWparName+6] typeset rgName wparName wparDir rc
+data:clwparname[loadWparName+8] [[ 1 < 1 ]]
+data:clwparname[loadWparName+13] rgName=data
+data:clwparname[loadWparName+14] wparDir=/var/hacmp/adm/wpar
+data:clwparname[loadWparName+15] [[ ! -d /var/hacmp/adm/wpar ]]
+data:clwparname[loadWparName+22] [[ -f /var/hacmp/adm/wpar/data ]]
+data:clwparname[loadWparName+23] cat /var/hacmp/adm/wpar/data
+data:clwparname[loadWparName+23] wparName=''
+data:clwparname[loadWparName+24] [[ -n '' ]]
+data:clwparname[loadWparName+36] return 0
+data:clwparname[+56] wparName=''
+data:clwparname[+57] rc=0
+data:clwparname[+58] (( 0 != 0 ))
+data:clwparname[+64] printf %s
+data:clwparname[+65] exit 0
WPARNAME=
+data:stop_server[+74] set -u
+data:stop_server[+77] ALLSERVERS=All_servers
+data:stop_server[+78] [ REAL = EMUL ]
+data:stop_server[+83] cl_RMupdate resource_releasing All_servers stop_server
Reference string: Thu.May.14.22:35:02.GMT+08:00.2015.stop_server.All_servers.data.ref
+data:stop_server[+88] [[ -n  ]]
+data:stop_server[+107] +data:stop_server[+107] cut -d: -f3
+data:stop_server[+107] cllsserv -cn data
STOP=/app/stop.sh
+data:stop_server[+108] +data:stop_server[+108] cut -d  -f1
+data:stop_server[+108] echo /app/stop.sh
STOP_SCRIPT=/app/stop.sh
+data:stop_server[+110] PATTERN=xindaia data
+data:stop_server[+110] [[ -n  ]]
+data:stop_server[+110] [[ -z  ]]
+data:stop_server[+110] [[ -x /app/stop.sh ]]
+data:stop_server[+120] [ REAL = EMUL ]
+data:stop_server[+125] /app/stop.sh
+data:stop_server[+125] ODMDIR=/etc/objrepos

[banner printed by /app/stop.sh ("APP STOP" ASCII art) and a login "YOU HAVE NEW MAIL" notice - trimmed]
ADMU0116I: Tool information is being logged in file
           /opt/IBM/WebSphere/AppServer/profiles/AppSrv01/logs/server1/stopServer.log
ADMU0128I: Starting tool with the AppSrv01 profile
ADMU3100I: Reading configuration for server: server1
ADMU3201I: Server stop request issued. Waiting for stop status.
ADMU4000I: Server server1 stop completed.

======WEB platform stopped successfully======
/home/als6tj/allstop.sh[11]: kill: 0403-008 The number of parameters specified is not correct.
/home/als6tj/allstop.sh[14]: test: 0403-004 Specify a parameter with this command.
======esb communication server stopped successfully======
/home/als6tj/allstop.sh[23]: kill: 0403-008 The number of parameters specified is not correct.
/home/als6tj/allstop.sh[26]: test: 0403-004 Specify a parameter with this command.
======batch scheduling shell stopped successfully======
+data:stop_server[+127] [ 0 -ne 0 ]
+data:stop_server[+155] ALLNOERRSERV=All_nonerror_servers
+data:stop_server[+156] [ REAL = EMUL ]
+data:stop_server[+161] cl_RMupdate resource_down All_nonerror_servers stop_server
Reference string: Thu.May.14.22:35:20.GMT+08:00.2015.stop_server.All_nonerror_servers.data.ref
+data:stop_server[+164] exit 0
May 14 22:35:20 EVENT COMPLETED: stop_server data 0
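stop_server exits 0, so HACMP records the application stop as successful, but note the two ksh errors from /home/als6tj/allstop.sh above: "kill: 0403-008" and "test: 0403-004" both mean the command received an empty argument, i.e. the PID variable was empty because those processes were already down. Harmless here, since only the script's exit code matters to HACMP, but easy to guard against. A minimal sketch (allstop.sh is site code not shown here, and the ps pattern "esb" is only a stand-in):

# Only call test/kill when a PID was actually found
pid=$(ps -ef | awk '/[e]sb/ {print $2}')   # the [e] bracket trick stops awk matching itself
if [ -n "$pid" ]; then                     # test always receives a non-empty argument
    kill $pid                              # kill only runs with a real PID list
fi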

+data:node_down_local[257] server_release_lpar_resources data
+data:server_release_lpar_resources[630] [[ high == high ]]
+data:server_release_lpar_resources[630] version=1.14.4.8
+data:server_release_lpar_resources[632] typeset HOSTNAME
+data:server_release_lpar_resources[633] typeset MANAGED_SYSTEM
+data:server_release_lpar_resources[634] typeset HMC_IP
+data:server_release_lpar_resources[635] added_apps=''
+data:server_release_lpar_resources[635] typeset added_apps
+data:server_release_lpar_resources[636] APPLICATIONS=''
+data:server_release_lpar_resources[636] typeset APPLICATIONS
+data:server_release_lpar_resources[637] mem_release_type=''
+data:server_release_lpar_resources[637] typeset mem_release_type
+data:server_release_lpar_resources[639] mem_resource=0
+data:server_release_lpar_resources[639] typeset mem_resource
+data:server_release_lpar_resources[640] cpu_resource=0
+data:server_release_lpar_resources[640] typeset cpu_resource
+data:server_release_lpar_resources[641] cuod_mem_resource=0
+data:server_release_lpar_resources[641] typeset cuod_mem_resource
+data:server_release_lpar_resources[642] cuod_cpu_resource=0
+data:server_release_lpar_resources[642] typeset cuod_cpu_resource
+data:server_release_lpar_resources[644] display_event_summary=false
+data:server_release_lpar_resources[644] typeset display_event_summary
+data:server_release_lpar_resources[646] lmb_size=0
+data:server_release_lpar_resources[646] typeset lmb_size
+data:server_release_lpar_resources[648] typeset -i check_cuod
+data:server_release_lpar_resources[649] RC=0
+data:server_release_lpar_resources[649] typeset -i RC
+data:server_release_lpar_resources[652] : Look for any added application servers, beyond those running at the moment
+data:server_release_lpar_resources[654] getopts :g: opt
+data:server_release_lpar_resources[662] shift 0
+data:server_release_lpar_resources[664] APPLICATIONS=data
+data:server_release_lpar_resources[667] : Set up values we are going to need to talk to the HMC, if they have not
+data:server_release_lpar_resources[668] : been set up before.
+data:server_release_lpar_resources[670] [[ -z '' ]]
+data:server_release_lpar_resources[671] hostname
+data:server_release_lpar_resources[671] HOSTNAME=xindaia
+data:server_release_lpar_resources[674] [[ -z xindaia ]]
+data:server_release_lpar_resources[678] [[ -z '' ]]
+data:server_release_lpar_resources[681] sed -n '/value =/s/^.*"\(.*\)".*/\1/p'
+data:server_release_lpar_resources[680] odmget -q name='xindaia and object=HMC_IP' HACMPnode
+data:server_release_lpar_resources[680] HMC_IP=''
+data:server_release_lpar_resources[682] [[ -z '' ]]
+data:server_release_lpar_resources[684] : Node is not an LPAR node
+data:server_release_lpar_resources[686] exit 0
+data:node_down_local[263] [[ -n '' ]]
+data:node_down_local[282] [[ -n '' ]]
+data:node_down_local[301] [[ -n '' ]]
+data:node_down_local[323] [[ -n '' ]]
+data:node_down_local[342] CROSSMOUNT=0
+data:node_down_local[342] typeset -i CROSSMOUNT
+data:node_down_local[343] export CROSSMOUNT
+data:node_down_local[345] [[ -n '' ]]
+data:node_down_local[385] (( 0 == 0 ))
+data:node_down_local[391] grep 'name ='
+data:node_down_local[391] sort
+data:node_down_local[391] wc -l
+data:node_down_local[391] uniq
+data:node_down_local[391] odmget HACMPnode
+data:node_down_local[391] (( 2 ==        2 ))
+data:node_down_local[393] cut -f2 '-d"'
+data:node_down_local[393] grep 'group ='
+data:node_down_local[393] odmget HACMPgroup
+data:node_down_local[393] RESOURCE_GROUPS=$'xindai\ndata'
+data:node_down_local[398] cut -f2 '-d"'
+data:node_down_local[398] grep 'value ='
+data:node_down_local[397] odmget -q group='xindai AND name=EXPORT_FILESYSTEM' HACMPresource
+data:node_down_local[397] EXPORTLIST=''
+data:node_down_local[398] [[ -n '' ]]
+data:node_down_local[398] cut -f2 '-d"'
+data:node_down_local[398] grep 'value ='
+data:node_down_local[397] odmget -q group='data AND name=EXPORT_FILESYSTEM' HACMPresource
+data:node_down_local[397] EXPORTLIST=''
+data:node_down_local[398] [[ -n '' ]]
+data:node_down_local[421] [[ false == true ]]
+data:node_down_local[429] [[ -n '' ]]
+data:node_down_local[439] [[ '' != TRUE ]]
+data:node_down_local[440] clcallev release_vg_fs ALL datavg '' ''

May 14 22:35:20 EVENT START: release_vg_fs ALL datavg  

+data:release_vg_fs[52] version=1.4.10.3
+data:release_vg_fs[55] STATUS=0
+data:release_vg_fs[56] DEF_VARYON_ACTION=0
+data:release_vg_fs[56] typeset -i DEF_VARYON_ACTION
+data:release_vg_fs[59] [[ RELEASE != RELEASE ]]
+data:release_vg_fs[65] FILE_SYSTEMS=ALL
+data:release_vg_fs[66] VOLUME_GROUPS=datavg
+data:release_vg_fs[67] OEM_FILE_SYSTEMS=''
+data:release_vg_fs[68] OEM_VOLUME_GROUPS=''
+data:release_vg_fs[69] VG_MOD=false
+data:release_vg_fs[70] SELECTIVE_FAILOVER=false
+data:release_vg_fs[71] DEF_VARYOFF_ACTION=0
+data:release_vg_fs[71] typeset -i DEF_VARYOFF_ACTION
+data:release_vg_fs[75] : If ALL file systems are specified, find the file systems in the volume
+data:release_vg_fs[76] : groups for the resource group
+data:release_vg_fs[78] [[ ALL == ALL ]]
+data:release_vg_fs[80] FILE_SYSTEMS=''
+data:release_vg_fs[81] [[ -z datavg ]]
+data:release_vg_fs[92] [[ -n datavg ]]
+data:release_vg_fs[95] : Remove any duplicates that may have crept in
+data:release_vg_fs[97] print datavg
+data:release_vg_fs[97] sort -ru
+data:release_vg_fs[97] tr ' ' '\n'
+data:release_vg_fs[97] VOLUME_GROUPS=datavg
+data:release_vg_fs[100] : Expand the file systems
+data:release_vg_fs[101] : if this is an rg_move on selective fallover, lsvg -l might not work
+data:release_vg_fs[102] : so try looking up the LVs in the ODM if the VG is online
+data:release_vg_fs[104] [[ true == true ]]
+data:release_vg_fs[105] [[ ERROR != ONLINE ]]
+data:release_vg_fs[107] SELECTIVE_FAILOVER=true
+data:release_vg_fs[107] export SELECTIVE_FAILOVER
+data:release_vg_fs[109] date
Thu May 14 22:35:20 GMT+08:00 2015
+data:release_vg_fs[111] lsvg -L -o
+data:release_vg_fs[111] grep -x datavg
datavg
+data:release_vg_fs[110] sed -n '/dependency =/s/^.*"\(.*\)".*/\1/p'
+data:release_vg_fs[109] odmget -q 'name = datavg' CuDep
+data:release_vg_fs[116] lsattr -E -l loglv02 -a type -F value
+data:release_vg_fs[116] LV_type=jfs2log
+data:release_vg_fs[117] [[ jfs2log == jfs?(2) ]]
+data:release_vg_fs[116] lsattr -E -l fslvnew00 -a type -F value
+data:release_vg_fs[116] LV_type=jfs2
+data:release_vg_fs[117] [[ jfs2 == jfs?(2) ]]
+data:release_vg_fs[121] sed -n '/value =/s/^.*"\(.*\)".*/\1/p'
+data:release_vg_fs[120] odmget -q 'name = fslvnew00 and attribute = label' CuAt
+data:release_vg_fs[120] FS=/als6tj
+data:release_vg_fs[121] [[ -n /als6tj ]]
+data:release_vg_fs[123] FILE_SYSTEMS=' /als6tj'
+data:release_vg_fs[130] date
Thu May 14 22:35:20 GMT+08:00 2015
+data:release_vg_fs[133] : we modified the VG - set it back now.
+data:release_vg_fs[135] [[ false == true ]]
+data:release_vg_fs[142] : For OEM Filesystems, check for 'Mount All Filesystems'
+data:release_vg_fs[144] [[ -n '' ]]
+data:release_vg_fs[181] : If filesystems are given, figure out associated VGs.
+data:release_vg_fs[185] cl_fs2disk -v /als6tj
+data:release_vg_fs[185] vg=rootvg
+data:release_vg_fs[186] [[ '' == reconfig* ]]
+data:release_vg_fs[205] : Append to the previous VG list.
+data:release_vg_fs[207] VOLUME_GROUPS='datavg rootvg'
+data:release_vg_fs[211] : Take out any duplicate items in FILE_SYSTEMS list and VG list, and sort
+data:release_vg_fs[212] : these two lists into reverse order to deal with dependent mounts, then
+data:release_vg_fs[213] : call the individual script to unmount filesystems and varyoff volume
+data:release_vg_fs[214] : groups.
+data:release_vg_fs[216] [[ -n ' /als6tj' ]]
+data:release_vg_fs[219] : Remove any duplicates that may have crept in
+data:release_vg_fs[221] print /als6tj
+data:release_vg_fs[221] sort -ru
+data:release_vg_fs[221] tr ' ' '\n'
+data:release_vg_fs[221] FILE_SYSTEMS=/als6tj
+data:release_vg_fs[224] cl_deactivate_fs /als6tj
+data:cl_deactivate_fs[408] [[ high == high ]]
+data:cl_deactivate_fs[408] version=1.1.11.9
+data:cl_deactivate_fs[410] STATUS=0
+data:cl_deactivate_fs[411] SLEEP=2
+data:cl_deactivate_fs[411] typeset -i SLEEP
+data:cl_deactivate_fs[412] LIMIT=60
+data:cl_deactivate_fs[412] typeset -i LIMIT
+data:cl_deactivate_fs[413] export SLEEP
+data:cl_deactivate_fs[414] export LIMIT
+data:cl_deactivate_fs[415] TMP_FILENAME=_deactivate_fs.tmp
+data:cl_deactivate_fs[417] (( 1 != 0 ))
+data:cl_deactivate_fs[417] [[ /als6tj == -c ]]
+data:cl_deactivate_fs[422] OEM_CALL=false
+data:cl_deactivate_fs[425] EMULATE=REAL
+data:cl_deactivate_fs[428] : Check here to see if the forced unmount option can be used
+data:cl_deactivate_fs[430] FORCE_OK=''
+data:cl_deactivate_fs[431] export FORCE_OK
+data:cl_deactivate_fs[433] : Each of the V, R, M and F fields are padded to fixed length,
+data:cl_deactivate_fs[434] : to allow reliable comparisons. E.g., maximum VRMF is
+data:cl_deactivate_fs[435] : 99.99.999.999
+data:cl_deactivate_fs[437] typeset -i V R M F
+data:cl_deactivate_fs[438] typeset -Z2 R
+data:cl_deactivate_fs[439] typeset -Z3 M
+data:cl_deactivate_fs[440] typeset -Z3 F
+data:cl_deactivate_fs[441] jfs2_lvl=601002000
+data:cl_deactivate_fs[441] typeset -i jfs2_lvl
+data:cl_deactivate_fs[442] VRMF=0
+data:cl_deactivate_fs[442] typeset -i VRMF
+data:cl_deactivate_fs[445] : Here try and figure out what level of JFS2 is installed
+data:cl_deactivate_fs[447] lslpp -lcqOr bos.rte.filesystem
+data:cl_deactivate_fs[447] cut -f3 -d:
+data:cl_deactivate_fs[447] read V R M F
+data:cl_deactivate_fs[447] IFS=.
+data:cl_deactivate_fs[448] VRMF=601003000
+data:cl_deactivate_fs[450] (( 601003000 >= 601002000 ))
+data:cl_deactivate_fs[452] FORCE_OK=true
+data:cl_deactivate_fs[456] : if JOB_TYPE is set, and it does not equal to GROUP, then
+data:cl_deactivate_fs[457] : we are processing for process_resources
+data:cl_deactivate_fs[459] [[ 0 != 0 ]]
+data:cl_deactivate_fs[464] (( 1 == 0 ))
+data:cl_deactivate_fs[471] : At this point, we have an explicit list of filesystems to unmount
+data:cl_deactivate_fs[475] : Getting the resource group name from the environment
+data:cl_deactivate_fs[477] RES_GRP=data
+data:cl_deactivate_fs[478] TMP_FILENAME=data_deactivate_fs.tmp
+data:cl_deactivate_fs[481] : Remove the status file if already exists
+data:cl_deactivate_fs[483] rm -f /tmp/data_deactivate_fs.tmp
+data:cl_deactivate_fs[486] : if RECOVERY_METHOD is null get from ODM
+data:cl_deactivate_fs[488] [[ -z sequential ]]
+data:cl_deactivate_fs[494] print sequential
+data:cl_deactivate_fs[494] sed 's/^ //'
+data:cl_deactivate_fs[494] RECOVERY_METHOD=sequential
+data:cl_deactivate_fs[495] print sequential
+data:cl_deactivate_fs[495] sed 's/ $//'
+data:cl_deactivate_fs[495] RECOVERY_METHOD=sequential
+data:cl_deactivate_fs[496] [[ sequential != sequential ]]
+data:cl_deactivate_fs[503] set -u
+data:cl_deactivate_fs[506] : Are there any 'exports?'
+data:cl_deactivate_fs[508] [[ -n '' ]]
+data:cl_deactivate_fs[508] [[ -n '' ]]
+data:cl_deactivate_fs[517] : Reverse the order of the file systems list, to unmount in the opposite
+data:cl_deactivate_fs[518] : order from mounting. Important for nested mounts.
+data:cl_deactivate_fs[520] print /als6tj
+data:cl_deactivate_fs[520] /bin/sort -r
+data:cl_deactivate_fs[520] tr ' ' '\n'
+data:cl_deactivate_fs[520] FILELIST=/als6tj
+data:cl_deactivate_fs[523] : update resource manager - file systems being released
+data:cl_deactivate_fs[525] ALLFS=All_filesystems
+data:cl_deactivate_fs[526] [[ REAL == EMUL ]]
+data:cl_deactivate_fs[531] cl_RMupdate resource_releasing All_filesystems cl_deactivate_fs
Reference string: Thu.May.14.22:35:21.GMT+08:00.2015.cl_deactivate_fs.All_filesystems.data.ref
+data:cl_deactivate_fs[534] pid_list=''
+data:cl_deactivate_fs[536] [[ false == true ]]
+data:cl_deactivate_fs[542] [[ sequential == parallel ]]
+data:cl_deactivate_fs[553] [[ REAL == EMUL ]]
+data:cl_deactivate_fs[558] fs_umount /als6tj cl_deactivate_fs data_deactivate_fs.tmp
+data:cl_deactivate_fs(.120)[fs_umount+5] FS=/als6tj
+data:cl_deactivate_fs(.120)[fs_umount+5] typeset FS
+data:cl_deactivate_fs(.120)[fs_umount+6] PROGNAME=cl_deactivate_fs
+data:cl_deactivate_fs(.120)[fs_umount+6] typeset PROGNAME
+data:cl_deactivate_fs(.120)[fs_umount+7] TMP_FILENAME=data_deactivate_fs.tmp
+data:cl_deactivate_fs(.120)[fs_umount+7] typeset TMP_FILENAME
+data:cl_deactivate_fs(.120)[fs_umount+8] clwparroot data
+data:clwparroot[35] [[ high == high ]]
+data:clwparroot[35] version=1.1
+data:clwparroot[37] . /usr/es/sbin/cluster/wpar/wpar_utils
+data:clwparroot[+20] ERRNO=0
+data:clwparroot[+22] [[ high == high ]]
+data:clwparroot[+22] set -x
+data:clwparroot[+23] [[ high == high ]]
+data:clwparroot[+23] version=1.7.1.2
+data:clwparroot[+25] . /usr/es/sbin/cluster/wpar/wpar_common_funcs
+data:clwparroot[+20] [[ high == high ]]
+data:clwparroot[+20] set -x
+data:clwparroot[+21] [[ high == high ]]
+data:clwparroot[+21] version=1.2.1.2
+data:clwparroot[+23] PATH=/usr/es/sbin/cluster:/usr/es/sbin/cluster/utilities:/usr/es/sbin/cluster/events:/usr/es/sbin/cluster/events/utils:/usr/es/sbin/cluster/events/cmd:/usr/es/sbin/cluster/diag:/usr/es/sbin/cluster/etc:/usr/es/sbin/cluster/sbin:/usr/es/sbin/cluster/cspoc:/usr/es/sbin/cluster/conversion:/usr/es/sbin/cluster/events/emulate:/usr/es/sbin/cluster/events/emulate/driver:/usr/es/sbin/cluster/events/emulate/utils:/usr/es/sbin/cluster/tguides/bin:/usr/es/sbin/cluster/tguides/classes:/usr/es/sbin/cluster/tguides/images:/usr/es/sbin/cluster/tguides/scripts:/usr/es/sbin/cluster/glvm/utils:/usr/es/sbin/cluster/wpar:/usr/bin:/etc:/usr/sbin:/usr/ucb:/usr/bin/X11:/sbin:/bin:/usr/bin:/usr/sbin
+data:clwparroot[+24] export PATH
+data:clwparroot[+26] typeset usageErr invalArgErr internalErr
+data:clwparroot[+28] usageErr=10
+data:clwparroot[+29] invalArgErr=11
+data:clwparroot[+30] internalErr=12
+data:clwparroot[+39] rgName=data
+data:clwparroot[+42] uname
+data:clwparroot[+42] OSNAME=AIX
+data:clwparroot[+44] [[ AIX == *AIX* ]]
+data:clwparroot[+45] lslpp -l bos.wpars
+data:clwparroot[+45] 1> /dev/null 2>& 1
+data:clwparroot[+47] loadWparName data
+data:clwparroot[loadWparName+5] usage='Usage: loadWparName '
+data:clwparroot[loadWparName+5] typeset -r usage
+data:clwparroot[loadWparName+6] typeset rgName wparName wparDir rc
+data:clwparroot[loadWparName+8] [[ 1 < 1 ]]
+data:clwparroot[loadWparName+13] rgName=data
+data:clwparroot[loadWparName+14] wparDir=/var/hacmp/adm/wpar
+data:clwparroot[loadWparName+15] [[ ! -d /var/hacmp/adm/wpar ]]
+data:clwparroot[loadWparName+22] [[ -f /var/hacmp/adm/wpar/data ]]
+data:clwparroot[loadWparName+23] cat /var/hacmp/adm/wpar/data
+data:clwparroot[loadWparName+23] wparName=''
+data:clwparroot[loadWparName+24] [[ -n '' ]]
+data:clwparroot[loadWparName+36] return 0
+data:clwparroot[+47] wparName=''
+data:clwparroot[+48] [[ -z '' ]]
+data:clwparroot[+48] exit 0
+data:cl_deactivate_fs(.170)[fs_umount+8] WPAR_ROOT=''
+data:cl_deactivate_fs(.170)[fs_umount+8] typeset WPAR_ROOT
+data:cl_deactivate_fs(.170)[fs_umount+9] STATUS=0
+data:cl_deactivate_fs(.170)[fs_umount+9] typeset STATUS
+data:cl_deactivate_fs(.170)[fs_umount+10] typeset lv
+data:cl_deactivate_fs(.170)[fs_umount+11] typeset lv_lsfs
+data:cl_deactivate_fs(.170)[fs_umount+14] : Get the logical volume associated with the filesystem
+data:cl_deactivate_fs(.170)[fs_umount+16] lsfs -c /als6tj
+data:cl_deactivate_fs(.180)[fs_umount+16] lv_lsfs=$'#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/als6tj:/dev/fslvnew00:jfs2:::398458880:rw:no:no'
+data:cl_deactivate_fs(.180)[fs_umount+28] : Get the logical volume name and filesystem type
+data:cl_deactivate_fs(.190)[fs_umount+30] print $'#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/als6tj:/dev/fslvnew00:jfs2:::398458880:rw:no:no'
+data:cl_deactivate_fs(.190)[fs_umount+30] cut -d: -f2
+data:cl_deactivate_fs(.190)[fs_umount+30] tail -1
+data:cl_deactivate_fs(.190)[fs_umount+30] lv=/dev/fslvnew00
+data:cl_deactivate_fs(.200)[fs_umount+31] print $'#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/als6tj:/dev/fslvnew00:jfs2:::398458880:rw:no:no'
+data:cl_deactivate_fs(.200)[fs_umount+31] cut -d: -f3
+data:cl_deactivate_fs(.200)[fs_umount+31] tail -1
+data:cl_deactivate_fs(.200)[fs_umount+31] fs_type=jfs2
+data:cl_deactivate_fs(.200)[fs_umount+34] : For WPARs, find the real file system name
+data:cl_deactivate_fs(.200)[fs_umount+36] [[ -n '' ]]
+data:cl_deactivate_fs(.200)[fs_umount+39] : Check to see if filesystem is mounted.
+data:cl_deactivate_fs(.200)[fs_umount+46] mount
+data:cl_deactivate_fs(.200)[fs_umount+46] grep -Fw /dev/fslvnew00
+data:cl_deactivate_fs(.200)[fs_umount+46] read skip FS_MOUNTED skip
+data:cl_deactivate_fs(.220)[fs_umount+48] [[ -n /als6tj ]]
+data:cl_deactivate_fs(.220)[fs_umount+48] [[ /als6tj != /als6tj ]]
+data:cl_deactivate_fs(.220)[fs_umount+52] (( count=0))
+data:cl_deactivate_fs(.220)[fs_umount+52] (( count <= 60))
+data:cl_deactivate_fs(.220)[fs_umount+55] : Try to unmount the file system
+data:cl_deactivate_fs(.220)[fs_umount+56] date '+%h %d %H:%M:%S.000'
+data:cl_deactivate_fs(.240)[fs_umount+56] : Attempt 1 of 61 to unmount at May 14 22:35:21.000
+data:cl_deactivate_fs(.240)[fs_umount+58] umount /als6tj
umount: error unmounting /dev/fslvnew00: Device busy
+data:cl_deactivate_fs(.530)[fs_umount+66] : At this point, unmount of /als6tj has not worked. So,
+data:cl_deactivate_fs(.530)[fs_umount+67] : Send a SIGKILL to all processes having open file
+data:cl_deactivate_fs(.530)[fs_umount+68] : descriptors within this logical volume to allow
+data:cl_deactivate_fs(.530)[fs_umount+69] : the umount to succeed.
+data:cl_deactivate_fs(.530)[fs_umount+71] date '+%h %d %H:%M:%S.000'
May 14 22:35:21.000
+data:cl_deactivate_fs(.550)[fs_umount+72] fuser -k -u -x /dev/fslvnew00
/dev/fslvnew00:   201170c(root)  229816c(als6tj)
+data:cl_deactivate_fs(.590)[fs_umount+73] fuser -c -k -u -x /als6tj
/als6tj:
+data:cl_deactivate_fs(.710)[fs_umount+74] date '+%h %d %H:%M:%S.000'
May 14 22:35:21.000
+data:cl_deactivate_fs(.730)[fs_umount+77] : Wait 2 seconds for the kills to be effective
+data:cl_deactivate_fs(.730)[fs_umount+79] sleep 2
+data:cl_deactivate_fs(2.730)[fs_umount+82] : Unmount of /als6tj failed. If the force option can be used, try it here.
+data:cl_deactivate_fs(2.730)[fs_umount+84] [[ -n true ]]
+data:cl_deactivate_fs(2.730)[fs_umount+84] [[ jfs2 == jfs2 ]]
+data:cl_deactivate_fs(2.730)[fs_umount+86] umount -f /als6tj
forced unmount of /als6tj
forced unmount of /als6tj
+data:cl_deactivate_fs(2.910)[fs_umount+89] : Forced option of unmount /als6tj worked
+data:cl_deactivate_fs(2.910)[fs_umount+91] break
+data:cl_deactivate_fs(2.910)[fs_umount+139] echo 0
+data:cl_deactivate_fs(2.910)[fs_umount+139] 1>> /tmp/data_deactivate_fs.tmp
+data:cl_deactivate_fs(2.920)[fs_umount+140] return 0
+data:cl_deactivate_fs[565] : wait to sync all the processes.
+data:cl_deactivate_fs[567] [[ -n '' ]]
+data:cl_deactivate_fs[573] : update resource manager - file systems released
+data:cl_deactivate_fs[575] ALLNOERROR=All_non_error_filesystems
+data:cl_deactivate_fs[576] [[ REAL == EMUL ]]
+data:cl_deactivate_fs[581] cl_RMupdate resource_down All_non_error_filesystems cl_deactivate_fs
Reference string: Thu.May.14.22:35:23.GMT+08:00.2015.cl_deactivate_fs.All_non_error_filesystems.data.ref
+data:cl_deactivate_fs[584] [[ -f /tmp/data_deactivate_fs.tmp ]]
+data:cl_deactivate_fs[586] grep -q 1 /tmp/data_deactivate_fs.tmp
+data:cl_deactivate_fs[591] rm -f /tmp/data_deactivate_fs.tmp
+data:cl_deactivate_fs[594] exit 0
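Two observations on the release sequence above. First, the file system side behaved as designed: /als6tj was busy (a root and an als6tj process still had files open), fuser -k killed them, and the subsequent forced 'umount -f /als6tj' succeeded. Second, and more suspicious: at release_vg_fs[185], 'cl_fs2disk -v /als6tj' returned vg=rootvg, so rootvg was appended to the volume group list ('datavg rootvg'), and cl_deactivate_vgs below treats rootvg like a takeover VG, down to running 'fuser -k -u -x /dev/hd4' against the root file system's logical volumes. Note also the closed 400-LP jfs2 LV fslv00 sitting in rootvg with no mount point; a stale LV whose ODM label still says /als6tj could explain the wrong mapping. This looks worth cross-checking on the node (plain AIX commands, nothing beyond what the trace already shows):

# Which device does /als6tj really live on? (the trace shows /dev/fslvnew00 in datavg)
lsfs -c /als6tj
lslv fslvnew00                                     # VOLUME GROUP field should say datavg

# What does the suspicious rootvg LV claim as its label (mount point) in the ODM?
odmget -q "name = fslv00 and attribute = label" CuAt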
+data:release_vg_fs[240] [[ -n '' ]]
+data:release_vg_fs[249] : Call replicated resource prevg-offline method associated
+data:release_vg_fs[250] : with any replicated resource defined in the resource group
+data:release_vg_fs[251] : we arecurrently processing. Note that a return code of 3 from
+data:release_vg_fs[252] : the prevg-offline method indicates the we should override the
+data:release_vg_fs[253] : default action. The default action for offline_primary is to
+data:release_vg_fs[254] : varyoff the VG and the default action for offline_secondary
+data:release_vg_fs[255] : is to NOT varyoff the VG
+data:release_vg_fs[257] cl_rrmethods2call prevg_offline
+data:cl_rrmethods2call[+49] [[ high = high ]]
+data:cl_rrmethods2call[+49] version=1.14
+data:cl_rrmethods2call[+50] +data:cl_rrmethods2call[+50] cl_get_path
HA_DIR=es
+data:cl_rrmethods2call[+76] RRMETHODS=
+data:cl_rrmethods2call[+77] NEED_RR_ENV_VARS=no
+data:cl_rrmethods2call[+114] [[ no = yes ]]
+data:cl_rrmethods2call[+120] [[ -n  ]]
+data:cl_rrmethods2call[+125] [[ -n  ]]
+data:cl_rrmethods2call[+130] [[ -n  ]]
+data:cl_rrmethods2call[+135] [[ -n  ]]
+data:cl_rrmethods2call[+140] [[ -n  ]]
+data:cl_rrmethods2call[+145] echo
+data:cl_rrmethods2call[+146] exit 0
+data:release_vg_fs[257] METHODS=''
+data:release_vg_fs[258] SKIPVARYOFF=0
+data:release_vg_fs[282] [[ RELEASE == RELEASE ]]
+data:release_vg_fs[285] : This is an offline_primary case so an override
+data:release_vg_fs[286] : from the RR method means we skip the varyoff
+data:release_vg_fs[287] : since the default action is to varyoff
+data:release_vg_fs[289] SKIPVARYOFF=0
+data:release_vg_fs[291] (( 0 == 1 ))
+data:release_vg_fs[309] [[ -n 'datavg rootvg' ]]
+data:release_vg_fs[312] : Remove any duplicates that may have crept in
+data:release_vg_fs[314] print datavg rootvg
+data:release_vg_fs[314] sort -ru
+data:release_vg_fs[314] tr ' ' '\n'
+data:release_vg_fs[314] VOLUME_GROUPS=$'rootvg\ndatavg'
+data:release_vg_fs[316] (( 0 == 0 ))
+data:release_vg_fs[318] cl_deactivate_vgs rootvg datavg
+data:cl_deactivate_vgs[445] [[ high == high ]]
+data:cl_deactivate_vgs[445] version=1.1.10.8
+data:cl_deactivate_vgs[446] cl_get_path
+data:cl_deactivate_vgs[446] HA_DIR=es
+data:cl_deactivate_vgs[448] STATUS=0
+data:cl_deactivate_vgs[448] typeset -i STATUS
+data:cl_deactivate_vgs[449] TMP_VARYOFF_STATUS=/tmp/_deactivate_vgs.tmp
+data:cl_deactivate_vgs[450] sddsrv_off=FALSE
+data:cl_deactivate_vgs[451] ALLVGS=All_volume_groups
+data:cl_deactivate_vgs[452] OEM_CALL=false
+data:cl_deactivate_vgs[454] [[ -z '' ]]
+data:cl_deactivate_vgs[456] EMULATE=REAL
+data:cl_deactivate_vgs[459] (( 2 != 0 ))
+data:cl_deactivate_vgs[459] [[ rootvg == -c ]]
+data:cl_deactivate_vgs[468] EVENT_TYPE=not_set
+data:cl_deactivate_vgs[469] EVENT_TYPE=not_set
+data:cl_deactivate_vgs[472] : if JOB_TYPE is set, and it does not equal to GROUP, then
+data:cl_deactivate_vgs[473] : we are processing for process_resources
+data:cl_deactivate_vgs[475] [[ 0 != 0 ]]
+data:cl_deactivate_vgs[480] : Otherwise, check for valid call
+data:cl_deactivate_vgs[482] PROC_RES=false
+data:cl_deactivate_vgs[483] (( 2 == 0 ))
+data:cl_deactivate_vgs[491] : set -u will report an error if any variable used in the script is not set
+data:cl_deactivate_vgs[493] set -u
+data:cl_deactivate_vgs[496] : Remove the status file if it currently exists
+data:cl_deactivate_vgs[498] rm -f /tmp/_deactivate_vgs.tmp
+data:cl_deactivate_vgs[501] : Each of the V, R, M and F fields are padded to fixed length,
+data:cl_deactivate_vgs[502] : to allow reliable comparisons. E.g., maximum VRMF is
+data:cl_deactivate_vgs[503] : 99.99.999.999
+data:cl_deactivate_vgs[505] typeset -i V R M F
+data:cl_deactivate_vgs[506] typeset -Z2 R
+data:cl_deactivate_vgs[507] typeset -Z3 M
+data:cl_deactivate_vgs[508] typeset -Z3 F
+data:cl_deactivate_vgs[509] VRMF=0
+data:cl_deactivate_vgs[509] typeset -i VRMF
+data:cl_deactivate_vgs[512] : If the sddsrv daemon is running - vpath dead path detection and
+data:cl_deactivate_vgs[513] : recovery - turn it off, since interactions with the fibre channel
+data:cl_deactivate_vgs[514] : device driver will, in the case where there actually is a dead path,
+data:cl_deactivate_vgs[515] : slow down every vpath operation.
+data:cl_deactivate_vgs[517] ls '/dev/vpath*'
+data:cl_deactivate_vgs[517] 1> /dev/null 2>& 1
+data:cl_deactivate_vgs[580] : Setup for the hats_adapter calls
+data:cl_deactivate_vgs[582] cldomain
+data:cl_deactivate_vgs[582] HA_DOMAIN_NAME=xindai_cluster
+data:cl_deactivate_vgs[582] export HA_DOMAIN_NAME
+data:cl_deactivate_vgs[583] HB_SERVER_SOCKET=/var/ha/soc/topsvcs/server_socket
+data:cl_deactivate_vgs[583] export HB_SERVER_SOCKET
+data:cl_deactivate_vgs[586] : Special processing is required for 2 node clusters. determine the number
+data:cl_deactivate_vgs[587] : of nodes and AIX level
+data:cl_deactivate_vgs[589] TWO_NODE_CLUSTER=FALSE
+data:cl_deactivate_vgs[589] export TWO_NODE_CLUSTER
+data:cl_deactivate_vgs[590] FS_TYPES=''
+data:cl_deactivate_vgs[590] export FS_TYPES
+data:cl_deactivate_vgs[591] grep 'name ='
+data:cl_deactivate_vgs[591] sort
+data:cl_deactivate_vgs[591] wc -l
+data:cl_deactivate_vgs[591] uniq
+data:cl_deactivate_vgs[591] odmget HACMPnode
+data:cl_deactivate_vgs[591] (( 2 ==        2 ))
+data:cl_deactivate_vgs[592] [[ -n '' ]]
+data:cl_deactivate_vgs[641] : Pick up a list of currently varyd on volume groups
+data:cl_deactivate_vgs[643] lsvg -L -o
+data:cl_deactivate_vgs[643] 2> /tmp/lsvg.err
+data:cl_deactivate_vgs[643] VG_ON_LIST=$'datavg\nrootvg'
+data:cl_deactivate_vgs[646] : if we are not called from process_resources, we have the old-style
+data:cl_deactivate_vgs[647] : environment and parameters
+data:cl_deactivate_vgs[649] [[ false == false ]]
+data:cl_deactivate_vgs[652] : Update the Resource Manager - let it know that were varying off these
+data:cl_deactivate_vgs[653] : volume groups
+data:cl_deactivate_vgs[655] [[ REAL == EMUL ]]
+data:cl_deactivate_vgs[660] cl_RMupdate resource_releasing All_volume_groups cl_deactivate_vgs
Reference string: Thu.May.14.22:35:24.GMT+08:00.2015.cl_deactivate_vgs.All_volume_groups.data.ref
+data:cl_deactivate_vgs[664] : First process any mndhb for these volume groups
+data:cl_deactivate_vgs[666] vgs_process_mndhb 'rootvg datavg'
+data:cl_deactivate_vgs[65] [[ high == high ]]
+data:cl_deactivate_vgs[65] set -x
+data:cl_deactivate_vgs[67] VG_LIST='rootvg datavg'
+data:cl_deactivate_vgs[67] typeset VG_LIST
+data:cl_deactivate_vgs[68] typeset lv_list
+data:cl_deactivate_vgs[69] lv_base=''
+data:cl_deactivate_vgs[69] typeset lv_base
+data:cl_deactivate_vgs[71] STATUS=0
+data:cl_deactivate_vgs[71] typeset -i STATUS
+data:cl_deactivate_vgs[72] RC=0
+data:cl_deactivate_vgs[72] typeset -i RC
+data:cl_deactivate_vgs[77] : If this vg contains lvs that are part of a mndhb network, tell
+data:cl_deactivate_vgs[78] : topsvcs to stop monitoring the network.
+data:cl_deactivate_vgs[79] : Note that we use clrsctinfo/cllsif because it will do the raw device
+data:cl_deactivate_vgs[80] : name mapping for us.
+data:cl_deactivate_vgs[82] grep :rootvg:
+data:cl_deactivate_vgs[82] sort -u
+data:cl_deactivate_vgs[82] cut -f 7 -d:
+data:cl_deactivate_vgs[82] clrsctinfo -p cllsif -c
+data:cl_deactivate_vgs[82] lv_list=''
+data:cl_deactivate_vgs[77] : If this vg contains lvs that are part of a mndhb network, tell
+data:cl_deactivate_vgs[78] : topsvcs to stop monitoring the network.
+data:cl_deactivate_vgs[79] : Note that we use clrsctinfo/cllsif because it will do the raw device
+data:cl_deactivate_vgs[80] : name mapping for us.
+data:cl_deactivate_vgs[82] sort -u
+data:cl_deactivate_vgs[82] grep :datavg:
+data:cl_deactivate_vgs[82] cut -f 7 -d:
+data:cl_deactivate_vgs[82] clrsctinfo -p cllsif -c
+data:cl_deactivate_vgs[82] lv_list=''
+data:cl_deactivate_vgs[103] : if there were any calls to hats_adapter give topsvcs a bit to catch up
+data:cl_deactivate_vgs[105] [[ -n '' ]]
+data:cl_deactivate_vgs[106] return 0
+data:cl_deactivate_vgs[668] PS4_LOOP=''
+data:cl_deactivate_vgs[668] typeset PS4_LOOP
+data:cl_deactivate_vgs[670] : Now, process the list of volume groups passed in
+data:cl_deactivate_vgs:rootvg[674] PS4_LOOP=rootvg
+data:cl_deactivate_vgs:rootvg[676] : Find out if it is varied on
+data:cl_deactivate_vgs:rootvg[678] [[ false == false ]]
+data:cl_deactivate_vgs:rootvg[681] : Dealing with AIX LVM volume groups
+data:cl_deactivate_vgs:rootvg[683] print datavg rootvg
+data:cl_deactivate_vgs:rootvg[683] grep -qw rootvg
+data:cl_deactivate_vgs:rootvg[690] MODE=9999
+data:cl_deactivate_vgs:rootvg[691] /usr/sbin/getlvodm -v rootvg
+data:cl_deactivate_vgs:rootvg[691] VGID=00ce3e1400004c0000000127d2ac9ac4
+data:cl_deactivate_vgs:rootvg[692] lqueryvg -g 00ce3e1400004c0000000127d2ac9ac4 -X
+data:cl_deactivate_vgs:rootvg[692] MODE=0
+data:cl_deactivate_vgs:rootvg[693] (( 0 != 0 ))
+data:cl_deactivate_vgs:rootvg[695] [[ REAL == EMUL ]]
+data:cl_deactivate_vgs:rootvg[701] : Yes, it is varyd on, so go vary it off
+data:cl_deactivate_vgs:datavg[674] PS4_LOOP=datavg
+data:cl_deactivate_vgs:datavg[676] : Find out if it is varied on
+data:cl_deactivate_vgs:datavg[678] [[ false == false ]]
+data:cl_deactivate_vgs:datavg[681] : Dealing with AIX LVM volume groups
+data:cl_deactivate_vgs:rootvg[703] vgs_varyoff rootvg 0
+data:cl_deactivate_vgs:rootvg[126] PS4_FUNC=vgs_varyoff
+data:cl_deactivate_vgs(.350):rootvg[126] PS4_TIMER=true
+data:cl_deactivate_vgs(.350):rootvg[vgs_varyoff+126] ERRNO=0
+data:cl_deactivate_vgs(.350):rootvg[vgs_varyoff+126] typeset PS4_FUNC PS4_TIMER ERRNO
+data:cl_deactivate_vgs(.350):rootvg[vgs_varyoff+127] [[ high == high ]]
+data:cl_deactivate_vgs(.350):rootvg[vgs_varyoff+127] set -x
+data:cl_deactivate_vgs(.350):rootvg[vgs_varyoff+129] VG=rootvg
+data:cl_deactivate_vgs(.350):rootvg[vgs_varyoff+129] typeset VG
+data:cl_deactivate_vgs(.350):rootvg[vgs_varyoff+130] MODE=0
+data:cl_deactivate_vgs(.350):rootvg[vgs_varyoff+130] typeset MODE
+data:cl_deactivate_vgs(.350):rootvg[vgs_varyoff+132] typeset OPEN_FSs
+data:cl_deactivate_vgs(.350):rootvg[vgs_varyoff+133] typeset OPEN_LVs
+data:cl_deactivate_vgs(.350):rootvg[vgs_varyoff+134] typeset TMP_VG_LIST
+data:cl_deactivate_vgs(.350):rootvg[vgs_varyoff+136] STATUS=0
+data:cl_deactivate_vgs(.350):rootvg[vgs_varyoff+136] typeset -i STATUS
+data:cl_deactivate_vgs(.350):rootvg[vgs_varyoff+137] RC=0
+data:cl_deactivate_vgs(.350):rootvg[vgs_varyoff+137] typeset -i RC
+data:cl_deactivate_vgs(.350):rootvg[vgs_varyoff+138] SELECTIVE_FAILOVER=false
+data:cl_deactivate_vgs(.350):rootvg[vgs_varyoff+138] typeset SELECTIVE_FAILOVER
+data:cl_deactivate_vgs(.350):rootvg[vgs_varyoff+139] typeset LV
+data:cl_deactivate_vgs(.350):rootvg[vgs_varyoff+140] lv_list=''
+data:cl_deactivate_vgs(.350):rootvg[vgs_varyoff+140] typeset lv_list
+data:cl_deactivate_vgs(.350):rootvg[vgs_varyoff+141] rc_lsvg=0
+data:cl_deactivate_vgs(.350):rootvg[vgs_varyoff+141] typeset -i rc_lsvg
+data:cl_deactivate_vgs(.350):rootvg[vgs_varyoff+144] : Check to see if this is a DARE event, whilst we have open file systems
+data:cl_deactivate_vgs:datavg[683] grep -qw datavg
+data:cl_deactivate_vgs:datavg[683] print datavg rootvg
+data:cl_deactivate_vgs:datavg[690] MODE=9999
+data:cl_deactivate_vgs:datavg[691] /usr/sbin/getlvodm -v datavg
+data:cl_deactivate_vgs(.360):rootvg[vgs_varyoff+146] lsvg -l -L rootvg
+data:cl_deactivate_vgs:datavg[691] VGID=00ce3e1400004c0000000127d9ac3596
+data:cl_deactivate_vgs(.370):rootvg[vgs_varyoff+146] 2> /dev/null
+data:cl_deactivate_vgs:datavg[692] lqueryvg -g 00ce3e1400004c0000000127d9ac3596 -X
+data:cl_deactivate_vgs:datavg[692] MODE=0
+data:cl_deactivate_vgs:datavg[693] (( 0 != 0 ))
+data:cl_deactivate_vgs:datavg[695] [[ REAL == EMUL ]]
+data:cl_deactivate_vgs:datavg[701] : Yes, it is varyd on, so go vary it off
+data:cl_deactivate_vgs:datavg[718] unset PS4_LOOP
+data:cl_deactivate_vgs[817] : Wait to sync all the background instances of vgs_varyoff
+data:cl_deactivate_vgs[819] wait
+data:cl_deactivate_vgs:datavg[703] vgs_varyoff datavg 0
+data:cl_deactivate_vgs:datavg[126] PS4_FUNC=vgs_varyoff
+data:cl_deactivate_vgs(.440):datavg[126] PS4_TIMER=true
+data:cl_deactivate_vgs(.440):datavg[vgs_varyoff+126] ERRNO=0
+data:cl_deactivate_vgs(.440):datavg[vgs_varyoff+126] typeset PS4_FUNC PS4_TIMER ERRNO
+data:cl_deactivate_vgs(.440):datavg[vgs_varyoff+127] [[ high == high ]]
+data:cl_deactivate_vgs(.440):datavg[vgs_varyoff+127] set -x
+data:cl_deactivate_vgs(.440):datavg[vgs_varyoff+129] VG=datavg
+data:cl_deactivate_vgs(.440):datavg[vgs_varyoff+129] typeset VG
+data:cl_deactivate_vgs(.440):datavg[vgs_varyoff+130] MODE=0
+data:cl_deactivate_vgs(.440):datavg[vgs_varyoff+130] typeset MODE
+data:cl_deactivate_vgs(.440):datavg[vgs_varyoff+132] typeset OPEN_FSs
+data:cl_deactivate_vgs(.440):datavg[vgs_varyoff+133] typeset OPEN_LVs
+data:cl_deactivate_vgs(.440):datavg[vgs_varyoff+134] typeset TMP_VG_LIST
+data:cl_deactivate_vgs(.440):datavg[vgs_varyoff+136] STATUS=0
+data:cl_deactivate_vgs(.440):datavg[vgs_varyoff+136] typeset -i STATUS
+data:cl_deactivate_vgs(.440):datavg[vgs_varyoff+137] RC=0
+data:cl_deactivate_vgs(.440):datavg[vgs_varyoff+137] typeset -i RC
+data:cl_deactivate_vgs(.440):datavg[vgs_varyoff+138] SELECTIVE_FAILOVER=false
+data:cl_deactivate_vgs(.440):datavg[vgs_varyoff+138] typeset SELECTIVE_FAILOVER
+data:cl_deactivate_vgs(.440):datavg[vgs_varyoff+139] typeset LV
+data:cl_deactivate_vgs(.440):datavg[vgs_varyoff+140] lv_list=''
+data:cl_deactivate_vgs(.440):datavg[vgs_varyoff+140] typeset lv_list
+data:cl_deactivate_vgs(.440):datavg[vgs_varyoff+141] rc_lsvg=0
+data:cl_deactivate_vgs(.440):datavg[vgs_varyoff+141] typeset -i rc_lsvg
+data:cl_deactivate_vgs(.440):datavg[vgs_varyoff+144] : Check to see if this is a DARE event, whilst we have open file systems
+data:cl_deactivate_vgs(.450):datavg[vgs_varyoff+146] lsvg -l -L datavg
+data:cl_deactivate_vgs(.450):datavg[vgs_varyoff+146] 2> /dev/null
+data:cl_deactivate_vgs(.530):rootvg[vgs_varyoff+146] TMP_VG_LIST=$'rootvg:\nLV NAME             TYPE       LPs     PPs     PVs  LV STATE      MOUNT POINT\nhd5                 boot       1       2       2    closed/syncd  N/A\nhd6                 paging     2       4       2    open/syncd    N/A\nhd8                 jfs2log    1       2       2    open/syncd    N/A\nhd4                 jfs2       4       8       2    open/syncd    /\nhd2                 jfs2       16      32      2    open/syncd    /usr\nhd9var              jfs2       8       16      2    open/syncd    /var\nhd3                 jfs2       20      40      2    open/syncd    /tmp\nhd1                 jfs2       8       16      2    open/syncd    /home\nhd10opt             jfs2       48      96      2    open/syncd    /opt\nhd11admin           jfs2       1       2       2    open/syncd    /admin\nlg_dumplv           sysdump    8       8       1    open/syncd    N/A\nlivedump            jfs2       1       2       2    open/syncd    /var/adm/ras/livedump\nfslv00              jfs2       400     800     2    closed/syncd  N/A\npaging00            paging     14      14      1    open/syncd    N/A\nfslv02              jfs2       4       4       1    open/syncd    /bmc\nfslv03              jfs2       8       8       1    open/syncd    /precise'
+data:cl_deactivate_vgs(.530):rootvg[vgs_varyoff+147] rc_lsvg=0
+data:cl_deactivate_vgs(.530):rootvg[vgs_varyoff+149] [[ not_set == reconfig* ]]
+data:cl_deactivate_vgs(.530):rootvg[vgs_varyoff+167] : Get list of open logical volumes corresponding to file systems
+data:cl_deactivate_vgs(.530):rootvg[vgs_varyoff+169] [[ -n $'rootvg:\nLV NAME             TYPE       LPs     PPs     PVs  LV STATE      MOUNT POINT\nhd5                 boot       1       2       2    closed/syncd  N/A\nhd6                 paging     2       4       2    open/syncd    N/A\nhd8                 jfs2log    1       2       2    open/syncd    N/A\nhd4                 jfs2       4       8       2    open/syncd    /\nhd2                 jfs2       16      32      2    open/syncd    /usr\nhd9var              jfs2       8       16      2    open/syncd    /var\nhd3                 jfs2       20      40      2    open/syncd    /tmp\nhd1                 jfs2       8       16      2    open/syncd    /home\nhd10opt             jfs2       48      96      2    open/syncd    /opt\nhd11admin           jfs2       1       2       2    open/syncd    /admin\nlg_dumplv           sysdump    8       8       1    open/syncd    N/A\nlivedump            jfs2       1       2       2    open/syncd    /var/adm/ras/livedump\nfslv00              jfs2       400     800     2    closed/syncd  N/A\npaging00            paging     14      14      1    open/syncd    N/A\nfslv02              jfs2       4       4       1    open/syncd    /bmc\nfslv03              jfs2       8       8       1    open/syncd    /precise' ]]
+data:cl_deactivate_vgs(.530):rootvg[vgs_varyoff+171] awk '$2 ~ /jfs2?$/ && $6 ~ /open/ {print $1}'
+data:cl_deactivate_vgs(.530):rootvg[vgs_varyoff+171] print $'rootvg:\nLV NAME             TYPE       LPs     PPs     PVs  LV STATE      MOUNT POINT\nhd5                 boot       1       2       2    closed/syncd  N/A\nhd6                 paging     2       4       2    open/syncd    N/A\nhd8                 jfs2log    1       2       2    open/syncd    N/A\nhd4                 jfs2       4       8       2    open/syncd    /\nhd2                 jfs2       16      32      2    open/syncd    /usr\nhd9var              jfs2       8       16      2    open/syncd    /var\nhd3                 jfs2       20      40      2    open/syncd    /tmp\nhd1                 jfs2       8       16      2    open/syncd    /home\nhd10opt             jfs2       48      96      2    open/syncd    /opt\nhd11admin           jfs2       1       2       2    open/syncd    /admin\nlg_dumplv           sysdump    8       8       1    open/syncd    N/A\nlivedump            jfs2       1       2       2    open/syncd    /var/adm/ras/livedump\nfslv00              jfs2       400     800     2    closed/syncd  N/A\npaging00            paging     14      14      1    open/syncd    N/A\nfslv02              jfs2       4       4       1    open/syncd    /bmc\nfslv03              jfs2       8       8       1    open/syncd    /precise'
+data:cl_deactivate_vgs(.540):datavg[vgs_varyoff+146] TMP_VG_LIST=$'datavg:\nLV NAME             TYPE       LPs     PPs     PVs  LV STATE      MOUNT POINT\nloglv02             jfs2log    1       1       1    closed/syncd  N/A\nfslvnew00           jfs2       760     760     1    closed/syncd  /als6tj'
+data:cl_deactivate_vgs(.540):datavg[vgs_varyoff+147] rc_lsvg=0
+data:cl_deactivate_vgs(.540):datavg[vgs_varyoff+149] [[ not_set == reconfig* ]]
+data:cl_deactivate_vgs(.540):datavg[vgs_varyoff+167] : Get list of open logical volumes corresponding to file systems
+data:cl_deactivate_vgs(.540):datavg[vgs_varyoff+169] [[ -n $'datavg:\nLV NAME             TYPE       LPs     PPs     PVs  LV STATE      MOUNT POINT\nloglv02             jfs2log    1       1       1    closed/syncd  N/A\nfslvnew00           jfs2       760     760     1    closed/syncd  /als6tj' ]]
+data:cl_deactivate_vgs(.540):rootvg[vgs_varyoff+171] OPEN_LVs=$'hd4\nhd2\nhd9var\nhd3\nhd1\nhd10opt\nhd11admin\nlivedump\nfslv02\nfslv03'
+data:cl_deactivate_vgs(.540):rootvg[vgs_varyoff+175] : If this is an rg_move on selective fallover, lsvg -l might not work
+data:cl_deactivate_vgs(.540):rootvg[vgs_varyoff+176] : so try looking up the LVs in the ODM if the VG is online
+data:cl_deactivate_vgs(.540):rootvg[vgs_varyoff+178] [[ 0 != 0 ]]
+data:cl_deactivate_vgs(.540):rootvg[vgs_varyoff+196] : Attempt to kill off any processes using the logical volume, so that
+data:cl_deactivate_vgs(.540):rootvg[vgs_varyoff+197] : varyoff will hopefully work. Varyoff is guaranteed to fail if there
+data:cl_deactivate_vgs(.540):rootvg[vgs_varyoff+198] : are open connections to any logical volume.
+data:cl_deactivate_vgs(.540):rootvg[vgs_varyoff+200] [[ -n $'hd4\nhd2\nhd9var\nhd3\nhd1\nhd10opt\nhd11admin\nlivedump\nfslv02\nfslv03' ]]
+data:cl_deactivate_vgs(.540):rootvg[vgs_varyoff+204] fuser -k -u -x /dev/hd4
/dev/hd4:
+data:cl_deactivate_vgs(.550):datavg[vgs_varyoff+171] print $'datavg:\nLV NAME             TYPE       LPs     PPs     PVs  LV STATE      MOUNT POINT\nloglv02             jfs2log    1       1       1    closed/syncd  N/A\nfslvnew00           jfs2       760     760     1    closed/syncd  /als6tj'
+data:cl_deactivate_vgs(.550):datavg[vgs_varyoff+171] awk '$2 ~ /jfs2?$/ && $6 ~ /open/ {print $1}'
       0c(root)       1c(root)   86118c(root)   90254c(root)  106694c(root)  114768(root)  118922c(root)  139458c(root)  143570(dasusr1)  147646c(root)  151686c(root)  155778(root)  159910(root)  164028(root)  168116c(root)  172158(root)  176298c(root)  180394c(pconsole)  184502c(root)  192718(root)  204990c(root)  217102(root)  225484(root)  254050(root)  266458c
+data:cl_deactivate_vgs(.560):datavg[vgs_varyoff+171] OPEN_LVs=''
+data:cl_deactivate_vgs(.560):datavg[vgs_varyoff+175] : If this is an rg_move on selective fallover, lsvg -l might not work
+data:cl_deactivate_vgs(.560):datavg[vgs_varyoff+176] : so try looking up the LVs in the ODM if the VG is online
+data:cl_deactivate_vgs(.560):datavg[vgs_varyoff+178] [[ 0 != 0 ]]
+data:cl_deactivate_vgs(.560):datavg[vgs_varyoff+196] : Attempt to kill off any processes using the logical volume, so that
+data:cl_deactivate_vgs(.560):datavg[vgs_varyoff+197] : varyoff will hopefully work. Varyoff is guaranteed to fail if there
+data:cl_deactivate_vgs(.560):datavg[vgs_varyoff+198] : are open connections to any logical volume.
+data:cl_deactivate_vgs(.560):datavg[vgs_varyoff+200] [[ -n '' ]]
+data:cl_deactivate_vgs(.560):datavg[vgs_varyoff+210] : For two-node clusters, special processing for the highly available NFS
(dasusr1)  274494(root)  278534(root)  290872c(daemon)
+data:cl_deactivate_vgs(.560):datavg[vgs_varyoff+211] : server function: tell NFS to dump the dup cache into the jfslog or
+data:cl_deactivate_vgs(.560):datavg[vgs_varyoff+212] : jfs2log, if the level of AIX supports it to allow it to be picked up
+data:cl_deactivate_vgs(.560):datavg[vgs_varyoff+213] : by the next node to get this volume group.
+data:cl_deactivate_vgs(.560):datavg[vgs_varyoff+215] [[ FALSE == TRUE ]]
+data:cl_deactivate_vgs(.560):datavg[vgs_varyoff+267] : Finally, vary off the volume group
+data:cl_deactivate_vgs(.560):datavg[vgs_varyoff+269] [[ 32 == 0 ]]
+data:cl_deactivate_vgs(.560):datavg[vgs_varyoff+285] varyoffvg datavg
  315558(root)  356572k(root)  368698c(nobody)  426150c(nobody)  479254c(nobody)   53590(root)   57786c(root)   69962c(root)   78216c(root)   82286c(root)   90522c(root)   94632(root)   98742c(root)  102676(root)  106892c(root)  119060c(nobody)  123380c(pconsole)  131522(root)  139686c(root)  147886(pconsole)  152028(root)  168364c(root)  172302(root)  180708c(db2inst1)  188702c(root)  192938c(root)  213408(root)  254386(root)  262536(root)  274858c(root)  295318(root)  303510c(root)  307608c(root)  324020(root)  336248(root)  344434c(root)  348592c(root)  360798(root)  365054c(root)

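In the trace above, cl_deactivate_vgs lists the open file-system LVs, runs fuser -k against them, and then calls varyoffvg, which never returns. For reference, here is a minimal sketch of the same release sequence run by hand, assuming the group's volume group is datavg with the single file system /als6tj seen in the lsvg listing (all names are taken from the log):

    # list LVs in the VG that still have open file systems
    # (the same awk filter the event script uses)
    lsvg -l datavg | awk '$2 ~ /jfs2?$/ && $6 ~ /open/ {print $1}'

    # kill anything still holding the file system, then unmount it
    fuser -k -u -x /dev/fslvnew00
    umount /als6tj

    # with every LV closed, the varyoff should complete
    varyoffvg datavg
    lsvg -o        # datavg should no longer be listed as online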
May 14 22:41:02 EVENT START: config_too_long 360 TE_RG_MOVE

:config_too_long[+65] [[ high = high ]]
:config_too_long[+65] version=1.16
:config_too_long[+66] :config_too_long[+66] cl_get_path
HA_DIR=es
:config_too_long[+68] NUM_SECS=360
:config_too_long[+69] EVENT=TE_RG_MOVE
:config_too_long[+71] HOUR=3600
:config_too_long[+72] THRESHOLD=5
:config_too_long[+73] SLEEP_INTERVAL=1
:config_too_long[+79] PERIOD=30
:config_too_long[+82] set -u
:config_too_long[+87] LOOPCNT=0
:config_too_long[+88] MESSAGECNT=0
:config_too_long[+89] :config_too_long[+89] cllsclstr -c
:config_too_long[+89] cut -d : -f2
:config_too_long[+89] grep -v cname
CLUSTER=xindai_cluster
:config_too_long[+90] TIME=360
:config_too_long[+91] sleep_cntr=0
:config_too_long[+96] [ -x /usr/lpp/ssp/bin/spget_syspar ]
:cl_sel[46] FFDC_DIRECTORY=/tmp/ibmsupt/hacmp
:cl_sel[49] uname -m
:cl_sel[49] UNAME=00CE3E144C00
:cl_sel[55] PLATFORM=__AIX__
:cl_sel[55] export PLATFORM
:cl_sel[62] [[ __AIX__ == __AIX__ ]]
:cl_sel[63] awk '{ print $3 }'
:cl_sel[63] tail -1
:cl_sel[63] df -k /tmp
:cl_sel[63] FREE_SPACE=4663372
:cl_sel[71] REQUIRED_SPACE=20000
:cl_sel[72] [ 4663372 -lt 20000 ]
:cl_sel[82] test -d /tmp/ibmsupt/hacmp
:cl_sel[83] [ 0 -ne 0 ]
:cl_sel[94] date +%Y.%m.%d.%H.%M
:cl_sel[94] TIMESTAMP=2015.05.14.22.41
:cl_sel[95] [[ -z 2015.05.14.22.41 ]]
:cl_sel[105] awk -F '"' '{print $2}'
:cl_sel[105] grep 'value ='
:cl_sel[105] odmget -qname=hacmp.out HACMPlogs
:cl_sel[105] LOG_DIRECTORY=/tmp
:cl_sel[107] [[ -z /tmp ]]
:cl_sel[117] tar cvf /tmp/ibmsupt/hacmp/eventlogs.2015.05.14.22.41 /tmp/hacmp.out /tmp/hacmp.out.1 /tmp/hacmp.out.2 /tmp/hacmp.out.4 /tmp/hacmp.out.6 /tmp/hacmp.out.7
:cl_sel[117] 1> /dev/null 2>& 1
:cl_sel[118] [ 0 -ne 0 ]
:cl_sel[125] compress /tmp/ibmsupt/hacmp/eventlogs.2015.05.14.22.41
:cl_sel[125] 1> /dev/null 2>& 1
:cl_sel[130] ls -1 /tmp/ibmsupt/hacmp/eventlogs.2010.06.05.21.17.Z /tmp/ibmsupt/hacmp/eventlogs.2010.08.22.10.47.Z /tmp/ibmsupt/hacmp/eventlogs.2011.03.01.00.12.Z /tmp/ibmsupt/hacmp/eventlogs.2011.03.01.00.18.Z /tmp/ibmsupt/hacmp/eventlogs.2015.05.14.06.05.Z /tmp/ibmsupt/hacmp/eventlogs.2015.05.14.22.41.Z
:cl_sel[130] wc -l
:cl_sel[130] 2> /dev/null
:cl_sel[130] FFDC_COUNT='       6'
:cl_sel[131] [ '       6' -gt 5 ]
:cl_sel[133] expr 6 - 5
:cl_sel[133] number_to_rm=1
:cl_sel[134] head -1
:cl_sel[134] ls -rt1 /tmp/ibmsupt/hacmp/eventlogs.2010.06.05.21.17.Z /tmp/ibmsupt/hacmp/eventlogs.2010.08.22.10.47.Z /tmp/ibmsupt/hacmp/eventlogs.2011.03.01.00.12.Z /tmp/ibmsupt/hacmp/eventlogs.2011.03.01.00.18.Z /tmp/ibmsupt/hacmp/eventlogs.2015.05.14.06.05.Z /tmp/ibmsupt/hacmp/eventlogs.2015.05.14.22.41.Z
:cl_sel[134] FFDC_LIST=/tmp/ibmsupt/hacmp/eventlogs.2010.06.05.21.17.Z
:cl_sel[137] rm -f /tmp/ibmsupt/hacmp/eventlogs.2010.06.05.21.17.Z
:cl_sel[141] dspmsg scripts.cat 10059 'FFDC event log collection saved to /tmp/ibmsupt/hacmp/eventlogs.2015.05.14.22.41\n' /tmp/ibmsupt/hacmp/eventlogs.2015.05.14.22.41
FFDC event log collection saved to /tmp/ibmsupt/hacmp/eventlogs.2015.05.14.22.41
:cl_sel[143] exit 0
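The cl_sel steps above bundle the hacmp.out logs into a compressed FFDC archive under /tmp/ibmsupt/hacmp and prune the directory down to the five most recent archives. A rough reconstruction of that retention logic, pieced together from the trace (variable names follow the trace; treat this as a sketch, not the shipped script):

    FFDC_DIRECTORY=/tmp/ibmsupt/hacmp
    MAX_KEEP=5    # the trace tests the archive count against 5

    # count the archives; if over the limit, remove the oldest ones
    FFDC_COUNT=$(ls -1 $FFDC_DIRECTORY/eventlogs.*.Z 2>/dev/null | wc -l)
    if [ $FFDC_COUNT -gt $MAX_KEEP ]; then
        number_to_rm=$(expr $FFDC_COUNT - $MAX_KEEP)
        # ls -rt1 sorts oldest first, so head selects the oldest archives
        ls -rt1 $FFDC_DIRECTORY/eventlogs.*.Z | head -$number_to_rm | xargs rm -f
    fi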
WARNING: Cluster xindai_cluster has been running recovery program 'TE_RG_MOVE' for 360 seconds. Please check cluster status.
WARNING: Cluster xindai_cluster has been running recovery program 'TE_RG_MOVE' for 390 seconds. Please check cluster status.
WARNING: Cluster xindai_cluster has been running recovery program 'TE_RG_MOVE' for 420 seconds. Please check cluster status.
WARNING: Cluster xindai_cluster has been running recovery program 'TE_RG_MOVE' for 450 seconds. Please check cluster status.
WARNING: Cluster xindai_cluster has been running recovery program 'TE_RG_MOVE' for 480 seconds. Please check cluster status.
WARNING: Cluster xindai_cluster has been running recovery program 'TE_RG_MOVE' for 540 seconds. Please check cluster status.
WARNING: Cluster xindai_cluster has been running recovery program 'TE_RG_MOVE' for 600 seconds. Please check cluster status.
WARNING: Cluster xindai_cluster has been running recovery program 'TE_RG_MOVE' for 660 seconds. Please check cluster status.
WARNING: Cluster xindai_cluster has been running recovery program 'TE_RG_MOVE' for 720 seconds. Please check cluster status.
WARNING: Cluster xindai_cluster has been running recovery program 'TE_RG_MOVE' for 780 seconds. Please check cluster status.
WARNING: Cluster xindai_cluster has been running recovery program 'TE_RG_MOVE' for 900 seconds. Please check cluster status.
WARNING: Cluster xindai_cluster has been running recovery program 'TE_RG_MOVE' for 1020 seconds. Please check cluster status.
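The warning timestamps above arrive every 30 seconds at first (360 through 480), then every 60 (540 through 780), then every 120 (900 and 1020). That is consistent with the PERIOD=30 and THRESHOLD=5 values set at the top of config_too_long: five messages per interval, after which the interval doubles, presumably capped near the HOUR=3600 value. A sketch of that cadence, assuming this reading of the trace:

    PERIOD=30; THRESHOLD=5; HOUR=3600
    MESSAGECNT=0; ELAPSED=360
    while true; do
        print "WARNING: cluster has been running recovery program for $ELAPSED seconds."
        MESSAGECNT=$(expr $MESSAGECNT + 1)
        if [ $MESSAGECNT -ge $THRESHOLD ]; then
            MESSAGECNT=0
            # double the reporting interval, capped at one hour
            [ $(expr $PERIOD \* 2) -le $HOUR ] && PERIOD=$(expr $PERIOD \* 2)
        fi
        sleep $PERIOD
        ELAPSED=$(expr $ELAPSED + $PERIOD)
    done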
:check_for_site_up[+54] [[ high = high ]]
:check_for_site_up[+54] version=1.4
:check_for_site_up[+55] :check_for_site_up[+55] cl_get_path
HA_DIR=es
:check_for_site_up[+57] STATUS=0
:check_for_site_up[+59] set +u
:check_for_site_up[+61] [ ]
:check_for_site_up[+72] exit 0
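At this point the log simply stops: varyoffvg datavg is still outstanding and config_too_long keeps firing, which matches the reported symptom of the resource group never coming offline. A few quick checks that can be run on the node while it is in this state (standard AIX and PowerHA commands; paths assume a default install):

    # is the varyoff still blocked?
    ps -ef | grep varyoffvg | grep -v grep

    # which processes still hold files open under the group's mount point?
    fuser -cux /als6tj

    # what does the cluster manager think the group's state is?
    /usr/es/sbin/cluster/utilities/clRGinfo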
Participants: 7

6 answers from the community

cy5156 · Systems Operations Engineer · szx
Thanks for sharing.
IT (Other) · 2015-06-01
Views: 5547
tuzi8351 · Software Development Engineer · Tianjin
Quoting qikaiyuan (posted 2015-5-22 21:36): "What are your AIX and HA versions? I have run into the same situation; the system is aix5312 with HA 6.1sp12, and I have found no solution..."

Have you checked whether your own scripts have a problem? I suspect my script is at fault, but I have not tested it yet.
Government Agency · 2015-05-26
Views: 5300
qikaiyuan · Software Development Engineer · 山东华鲁科技
What are your AIX and HA versions? I have run into the same situation; the system is aix5312 with HA 6.1sp12, and I have found no solution...
Internet Services · 2015-05-22
Views: 5720
tuzi8351 · Software Development Engineer · Tianjin
So many people have viewed this; why has nobody replied?
Government Agency · 2015-05-21
Views: 5317
tuzi8351 · Software Development Engineer · Tianjin
Startup Policy:  Online On Home Node Only
Fallover Policy: Fallover To Next Priority Node In The List
Fallback Policy: Never Fallback
Government Agency · 2015-05-18
Views: 5848
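The startup, fallover, and fallback policies quoted above can be read back from the running cluster rather than from SMIT screens. A quick way to confirm what the cluster actually has configured, assuming the default PowerHA install paths:

    /usr/es/sbin/cluster/utilities/cllsgrp        # list the defined resource groups
    /usr/es/sbin/cluster/utilities/clRGinfo -v    # per-group state; -v should also report the three policies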
zwz99999 · Systems Engineer · dcits
My guess is that the switchover policy is the problem. Check the resource group configuration policies in your HA setup.
Systems Integration · 2015-05-18
Views: 9455

Asker

tuzi8351
Software Development Engineer · Tianjin
Areas of expertise: servers, AIX, Unix

Question status

  • Published: 2015-05-18
  • Followers: 1
  • Views: 18390
  • Last answered: 2015-06-01