PowerHA Troubleshooting

PowerHA won't start?

Two IBM Power (AIX) servers configured as a two-node HA cluster. After both nodes were shut down and restarted, the resource group and service IP bounced back and forth between host A and host B and then disappeared on their own.
Below are the startup output, the synchronization output, and hacmp.out; please take a look.

Startup output:
Verifying Cluster Configuration Prior to Starting Cluster Services.

There are no active cluster nodes to verify against.
Verifying node(s): db1 db2 requested to start

WARNING: Node db1 has cluster.es.nfs.rte installed however grace periods are not fully enabled on this node. Grace periods must be enabled before NFSv4 stable storage can be used.

HACMP will attempt to fix this opportunistically when acquiring NFS resources on this node however the change won't take effect until the next time that nfsd is started.

If this warning persists, the administrator should perform the following steps to enable grace periods on db1 at the next planned downtime:

  1. stopsrc -s nfsd
  2. smitty nfsgrcperiod
  3. startsrc -s nfsd

WARNING: Node db2 has cluster.es.nfs.rte installed however grace periods are not fully enabled on this node. Grace periods must be enabled before NFSv4 stable storage can be used.

HACMP will attempt to fix this opportunistically when acquiring NFS resources on this node however the change won't take effect until the next time that nfsd is started.

If this warning persists, the administrator should perform the following steps to enable grace periods on db2 at the next planned downtime:

  1. stopsrc -s nfsd
  2. smitty nfsgrcperiod
  3. startsrc -s nfsd

Successfully verified node(s): db1 db2

Starting Cluster Services on node: db1
This may take a few minutes. Please wait...
db1: start_cluster: Starting HACMP
db1: Current state: ST_INIT
db1: 3015378 - 0:00 syslogd
db1: Setting routerevalidate to 1
db1: 0513-059 The topsvcs Subsystem has been started. Subsystem PID is 16450018.
db1: 0513-059 The grpsvcs Subsystem has been started. Subsystem PID is 16384366.
db1: 0513-059 The emsvcs Subsystem has been started. Subsystem PID is 16646842.
db1: 0513-059 The gsclvmd Subsystem has been started. Subsystem PID is 16908822.
db1: 0513-059 The clinfoES Subsystem has been started. Subsystem PID is 17105646.
db1: Dec 12 2017 08:53:43 Starting execution of /usr/es/sbin/cluster/etc/rc.cluster
db1: with parameters: -boot -N -A -b -i -C interactive -P cl_rc_cluster
db1:
db1: Dec 12 2017 08:53:43 Checking for srcmstr active...
db1: Dec 12 2017 08:53:43 complete.
db1: Dec 12 2017 08:53:46
db1: /usr/es/sbin/cluster/utilities/clstart: called with flags -m -G -i -b -P cl_rc_cluster -C interactive -B -A
db1:
db1: Dec 12 2017 08:53:50
db1: Completed execution of /usr/es/sbin/cluster/etc/rc.cluster
db1: with parameters: -boot -N -A -b -i -C interactive -P cl_rc_cluster.
db1: Exit status = 0
db1:
Starting Cluster Services on node: db2
This may take a few minutes. Please wait...
db2: start_cluster: Starting HACMP
db2: Current state: ST_INIT
db2: 4325654 - 0:00 syslogd
db2: Setting routerevalidate to 1
db2: 0513-059 The topsvcs Subsystem has been started. Subsystem PID is 15401478.
db2: 0513-059 The grpsvcs Subsystem has been started. Subsystem PID is 12386874.
db2: 0513-059 The emsvcs Subsystem has been started. Subsystem PID is 15466930.
db2: 0513-059 The gsclvmd Subsystem has been started. Subsystem PID is 12911028.
db2: 0513-059 The clinfoES Subsystem has been started. Subsystem PID is 15597870.
db2: Dec 11 2017 19:57:11 Starting execution of /usr/es/sbin/cluster/etc/rc.cluster
db2: with parameters: -boot -N -A -b -i -C interactive -P cl_rc_cluster
db2:
db2: Dec 11 2017 19:57:11 Checking for srcmstr active...
db2: Dec 11 2017 19:57:11 complete.
db2: Dec 11 2017 19:57:14
db2: /usr/es/sbin/cluster/utilities/clstart: called with flags -m -G -i -b -P cl_rc_cluster -C interactive -B -A
db2:
db2: Dec 11 2017 19:57:18
db2: Completed execution of /usr/es/sbin/cluster/etc/rc.cluster
db2: with parameters: -boot -N -A -b -i -C interactive -P cl_rc_cluster.
db2: Exit status = 0
db2:
Synchronization output:
Verification to be performed on the following:

    Cluster Topology
    Cluster Resources

Verification will interactively correct verification errors.

Retrieving data from available cluster nodes. This could take a few minutes.

    
    Start data collection on node db1
    Start data collection on node db2
    Collector on node db2 completed
    Collector on node db1 completed
    Data collection complete

Verifying Cluster Topology...

    
    Completed 10 percent of the verification checks
    Completed 20 percent of the verification checks
    Completed 30 percent of the verification checks
    Completed 40 percent of the verification checks

Verifying Cluster Resources...

    
    Completed 50 percent of the verification checks
    Completed 60 percent of the verification checks
    Completed 70 percent of the verification checks
    Completed 80 percent of the verification checks
    Completed 90 percent of the verification checks
    Completed 100 percent of the verification checks

WARNING: Node db1 has cluster.es.nfs.rte installed however grace periods are not fully enabled on this node. Grace periods must be enabled before NFSv4 stable storage can be used.

HACMP will attempt to fix this opportunistically when acquiring NFS resources on this node however the change won't take effect until the next time that nfsd is started.

If this warning persists, the administrator should perform the following steps to enable grace periods on db1 at the next planned downtime:

  1. stopsrc -s nfsd
  2. smitty nfsgrcperiod
  3. startsrc -s nfsd

WARNING: Node db2 has cluster.es.nfs.rte installed however grace periods are not fully enabled on this node. Grace periods must be enabled before NFSv4 stable storage can be used.

HACMP will attempt to fix this opportunistically when acquiring NFS resources on this node however the change won't take effect until the next time that nfsd is started.

If this warning persists, the administrator should perform the following steps to enable grace periods on db2 at the next planned downtime:

  1. stopsrc -s nfsd
  2. smitty nfsgrcperiod
  3. startsrc -s nfsd

Remember to redo automatic error notification if configuration has changed.
Committing any changes, as required, to all available nodes...
Adding any necessary HACMP entries to /etc/inittab and /etc/rc.net for IPAT on node db2.
Adding any necessary HACMP entries to /etc/inittab and /etc/rc.net for IPAT on node db1.
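
The verification output above repeats the same NFSv4 grace-period warning for both nodes. A minimal sketch of the fix sequence it suggests, run on db1 and then on db2 during a planned maintenance window; everything here comes from the warning text itself, and the smitty panel is left as a manual step because it is interactive:

  #!/bin/ksh
  # Sketch only: NFSv4 grace-period fix from the verification warning.
  # Run on one node at a time (db1, then db2) during planned downtime.

  # 1. Stop the NFS server daemon so the change can take effect on restart.
  stopsrc -s nfsd

  # 2. Enable grace periods via the interactive SMIT panel; pause here,
  #    run it by hand, then continue.
  echo "Run: smitty nfsgrcperiod (enable grace periods), then press Enter"
  read junk

  # 3. Restart the NFS server daemon with grace periods enabled.
  startsrc -s nfsd

  # Optional: confirm nfsd came back up.
  lssrc -s nfsd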



Answer from baochengchen (Systems Engineer, 华际):

Too long, I didn't read it all, but here's a quick idea:

Disable the HA application server start/stop scripts and let HA bring up only the shared volume groups and the service IP, then watch what happens.

See what state it ends up in. Honestly, in one sentence: tearing it down and reconfiguring from scratch only takes a moment anyway.
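
If you try that, a few standard commands make it easy to see whether the resource group and service IP actually settle; the paths and log location below are the usual HACMP/PowerHA ones and may differ slightly on your level:

  # Sketch: watch the resource group and service IP after emptying the
  # application server scripts and restarting cluster services.

  # Resource group state across the cluster
  /usr/es/sbin/cluster/utilities/clRGinfo

  # Cluster manager state on the local node
  lssrc -ls clstrmgrES

  # Is the service IP configured on an interface right now?
  netstat -in

  # Follow event processing while the group comes online or moves
  tail -f /var/hacmp/log/hacmp.out   # some levels log to /tmp/hacmp.out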

Systems Integration · 2017-12-16
  • (OP) We used empty application server scripts, and it behaves the same even after reconfiguring.
    2017-12-17
  • Then about all you can do is clear the logs and dig into why it keeps failing over; there isn't much else to try. Most likely the two nodes are having trouble communicating; check the heartbeat networks, for example (see the sketch below for the relevant commands).
    2017-12-20
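
For the heartbeat check mentioned in the last comment, a minimal sketch of the usual RSCT-era commands; the topsvcs/grpsvcs subsystems in the startup log suggest HACMP/PowerHA 6.1 or earlier, so adjust if your level differs:

  # Sketch: verify that the two nodes can actually see each other's heartbeats.

  # Topology Services long status: one stanza per heartbeat network/adapter,
  # showing whether the remote adapter is currently being heard from.
  lssrc -ls topsvcs

  # Group Services status: current node membership as grpsvcs sees it.
  lssrc -ls grpsvcs

  # Cluster network interface definition, to compare against the real cabling.
  /usr/es/sbin/cluster/utilities/cllsif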
