Changing the iLO management password with hponcfg

bjhana57:~ # hponcfg -f ilo.xml
HP Lights-Out Online Configuration utility
Version 4.3.0 Date 12/10/2013 (c) Hewlett-Packard Company, 2014
Firmware Revision = 1.26 Device type = iLO 3 Driver name = hpilo
Please specify a value for variable %user_password% in the xml file.
bjhana57:~ # vi ilo.xml
bjhana57:~ # hponcfg -f ilo.xml
HP Lights-Out Online Configuration utility
Version 4.3.0 Date 12/10/2013 (c) Hewlett-Packard Company, 2014
Firmware Revision = 1.26 Device type = iLO 3 Driver name = hpilo
Integrated Lights-Out will reset at the end of the script.
Please wait while the firmware is reset. This might take a minute /
|

Script failed
bjhana57:~ # vi ilo.xml
bjhana57:~ # hponcfg -w ilo.xml
HP Lights-Out Online Configuration utility
Version 4.3.0 Date 12/10/2013 (c) Hewlett-Packard Company, 2014
Firmware Revision = 1.26 Device type = iLO 3 Driver name = hpilo
Management Processor configuration is successfully written to file "ilo.xml"
bjhana57:~ # cat ilo.xml
<!-- HPONCFG VERSION = "4.3.0" -->
<!-- Generated 3/27/2015 9:47:37 -->
<RIBCL VERSION="2.1">
<LOGIN USER_LOGIN="Administrator" PASSWORD="password">
<DIR_INFO MODE="write">
<MOD_DIR_CONFIG>
<DIR_AUTHENTICATION_ENABLED VALUE = "Y"/>
<DIR_LOCAL_USER_ACCT VALUE = "Y"/>
<DIR_SERVER_ADDRESS VALUE = "199.0.34.201"/>
<DIR_SERVER_PORT VALUE = "636"/>
<DIR_OBJECT_DN VALUE = ""/>
<DIR_OBJECT_PASSWORD VALUE = ""/>
<DIR_USER_CONTEXT_1 VALUE = "ou=People,dc=cmbc,dc=com"/>
<DIR_USER_CONTEXT_2 VALUE = ""/>
<DIR_USER_CONTEXT_3 VALUE = ""/>
</MOD_DIR_CONFIG>
</DIR_INFO>
<RIB_INFO MODE="write">
<MOD_NETWORK_SETTINGS>
<SPEED_AUTOSELECT VALUE = "Y"/>
<NIC_SPEED VALUE = "10"/>
<FULL_DUPLEX VALUE = "N"/>
<IP_ADDRESS VALUE = "199.0.41.161"/>
<SUBNET_MASK VALUE = "255.255.255.0"/>
<GATEWAY_IP_ADDRESS VALUE = "199.0.41.250"/>
<DNS_NAME VALUE = "ILOCNG213S875"/>
<PRIM_DNS_SERVER value = "210.22.70.3"/>
<DHCP_ENABLE VALUE = "N"/>
<DOMAIN_NAME VALUE = ""/>
<DHCP_GATEWAY VALUE = "N"/>
<DHCP_DNS_SERVER VALUE = "N"/>
<DHCP_STATIC_ROUTE VALUE = "N"/>
<DHCP_WINS_SERVER VALUE = "N"/>
<REG_WINS_SERVER VALUE = "Y"/>
<PRIM_WINS_SERVER value = "0.0.0.0"/>
<STATIC_ROUTE_1 DEST = "0.0.0.0" GATEWAY = "0.0.0.0"/>
<STATIC_ROUTE_2 DEST = "0.0.0.0" GATEWAY = "0.0.0.0"/>
<STATIC_ROUTE_3 DEST = "0.0.0.0" GATEWAY = "0.0.0.0"/>
</MOD_NETWORK_SETTINGS>
</RIB_INFO>
<USER_INFO MODE="write">
<ADD_USER
USER_NAME = "lin"
USER_LOGIN = "lin"
PASSWORD = "lin">
<ADMIN_PRIV value = "Y"/>
<REMOTE_CONS_PRIV value = "Y"/>
<RESET_SERVER_PRIV value = "Y"/>
<VIRTUAL_MEDIA_PRIV value = "Y"/>
<CONFIG_ILO_PRIV value = "Y"/>
</ADD_USER>
</USER_INFO>
</LOGIN>
</RIBCL>
bjhana57:~ # vi ilo.xml
bjhana57:~ # hponcfg -f ilo.xml
HP Lights-Out Online Configuration utility
Version 4.3.0 Date 12/10/2013 (c) Hewlett-Packard Company, 2014
Firmware Revision = 1.26 Device type = iLO 3 Driver name = hpilo
<INFORM>Integrated Lights-Out will reset at the end of the script.</INFORM>
<!-- ERROR :      STATUS= 0x0004
MESSAGE= Password is too short. -->

Please wait while the firmware is reset. This might take a minute
Script failed
bjhana57:~ # vi ilo.xml
bjhana57:~ # hponcfg -f ilo.xml
HP Lights-Out Online Configuration utility
Version 4.3.0 Date 12/10/2013 (c) Hewlett-Packard Company, 2014
Firmware Revision = 1.26 Device type = iLO 3 Driver name = hpilo
Integrated Lights-Out will reset at the end of the script.

Please wait while the firmware is reset. This might take a minute
Script succeeded
bjhana57:~ # hponcfg
HP Lights-Out Online Configuration utility
Version 4.3.0 Date 12/10/2013 (c) Hewlett-Packard Company, 2014
Firmware Revision = 1.26 Device type = iLO 3 Driver name = hpilo

USAGE:
hponcfg -?
hponcfg -h
hponcfg -m minFw
hponcfg -r [-m minFw ]
hponcfg [-a] -w filename [-m minFw]
hponcfg -g [-m minFw]
hponcfg -f filename [-l filename] [-s namevaluepair] [-v] [-m minFw]
hponcfg -i [-l filename] [-s namevaluepair] [-v] [-m minFw]

-h, --help Display this message
-? Display this message
-r, --reset Reset the Management Processor to factory defaults
-f, --file Get/Set Management Processor configuration from "filename"
-i, --input Get/Set Management Processor configuration from the XML input
received through the standard input stream.
-w, --writeconfig Write the Management Processor configuration to "filename"
-a, --all Capture complete Management Processor configuration to the file.
This should be used along with '-w' option
-l, --log Log replies to "filename"
-v, --xmlverbose Display all the responses from Management Processor
-s, --substitute Substitute variables present in input config file
with values specified in "namevaluepairs"
-g, --get_hostinfo Get the Host information
-m, --minfwlevel Minimum firmware level
bjhana57:~ # hponcfg -r
HP Lights-Out Online Configuration utility
Version 4.3.0 Date 12/10/2013 (c) Hewlett-Packard Company, 2014
Firmware Revision = 1.26 Device type = iLO 3 Driver name = hpilo
Resetting to Factory Defaults…This takes upto 60 seconds.
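The usage text above lists a `-s/--substitute` option, so the initial failure ("Please specify a value for variable %user_password%") could also have been resolved without hand-editing the file. A minimal sketch; `demo.xml` and `Secret12` are illustrative values, not from the original session, and the real `hponcfg` line is commented out because it needs the actual server:

```shell
# Resolve the %user_password% placeholder before feeding the file to hponcfg.
printf '%s\n' 'PASSWORD = "%user_password%">' > demo.xml
sed 's/%user_password%/Secret12/' demo.xml
# Equivalent on the server, letting hponcfg substitute the variable itself:
#   hponcfg -f ilo.xml -s user_password=Secret12
```

Pre-substituting with sed also makes it easy to review exactly what will be sent to the iLO before running it.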


HACMP 7.1: cluster enters ST_RP_FAILED after modifying a Service IP Label

smitty hacmp -> Problem Determination Tools -> View Current State

ST_INIT: cluster configured and down (cluster services stopped on both nodes; the subsystem may be active but the state remains INIT)
ST_JOINING: node joining the cluster (the node is in the process of joining)
ST_VOTING: inter-node decision state for an event
ST_RP_RUNNING: cluster running recovery program
ST_BARRIER: clstrmgr waiting at the barrier statement (startup operation in progress)
ST_CBARRIER: clstrmgr is exiting recovery program
ST_UNSTABLE: cluster unstable
NOT_CONFIGURED: HA installed but not configured (no HA resource group configured)
ST_RP_FAILED: event script failed
ST_STABLE: cluster services are running with managed resources (stable cluster) or cluster services have been "forced" down with resource groups potentially in the UNMANAGED state (HACMP 5.4 only)
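The same state can be read without walking the SMIT menus. In this environment `lsha` appears to be a local alias for `lssrc -ls clstrmgrES`, whose first line carries the state; a small sketch, with a sample file standing in for the live command output:

```shell
# Extract the cluster manager state from saved 'lssrc -ls clstrmgrES' output.
# state.sample is a stand-in for the live output, not a real file on the node.
printf 'Current state: ST_RP_FAILED\n' > state.sample
awk -F': ' '/^Current state/ {print $2}' state.sample
# Live equivalent on a cluster node:
#   lssrc -ls clstrmgrES | grep '^Current state'
```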

The requirement was to rename the service IP label MFNBU_SVR to MFNBU. Without stopping the cluster first, I changed the name in /etc/hosts and only then stopped HA, after which clRGinfo showed node 1 in the ERROR state:
MFNBU01:/#cat /etc/hosts
# IBM_PROLOG_BEGIN_TAG
# This is an automatically generated prolog.
#
# bos61D src/bos/usr/sbin/netstart/hosts 1.2
#
# Licensed Materials - Property of IBM
#
# COPYRIGHT International Business Machines Corp. 1985,1989
# All Rights Reserved
#
# US Government Users Restricted Rights - Use, duplication or
# disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
#
# @(#)47 1.2 src/bos/usr/sbin/netstart/hosts, cmdnet, bos61D, d2007_49A2 10/1/07 13:57:52
# IBM_PROLOG_END_TAG
#
# COMPONENT_NAME: TCPIP hosts
#
# FUNCTIONS: loopback
#
# ORIGINS: 26 27
#
# (C) COPYRIGHT International Business Machines Corp. 1985, 1989
# All Rights Reserved
# Licensed Materials - Property of IBM
#
# US Government Users Restricted Rights - Use, duplication or
# disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
#
# /etc/hosts
#
# This file contains the hostnames and their address for hosts in the
# network. This file is used to resolve a hostname into an Internet
# address.
#
# At minimum, this file must contain the name and address for each
# device defined for TCP in your /etc/net file. It may also contain
# entries for well-known (reserved) names such as timeserver
# and printserver as well as any other host name and address.
#
# The format of this file is:
# Internet Address Hostname # Comments
# Internet Address can be either IPv4 or IPv6 address.
# Items are separated by any number of blanks and/or tabs. A '#'
# indicates the beginning of a comment; characters up to the end of the
# line are not interpreted by routines which search this file. Blank
# lines are allowed.

# Internet Address Hostname # Comments
# 192.9.200.1 net0sample # ethernet name/address
# 128.100.0.1 token0sample # token ring name/address
# 10.2.0.2 x25sample # x.25 name/address
# 2000:1:1:1:209:6bff:feee:2b7f ipv6sample # ipv6 name/address
127.0.0.1 loopback localhost # loopback (lo0) name/address
#::1 loopback localhost # IPv6 loopback (lo0) name/address

40.43.192.6 NIMPBAC1
197.0.83.32 SZNIM
197.3.137.241 zwnim
197.3.137.228 TAIX

40.43.192.136 MFNBU01
40.43.192.137 MFNBU02

40.43.192.138 MFNBU_SVR

2.43.192.136 MFNBU01_bt1
2.43.192.137 MFNBU02_bt1
3.43.192.136 MFNBU01_bt2
3.43.192.137 MFNBU02_bt2

#opscenter
40.43.192.50 NBURMAN2

"/etc/hosts" 74 lines, 2301 characters
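Before renaming a service IP label it is worth checking every place the old name is referenced: the label lives both in /etc/hosts and in the cluster configuration itself, which is why editing only /etc/hosts caused trouble here. A hedged sketch, using a sample file in place of the real /etc/hosts:

```shell
# Find references to the old service label. hosts.sample is a stand-in for
# /etc/hosts so this runs anywhere.
printf '40.43.192.138 MFNBU_SVR\n' > hosts.sample
grep -w MFNBU_SVR hosts.sample
# On the cluster node, also check HACMP's own view of the label (not runnable here):
#   grep -w MFNBU_SVR /etc/hosts
#   /usr/es/sbin/cluster/utilities/cllsif | grep -w MFNBU_SVR
```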
MFNBU01:/#smit hacmp
PowerHA SystemMirror

Move cursor to desired item and press Enter.

Cluster Nodes and Networks
Cluster Applications and Resources

System Management (C-SPOC)
Problem Determination Tools
Custom Cluster Configuration

Can’t find what you are looking for ?

F1=Help F2=Refresh F3=Cancel Esc+8=Image
System Management (C-SPOC)

Move cursor to desired item and press Enter.

Storage
PowerHA SystemMirror Services
Communication Interfaces
Resource Group and Applications
PowerHA SystemMirror Logs
PowerHA SystemMirror File Collection Management
Security and Users
LDAP
Configure GPFS

PowerHA SystemMirror Services

Move cursor to desired item and press Enter.

Start Cluster Services
Stop Cluster Services
Show Cluster Services
Show Cluster Release Level

Stop Cluster Services

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

[Entry Fields]
* Stop now, on system restart or both now +
Stop Cluster Services on these nodes [MFNBU01] +
BROADCAST cluster shutdown? true +
F1=Help F2=Refresh F3=Cancel F4=List
Esc+5=Reset Esc+6=Command Esc+8=Image

  Stop Cluster Services on these nodes

  Move cursor to desired item and press Esc+7.
  ONE OR MORE items can be selected.
  Press Enter AFTER making all selections.

  > MFNBU01

COMMAND STATUS

Command: running stdout: no stderr: no

Before command completion, additional instructions may appear below.

Broadcast message from root@MFNBU01 (tty) at 09:50:10 …

PowerHA SystemMirror on MFNBU01 shutting down. Please exit any cluster applications…
COMMAND STATUS

Command: running stdout: yes stderr: no

Before command completion, additional instructions may appear below.

MFNBU01: 0513-044 The clevmgrdES Subsystem was requested to stop.
MFNBU01: Mar 25 2015 09:50:10 /usr/es/sbin/cluster/utilities/clstop: called with flags -N -g
MFNBU02: 0513-044 The clevmgrdES Subsystem was requested to stop.
MFNBU02: Mar 25 2015 09:50:11 /usr/es/sbin/cluster/utilities/clstop: called with flags -N -g
COMMAND STATUS

Command: OK stdout: yes stderr: no

Before command completion, additional instructions may appear below.

MFNBU01: 0513-044 The clevmgrdES Subsystem was requested to stop.
MFNBU01: Mar 25 2015 09:50:10 /usr/es/sbin/cluster/utilities/clstop: called with flags -N -g
MFNBU02: 0513-044 The clevmgrdES Subsystem was requested to stop.
MFNBU01:/#vi /etc/hosts
# ... (file unchanged except for the service address entry, which now reads)
40.43.192.138 MFNBU
"/etc/hosts" 74 lines, 2301 characters
MFNBU01:/#lsha
Current state: ST_RP_FAILED
sccsid = "@(#)36 1.135.7.2 src/43haes/usr/sbin/cluster/hacmprd/main.C, hacmp.pe, 61haes_r711, 1225A_hacmp711 5/22/12 11:46:10"
build = "Dec 5 2012 11:50:45 1241C_hacmp711"
i_local_nodeid 0, i_local_siteid -1, my_handle 1
ml_idx[1]=0 ml_idx[2]=1
tp is 20433718
Events on event queue:
te_type 4, te_nodeid 1, te_network -1
te_type 4, te_nodeid 2, te_network -1
te_type 36, te_nodeid 1, te_network 1
te_type 11, te_nodeid 1, te_network -1
te_type 11, te_nodeid 2, te_network -1
There are 0 events on the Ibcast queue
There are 0 events on the RM Ibcast queue
CLversion: 13
local node vrmf is 7114
cluster fix level is "4"
The following timer(s) are currently active:
Event error node list: MFNBU01
Current DNP values
DNP Values for NodeId – 1 NodeName – MFNBU01
PgSpFree = 4190764 PvPctBusy = 0 PctTotalTimeIdle = 99.886272
DNP Values for NodeId – 2 NodeName – MFNBU02
PgSpFree = 4190430 PvPctBusy = 0 PctTotalTimeIdle = 99.854695
trcOn 0, kTraceOn 0, stopTraceOnExit 0, cdNodeOn 0
Last event run was FAIL_NODE on node 2

Command: failed stdout: yes stderr: no

Before command completion, additional instructions may appear below.

cl_clstop: ERROR: Node MFNBU01 has 5 event(s) outstanding as reported by command 'lssrc -ls clstrmgrES' and cannot be stopped until all outstanding events have completed
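The cl_clstop error ("5 event(s) outstanding") matches the five te_type entries on the event queue in the lssrc output above, which is why the stop was refused. The count can be checked directly; a sketch against saved output, with a sample file standing in for the live command:

```shell
# Count queued events in saved 'lssrc -ls clstrmgrES' output; clstop refuses
# to stop a node while this is non-zero. clstrmgr.sample is a stand-in file.
printf 'te_type 4, te_nodeid 1, te_network -1\nte_type 11, te_nodeid 2, te_network -1\n' > clstrmgr.sample
grep -c '^te_type' clstrmgr.sample
```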
MFNBU01:/#df -g
Filesystem GB blocks Free %Used Iused %Iused Mounted on
/dev/hd4 5.00 4.90 3% 6506 1% /
/dev/hd2 12.00 9.48 22% 60381 3% /usr
/dev/hd9var 10.00 9.87 2% 802 1% /var
/dev/hd3 10.00 9.99 1% 97 1% /tmp
/dev/hd1 10.00 9.28 8% 3634 1% /home
/dev/hd11admin 0.50 0.50 1% 9 1% /admin
/proc - - - - - /proc
/dev/hd10opt 4.00 3.84 4% 2422 1% /opt
/dev/livedump 0.50 0.50 1% 4 1% /var/adm/ras/livedump
/dev/lvcmbc_admin 10.00 9.80 2% 16 1% /cmbc_admin
/dev/lvopenv 50.00 49.80 1% 4 1% /usr/openv
/aha - - - 48 1% /aha
MFNBU01:/#cd /var/hacmp
MFNBU01:/var/hacmp#ls
adm clcomd clverify log odmcache
MFNBU01:/var/hacmp#cd log
MFNBU01:/var/hacmp/log#ls
autoclstrcfgmonitor.out clevmgrdevents.1 clstrmgr.debug.long.1 dnssa.log migration.log
autoverify.log clinfo.log clstrmgr.debug.long.2 domino_server.log oraappsa.log
autoverify.log.1 clinfo.rc.out clstrmgr.debug.long.3 emuhacmp.out oraclesa.log
cl2siteconfig_assist.log clstrmgr.debug clstrmgr.debug.long.4 filenetsa.log printServersa.log
cl_event_summaries.txt clstrmgr.debug.1 clstrmgr.debug.long.5 hacmp.out sa.log
cl_testtool.log clstrmgr.debug.2 clstrmgr.debug.long.6 hacmp.out.1 sapsa.log
clavan.log clstrmgr.debug.3 clstrmgr.debug.long.7 hacmprd_run_rcovcmd.debug sax.log
clconfigassist.log clstrmgr.debug.4 clutils.log hacmprd_run_rcovcmd.debug.1 tsm_admin.log
clevents clstrmgr.debug.5 cspoc.log hswizard.log tsm_client.log
clevents.1 clstrmgr.debug.6 cspoc.log.long ihssa.log tsm_server.log
clevents.2 clstrmgr.debug.7 cspoc.log.remote lsvg.err wmqsa.log
clevmgrdevents clstrmgr.debug.long dhcpsa.log maxdbsa.log
MFNBU01:/var/hacmp/log#tail -f hacmp.out
:cl_sel[144] wc -l
:cl_sel[144] 2> /dev/null
:cl_sel[144] FFDC_COUNT=' 1'
:cl_sel[145] [ ' 1' -gt 5 ]
:cl_sel[155] dspmsg scripts.cat 10059 'FFDC event log collection saved to /tmp/ibmsupt/hacmp/eventlogs.2015.03.25.09.50\n' /tmp/ibmsupt/hacmp/eventlogs.2015.03.25.09.50
FFDC event log collection saved to /tmp/ibmsupt/hacmp/eventlogs.2015.03.25.09.50
:cl_sel[157] exit 0
:event_error[+137] exit 0
Mar 25 09:50:20 EVENT COMPLETED: event_error 1 TE_RG_MOVE 0
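Rather than tailing the whole log, the failing event can be pulled out of hacmp.out directly: the event_error entries mark where an event script failed. A sketch with one sample log line standing in for the real file:

```shell
# Pull failure markers out of hacmp.out. hacmp.out.sample stands in for
# /var/hacmp/log/hacmp.out from the transcript.
printf 'Mar 25 09:50:20 EVENT COMPLETED: event_error 1 TE_RG_MOVE 0\n' > hacmp.out.sample
grep 'event_error' hacmp.out.sample
```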

^CMFNBU01:/var/hacmp/log#ls -l
total 38120
-rw-r--r-- 1 root system 16316811 Mar 25 00:00 autoclstrcfgmonitor.out
-rw-r--r-- 1 root system 225013 Mar 24 15:52 autoverify.log
-rw------- 1 root system 224200 Mar 24 15:51 autoverify.log.1
-rw-r--r-- 1 root system 0 Feb 16 00:00 cl2siteconfig_assist.log
-rw-r--r-- 1 root system 0 Jan 24 2013 cl_event_summaries.txt
-rw-r--r-- 1 root system 0 Feb 16 00:00 cl_testtool.log
-rw-r--r-- 1 root system 5642 Mar 25 09:50 clavan.log
-rw-r--r-- 1 root system 0 Feb 16 00:00 clconfigassist.log
-rw-r--r-- 1 root system 1796 Mar 25 09:50 clevents
-rw-r--r-- 1 root system 2994 Mar 24 15:51 clevents.1
-rw-r--r-- 1 root system 10585 Mar 24 15:38 clevents.2
-rw-rw-rw- 1 root system 893 Mar 24 15:52 clevmgrdevents
-rw-rw-rw- 1 root system 893 Mar 24 15:46 clevmgrdevents.1
-rw-r--r-- 1 root system 20484 Mar 25 09:50 clinfo.log
-rw-r--r-- 1 root system 233 Mar 25 09:49 clinfo.rc.out
-rw-r--r-- 1 root system 466291 Mar 25 09:50 clstrmgr.debug
-rw-r--r-- 1 root system 510075 Mar 24 15:49 clstrmgr.debug.1
-rw-r--r-- 1 root system 2681 Mar 24 09:56 clstrmgr.debug.2
-rw-r--r-- 1 root system 2086 Mar 18 15:46 clstrmgr.debug.3
-rw-r--r-- 1 root system 1931 Mar 18 15:31 clstrmgr.debug.4
-rw-r--r-- 1 root system 2085 Mar 18 13:31 clstrmgr.debug.5
-rw-r--r-- 1 root system 2085 Mar 18 11:28 clstrmgr.debug.6
-rw-r--r-- 1 root system 1600 Oct 20 13:19 clstrmgr.debug.7
-rw-r--r-- 1 root system 757279 Mar 25 09:50 clstrmgr.debug.long
-rw-r--r-- 1 root system 116785 Mar 24 15:49 clstrmgr.debug.long.1
-rw-r--r-- 1 root system 1437 Mar 24 00:00 clstrmgr.debug.long.2
-rw-r--r-- 1 root system 815 Mar 18 15:34 clstrmgr.debug.long.3
-rw-r--r-- 1 root system 815 Mar 18 13:35 clstrmgr.debug.long.4
-rw-r--r-- 1 root system 815 Mar 18 11:32 clstrmgr.debug.long.5
-rw-r--r-- 1 root system 3667 Mar 18 11:03 clstrmgr.debug.long.6
-rw-r--r-- 1 root system 815 Oct 20 13:19 clstrmgr.debug.long.7
-rw------- 1 root system 58969 Mar 25 00:00 clutils.log
-rw-r--r-- 1 root system 9934 Mar 25 09:51 cspoc.log
-rw-r--r-- 1 root system 0 Feb 16 00:00 cspoc.log.long
-rw-r--r-- 1 root system 0 Feb 16 00:00 cspoc.log.remote
-rw-r--r-- 1 root system 0 Feb 16 00:00 dhcpsa.log
-rw-r--r-- 1 root system 0 Feb 16 00:00 dnssa.log
-rw-r--r-- 1 root system 0 Feb 16 00:00 domino_server.log
-rw-r--r-- 1 root system 0 Feb 16 00:00 emuhacmp.out
-rw-r--r-- 1 root system 0 Feb 16 00:00 filenetsa.log
-rw-r--r-- 1 root system 639346 Mar 25 09:50 hacmp.out
-rw-r--r-- 1 root system 870 Jan 23 2013 hacmp.out.1
-rw-r--r-- 1 root system 23493 Mar 25 09:50 hacmprd_run_rcovcmd.debug
-rw-r--r-- 1 root system 28095 Mar 24 15:49 hacmprd_run_rcovcmd.debug.1
-rw-r--r-- 1 root system 0 Feb 16 00:00 hswizard.log
-rw-r--r-- 1 root system 0 Feb 16 00:00 ihssa.log
-rw-r--r-- 1 root system 0 Mar 24 15:52 lsvg.err
-rw-r--r-- 1 root system 0 Feb 16 00:00 maxdbsa.log
-rw-r--r-- 1 root system 0 Mar 24 15:49 migration.log
-rw-r--r-- 1 root system 0 Feb 16 00:00 oraappsa.log
-rw-r--r-- 1 root system 0 Feb 16 00:00 oraclesa.log
-rw-r--r-- 1 root system 0 Feb 16 00:00 printServersa.log
-rw-r--r-- 1 root system 0 Feb 16 00:00 sa.log
-rw-r--r-- 1 root system 0 Feb 16 00:00 sapsa.log
-rw-r--r-- 1 root system 0 Feb 16 00:00 sax.log
-rw-r--r-- 1 root system 0 Feb 16 00:00 tsm_admin.log
-rw-r--r-- 1 root system 0 Feb 16 00:00 tsm_client.log
-rw-r--r-- 1 root system 0 Feb 16 00:00 tsm_server.log
-rw-r--r-- 1 root system 0 Feb 16 00:00 wmqsa.log
MFNBU01:/var/hacmp/log#more hacmp.out
Warning: There is no cluster found.
cllsclstr: No cluster defined.
cllsclstr: Error reading configuration.

Reference string: Wed.Mar.25.09:50:19.BEIST.2015.release_service_addr.All_service_addrs.MFNBU_RG.ref
+MFNBU_RG:release_service_addr[+190] clgetif -a MFNBU_SVR
+MFNBU_RG:release_service_addr[+190] LC_ALL=C
MFNBU_SVR: hostname not found.
+MFNBU_RG:release_service_addr[+191] return_code=1
+MFNBU_RG:release_service_addr[+192] [ 1 -ne 0 ]
+MFNBU_RG:release_service_addr[+196] [ 1 -eq 1 ]
+MFNBU_RG:release_service_addr[+196] [[ UNDEFINED != UNDEFINED ]]
+MFNBU_RG:release_service_addr[+201] export NSORDER=
+MFNBU_RG:release_service_addr[+201] [[ true = true ]]
+MFNBU_RG:release_service_addr[+205] cl_RMupdate resource_error MFNBU_SVR release_service_addr
2015-03-25T09:50:19.952857
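The trace above fails at `clgetif -a MFNBU_SVR`: the service label no longer resolves to an address, so `release_service_addr` reports a resource error. The same sanity check can be sketched in plain shell against an /etc/hosts-style file (the sample entries below are illustrative, not taken from the cluster):

```shell
#!/bin/sh
# Check that a service IP label resolves to the expected address in an
# /etc/hosts-style file. Sample data is illustrative only: the label was
# shortened to "MFNBU", which is exactly the failure seen above.
HOSTS_FILE=$(mktemp)
cat > "$HOSTS_FILE" <<'EOF'
2.43.192.136   MFNBU01_bt1
40.43.192.138  MFNBU
EOF

LABEL=MFNBU_SVR
EXPECTED_IP=40.43.192.138

# awk: print the address of any line whose 2nd-or-later field matches the label
FOUND_IP=$(awk -v l="$LABEL" '{for (i=2; i<=NF; i++) if ($i==l) print $1}' "$HOSTS_FILE")

if [ -z "$FOUND_IP" ]; then
    echo "label $LABEL not found"     # prints: label MFNBU_SVR not found
elif [ "$FOUND_IP" != "$EXPECTED_IP" ]; then
    echo "label $LABEL maps to $FOUND_IP, expected $EXPECTED_IP"
else
    echo "label $LABEL ok"
fi
rm -f "$HOSTS_FILE"
```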

MFNBU01:/var/hacmp/log#smit hacmp
PowerHA SystemMirror

Move cursor to desired item and press Enter.

Cluster Nodes and Networks
Cluster Applications and Resources

System Management (C-SPOC)
Problem Determination Tools
Custom Cluster Configuration

Can't find what you are looking for ?

F1=Help F2=Refresh F3=Cancel Esc+8=Image

Cluster Applications and Resources

Move cursor to desired item and press Enter.

Make Applications Highly Available (Use Smart Assists)
Resources
Resource Groups

Verify and Synchronize Cluster Configuration

F1=Help F2=Refresh F3=Cancel Esc+8=Image

Resource Groups

Move cursor to desired item and press Enter.

Add a Resource Group
Change/Show Nodes and Policies for a Resource Group
Change/Show Resources and Attributes for a Resource Group
Remove a Resource Group
Configure Resource Group Run-Time Policies
Show All Resources by Node or Resource Group

Verify and Synchronize Cluster Configuration

F1=Help F2=Refresh F3=Cancel Esc+8=Image

Change/Show Resources and Attributes for a Resource Group

Move cursor to desired item and press Enter.

F1=Help F2=Refresh F3=Cancel
Esc+8=Image Esc+0=Exit Enter=Do

Processing data ...
Change/Show All Resources and Attributes for a Resource Group

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

[Entry Fields]
Resource Group Name MFNBU_RG
Participating Nodes (Default Node Priority) MFNBU01 MFNBU02

Startup Policy Online On Home Node Only
Fallover Policy Fallover To Next Priority Node In The List
Fallback Policy Never Fallback

Service IP Labels/Addresses [MFNBU_SVR] +
Application Controllers [MFNBU_AS] +

Volume Groups [vgnbu ] +
Use forced varyon of volume groups, if necessary false +
Automatically Import Volume Groups false +

Filesystems (empty is ALL for VGs specified) [/opt/VRTSnbu ] +
Filesystems Consistency Check fsck +
Filesystems Recovery Method sequential +
Filesystems mounted before IP configured false +

Filesystems/Directories to Export (NFSv2/3) [] +
Filesystems/Directories to Export (NFSv4) [] +
Stable Storage Path (NFSv4) [] +
Filesystems/Directories to NFS Mount []
Network For NFS Mount [] +

Tape Resources [] +
Raw Disk PVIDs [] +

Primary Workload Manager Class [] +
Secondary Workload Manager Class [] +

Miscellaneous Data []
WPAR Name [] +
F1=Help F2=Refresh F3=Cancel F4=List
Esc+5=Reset Esc+6=Command Esc+8=Image
COMMAND STATUS

Command: running stdout: yes stderr: no

Before command completion, additional instructions may appear below.

Service label 'MFNBU' did not pass validation.
Please verify that this service label is properly
defined within the cluster configuration.

COMMAND STATUS

Command: failed stdout: yes stderr: no

Before command completion, additional instructions may appear below.

Service label 'MFNBU' did not pass validation.
Please verify that this service label is properly
defined within the cluster configuration.

F1=Help F2=Refresh F3=Cancel Esc+6=Command
Esc+8=Image Esc+9=Shell Esc+0=Exit /=Find
Change/Show All Resources and Attributes for a Resource Group

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

[Entry Fields]
Resource Group Name MFNBU_RG
Participating Nodes (Default Node Priority) MFNBU01 MFNBU02

Startup Policy Online On Home Node Only
Fallover Policy Fallover To Next Priority Node In The List
Fallback Policy Never Fallback

Service IP Labels/Addresses [MFNBU] +
Application Controllers [MFNBU_AS] +

Volume Groups [vgnbu ] +
Use forced varyon of volume groups, if necessary false +
Automatically Import Volume Groups false +

Filesystems (empty is ALL for VGs specified) [/opt/VRTSnbu ] +
Filesystems Consistency Check fsck +
Filesystems Recovery Method sequential +
Filesystems mounted before IP configured false +

Filesystems/Directories to Export (NFSv2/3) [] +
Filesystems/Directories to Export (NFSv4) [] +
Stable Storage Path (NFSv4) [] +
Filesystems/Directories to NFS Mount []
Network For NFS Mount [] +

Tape Resources [] +
Raw Disk PVIDs [] +

Primary Workload Manager Class [] +
Secondary Workload Manager Class [] +

Miscellaneous Data []
WPAR Name [] +
F1=Help F2=Refresh F3=Cancel F4=List
Esc+5=Reset Esc+6=Command Esc+7=Edit Esc+8=Image
Resource Groups

Move cursor to desired item and press Enter.

Add a Resource Group
Change/Show Nodes and Policies for a Resource Group
Change/Show Resources and Attributes for a Resource Group
Remove a Resource Group
Configure Resource Group Run-Time Policies
Show All Resources by Node or Resource Group

Verify and Synchronize Cluster Configuration

Learn more about Resource Groups

Cluster Applications and Resources

Move cursor to desired item and press Enter.

Make Applications Highly Available (Use Smart Assists)
Resources
Resource Groups

Verify and Synchronize Cluster Configuration

PowerHA SystemMirror

Move cursor to desired item and press Enter.

Cluster Nodes and Networks
Cluster Applications and Resources

System Management (C-SPOC)
Problem Determination Tools
Custom Cluster Configuration

Can't find what you are looking for ?
MFNBU01:/var/hacmp/log#lsha
Current state: ST_RP_FAILED
sccsid = "@(#)36 1.135.7.2 src/43haes/usr/sbin/cluster/hacmprd/main.C, hacmp.pe, 61haes_r711, 1225A_hacmp711 5/22/12 11:46:10"
build = "Dec 5 2012 11:50:45 1241C_hacmp711"
i_local_nodeid 0, i_local_siteid -1, my_handle 1
ml_idx[1]=0 ml_idx[2]=1
tp is 20433718
Events on event queue:
te_type 4, te_nodeid 1, te_network -1
te_type 4, te_nodeid 2, te_network -1
te_type 36, te_nodeid 1, te_network 1
te_type 11, te_nodeid 1, te_network -1
te_type 11, te_nodeid 2, te_network -1
There are 0 events on the Ibcast queue
There are 0 events on the RM Ibcast queue
CLversion: 13
local node vrmf is 7114
cluster fix level is "4"
The following timer(s) are currently active:
Event error node list: MFNBU01
Current DNP values
DNP Values for NodeId - 1 NodeName - MFNBU01
PgSpFree = 4190764 PvPctBusy = 0 PctTotalTimeIdle = 99.886272
DNP Values for NodeId - 2 NodeName - MFNBU02
PgSpFree = 4190430 PvPctBusy = 0 PctTotalTimeIdle = 99.854695
trcOn 0, kTraceOn 0, stopTraceOnExit 0, cdNodeOn 0
Last event run was FAIL_NODE on node 2
MFNBU01:/var/hacmp/log#clRGinfo
-----------------------------------------------------------------------------
Group Name State Node
-----------------------------------------------------------------------------
MFNBU_RG ERROR MFNBU01
OFFLINE MFNBU02

MFNBU01:/var/hacmp/log#lsha
Current state: ST_RP_FAILED
sccsid = "@(#)36 1.135.7.2 src/43haes/usr/sbin/cluster/hacmprd/main.C, hacmp.pe, 61haes_r711, 1225A_hacmp711 5/22/12 11:46:10"
build = "Dec 5 2012 11:50:45 1241C_hacmp711"
i_local_nodeid 0, i_local_siteid -1, my_handle 1
ml_idx[1]=0 ml_idx[2]=1
tp is 20433718
Events on event queue:
te_type 4, te_nodeid 1, te_network -1
te_type 4, te_nodeid 2, te_network -1
te_type 36, te_nodeid 1, te_network 1
te_type 11, te_nodeid 1, te_network -1
te_type 11, te_nodeid 2, te_network -1
There are 0 events on the Ibcast queue
There are 0 events on the RM Ibcast queue
CLversion: 13
local node vrmf is 7114
cluster fix level is "4"
The following timer(s) are currently active:
Event error node list: MFNBU01
Current DNP values
DNP Values for NodeId - 1 NodeName - MFNBU01
PgSpFree = 4190764 PvPctBusy = 0 PctTotalTimeIdle = 99.886272
DNP Values for NodeId - 2 NodeName - MFNBU02
PgSpFree = 4190430 PvPctBusy = 0 PctTotalTimeIdle = 99.854695
trcOn 0, kTraceOn 0, stopTraceOnExit 0, cdNodeOn 0
Last event run was FAIL_NODE on node 2
MFNBU01:/var/hacmp/log#lssrc -g cluster
Subsystem Group PID Status
clstrmgrES cluster 16187542 active
clinfoES cluster 15532074 active
MFNBU01:/var/hacmp/log#stopsrc -g cluster
0513-044 The clstrmgrES Subsystem was requested to stop.
0513-044 The clinfoES Subsystem was requested to stop.
MFNBU01:/var/hacmp/log#lssrc -g cluster
Subsystem Group PID Status
clstrmgrES cluster 16187542 stopping
MFNBU01:/var/hacmp/log#clRGinfo
-----------------------------------------------------------------------------
Group Name State Node
-----------------------------------------------------------------------------
MFNBU_RG ERROR MFNBU01
OFFLINE MFNBU02

MFNBU01:/var/hacmp/log#lsha
Current state: ST_RP_FAILED
sccsid = "@(#)36 1.135.7.2 src/43haes/usr/sbin/cluster/hacmprd/main.C, hacmp.pe, 61haes_r711, 1225A_hacmp711 5/22/12 11:46:10"
build = "Dec 5 2012 11:50:45 1241C_hacmp711"
i_local_nodeid 0, i_local_siteid -1, my_handle 1
ml_idx[1]=0 ml_idx[2]=1
tp is 20433718
Events on event queue:
te_type 4, te_nodeid 1, te_network -1
te_type 4, te_nodeid 2, te_network -1
te_type 36, te_nodeid 1, te_network 1
te_type 11, te_nodeid 1, te_network -1
te_type 11, te_nodeid 2, te_network -1
There are 0 events on the Ibcast queue
There are 0 events on the RM Ibcast queue
CLversion: 13
local node vrmf is 7114
cluster fix level is "4"
The following timer(s) are currently active:
Event error node list: MFNBU01
Current DNP values
DNP Values for NodeId - 1 NodeName - MFNBU01
PgSpFree = 4190764 PvPctBusy = 0 PctTotalTimeIdle = 99.886272
DNP Values for NodeId - 2 NodeName - MFNBU02
PgSpFree = 4190430 PvPctBusy = 0 PctTotalTimeIdle = 99.854695
trcOn 0, kTraceOn 0, stopTraceOnExit 0, cdNodeOn 0
Last event run was FAIL_NODE on node 2
MFNBU01:/var/hacmp/log#lssrc -g cluster
Subsystem Group PID Status
clstrmgrES cluster 16187542 stopping
MFNBU01:/var/hacmp/log#lssrc -g cluster
Subsystem Group PID Status
clstrmgrES cluster 16187542 stopping
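The repeated `lssrc -g cluster` calls above are waiting for clstrmgrES to drain out of the `stopping` state; the `lsha` output likewise begins with a `Current state:` line. A minimal polling sketch of that wait loop follows; `get_state` is a self-contained stand-in (an assumption for illustration) for something like `lssrc -ls clstrmgrES | grep 'Current state'`, walking a canned state sequence so the script runs anywhere:

```shell
#!/bin/sh
# Poll the cluster-manager state until it reaches a target or we give up.
# get_state mocks the real query with a canned sequence of states.
STATES="ST_RP_FAILED ST_RP_FAILED ST_INIT"
i=1
get_state() {
    echo "$STATES" | cut -d' ' -f"$i"
}

TARGET=ST_INIT
TRIES=5
while [ "$TRIES" -gt 0 ]; do
    s=$(get_state)
    echo "current state: $s"
    [ "$s" = "$TARGET" ] && break
    i=$((i + 1))
    TRIES=$((TRIES - 1))
    # sleep 10   # a real polling interval would go here
done
if [ "$s" = "$TARGET" ]; then
    echo "cluster manager stopped"
else
    echo "timed out"
fi
```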
MFNBU01:/var/hacmp/log#clRGinfo
-----------------------------------------------------------------------------
Group Name State Node
-----------------------------------------------------------------------------
MFNBU_RG ERROR MFNBU01
OFFLINE MFNBU02

MFNBU01:/var/hacmp/log#lssrc -g cluster
Subsystem Group PID Status
clstrmgrES cluster 16187542 stopping
MFNBU01:/var/hacmp/log#cldump

Obtaining information via SNMP from Node: MFNBU01…

_____________________________________________________________________________
Cluster Name: MFNBU_clu
Cluster State: UP
Cluster Substate: STABLE
_____________________________________________________________________________

Node Name: MFNBU01 State: UP

Network Name: net_ether_01 State: UP

Address: 2.43.192.136 Label: MFNBU01_bt1 State: UP
Address: 3.43.192.136 Label: MFNBU01_bt2 State: UP
Address: 40.43.192.138 Label: MFNBU_SVR State: UP

Node Name: MFNBU02 State: UP

Network Name: net_ether_01 State: UP

Address: 2.43.192.137 Label: MFNBU02_bt1 State: UP
Address: 3.43.192.137 Label: MFNBU02_bt2 State: UP

Cluster Name: MFNBU_clu

Resource Group Name: MFNBU_RG
Startup Policy: Online On Home Node Only
Fallover Policy: Fallover To Next Priority Node In The List
Fallback Policy: Never Fallback
Site Policy: ignore
Node Group State
---------------------------- ---------------
MFNBU01 ERROR
MFNBU02 OFFLINE
MFNBU01:/var/hacmp/log#cldisp
Cluster: MFNBU_clu
Cluster services: active
State of cluster: up
Substate: stable

#############
APPLICATIONS
#############
Cluster MFNBU_clu provides the following applications: MFNBU_AS
Application: MFNBU_AS
MFNBU_AS is started by /hacmp/MFNBU_start.sh
MFNBU_AS is stopped by /hacmp/MFNBU_stop.sh
No application monitors are configured for MFNBU_AS.
This application is part of resource group 'MFNBU_RG'.
Resource group policies:
Startup: on home node only
Fallover: to next priority node in the list
Fallback: never
State of MFNBU_AS: error
Nodes configured to provide MFNBU_AS: MFNBU01 {up} MFNBU02 {up}
Resources associated with MFNBU_AS:
Service Labels
MFNBU_SVR(40.43.192.138)
Interfaces configured to provide MFNBU_SVR:
MFNBU01_bt1
with IP address: 2.43.192.136
on interface: en4
on node: MFNBU01
on network: net_ether_01
MFNBU01_bt2
with IP address: 3.43.192.136
on interface: en8
on node: MFNBU01
on network: net_ether_01
MFNBU02_bt2
with IP address: 3.43.192.137
on interface: en8
on node: MFNBU02
on network: net_ether_01
MFNBU02_bt1
with IP address: 2.43.192.137
on interface: en4
on node: MFNBU02
on network: net_ether_01
Shared Volume Groups:
vgnbu

#############
TOPOLOGY
#############
MFNBU_clu consists of the following nodes: MFNBU01 MFNBU02
MFNBU01
Network interfaces:
MFNBU01_bt1
with IP address: 2.43.192.136
on interface: en4
on network: net_ether_01
MFNBU01_bt2
with IP address: 3.43.192.136
on interface: en8
on network: net_ether_01
MFNBU02
Network interfaces:
MFNBU02_bt2
with IP address: 3.43.192.137
on interface: en8
on network: net_ether_01
MFNBU02_bt1
with IP address: 2.43.192.137
on interface: en4
on network: net_ether_01
MFNBU01:/var/hacmp/log#cltopinfo
Cluster Name: MFNBU_clu
Cluster Connection Authentication Mode: Standard
Cluster Message Authentication Mode: None
Cluster Message Encryption: None
Use Persistent Labels for Communication: No
Repository Disk: hdisk11
Cluster IP Address: 238.43.192.138
There are 2 node(s) and 1 network(s) defined

NODE MFNBU01:
Network net_ether_01
MFNBU_SVR 40.43.192.138
MFNBU01_bt1 2.43.192.136
MFNBU01_bt2 3.43.192.136

NODE MFNBU02:
Network net_ether_01
MFNBU_SVR 40.43.192.138
MFNBU02_bt2 3.43.192.137
MFNBU02_bt1 2.43.192.137

Resource Group MFNBU_RG
Startup Policy Online On Home Node Only
Fallover Policy Fallover To Next Priority Node In The List
Fallback Policy Never Fallback
Participating Nodes MFNBU01 MFNBU02
Service IP Label MFNBU_SVR
MFNBU01:/var/hacmp/log#clshowsrv -a
Subsystem Group PID Status
clstrmgrES cluster 16187542 stopping
clinfoES cluster inoperative
clcomd caa 14549194 active
MFNBU01:/var/hacmp/log#cllsserv
MFNBU_AS /hacmp/MFNBU_start.sh /hacmp/MFNBU_stop.sh background
MFNBU01:/var/hacmp/log#clconfig -v -O
ksh: clconfig: not found.
MFNBU01:/var/hacmp/log#/usr/es/sbin/cluster/diag/clconfig -v -O
HACMPnode ODM on node MFNBU02 verified.

HACMPnetwork ODM on node MFNBU02 verified.

HACMPcluster ODM on node MFNBU02 verified.

HACMPnim ODM on node MFNBU02 verified.

HACMPadapter ODM on node MFNBU02 verified.

HACMPtopsvcs ODM on node MFNBU02 verified.

HACMPsite ODM on node MFNBU02 verified.

HACMPsircol ODM on node MFNBU02 verified.

HACMPeventmgr ODM on node MFNBU02 verified.

HACMPnode ODM on node MFNBU02 verified.

HACMPgroup ODM on node MFNBU02 verified.

HACMPresource ODM on node MFNBU02 verified.

HACMPserver ODM on node MFNBU02 verified.

HACMPcommadapter ODM on node MFNBU02 verified.

HACMPcommlink ODM on node MFNBU02 verified.

HACMPx25 ODM on node MFNBU02 verified.

HACMPsna ODM on node MFNBU02 verified.

HACMPevent ODM on node MFNBU02 verified.

HACMPcustom ODM on node MFNBU02 verified.

HACMPlogs ODM on node MFNBU02 verified.

HACMPtape ODM on node MFNBU02 verified.

HACMPmonitor ODM on node MFNBU02 verified.

HACMPpager ODM on node MFNBU02 verified.

HACMPport ODM on node MFNBU02 verified.

HACMPnpp ODM on node MFNBU02 verified.

HACMPude ODM on node MFNBU02 verified.

HACMPdisksubsys ODM on node MFNBU02 verified.

HACMPpprc ODM on node MFNBU02 verified.

HACMPpairtasks ODM on node MFNBU02 verified.

HACMPpathtasks ODM on node MFNBU02 verified.

HACMPercmf ODM on node MFNBU02 verified.

HACMPercmfglobals ODM on node MFNBU02 verified.

HACMPtimer ODM on node MFNBU02 verified.

HACMPsiteinfo ODM on node MFNBU02 verified.

HACMPtimersvc ODM on node MFNBU02 verified.

HACMPfilecollection ODM on node MFNBU02 verified.

HACMPfcfile ODM on node MFNBU02 verified.

HACMPrgdependency ODM on node MFNBU02 verified.

HACMPrg_loc_dependency ODM on node MFNBU02 verified.

HACMPsvc ODM on node MFNBU02 verified.

HACMPsvcpprc ODM on node MFNBU02 verified.

HACMPsvcrelationship ODM on node MFNBU02 verified.

HACMPsa_metadata ODM on node MFNBU02 verified.

HACMPcsserver ODM on node MFNBU02 verified.

HACMPoemfsmethods ODM on node MFNBU02 verified.

HACMPoemvgmethods ODM on node MFNBU02 verified.

HACMPoemvolumegroup ODM on node MFNBU02 verified.

HACMPoemfilesystem ODM on node MFNBU02 verified.

HACMPdisktype ODM on node MFNBU02 verified.

HACMPpprcconsistgrp ODM on node MFNBU02 verified.

HACMPsr ODM on node MFNBU02 verified.

HACMPtc ODM on node MFNBU02 verified.

HACMPras ODM on node MFNBU02 verified.

HACMPresourcetype ODM on node MFNBU02 verified.

HACMPudresource ODM on node MFNBU02 verified.

HACMPudres_def ODM on node MFNBU02 verified.

HACMPLDAP ODM on node MFNBU02 verified.

Verification to be performed on the following:
Cluster Topology
Cluster Resources

Retrieving data from available cluster nodes. This could take a few minutes.

Start data collection on node MFNBU01
Start data collection on node MFNBU02
Collector on node MFNBU02 completed
Collector on node MFNBU01 completed
Data collection complete
WARNING: Cluster verification detected that some cluster components are
inactive. Please use the matrix below to verify the status of
inactive components:
Node: MFNBU01
Resource Group: MFNBU_RG State: ERROR

Verifying Cluster Topology…

Completed 10 percent of the verification checks
Completed 20 percent of the verification checks
A corrective action is available for the condition reported below:

ERROR: /etc/hosts on node MFNBU01 contains IP address '40.43.192.138', but it does
not map to IP label 'MFNBU_SVR'.

To correct the above condition, run verification & synchronization with
"Automatically correct errors found during verification?" set to either 'Yes'
or 'Interactive'. The cluster must be down for the corrective action to run.

Corrective actions can be enabled for Verification and Synchronization in the
PowerHA SystemMirror extended Verification and Synchronization SMIT fastpath "cl_sync".
Alternatively use the Initialization and Standard Configuration -> Verification
and Synchronization path where corrective actions are always executed in
interactive mode.
A corrective action is available for the condition reported below:

ERROR: /etc/hosts on node MFNBU02 contains IP address '40.43.192.138', but it does
not map to IP label 'MFNBU_SVR'.

To correct the above condition, run verification & synchronization with
"Automatically correct errors found during verification?" set to either 'Yes'
or 'Interactive'. The cluster must be down for the corrective action to run.

Corrective actions can be enabled for Verification and Synchronization in the
PowerHA SystemMirror extended Verification and Synchronization SMIT fastpath "cl_sync".
Alternatively use the Initialization and Standard Configuration -> Verification
and Synchronization path where corrective actions are always executed in
interactive mode.
Completed 30 percent of the verification checks
Saving existing /var/hacmp/clverify/ver_mping/ver_mping.log to /var/hacmp/clverify/ver_mping/ver_mping.log.bak
Verifying clcomd communication, please be patient.

Verifying multicast communication with mping.

Verifying Cluster Resources…

Completed 40 percent of the verification checks

WARNING: Application monitors are required for detecting application failures
in order for PowerHA SystemMirror to recover from them. Application monitors are started
by PowerHA SystemMirror when the resource group in which they participate is activated.
The following application(s), shown with their associated resource group,
do not have an application monitor configured:

Application Server Resource Group
——————————– ———————————
MFNBU_AS MFNBU_RG
Completed 50 percent of the verification checks
Completed 60 percent of the verification checks
Completed 70 percent of the verification checks
Completed 80 percent of the verification checks
WARNING: MFNBU01 has an active aliased service IP label MFNBU_SVR
attached to physical interface: en8. This service IP label
is part of resource group MFNBU_RG, which is in the OFFLINE state
on node MFNBU01.
WARNING:
To ensure PowerHA SystemMirror brings the resource group online on this node:
1. It is recommended that you start cluster services on
this node using the SMIT option to "Manage Resource Groups: Manually".
2. Once cluster services are running on this node, then use
SMIT to bring the resource group ONLINE on this node.

To bring the resource group online on another node, you must:
1. Manually bring the resources offline.
2. Using SMIT, move the resource group to the node of your choosing.
Completed 90 percent of the verification checks
Completed 100 percent of the verification checks
WARNING: Node MFNBU01 has cluster.es.nfs.rte installed however grace periods are not fully enabled on this node. Grace periods must be enabled before NFSv4 stable storage can be used.

PowerHA SystemMirror will attempt to fix this opportunistically when acquiring NFS resources on this node however the change won't take effect until the next time that nfsd is started.

If this warning persists, the administrator should perform the following steps to enable grace periods on MFNBU01 at the next planned downtime:
1. stopsrc -s nfsd
2. smitty nfsgrcperiod
3. startsrc -s nfsd

WARNING: Node MFNBU02 has cluster.es.nfs.rte installed however grace periods are not fully enabled on this node. Grace periods must be enabled before NFSv4 stable storage can be used.

PowerHA SystemMirror will attempt to fix this opportunistically when acquiring NFS resources on this node however the change won't take effect until the next time that nfsd is started.

If this warning persists, the administrator should perform the following steps to enable grace periods on MFNBU02 at the next planned downtime:
1. stopsrc -s nfsd
2. smitty nfsgrcperiod
3. startsrc -s nfsd

Verification exiting with error count: 2

clconfig: Verification determined 2 error(s) occurred. Please correct
any errors and retry.
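The two verification errors both say the same thing: /etc/hosts on each node contains 40.43.192.138 but does not map it to the label MFNBU_SVR. Besides letting the corrective action fix it, the entry can be patched by hand; the sketch below works on a scratch copy (the sample file content is illustrative, only the IP and label come from the errors above):

```shell
#!/bin/sh
# Append the missing IP label to the matching /etc/hosts entry (on a copy).
TMP=$(mktemp)
cat > "$TMP" <<'EOF'
127.0.0.1      loopback
40.43.192.138  MFNBU
EOF

IP=40.43.192.138
LABEL=MFNBU_SVR

# Add LABEL to the line whose first field is IP, only if it is not already there
awk -v ip="$IP" -v l="$LABEL" '
    $1 == ip { found = 0
               for (i = 2; i <= NF; i++) if ($i == l) found = 1
               if (!found) $0 = $0 " " l }
    { print }' "$TMP" > "$TMP.new"

grep "^$IP" "$TMP.new"    # the entry now carries both MFNBU and MFNBU_SVR
rm -f "$TMP" "$TMP.new"
```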
MFNBU01:/var/hacmp/log#clcycle
0513-095 The request for subsystem refresh was completed successfully.
MFNBU01:/var/hacmp/log#lssrc -g cluster
Subsystem Group PID Status
clstrmgrES cluster 16187542 stopping
MFNBU01:/var/hacmp/log#clrsh
Usage: clrsh [-p] [-n] host command
MFNBU01:/var/hacmp/log#odmget HACMPlogs

HACMPlogs:
name = "clstrmgr.debug"
description = "Generated by the clstrmgr daemon"
defaultdir = "/var/hacmp/log"
value = "/var/hacmp/log"
rfs = ""

HACMPlogs:
name = "cluster.log"
description = "Generated by cluster scripts and daemons"
defaultdir = "/var/hacmp/adm"
value = "/var/hacmp/adm"
rfs = ""

HACMPlogs:
name = "cluster.mmddyyyy"
description = "Cluster history files generated daily"
defaultdir = "/var/hacmp/adm/history/"
value = "/var/hacmp/adm/history"
rfs = ""

HACMPlogs:
name = "cspoc.log"
description = "Generated by CSPOC commands"
defaultdir = "/var/hacmp/log"
value = "/var/hacmp/log"
rfs = ""

HACMPlogs:
name = "emuhacmp.out"
description = "Generated by the event emulator scripts"
defaultdir = "/var/hacmp/log"
value = "/var/hacmp/log"
rfs = ""

HACMPlogs:
name = "hacmp.out"
description = "Generated by event scripts and utilities"
defaultdir = "/var/hacmp/log"
value = "/var/hacmp/log"
rfs = ""

HACMPlogs:
name = "clavan.log"
description = "Generated by Application Availability Analysis tool"
defaultdir = "/var/hacmp/log"
value = "/var/hacmp/log"
rfs = ""

HACMPlogs:
name = "clverify.log"
description = "Generated by Cluster Verification utility"
defaultdir = "/var/hacmp/clverify"
value = "/var/hacmp/clverify"
rfs = ""

HACMPlogs:
name = "clcomd.log"
description = "Generated by clcomd daemon"
defaultdir = "/var/log/clcomd"
value = "/var/log/clcomd"
rfs = ""

HACMPlogs:
name = "clcomddiag.log"
description = "Generated by clcomd daemon, debug information"
defaultdir = "/var/log/clcomd"
value = "/var/log/clcomd"
rfs = ""

HACMPlogs:
name = "clconfigassist.log"
description = "Generated by Two-Node Cluster Configuration Assistant"
defaultdir = "/var/hacmp/log"
value = "/var/hacmp/log"
rfs = ""

HACMPlogs:
name = "cl2siteconfig_assist.log"
description = "Generated by Two-Site Cluster Configuration Assistant"
defaultdir = "/var/hacmp/log"
value = "/var/hacmp/log"
rfs = ""

HACMPlogs:
name = "clutils.log"
description = "Generated by cluster utilities and file propagation"
defaultdir = "/var/hacmp/log"
value = "/var/hacmp/log"
rfs = ""

HACMPlogs:
name = "cl_testtool.log"
description = "Generated by the Cluster Test Tool"
defaultdir = "/var/hacmp/log"
value = "/var/hacmp/log"
rfs = ""

HACMPlogs:
name = "autoverify.log"
description = "Generated by Auto Verify and Synchronize"
defaultdir = "/var/hacmp/log"
value = "/var/hacmp/log"
rfs = ""

HACMPlogs:
name = "sa.log"
description = "Generated by Application Discovery"
defaultdir = "/var/hacmp/log"
value = "/var/hacmp/log"
rfs = ""

HACMPlogs:
name = "clstrmgr.debug.long"
description = "Detail information from the clstrmgr daemon"
defaultdir = "/var/hacmp/log"
value = "/var/hacmp/log"
rfs = ""

HACMPlogs:
name = "cspoc.log.long"
description = "Detail information from CSPOC commands"
defaultdir = "/var/hacmp/log"
value = "/var/hacmp/log"
rfs = ""

HACMPlogs:
name = "cspoc.log.remote"
description = "Generated by remote node running CSPOC commands"
defaultdir = "/var/hacmp/log"
value = "/var/hacmp/log"
rfs = ""

HACMPlogs:
name = "clinfo.log"
description = "Generated by client node running clinfo"
defaultdir = "/var/hacmp/log"
value = "/var/hacmp/log"
rfs = ""

HACMPlogs:
name = "migration.log"
description = "Generated by cluster migration"
defaultdir = "/var/hacmp/log"
value = "/var/hacmp/log"
rfs = ""

HACMPlogs:
name = "dnssa.log"
description = "DNS Smart Assist Log"
defaultdir = "/var/hacmp/log"
value = "/var/hacmp/log"
rfs = ""

HACMPlogs:
name = "dhcpsa.log"
description = "DHCP Smart Assist Log"
defaultdir = "/var/hacmp/log"
value = "/var/hacmp/log"
rfs = ""

HACMPlogs:
name = "domino_server.log"
description = "Domino server Log"
defaultdir = "/var/hacmp/log"
value = "/var/hacmp/log"
rfs = ""

HACMPlogs:
name = "sax.log"
description = "Generated by utilities which serves director requests"
defaultdir = "/var/hacmp/log"
value = "/var/hacmp/log"
rfs = ""

HACMPlogs:
name = "ihssa.log"
description = "Smart Assist for IBM HTTP Server Log"
defaultdir = "/var/hacmp/log"
value = "/var/hacmp/log"
rfs = ""

HACMPlogs:
name = "maxdbsa.log"
description = "Smart Assist for MaxDB Log"
defaultdir = "/var/hacmp/log"
value = "/var/hacmp/log"
rfs = ""

HACMPlogs:
name = "clevents"
description = "Events log for Director Interface"
defaultdir = "/var/hacmp/log"
value = "/var/hacmp/log"
rfs = ""

HACMPlogs:
name = "hswizard.log"
description = "SAP liveCache Hot Standby Configiuration Wizard Log"
defaultdir = "/var/hacmp/log"
value = "/var/hacmp/log"
rfs = ""

HACMPlogs:
name = "wmqsa.log"
description = "MQ Series Smart Assist log"
defaultdir = "/var/hacmp/log"
value = "/var/hacmp/log"
rfs = ""
MFNBU01:/var/hacmp/log#clstop -f -N MFNBU01

Broadcast message from root@MFNBU01 (tty) at 10:10:09 …

PowerHA SystemMirror on MFNBU01 shutting down.

Please exit any cluster applications…
MFNBU01:/var/hacmp/log#clRGinfo
-----------------------------------------------------------------------------
Group Name                     State            Node
-----------------------------------------------------------------------------
MFNBU_RG                       ERROR            MFNBU01
                               OFFLINE          MFNBU02

MFNBU01:/var/hacmp/log#lssrc -g cluster
Subsystem Group PID Status
clstrmgrES cluster 16187542 stopping
MFNBU01:/var/hacmp/log#lsha
Current state: ST_RP_FAILED
sccsid = “@(#)36 1.135.7.2 src/43haes/usr/sbin/cluster/hacmprd/main.C, hacmp.pe, 61haes_r711, 1225A_hacmp711 5/22/12 11:46:10”
build = “Dec 5 2012 11:50:45 1241C_hacmp711”
i_local_nodeid 0, i_local_siteid -1, my_handle 1
ml_idx[1]=0 ml_idx[2]=1
tp is 20433718
Events on event queue:
te_type 4, te_nodeid 1, te_network -1
te_type 4, te_nodeid 2, te_network -1
te_type 36, te_nodeid 1, te_network 1
te_type 11, te_nodeid 1, te_network -1
te_type 11, te_nodeid 2, te_network -1
There are 0 events on the Ibcast queue
There are 0 events on the RM Ibcast queue
CLversion: 13
local node vrmf is 7114
cluster fix level is “4”
The following timer(s) are currently active:
Event error node list: MFNBU01
Current DNP values
DNP Values for NodeId – 1 NodeName – MFNBU01
PgSpFree = 4190764 PvPctBusy = 0 PctTotalTimeIdle = 99.886272
DNP Values for NodeId – 2 NodeName – MFNBU02
PgSpFree = 4190430 PvPctBusy = 0 PctTotalTimeIdle = 99.854695
trcOn 0, kTraceOn 0, stopTraceOnExit 0, cdNodeOn 0
Last event run was FAIL_NODE on node 2
MFNBU01:/var/hacmp/log#lssrc -g cluster
Subsystem Group PID Status
clstrmgrES cluster 16187542 stopping
MFNBU01:/var/hacmp/log#cllslv
MFNBU_RG lvdiskstu
MFNBU_RG lvnbu
MFNBU_RG caalv_private1
MFNBU_RG caalv_private2
MFNBU_RG caalv_private3
MFNBU_RG powerha_crlv
MFNBU01:/var/hacmp/log#stopsrc -s clstrmgrES
0513-006 The Subsystem, clstrmgrES, is currently stopping its execution.
MFNBU01:/var/hacmp/log#lssrc -s clstrmgrES
Subsystem Group PID Status
clstrmgrES cluster 16187542 stopping
MFNBU01:/var/hacmp/log#lsha
Current state: ST_RP_FAILED
sccsid = “@(#)36 1.135.7.2 src/43haes/usr/sbin/cluster/hacmprd/main.C, hacmp.pe, 61haes_r711, 1225A_hacmp711 5/22/12 11:46:10”
build = “Dec 5 2012 11:50:45 1241C_hacmp711”
i_local_nodeid 0, i_local_siteid -1, my_handle 1
ml_idx[1]=0 ml_idx[2]=1
tp is 20433718
Events on event queue:
te_type 4, te_nodeid 1, te_network -1
te_type 4, te_nodeid 2, te_network -1
te_type 36, te_nodeid 1, te_network 1
te_type 11, te_nodeid 1, te_network -1
te_type 11, te_nodeid 2, te_network -1
There are 0 events on the Ibcast queue
There are 0 events on the RM Ibcast queue
CLversion: 13
local node vrmf is 7114
cluster fix level is “4”
The following timer(s) are currently active:
Event error node list: MFNBU01
Current DNP values
DNP Values for NodeId – 1 NodeName – MFNBU01
PgSpFree = 4190764 PvPctBusy = 0 PctTotalTimeIdle = 99.886272
DNP Values for NodeId – 2 NodeName – MFNBU02
PgSpFree = 4190430 PvPctBusy = 0 PctTotalTimeIdle = 99.854695
trcOn 0, kTraceOn 0, stopTraceOnExit 0, cdNodeOn 0
Last event run was FAIL_NODE on node 2

MFNBU01:/var/hacmp/log#stopsrc -f -s clstrmgrES
0513-006 The Subsystem, clstrmgrES, is currently stopping its execution.
MFNBU01:/var/hacmp/log#lssrc -s clstrmgrES
Subsystem Group PID Status
clstrmgrES cluster 16187542 stopping
MFNBU01:/var/hacmp/log#ps -ef|grep clstrmgrES
MFNBU01:/var/hacmp/log#ps -ef|grep cls
root 8847482 16187542 0 09:56:18 – 0:00 /usr/es/sbin/cluster/clstrmgr
root 16187542 4063468 0 15:49:15 – 0:02 /usr/es/sbin/cluster/clstrmgr
MFNBU01:/var/hacmp/log#kill -9 8847482
MFNBU01:/var/hacmp/log#ps -ef|grep cls
root 14680082 12582988 0 10:17:56 pts/1 0:00 grep cls
root 16187542 4063468 0 15:49:15 – 0:02 /usr/es/sbin/cluster/clstrmgr
MFNBU01:/var/hacmp/log#kill -9 16187542
MFNBU01:/var/hacmp/log#lssrc -s clstrmgrES
Subsystem Group PID Status
clstrmgrES cluster inoperative
MFNBU01:/var/hacmp/log#lsha
0513-036 The request could not be passed to the clstrmgrES subsystem.
Start the subsystem and try your command again.
MFNBU01:/var/hacmp/log#startsrc -s clstrmgrES
0513-059 The clstrmgrES Subsystem has been started. Subsystem PID is 16187546.
MFNBU01:/var/hacmp/log#lsha
Current state: ST_INIT
sccsid = “@(#)36 1.135.7.2 src/43haes/usr/sbin/cluster/hacmprd/main.C, hacmp.pe, 61haes_r711, 1225A_hacmp711 5/22/12 11:46:10”
build = “Dec 5 2012 11:50:45 1241C_hacmp711”
MFNBU01:/var/hacmp/log#smit hacmp

Starting Cluster Services on node: MFNBU01
This may take a few minutes. Please wait…
MFNBU01: start_cluster: Starting PowerHA SystemMirror
MFNBU01: 6750218 – 0:00 syslogd
MFNBU01: Setting routerevalidate to 1
MFNBU01: 0513-059 The clevmgrdES Subsystem has been started. Subsystem PID is 10551344.
MFNBU01: Mar 25 2015 10:23:31 Starting execution of /usr/es/sbin/cluster/etc/rc.cluster
MFNBU01: with parameters: -boot -N -A -C interactive -P cl_rc_cluster
[MORE…14]

WARNING: MFNBU01 has an active aliased service IP label MFNBU
attached to physical interface: en8. This service IP label
is part of resource group MFNBU_RG, which is in the OFFLINE state
on node MFNBU01.

WARNING: Application monitors are required for detecting application failures
in order for PowerHA SystemMirror to recover from them. Application monitors are started
by PowerHA SystemMirror when the resource group in which they participate is activated.
The following application(s), shown with their associated resource group,
do not have an application monitor configured:

Application Server Resource Group
--------------------------------  ---------------------------------
MFNBU_AS MFNBU_RG
WARNING: MFNBU01 has an active aliased service IP label MFNBU
attached to physical interface: en8. This service IP label
is part of resource group MFNBU_RG, which is in the OFFLINE state
on node MFNBU01.

Would you like to bring the resources of this resource group:
MFNBU_RG offline, so that you can then move the resource group
to a node other than node:MFNBU01 [Yes / No]:
Starting Corrective Action: cl_resource_resources_offline.
<01> Issued resource offline event to clean resources that belong to
resource group MFNBU_RG on node MFNBU01.
The output of the resource_offline event is logged in /tmp/hacmp.out.
WARNING: Node MFNBU01 has cluster.es.nfs.rte installed however grace periods are not fully enabled on this node. Grace periods must be enabled before NFSv4 stable storage can be used.

PowerHA SystemMirror will attempt to fix this opportunistically when acquiring NFS resources on this node however the change won't take effect until the next time that nfsd is started.

If this warning persists, the administrator should perform the following steps to enable grace periods on MFNBU01 at the next planned downtime:
1. stopsrc -s nfsd
2. smitty nfsgrcperiod
3. startsrc -s nfsd

WARNING: Node MFNBU02 has cluster.es.nfs.rte installed however grace periods are not fully enabled on this node. Grace periods must be enabled before NFSv4 stable storage can be used.

Resolved by finding the leftover processes with `ps -ef | grep clstrmgrES` and killing them.

ST_RP_FAILED ("event script failed") is normally caused by an event script error; the standard recovery is smit hacmp -> Problem Determination Tools -> Recover From HACMP Script Failure.
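
Before recovering, it helps to confirm which event script actually failed. A hedged sketch of locating the failure marker with grep; the log lines below are invented sample text standing in for the real /tmp/hacmp.out, not output captured from this cluster:

```shell
# Sample stand-in for /tmp/hacmp.out; on a live node you would grep the file itself.
hacmp_out='Mar 25 09:56:18 EVENT START: node_up MFNBU01
Mar 25 09:56:20 EVENT FAILED: 1: node_up MFNBU01 1'

# Pull out the failed-event line(s) to see which script to debug.
failed=$(echo "$hacmp_out" | grep 'EVENT FAILED')
echo "$failed"
```

On a real system the equivalent would be `grep 'EVENT FAILED' /tmp/hacmp.out`.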

Posted in AIX, Operating Systems | Tagged | Leave a comment

HACMP 7.1 cluster synchronization error: clodmget: Could not retrieve object for CuAt, odm errno 5904

MFNBU01:/tmp#clmgr sync cluster verify=yes fix=yes
Saving existing /var/hacmp/clverify/ver_mping/ver_mping.log to /var/hacmp/clverify/ver_mping/ver_mping.log.bak
Verifying clcomd communication, please be patient.

Verifying multicast communication with mping.

Committing any changes, as required, to all available nodes…
Adding any necessary PowerHA SystemMirror entries to /etc/inittab and /etc/rc.net for IPAT on node MFNBU01.

Verification to be performed on the following:
Cluster Topology
Cluster Resources

Verification will automatically correct verification errors.

Retrieving data from available cluster nodes. This could take a few minutes.

Start data collection on node MFNBU01
Start data collection on node MFNBU02
Collector on node MFNBU02 completed
Collector on node MFNBU01 completed
Data collection complete

Verifying Cluster Topology…

Completed 10 percent of the verification checks
Completed 20 percent of the verification checks
Completed 30 percent of the verification checks

Verifying Cluster Resources…

Completed 40 percent of the verification checks

WARNING: Application monitors are required for detecting application failures
in order for PowerHA SystemMirror to recover from them. Application monitors are started
by PowerHA SystemMirror when the resource group in which they participate is activated.
The following application(s), shown with their associated resource group,
do not have an application monitor configured:

Application Server Resource Group
--------------------------------  ---------------------------------
MFNBU_AS MFNBU_RG
Completed 50 percent of the verification checks
Completed 60 percent of the verification checks
Completed 70 percent of the verification checks
Completed 80 percent of the verification checks
Completed 90 percent of the verification checks
Completed 100 percent of the verification checks
WARNING: Node MFNBU01 has cluster.es.nfs.rte installed however grace periods are not fully enabled on this node. Grace periods must be enabled before NFSv4 stable storage can be used.

PowerHA SystemMirror will attempt to fix this opportunistically when acquiring NFS resources on this node however the change won’t take effect until the next time that nfsd is started.

If this warning persists, the administrator should perform the following steps to enable grace periods on MFNBU01 at the next planned downtime:
1. stopsrc -s nfsd
2. smitty nfsgrcperiod
3. startsrc -s nfsd

WARNING: Node MFNBU02 has cluster.es.nfs.rte installed however grace periods are not fully enabled on this node. Grace periods must be enabled before NFSv4 stable storage can be used.

PowerHA SystemMirror will attempt to fix this opportunistically when acquiring NFS resources on this node however the change won’t take effect until the next time that nfsd is started.

If this warning persists, the administrator should perform the following steps to enable grace periods on MFNBU02 at the next planned downtime:
1. stopsrc -s nfsd
2. smitty nfsgrcperiod
3. startsrc -s nfsd

Remember to redo automatic error notification if configuration has changed.

Verification has completed normally.
Timer object autoclverify already exists
clodmget: Could not retrieve object for CuAt, odm errno 5904
ERROR: Cannot synchronize cluster changes without a cluster repository defined. // The message is explicit: no repository disk has been defined.

ERROR: Creating the cluster in AIX failed. Check output for errors in local cluster configuration, correct them, and try synchronization again.

ERROR: Updating the cluster in AIX failed. Check output for errors in local cluster configuration, correct them, and try synchronization again.

MFNBU01:/tmp#odmget -q “type=hdisk” CuDv
0518-507 odmget: Could not retrieve object for CuDv, ODM error number 5904
MFNBU01:/tmp#odmget -q type=vgtype PdDv

PdDv:
type = “vgtype”
class = “logical_volume”
subclass = “vgsubclass”
prefix = “vg”
devid = “”
base = 1
has_vpd = 0
detectable = 0
chgstatus = 0
bus_ext = 0
fru = 0
led = 0
setno = 1
msgno = 698
catalog = “cmdlvm.cat”
DvDr = “”
Define = “”
Configure = “”
Change = “”
Unconfigure = “”
Undefine = “”
Start = “”
Stop = “”
inventory_only = 0
uniquetype = “logical_volume/vgsubclass/vgtype”
MFNBU01:/tmp#bosboot -ad /dev/hdisk0

bosboot: Boot image is 55324 512 byte blocks.
MFNBU01:/tmp#bosboot -ad /dev/hdisk1

bosboot: Boot image is 55324 512 byte blocks.
MFNBU01:/tmp#odmget |grep -p hdisk9
0518-501 usage: odmget [-q criteria] {Classname . . . }
Retrieves objects from an object class.
MFNBU01:/tmp#lspv
hdisk0 00f99f6134f35f8a rootvg active
hdisk1 00f99f61d86573dc rootvg active
hdisk2 none None
hdisk3 none None
hdisk4 none None
hdisk5 none None
hdisk6 none None
hdisk7 none None
hdisk8 none None
hdisk9 none None
hdisk10 none None
hdisk11 none None
hdisk12 none None
hdisk13 none None
hdiskpower0 00f99f613eb22dd6 None
hdiskpower1 00f99f613e0a4f13 vgnbu active
hdiskpower2 00f99f613e11c926 vgnbu active

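From the lspv output, a quick way to spot repository-disk candidates is to look for disks that already have a PVID but belong to no volume group (hdiskpower0 above). A hedged sketch, run against captured sample text rather than a live `lspv`:

```shell
# Sample stand-in for live `lspv` output (fields: disk, PVID, VG, state).
lspv_out='hdisk0       00f99f6134f35f8a   rootvg   active
hdisk2       none               None
hdiskpower0  00f99f613eb22dd6   None'

# Disks with a PVID ($2 != "none") but no volume group ($3 == "None")
# are the natural candidates for the CAA repository disk.
candidates=$(echo "$lspv_out" | awk '$2 != "none" && $3 == "None" {print $1}')
echo "$candidates"
```

On a real system the equivalent would be `lspv | awk '$2 != "none" && $3 == "None"'`.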
Now compare this against the full cluster configuration under the cluster topology menus:
Cluster Nodes and Networks

Move cursor to desired item and press Enter.

Initial Cluster Setup (Typical)

Manage the Cluster
Manage Nodes
Manage Networks and Network Interfaces

Discover Network Interfaces and Disks

Verify and Synchronize Cluster Configuration

Manage the Cluster

Move cursor to desired item and press Enter.

Display PowerHA SystemMirror Configuration
Remove the Cluster Definition
Snapshot Configuration

COMMAND STATUS

Command: OK stdout: yes stderr: no

Before command completion, additional instructions may appear below.

Cluster Name: MFNBU_clu
Cluster Connection Authentication Mode: Standard
Cluster Message Authentication Mode: None
Cluster Message Encryption: None
Use Persistent Labels for Communication: No
Repository Disk: None
Cluster IP Address: None
There are 2 node(s) and 1 network(s) defined

NODE MFNBU01:
Network net_ether_01
MFNBU 40.43.192.138
MFNBU01_bt1 2.43.192.136
MFNBU01_bt2 3.43.192.136

NODE MFNBU02:
Network net_ether_01
MFNBU 40.43.192.138
MFNBU02_bt2 3.43.192.137
MFNBU02_bt1 2.43.192.137

Resource Group nbu_group
Startup Policy Online On Home Node Only
Fallover Policy Fallover To Next Priority Node In The List
Fallback Policy Fallback To Higher Priority Node In The List
Participating Nodes MFNBU01 MFNBU02
Service IP Label MFNBU

Found it at last: the repository disk and cluster IP address had been left unconfigured in smit. After adding them, the synchronization succeeded.
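
A minimal sketch of that fix from the command line, assuming the PowerHA 7.1 clmgr syntax; the disk name and multicast address below are illustrative examples, not values taken from this cluster:

```
# Define the CAA repository disk and cluster multicast IP, then re-sync.
# hdiskpower0 (free, with a PVID) and 228.43.192.138 are examples only.
clmgr modify cluster MFNBU_clu REPOSITORY=hdiskpower0 CLUSTER_IP=228.43.192.138
clmgr sync cluster verify=yes fix=yes
```

The same attributes can also be set through smit under Cluster Nodes and Networks -> Initial Cluster Setup.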

Posted in AIX, Operating Systems | Tagged , | Leave a comment

Password protected: HACMP 7.1 smit configuration and script configuration

This post is password protected. Enter the password to view it:

Posted in AIX, Operating Systems | Tagged | Enter your password to view comments.

HACMP NFS CROSS MOUNT

I. NFS CROSS MOUNT in practice
While setting up HA for a new system I needed to configure an NFS CROSS MOUNT. So what is an NFS CROSS MOUNT, and what does it actually accomplish? Traditional NFS splits into an NFS server and clients: the server exports directories over an IP address, and each client simply mounts ip:/filesystem. An NFS CROSS MOUNT can then be understood as HACMP's redundant NFS mode: two nodes acting as primary and standby NFS servers for the same service.

Via smit hacmp we can see the resource group settings:

Change/Show All Resources and Attributes for a Resource Group

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

[Entry Fields]
Resource Group Name rg_sap_ppi_nfs
Participating Nodes (Default Node Priority) MPIPRDA MPIPRDB

Startup Policy Online On Home Node Only
Fallover Policy Fallover To Next Priority Node In The List
Fallback Policy Never Fallback

Service IP Labels/Addresses [MPINFS] +
Application Controllers [] +

Volume Groups [ppinfsvg ] +
Use forced varyon of volume groups, if necessary false +
Automatically Import Volume Groups false +

Filesystems (empty is ALL for VGs specified) [ ] +
Filesystems Consistency Check fsck +
Filesystems Recovery Method parallel +
Filesystems mounted before IP configured false +

Filesystems/Directories to Export (NFSv2/3) [/export/sapmnt /export/usr/sap/trans /export/forsap] +
Filesystems/Directories to Export (NFSv4) [] +
Stable Storage Path (NFSv4) [] +
Filesystems/Directories to NFS Mount [/sapmnt;/export/sapmnt /usr/sap/trans;/export/usr/sap/trans /forsap;/export/forsap]
Network For NFS Mount [net_ether_01] +

Tape Resources [] +
Raw Disk PVIDs [] +

Primary Workload Manager Class [] +
Secondary Workload Manager Class [] +

Miscellaneous Data []
WPAR Name [] +
F1=Help F2=Refresh F3=Cancel F4=List
Esc+5=Reset Esc+6=Command Esc+7=Edit Esc+8=Image
Resource Groups
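The "Filesystems/Directories to NFS Mount" field above takes "local_mountpoint;exported_directory" pairs. A small sketch of how one entry splits, for illustration only:

```shell
# One entry from the smit field, e.g. "/sapmnt;/export/sapmnt".
pair='/sapmnt;/export/sapmnt'

local_mp=${pair%%;*}     # part before ";": where every node NFS-mounts it
export_dir=${pair##*;}   # part after ";": what the owning node exports
echo "$local_mp <- $export_dir"
```

So /export/sapmnt is exported by whichever node holds the resource group, and both nodes mount it at /sapmnt, which is exactly what the df -g output below shows.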
Check again with df -g:
MPIPRDA:/dev#df -g
Filesystem GB blocks Free %Used Iused %Iused Mounted on
/dev/hd4 5.00 4.90 3% 6779 1% /
/dev/hd2 12.00 9.48 21% 60445 3% /usr
/dev/hd9var 10.00 9.87 2% 1022 1% /var
/dev/hd3 10.00 9.96 1% 742 1% /tmp
/dev/hd1 10.00 10.00 1% 25 1% /home
/dev/hd11admin 0.50 0.50 1% 9 1% /admin
/proc – – – – – /proc
/dev/hd10opt 4.00 3.81 5% 2484 1% /opt
/dev/livedump 0.50 0.50 1% 4 1% /var/adm/ras/livedump
/dev/lvcmbc_admin 20.00 19.81 1% 15 1% /cmbc_admin
/dev/lvdb2 20.00 19.92 1% 13 1% /db2
/dev/lvopenv 20.00 11.83 41% 4201 1% /usr/openv
/dev/lvprecise 5.00 4.98 1% 4 1% /precise
/aha – – – 70 1% /aha
/dev/lvforsap 300.00 299.45 1% 4 1% /export/forsap
/dev/lvtrans 100.00 99.59 1% 4 1% /export/usr/sap/trans
/dev/lvsapmnt 80.00 79.68 1% 4 1% /export/sapmnt
MPINFS:/export/forsap 300.00 299.45 1% 4 1% /forsap
MPINFS:/export/usr/sap/trans 100.00 99.59 1% 4 1% /usr/sap/trans
MPINFS:/export/sapmnt 80.00 79.68 1% 4 1% /sapmnt
197.0.88.110:/vol/instmedia 1000.00 3.54 100% 633950 1% /InstMedia
/dev/lvSMDA94 10.00 9.96 1% 4 1% /usr/sap/DAA/SMDA94
/dev/lvSCS20 20.00 19.92 1% 4 1% /usr/sap/PPI/SCS20
/dev/lvASCS10 20.00 19.92 1% 4 1% /usr/sap/PPI/ASCS10
/dev/lvSMDA96 10.00 9.96 1% 4 1% /usr/sap/DAA/SMDA96

df -g status on the second node:
MPIPRDB:/#df -g
Filesystem GB blocks Free %Used Iused %Iused Mounted on
/dev/hd4 5.00 4.87 3% 6874 1% /
/dev/hd2 12.00 9.39 22% 60569 3% /usr
/dev/hd9var 10.00 9.81 2% 1038 1% /var
/dev/hd3 10.00 8.52 15% 2764 1% /tmp
/dev/hd1 10.00 9.28 8% 4416 1% /home
/dev/hd11admin 0.50 0.50 1% 9 1% /admin
/proc – – – – – /proc
/dev/hd10opt 4.00 3.81 5% 2484 1% /opt
/dev/livedump 0.50 0.50 1% 4 1% /var/adm/ras/livedump
/dev/lvcmbc_admin 20.00 19.81 1% 15 1% /cmbc_admin
/dev/lvdb2 20.00 17.71 12% 5728 1% /db2
/dev/lvopenv 10.00 3.43 66% 4212 1% /usr/openv
/dev/lvprecise 5.00 4.98 1% 4 1% /precise
/aha – – – 70 1% /aha
197.0.88.110:/vol/instmedia 1000.00 2.33 100% 636547 1% /InstMedia
MPINFS:/export/forsap 300.00 271.37 10% 2739 1% /forsap
MPINFS:/export/sapmnt 80.00 76.49 5% 10171 1% /sapmnt
MPINFS:/export/usr/sap/trans 100.00 99.46 1% 21 1% /usr/sap/trans
/dev/lvdb2PPI 20.00 15.77 22% 55 1% /db2/PPI
/dev/lvarc_log 200.00 149.36 26% 307 1% /db2/PPI/arc_log
/dev/lvlog_dir 100.00 89.35 11% 59 1% /db2/PPI/log_dir
/dev/lvsapdata1 500.00 385.52 23% 127 1% /db2/PPI/sapdata1
/dev/lvsapdata2 500.00 385.52 23% 127 1% /db2/PPI/sapdata2
/dev/lvsapdata3 500.00 385.52 23% 127 1% /db2/PPI/sapdata3
/dev/lvsapdata4 500.00 385.52 23% 127 1% /db2/PPI/sapdata4
/dev/lvSMDA95 10.00 9.96 1% 4 1% /usr/sap/DAA/SMDA95
MPIPRDB:/#

MPIPRDA:/#ifconfig -a
en1: flags=1e084863,c0<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),LARGESEND,CHAIN>
inet 2.56.0.161 netmask 0xffffff00 broadcast 2.56.0.255
inet 40.56.0.161 netmask 0xffffff00 broadcast 40.56.0.255
inet 40.56.0.164 netmask 0xffffff00 broadcast 40.56.0.255
tcp_sendspace 131072 tcp_recvspace 65536 rfc1323 0
en2: flags=1e080863,100c0<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),LARGESEND,CHAIN>
inet 40.56.0.163 netmask 0xffffff00 broadcast 40.56.0.255
inet 3.56.0.161 netmask 0xffffff00 broadcast 3.56.0.255
inet 40.56.0.166 netmask 0xffffff00 broadcast 40.56.0.255
tcp_sendspace 131072 tcp_recvspace 65536 rfc1323 0
lo0: flags=e08084b,c0<UP,BROADCAST,LOOPBACK,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,LARGESEND,CHAIN>
inet 127.0.0.1 netmask 0xff000000 broadcast 127.255.255.255
inet6 ::1%1/0
tcp_sendspace 131072 tcp_recvspace 131072 rfc1323 1
MPIPRDA:/#cat /etc/hosts
# IBM_PROLOG_BEGIN_TAG
# This is an automatically generated prolog.
#
# bos61D src/bos/usr/sbin/netstart/hosts 1.2
#
# Licensed Materials – Property of IBM
#
# COPYRIGHT International Business Machines Corp. 1985,1989
# All Rights Reserved
#
# US Government Users Restricted Rights – Use, duplication or
# disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
#
# @(#)47 1.2 src/bos/usr/sbin/netstart/hosts, cmdnet, bos61D, d2007_49A2 10/1/07 13:57:52
# IBM_PROLOG_END_TAG
#
# COMPONENT_NAME: TCPIP hosts
#
# FUNCTIONS: loopback
#
# ORIGINS: 26 27
#
# (C) COPYRIGHT International Business Machines Corp. 1985, 1989
# All Rights Reserved
# Licensed Materials – Property of IBM
#
# US Government Users Restricted Rights – Use, duplication or
# disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
#
# /etc/hosts
#
# This file contains the hostnames and their address for hosts in the
# network. This file is used to resolve a hostname into an Internet
# address.
#
# At minimum, this file must contain the name and address for each
# device defined for TCP in your /etc/net file. It may also contain
# entries for well-known (reserved) names such as timeserver
# and printserver as well as any other host name and address.
#
# The format of this file is:
# Internet Address Hostname # Comments
# Internet Address can be either IPv4 or IPv6 address.
# Items are separated by any number of blanks and/or tabs. A ‘#’
# indicates the beginning of a comment; characters up to the end of the
# line are not interpreted by routines which search this file. Blank
# lines are allowed.

# Internet Address Hostname # Comments
# 192.9.200.1 net0sample # ethernet name/address
# 128.100.0.1 token0sample # token ring name/address
# 10.2.0.2 x25sample # x.25 name/address
# 2000:1:1:1:209:6bff:feee:2b7f ipv6sample # ipv6 name/address
127.0.0.1 loopback localhost # loopback (lo0) name/address
::1 loopback localhost # IPv6 loopback (lo0) name/address

40.43.192.6 NIMPBAC1
197.0.83.32 SZNIM
197.3.137.241 zwnim
197.3.137.228 TAIX

40.56.0.161 MPIPRDA
40.56.0.162 MPIPRDB

40.56.0.163 MPINFS
40.56.0.164 MPIPRD
40.56.0.165 MPIDB
40.56.0.166 MPIERS

2.56.0.161 MPIPRDA_bt1
2.56.0.162 MPIPRDB_bt1
3.56.0.161 MPIPRDA_bt2
3.56.0.162 MPIPRDB_bt2

#nbu master server
197.0.86.13 SBNBU

40.56.0.167 MPIAPPA
40.56.0.168 MPIAPPB
40.56.0.169 MPIAPPC

MPIPRDB:/#ifconfig -a
en1: flags=1e080863,c0<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),LARGESEND,CHAIN>
inet 2.56.0.162 netmask 0xffffff00 broadcast 2.56.0.255
inet 40.56.0.162 netmask 0xffffff00 broadcast 40.56.0.255
tcp_sendspace 131072 tcp_recvspace 65536 rfc1323 0
en2: flags=1e080863,100c0<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),LARGESEND,CHAIN>
inet 40.56.0.165 netmask 0xffffff00 broadcast 40.56.0.255
inet 3.56.0.162 netmask 0xffffff00 broadcast 3.56.0.255
tcp_sendspace 131072 tcp_recvspace 65536 rfc1323 0
lo0: flags=e08084b,c0<UP,BROADCAST,LOOPBACK,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,LARGESEND,CHAIN>
inet 127.0.0.1 netmask 0xff000000 broadcast 127.255.255.255
inet6 ::1%1/0
tcp_sendspace 131072 tcp_recvspace 131072 rfc1323 1
MPIPRDB:/#

The SystemMirror 7.1 redbook explains the NFS mount parameters as follows:

RECOVERY_METHOD

parallel

Parallel preferred as the recovery method for this resource group. (The default is sequential.)

EXPORT_FILESYSTEM

/nfsdir

The file system for NFS to export.

MOUNT_FILESYSTEM

“/sap;/nfsdir”

The same syntax as we used in smit to define the NFS cross mount.

Filesystems recovery method

Selects the file system recovery method: parallel (for faster recovery) or sequential (the default). If you have shared nested file systems, do not set this field to parallel; those file systems must be recovered sequentially.

Note: the cluster verification utility clverify does not report inconsistencies between file systems and fast recovery.

II. NFS overview

[Figure: nfs cross mount 1]

NFS (Network File System) is a client/server application that provides file sharing on top of TCP/IP.

Any NFS-capable system can act as an NFS client or server, sharing its local disks with other machines or mounting theirs.

The NFS server exports local disk and CD-ROM resources by editing the /etc/exports file, and runs the mountd and nfsd daemons. Any file, directory, or file system on local disk can be exported read/write or read-only.

Using the mount command, /etc/filesystems, and the biod daemon, an NFS client can remotely mount the server's disk or CD-ROM resources over the network.

NFS is configured flexibly through /etc/exports on the server and /etc/filesystems on the client.
The server can export files, directories, and file systems as:
1. read only (all hosts get read access only)
2. read/write (all hosts get read and write access)
3. read mostly (hosts on a specific list get read/write access; all others read-only)
4. root equivalency (only the listed nodes get root access)
5. host access list (the list of hosts allowed to mount the resource)
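
As a hedged illustration of those server-side options, an AIX /etc/exports might look like the fragment below; the client host names are made up for the example:

```
# read/write for clienta and clientb only, root access for clienta
/sharedfs        -root=clienta,access=clienta:clientb
# read-only for everyone
/export/public   -ro
# "read mostly": read/write for clienta, read-only for all others
/export/mostly   -rw=clienta
```

After editing the file, `exportfs -a` re-exports all entries.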

The client has the following mount options:
1. foreground (mount in the foreground)
2. background (if the first mount fails, keep retrying in the background)
3. hard (keep retrying the mount indefinitely, with no timeout)
4. soft (give up after the specified number of mount attempts)
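
On the AIX client side, those options end up in an /etc/filesystems stanza. A hedged sketch, where the server name serverb and the bg,hard,intr option set are illustrative choices:

```
/sharedfs:
        dev       = "/sharedfs"
        vfs       = nfs
        nodename  = serverb
        mount     = true
        options   = bg,hard,intr
        account   = false
```

With bg and hard set, a failed mount retries in the background and I/O blocks rather than erroring during a takeover, which is what makes HACMP failover transparent to the client.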

III. Making NFS highly available

Combining NFS with HACMP makes NFS highly available. Placing NFS in an HACMP resource group lets NFS clients reconnect after an HACMP takeover. Because NFS mounts can be configured to retry, the takeover is transparent to the NFS clients; they do not notice the HACMP failover at all.

IV. Two ways to use NFS in HACMP and their configuration

a. NFS mount

[Figure: nfs cross mount 2]

When servera goes down, HACMP performs its normal takeover: serverb varies on sharedvg, mounts /sharedfs and exports it, and the clients remount /sharedfs.

[Figure: nfs cross mount 3]

Configuration:
Change/Show Resources/Attributes for a Resource Group

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

Resource Group Name sharedvg
Node Relationship cascading
Site Relationship ignore
Participating Node Names / Default Node Priority servera serverb
Dynamic Node Priority []

Service IP label []
Filesystems (default is All) [/sharedfs]
Filesystems Consistency Check fsck
Filesystems Recovery Method sequential
Filesystems/Directories to Export [/sharedfs]
Filesystems/Directories to NFS mount []
Network For NFS Mount []

b. Cross-mounting

With the cross-mount mechanism, the other nodes in a cluster can share a file system over NFS.
It is used when highly available applications run on both nodes and must share a single file system. For example:
servera and serverb both need to share /sharedfs on sharedvg, so serverb mounts /sharedfs locally and
exports it; clients can then NFS-mount /sharedfs from serverb, while servera
also cross-mounts (NFS-mounts) the /sharedfs file system.

[Figure: nfs cross mount 4]

When serverb goes down, servera first unmounts the NFS-mounted /sharedfs, then varies sharedvg
on locally, mounts /sharedfs locally, and exports it, so the clients can
reach /sharedfs again. The process:

[Figure: nfs cross mount 5]

Configuration:
Change/Show Resources/Attributes for a Resource Group

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

Resource Group Name sharedvg
Node Relationship cascading
Site Relationship ignore
Participating Node Names / Default Node Priority serverb servera
Dynamic Node Priority []

Service IP label []
Filesystems (default is All) [/sharedfs]
Filesystems Consistency Check fsck
Filesystems Recovery Method sequential
Filesystems/Directories to Export [/sharedfs]
Filesystems/Directories to NFS mount [/sharedfs]
Filesystems mounted before IP configured [true]

V. Caveats:
a. The cross-mount configuration does not support cllockd, so data consistency cannot be guaranteed when both nodes read and write /sharedfs. If you use this setup, /sharedfs should therefore be exported read-only.

b. Cross-mounting only applies to cascading resource groups, and the resource group must be configured for IPAT.

c. For a cross-mount, "Filesystems mounted before IP configured" must be set to true.

Posted in AIX, Operating Systems | Tagged | Leave a comment