Cluster cannot detect my Oracle Apps resource is up
Hi, I am trying to configure Oracle Apps under Veritas Cluster, but after I set all the required variables the cluster could not detect that the resource is up. Below is the environment I used; please help me with this.

Oracle Apps Version: 12
User Name: applprod
Oracle Home: /binaries/apps/tech_st/10.1.2
Script Home: $INST_TOP/admin/scripts
Script Name: adstrtal.sh
Server Type: WebServer
Monitor Environment: $INST_TOP/ora/10.1.2/ERPPROD_erpapp-lh.env
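A useful first step in cases like this is to reproduce by hand, as the application user, roughly what the agent's monitor does. A minimal sketch, assuming the paths from the post and that the monitor sources the environment file before looking for the web server processes (the process pattern here is a guess; adjust it to whatever your agent actually checks):

    # Run as the application user to see what the monitor would see
    su - applprod -c '
        . $INST_TOP/ora/10.1.2/ERPPROD_erpapp-lh.env   # monitor environment file; $INST_TOP must resolve in this shell
        ps -ef | grep -v grep | grep -i httpd          # hypothetical check for the WebServer processes
    '

If this manual check fails under applprod, the environment file or the user context is a more likely culprit than the resource itself.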

SG is not switching to next node.

Hi All, I am new to VCS but good with HACMP. In our environment we are using VCS 6.0. On one server we found that the SG does not move from one node to the other when we try a manual failover using the below command:

hagrp -switch <SGname> -to <sysname>

We can see that the SG goes offline on the current node, but it does not come online on the secondary node. There is no error logged in engine_A.log except the below entry:

cpus load more than 60% <Secondary node name>

Can anyone help me find the solution for this? I will provide the output of any commands if you need more info to get this troubleshot. :) Thanks,
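That engine_A.log entry suggests VCS considers the target system overloaded, which blocks load-aware failover. A quick way to inspect the relevant settings, sketched here with <SGname> and <sysname> as placeholders:

    hagrp -value <SGname> FailOverPolicy      # a Load policy refuses targets VCS considers overloaded
    hagrp -value <SGname> Load                # load this group is declared to add to a system
    hasys -value <sysname> Capacity           # declared capacity of the target node
    hasys -value <sysname> LoadWarningLevel   # percentage threshold behind load warnings

If FailOverPolicy is Load, lowering the group's Load, raising the target's Capacity, or changing the policy are the usual ways out, depending on what your sizing actually requires.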

Oracle Enterprise Manager 12c monitoring of VCS

Hi, if there are any users in the community who wish to trial a plugin for monitoring VCS through Oracle Enterprise Manager 12c, please let me know. I've been working with a number of clients to develop a custom EM 12c plugin to allow transparent integration of VCS targets into the EM framework. Benefits include centralised visibility, out-of-the-box monitoring, direct integration into the EM 12c alerting framework, and configuration management. More info and screenshots are available at http://www.aidev.uk. Any questions, drop an email to info@aidev.uk. Many thanks, Scott

VCS: NBU failover impeded by nagios monitoring - any experience?

OS: SUSE Linux; VCS: 5.1; NBU: 7.1, clustered Master Server in its own SG.

Scenario: Nagios regularly runs various CLI commands against NBU to check for e.g. long-running jobs, nbemm still responding, etc. It seems that these checks stop VCS from doing a switch/failover of the NBU SG. The "workaround" is to disable the Master Server monitoring in Nagios; I am looking for a way to disable these checks locally. Once I know how, I can have VCS do it. Any experience with this scenario? Regards, Bert
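One pattern that fits "have VCS do it" is to gate the Nagios checks behind a local flag file that VCS trigger scripts create before a switch and remove afterwards. A minimal sketch, assuming the checks are plain shell scripts; the flag path and check path are hypothetical:

    #!/bin/sh
    # Wrapper around an NBU health check: skip it while VCS is moving the SG.
    FLAG=/var/run/nbu_checks_disabled          # hypothetical flag, touched/removed by VCS triggers
    if [ -f "$FLAG" ]; then
        echo "OK - NBU checks suspended during failover"
        exit 0                                 # report OK to Nagios instead of holding NBU open
    fi
    exec /path/to/original_nbu_check "$@"      # placeholder for the real check command

This keeps the Nagios server configuration untouched; only the wrapper and the trigger scripts know about the suspension.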

Fencing and Reservation Conflict

Hi to all, I have Red Hat Linux 5.9 64-bit with SFHA 5.1 SP1 RP4 with fencing enabled (our storage device is an IBM Storwize V3700 SFF, SCSI-3 compliant).

[root@mitoora1 ~]# vxfenadm -d

I/O Fencing Cluster Information:
================================
Fencing Protocol Version: 201
Fencing Mode: SCSI3
Fencing SCSI3 Disk Policy: dmp
Cluster Members:
* 0 (mitoora1)
  1 (mitoora2)
RFSM State Information:
node 0 in state 8 (running)
node 1 in state 8 (running)

In /etc/vxfenmode: scsi3_disk_policy=dmp and vxfen_mode=scsi3.

[root@mitoora1 ~]# vxdctl scsi3pr
scsi3pr: on

[root@mitoora1 etc]# more /etc/vxfentab
#
# /etc/vxfentab:
# DO NOT MODIFY this file as it is generated by the
# VXFEN rc script from the file /etc/vxfendg.
#
/dev/vx/rdmp/storwizev70000_000007
/dev/vx/rdmp/storwizev70000_000008
/dev/vx/rdmp/storwizev70000_000009

[root@mitoora1 etc]# vxdmpadm listctlr all
CTLR-NAME    ENCLR-TYPE      STATE      ENCLR-NAME
=====================================================
c0           Disk            ENABLED    disk
c10          StorwizeV7000   ENABLED    storwizev70000
c7           StorwizeV7000   ENABLED    storwizev70000
c8           StorwizeV7000   ENABLED    storwizev70000
c9           StorwizeV7000   ENABLED    storwizev70000

main.cf:

cluster drdbonesales (
    UserNames = { admin = hlmElgLimHmmKumGlj }
    ClusterAddress = "10.90.15.30"
    Administrators = { admin }
    UseFence = SCSI3
)

I configured coordinator fencing, so I have 3 LUNs in a Veritas disk group (DMP coordinator). Everything seems to work fine, but I noticed a lot of reservation conflicts in the messages on both nodes. In /var/log/messages I constantly see these entries:

Nov 26 15:14:09 mitoora2 kernel: sd 7:0:1:1: reservation conflict
Nov 26 15:14:09 mitoora2 kernel: sd 8:0:0:1: reservation conflict
Nov 26 15:14:09 mitoora2 kernel: sd 8:0:1:1: reservation conflict
Nov 26 15:14:09 mitoora2 kernel: sd 10:0:0:1: reservation conflict
Nov 26 15:14:09 mitoora2 kernel: sd 10:0:1:1: reservation conflict
Nov 26 15:14:09 mitoora2 kernel: sd 9:0:1:1: reservation conflict
Nov 26 15:14:09 mitoora2 kernel: sd 9:0:0:1: reservation conflict
Nov 26 15:14:09 mitoora2 kernel: sd 7:0:1:3: reservation conflict
Nov 26 15:14:09 mitoora2 kernel: sd 8:0:0:3: reservation conflict
Nov 26 15:14:09 mitoora2 kernel: sd 8:0:1:3: reservation conflict
Nov 26 15:14:09 mitoora2 kernel: sd 10:0:1:3: reservation conflict

Do you have any idea? Best Regards, Vincenzo
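To see which SCSI-3 keys are actually registered on the fencing disks (and therefore which reservations those kernel messages are colliding with), the fencing utility can read the keys directly. A sketch using the coordinator list from the post:

    # Display registration keys on every disk listed in /etc/vxfentab
    vxfenadm -s all -f /etc/vxfentab

Comparing the keys per disk against the sd devices throwing conflicts usually shows whether the noise comes from the coordinator LUNs themselves (where other initiators probing a PGR-protected LUN is common) or from data disks outside the fencing configuration, which would be worth chasing.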

LLT: node1 in trouble

Hello All, recently this message started appearing on the server.

Oct 14 08:38:15 db1 llt: [ID 140958 kern.notice] LLT INFO V-14-1-10205 link 0 (ce1) node 1 in trouble
Oct 14 08:38:15 db1 llt: [ID 860062 kern.notice] LLT INFO V-14-1-10024 link 0 (ce1) node 1 active
Oct 14 08:38:42 db1 llt: [ID 140958 kern.notice] LLT INFO V-14-1-10205 link 0 (ce1) node 1 in trouble
Oct 14 08:38:43 db1 llt: [ID 860062 kern.notice] LLT INFO V-14-1-10024 link 0 (ce1) node 1 active
Oct 14 08:38:45 db1 llt: [ID 140958 kern.notice] LLT INFO V-14-1-10205 link 0 (ce1) node 1 in trouble
Oct 14 08:38:45 db1 llt: [ID 860062 kern.notice] LLT INFO V-14-1-10024 link 0 (ce1) node 1 active
Oct 14 08:38:55 db1 llt: [ID 140958 kern.notice] LLT INFO V-14-1-10205 link 0 (ce1) node 1 in trouble
Oct 14 08:38:56 db1 llt: [ID 860062 kern.notice] LLT INFO V-14-1-10024 link 0 (ce1) node 1 active
Oct 14 08:39:01 db1 llt: [ID 140958 kern.notice] LLT INFO V-14-1-10205 link 0 (ce1) node 1 in trouble
Oct 14 08:39:05 db1 llt: [ID 860062 kern.notice] LLT INFO V-14-1-10024 link 0 (ce1) node 1 active
Oct 14 08:39:05 db1 llt: [ID 794702 kern.notice] LLT INFO V-14-1-10019 delayed hb 600 ticks from 1 link 0 (ce1)
Oct 14 08:39:05 db1 llt: [ID 602713 kern.notice] LLT INFO V-14-1-10023 lost 11 hb seq 19344285 from 1 link 0 (ce1)

The messages date back to Sept 20 and continue through today; the sample above is from Oct 14.

bash-2.05$ lltstat -nvv|head
LLT node information:
Node        State     Link  Status  Address
* 0 db1     OPEN      ce1   UP      00:03:BA:93:
                      ce6   UP      00:03:BA:85:
  1 db2     OPEN      ce1   UP      00:03:BA:93:
                      ce6   UP      00:03:BA:95:
  2         CONNWAIT  ce1   DOWN

Any advice is greatly appreciated, thank you.
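The pattern (trouble, then active again seconds later, always on ce1, never ce6) means heartbeats on that one link are being delayed past the peer-trouble timer but recovering before the peer-inactive timer expires, so LLT never drops the link: it points at intermittent trouble on the ce1 network path rather than at node 1 itself. To see the timer values in effect (in hundredths of a second), assuming your LLT version supports the timer query:

    lltconfig -T query    # show LLT timers, e.g. peertrouble and peerinact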

VCS failovers and copies the crontabs

Hello, I am using VCS on Oracle M9000 machines with a three-node cluster. The question is: when I fail the services over from one node to another, I want all the crontabs to be copied to the other live node as well, which does not seem to be working in my domain at the moment. Can you please help me out with where to define this "copy cron" procedure, so that every time one environment fails over to another node it also carries the crontabs over from the previous system? Alternatively, is there a procedure that copies the crontabs of every user to all cluster nodes daily? I need to know if this can be configured in VCS. All useful replies are welcome. Best Regards, Mohammad Ali Sarwar
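VCS does not replicate crontabs by itself, but a short script run either from root's cron or from a VCS postonline trigger can keep them in sync. A minimal sketch for Solaris, assuming rsync and passwordless SSH between the nodes; the peer node names are hypothetical:

    #!/bin/sh
    # Push every user's crontab from this node to the other cluster nodes.
    SRC=/var/spool/cron/crontabs               # standard Solaris crontab location
    for node in node2 node3; do                # replace with your real peer hostnames
        rsync -a "$SRC/" "$node:$SRC/"         # -a preserves ownership and permissions
    done

Run from a postonline trigger, it syncs at each failover; run from root's cron, it keeps all three nodes aligned daily.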

need to know the meaning of logs

Can someone please tell me what has happened?

2013 Jun 19 17:42:20 kyornas051_01 kernel: LLT INFO V-14-1-10205 link 1 (priveth1) node 1 in trouble
2013 Jun 19 17:42:20 kyornas051_01 kernel: LLT INFO V-14-1-10205 link 0 (priveth0) node 1 in trouble
2013 Jun 19 17:42:26 kyornas051_01 kernel: LLT INFO V-14-1-10032 link 1 (priveth1) node 1 inactive 8 sec (1698350566)
2013 Jun 19 17:42:26 kyornas051_01 kernel: LLT INFO V-14-1-10032 link 0 (priveth0) node 1 inactive 8 sec (1698356198)
2013 Jun 19 17:42:27 kyornas051_01 kernel: LLT INFO V-14-1-10032 link 1 (priveth1) node 1 inactive 9 sec (1698350566)
2013 Jun 19 17:42:27 kyornas051_01 kernel: LLT INFO V-14-1-10032 link 0 (priveth0) node 1 inactive 9 sec (1698356198)
2013 Jun 19 17:42:28 kyornas051_01 kernel: LLT INFO V-14-1-10032 link 1 (priveth1) node 1 inactive 10 sec (1698350566)
2013 Jun 19 17:42:28 kyornas051_01 kernel: LLT INFO V-14-1-10032 link 0 (priveth0) node 1 inactive 10 sec (1698356198)
2013 Jun 19 17:42:29 kyornas051_01 kernel: LLT INFO V-14-1-10032 link 1 (priveth1) node 1 inactive 11 sec (1698350566)
2013 Jun 19 17:42:29 kyornas051_01 kernel: LLT INFO V-14-1-10032 link 0 (priveth0) node 1 inactive 11 sec (1698356198)
2013 Jun 19 17:42:30 kyornas051_01 kernel: LLT INFO V-14-1-10032 link 1 (priveth1) node 1 inactive 12 sec (1698350566)
2013 Jun 19 17:42:30 kyornas051_01 kernel: LLT INFO V-14-1-10032 link 0 (priveth0) node 1 inactive 12 sec (1698356198)
2013 Jun 19 17:42:31 kyornas051_01 kernel: LLT INFO V-14-1-10032 link 1 (priveth1) node 1 inactive 13 sec (1698350566)
2013 Jun 19 17:42:31 kyornas051_01 kernel: LLT INFO V-14-1-10032 link 0 (priveth0) node 1 inactive 13 sec (1698356198)
2013 Jun 19 17:42:32 kyornas051_01 kernel: LLT INFO V-14-1-10510 sent hbreq (NULL) on link 1 (priveth1) node 1. 4 more to go.
2013 Jun 19 17:42:32 kyornas051_01 kernel: LLT INFO V-14-1-10032 link 1 (priveth1) node 1 inactive 14 sec (1698350566)
2013 Jun 19 17:42:32 kyornas051_01 kernel: LLT INFO V-14-1-10510 sent hbreq (NULL) on link 1 (priveth1) node 1. 3 more to go.
2013 Jun 19 17:42:32 kyornas051_01 kernel: LLT INFO V-14-1-10510 sent hbreq (NULL) on link 0 (priveth0) node 1. 4 more to go.
2013 Jun 19 17:42:32 kyornas051_01 kernel: LLT INFO V-14-1-10510 sent hbreq (NULL) on link 1 (priveth1) node 1. 2 more to go.
2013 Jun 19 17:42:32 kyornas051_01 kernel: LLT INFO V-14-1-10032 link 0 (priveth0) node 1 inactive 14 sec (1698356198)
2013 Jun 19 17:42:33 kyornas051_01 kernel: LLT INFO V-14-1-10510 sent hbreq (NULL) on link 0 (priveth0) node 1. 3 more to go.
2013 Jun 19 17:42:33 kyornas051_01 kernel: LLT INFO V-14-1-10510 sent hbreq (NULL) on link 1 (priveth1) node 1. 1 more to go.
2013 Jun 19 17:42:33 kyornas051_01 kernel: LLT INFO V-14-1-10510 sent hbreq (NULL) on link 0 (priveth0) node 1. 2 more to go.
2013 Jun 19 17:42:33 kyornas051_01 kernel: LLT INFO V-14-1-10510 sent hbreq (NULL) on link 1 (priveth1) node 1. 0 more to go.
2013 Jun 19 17:42:33 kyornas051_01 kernel: LLT INFO V-14-1-10032 link 1 (priveth1) node 1 inactive 15 sec (1698350566)
2013 Jun 19 17:42:33 kyornas051_01 kernel: LLT INFO V-14-1-10510 sent hbreq (NULL) on link 0 (priveth0) node 1. 1 more to go.
2013 Jun 19 17:42:33 kyornas051_01 kernel: LLT INFO V-14-1-10509 link 1 (priveth1) node 1 expired
2013 Jun 19 17:42:33 kyornas051_01 kernel: LLT INFO V-14-1-10510 sent hbreq (NULL) on link 0 (priveth0) node 1. 0 more to go.
2013 Jun 19 17:42:33 kyornas051_01 kernel: LLT INFO V-14-1-10032 link 0 (priveth0) node 1 inactive 15 sec (1698356198)
2013 Jun 19 17:42:34 kyornas051_01 kernel: LLT INFO V-14-1-10509 link 0 (priveth0) node 1 expired
2013 Jun 19 17:42:38 kyornas051_01 kernel: GAB INFO V-15-1-20036 Port h gen 1132317 membership 0
2013 Jun 19 17:42:38 kyornas051_01 kernel: GAB INFO V-15-1-20036 Port v gen 113231a membership 0
2013 Jun 19 17:42:38 kyornas051_01 kernel: GAB INFO V-15-1-20036 Port w gen 113231c membership 0
2013 Jun 19 17:42:38 kyornas051_01 kernel: GAB INFO V-15-1-20036 Port a gen 1132305 membership 0
2013 Jun 19 17:42:38 kyornas051_01 kernel: GAB INFO V-15-1-20036 Port b gen 1132314 membership 0
2013 Jun 19 17:42:38 kyornas051_01 kernel: GAB INFO V-15-1-20036 Port f gen 113231e membership 0
2013 Jun 19 17:42:38 kyornas051_01 Had[30829]: VCS INFO V-16-1-10077 Received new cluster membership
2013 Jun 19 17:42:38 kyornas051_01 kernel: VXFEN INFO V-11-1-68 Completed ejection of leaving node(s) from data disks.
2013 Jun 19 17:42:38 kyornas051_01 vxvm:vxconfigd: V-5-1-7899 CVM_VOLD_CHANGE command received
2013 Jun 19 17:42:38 kyornas051_01 vxvm:vxconfigd: V-5-1-13170 Preempting CM NID 1
2013 Jun 19 17:42:38 kyornas051_01 vxvm:vxconfigd: V-5-1-0 Calling join complete
2013 Jun 19 17:42:38 kyornas051_01 vxvm:vxconfigd: V-5-1-8062 master: not a cluster startup
2013 Jun 19 17:42:38 kyornas051_01 vxvm:vxconfigd: V-5-1-10994 join completed for node 0
2013 Jun 19 17:42:38 kyornas051_01 vxvm:vxconfigd: V-5-1-4123 cluster established successfully
2013 Jun 19 17:42:39 kyornas051_01 Had[30829]: VCS ERROR V-16-1-10079 System kyornas051_02 (Node '1') is in Down State - Membership: 0x1
2013 Jun 19 17:42:39 kyornas051_01 Had[30829]: VCS ERROR V-16-1-10322 System kyornas051_02 (Node '1') changed state from RUNNING to FAULTED
2013 Jun 19 17:42:39 kyornas051_01 sfsfs_event.network.alert: Node kyornas051_02 went offline.
2013 Jun 19 17:42:39 kyornas051_01 sshd[17509]: Accepted publickey for root from 172.16.0.3 port 42449 ssh2
2013 Jun 19 17:42:39 kyornas051_01 sshd[17515]: Accepted publickey for root from 172.16.0.3 port 42450 ssh2
2013 Jun 19 17:42:41 kyornas051_01 kernel: vxfs: msgcnt 617 Phase 0 - /dev/vx/dsk/sfsdg/_nlm_ - Blocking buffer reads for recovery. gencnt 1 primary 0 leavers: 0x2 0x0 0x0 0x0
2013 Jun 19 17:42:41 kyornas051_01 kernel:
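Read top to bottom, these logs show both LLT links to node 1 (priveth0 and priveth1) losing heartbeats, LLT sending heartbeat requests, both links expiring after roughly 16 seconds of inactivity, GAB reforming every port's membership with only node 0, fencing ejecting the departed node's keys from the data disks, CVM completing its membership change on node 0, and VCS finally marking kyornas051_02 FAULTED. To check the view from the surviving node after such an event, the usual commands are:

    lltstat -nvv | head    # per-link LLT status towards each peer
    gabconfig -a           # current GAB port memberships
    hastatus -sum          # VCS view of systems and service groups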

switching of service group didnot work vcs 5.0

We tried to do a failover, however it did not work. Please find the logs below and help us find the cause.

2013/07/22 20:11:17 VCS INFO V-16-1-50859 Attempting to switch group Oss from system dukosgbs to system dukosgas
2013/07/22 20:11:17 VCS INFO V-16-1-50135 User root fired command: hagrp -switch Oss dukosgas from localhost
2013/07/22 20:11:17 VCS NOTICE V-16-1-50929 Initial tests indicate group Oss is able to switch to system dukosgas. Initiating offline of group on system dukosgbs
2013/07/22 20:11:17 VCS NOTICE V-16-1-10167 Initiating manual offline of group Oss on system dukosgbs
2013/07/22 20:11:17 VCS NOTICE V-16-1-10300 Initiating Offline of Resource activemq (Owner: unknown, Group: Oss) on System dukosgbs
2013/07/22 20:11:17 VCS NOTICE V-16-1-10300 Initiating Offline of Resource alex (Owner: unknown, Group: Oss) on System dukosgbs
2013/07/22 20:11:17 VCS NOTICE V-16-1-10300 Initiating Offline of Resource apache (Owner: unknown, Group: Oss) on System dukosgbs
2013/07/22 20:11:17 VCS NOTICE V-16-1-10300 Initiating Offline of Resource cron (Owner: unknown, Group: Oss) on System dukosgbs
2013/07/22 20:11:17 VCS NOTICE V-16-1-10300 Initiating Offline of Resource ddc (Owner: unknown, Group: Oss) on System dukosgbs
2013/07/22 20:11:17 VCS NOTICE V-16-1-10300 Initiating Offline of Resource glassfish (Owner: unknown, Group: Oss) on System dukosgbs
2013/07/22 20:11:17 VCS NOTICE V-16-1-10300 Initiating Offline of Resource imgr_httpd (Owner: unknown, Group: Oss) on System dukosgbs
2013/07/22 20:11:17 VCS NOTICE V-16-1-10300 Initiating Offline of Resource imgr_tomcat (Owner: unknown, Group: Oss) on System dukosgbs
2013/07/22 20:11:17 VCS NOTICE V-16-1-10300 Initiating Offline of Resource ldap_mon (Owner: unknown, Group: Oss) on System dukosgbs
2013/07/22 20:11:17 VCS NOTICE V-16-1-10300 Initiating Offline of Resource log_service (Owner: unknown, Group: Oss) on System dukosgbs
2013/07/22 20:11:17 VCS NOTICE V-16-1-10300 Initiating Offline of Resource netmgt_nettl (Owner: unknown, Group: Oss) on System dukosgbs
2013/07/22 20:11:17 VCS NOTICE V-16-1-10300 Initiating Offline of Resource netmgt_ov (Owner: unknown, Group: Oss) on System dukosgbs
2013/07/22 20:11:17 VCS NOTICE V-16-1-10300 Initiating Offline of Resource ovtrc (Owner: unknown, Group: Oss) on System dukosgbs
2013/07/22 20:11:17 VCS NOTICE V-16-1-10300 Initiating Offline of Resource restart_mc (Owner: unknown, Group: Oss) on System dukosgbs
2013/07/22 20:11:17 VCS NOTICE V-16-1-10300 Initiating Offline of Resource syb_log_mon (Owner: unknown, Group: Oss) on System dukosgbs
2013/07/22 20:11:17 VCS NOTICE V-16-1-10300 Initiating Offline of Resource syb_proc_mon (Owner: unknown, Group: Oss) on System dukosgbs
2013/07/22 20:11:17 VCS NOTICE V-16-1-10300 Initiating Offline of Resource time_service (Owner: unknown, Group: Oss) on System dukosgbs
2013/07/22 20:11:17 VCS NOTICE V-16-1-10300 Initiating Offline of Resource trapdist (Owner: unknown, Group: Oss) on System dukosgbs
2013/07/22 20:11:17 VCS NOTICE V-16-1-10300 Initiating Offline of Resource vrsnt_log_mon (Owner: unknown, Group: Oss) on System dukosgbs
2013/07/22 20:45:26 VCS INFO V-16-1-50135 User root fired command: hagrp -switch Oss dukosgas from localhost
2013/07/22 22:19:35 VCS INFO V-16-2-13075 (dukosgbs) Resource(activemq) has reported unexpected OFFLINE 1 times, which is still within the ToleranceLimit(2).
2013/07/22 22:20:35 VCS INFO V-16-2-13075 (dukosgbs) Resource(activemq) has reported unexpected OFFLINE 2 times, which is still within the ToleranceLimit(2).
2013/07/22 22:21:35 VCS ERROR V-16-2-13067 (dukosgbs) Agent is calling clean for resource(activemq) because the resource became OFFLINE unexpectedly, on its own.
2013/07/22 22:21:36 VCS INFO V-16-2-13068 (dukosgbs) Resource(activemq) - clean completed successfully.
2013/07/22 22:21:36 VCS ERROR V-16-2-13073 (dukosgbs) Resource(activemq) became OFFLINE unexpectedly on its own. Agent is restarting (attempt number 1 of 3) the resource.
2013/07/22 22:21:36 VCS INFO V-16-10001-3 (dukosgbs) Application:activemq:online:Executed /ericsson/hacs/scripts/svc.sh
2013/07/22 22:21:37 VCS INFO V-16-2-13001 (dukosgbs) Resource(activemq): Output of the completed operation (online) svcadm: Instance "svc:/ericsson/eric_3pp/activemq:default" is not in a maintenance or degraded state.
2013/07/22 22:21:38 VCS NOTICE V-16-2-13076 (dukosgbs) Agent has successfully restarted resource(activemq).
2013/07/22 22:23:05 VCS INFO V-16-1-50135 User root fired command: hagrp -clear Oss dukosgbs from localhost
2013/07/22 22:27:08 VCS INFO V-16-1-50135 User root fired command: hagrp -flush Oss dukosgbs from localhost
2013/07/22 22:29:40 VCS INFO V-16-1-50135 User root fired command: hagrp -clearadminwait Oss dukosgbs from localhost
2013/07/22 22:37:21 VCS INFO V-16-1-50135 User root fired command: hagrp -flush ClusterService dukosgbs from localhost
2013/07/22 22:37:21 VCS INFO V-16-1-50135 User root fired command: hagrp -flush Oss dukosgbs from localhost
2013/07/22 22:37:21 VCS INFO V-16-1-50135 User root fired command: hagrp -flush Ossfs dukosgbs from localhost
2013/07/22 22:37:21 VCS INFO V-16-1-50135 User root fired command: hagrp -flush Sybase1 dukosgbs from localhost
2013/07/22 22:38:14 VCS INFO V-16-1-50135 User root fired command: hagrp -flush ClusterService dukosgbs from localhost
2013/07/22 22:38:14 VCS INFO V-16-1-50135 User root fired command: hagrp -flush Oss dukosgbs from localhost
2013/07/22 22:38:14 VCS INFO V-16-1-50135 User root fired command: hagrp -flush Ossfs dukosgbs from localhost
2013/07/22 22:38:14 VCS INFO V-16-1-50135 User root fired command: hagrp -flush Sybase1 dukosgbs from localhost
2013/07/22 22:39:08 VCS INFO V-16-1-50135 User root fired command: hares -refreshinfo activemq from localhost
2013/07/22 22:40:06 VCS INFO V-16-1-50135 User root fired command: hares -refreshinfo activemq localclus from localhost
2013/07/22 22:42:09 VCS INFO V-16-1-50135 User root fired command: hares -flushinfo activemq localclus from localhost
2013/07/22 22:47:41 VCS INFO V-16-1-50135 User root fired command: hagrp -switch Oss dukosgbs from localhost
2013/07/22 22:50:52 VCS INFO V-16-1-50859 Attempting to switch group Oss from system dukosgbs to system dukosgas
2013/07/22 22:50:52 VCS INFO V-16-1-50135 User root fired command: hagrp -switch Oss dukosgas from localhost
2013/07/22 22:50:52 VCS NOTICE V-16-1-50929 Initial tests indicate group Oss is able to switch to system dukosgas. Initiating offline of group on system dukosgbs

=====================================================================

The resources were initiated to offline, but they did not come down:
root@dukosgbs> hastatus -sum

-- SYSTEM STATE
-- System          State     Frozen
A  dukosgbs        RUNNING   0

-- GROUP STATE
-- Group           System     Probed  AutoDisabled  State
B  ClusterService  dukosgbs   Y       N             ONLINE
B  Oss             dukosgbs   Y       N             ONLINE|STOPPING
B  Ossfs           dukosgbs   Y       N             ONLINE
B  Sybase1         dukosgbs   Y       N             ONLINE

-- RESOURCES OFFLINING
-- Group  Type         Resource       System     IState
F  Oss    Application  activemq       dukosgbs   W_OFFLINE_PROPAGATE
F  Oss    Application  alex           dukosgbs   W_OFFLINE_PROPAGATE
F  Oss    Application  apache         dukosgbs   W_OFFLINE_PROPAGATE
F  Oss    Application  cron           dukosgbs   W_OFFLINE_PROPAGATE
F  Oss    Application  ddc            dukosgbs   W_OFFLINE_PROPAGATE
F  Oss    Application  glassfish      dukosgbs   W_OFFLINE_PROPAGATE
F  Oss    Application  imgr_httpd     dukosgbs   W_OFFLINE_PROPAGATE
F  Oss    Application  imgr_tomcat    dukosgbs   W_OFFLINE_PROPAGATE
F  Oss    Application  ldap_mon       dukosgbs   W_OFFLINE_PROPAGATE
F  Oss    Application  log_service    dukosgbs   W_OFFLINE_PROPAGATE
F  Oss    Application  netmgt_nettl   dukosgbs   W_OFFLINE_PROPAGATE
F  Oss    Application  netmgt_ov      dukosgbs   W_OFFLINE_PROPAGATE
F  Oss    Application  ovtrc          dukosgbs   W_OFFLINE_PROPAGATE
F  Oss    Application  restart_mc     dukosgbs   W_OFFLINE_PROPAGATE
F  Oss    Application  syb_log_mon    dukosgbs   W_OFFLINE_PROPAGATE
F  Oss    Application  syb_proc_mon   dukosgbs   W_OFFLINE_PROPAGATE
F  Oss    Application  time_service   dukosgbs   W_OFFLINE_PROPAGATE
F  Oss    Application  trapdist       dukosgbs   W_OFFLINE_PROPAGATE
F  Oss    Application  vrsnt_log_mon  dukosgbs   W_OFFLINE_PROPAGATE

-- WAN HEARTBEAT STATE
-- Heartbeat  To             State
L  Icmp       gran_cluster1  ALIVE

-- REMOTE CLUSTER STATE
-- Cluster        State
M  gran_cluster1  RUNNING

-- REMOTE SYSTEM STATE
-- cluster:system          State    Frozen
N  gran_cluster1:dukosgas  RUNNING  0
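The resources sitting in W_OFFLINE_PROPAGATE (waiting for the offline to propagate) are why the group shows ONLINE|STOPPING and the switch never proceeds: the offline has been requested but the Application resources never report offline. Some checks worth running while it is stuck, sketched against the names from the output above:

    hares -display activemq -sys dukosgbs           # detailed state of one stuck resource
    hatype -display Application | grep -i Timeout   # OfflineTimeout/CleanTimeout for the agent
    hagrp -flush Oss -sys dukosgbs                  # clear pending internal states (as already attempted above)

If the flush does not release them, the offline entry points of the applications themselves (e.g. the scripts under /ericsson/hacs) are the next place to look.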

VCS Error Codes for all platforms

Hello Gents, do we have a list of all error codes for VCS? Also, are all the error codes generic and common across platforms (including Linux, Solaris, Windows, AIX)? I need this confirmation urgently, as I am planning to design a common monitoring agent. Best Regards, Nimish
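The V-x-y-z strings in the logs are Unique Message Identifiers (UMIs), and VCS emits them in the same format on every platform, which makes them a practical key for a cross-platform monitoring agent. A small sketch for surveying which IDs a given cluster actually produces, assuming GNU grep and the standard UNIX/Linux engine log path:

    # Count the distinct VCS engine message IDs seen in the log
    grep -oE 'V-16-[0-9]+-[0-9]+' /var/VRTSvcs/log/engine_A.log | sort | uniq -c | sort -rn

The V-16 prefix covers the VCS engine itself; related components use their own prefixes (e.g. V-14 for LLT, V-15 for GAB), as the log excerpts in the threads above show.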