Veritas InfoScale 7.2: Documentation available
The documentation for Veritas InfoScale 7.2 is now available at the following locations:

- PDF and HTML versions: SORT documentation page
- Late Breaking News: https://www.veritas.com/support/en_US/article.000116047
- Hardware Compatibility List: https://www.veritas.com/support/en_US/article.000116023
- Software Compatibility: https://www.veritas.com/support/en_US/article.000116038
- Manual pages: AIX, Linux, Solaris

The Veritas InfoScale 7.2 documentation set includes the following manuals:

Getting Started
- Veritas InfoScale What's New
- Veritas InfoScale Solutions Getting Started Guide
- Veritas InfoScale Readme First

Release Notes
- Veritas InfoScale Release Notes

Installation Guide
- Veritas InfoScale Installation Guide

Configuration and Upgrade Guides
- Storage Foundation Configuration and Upgrade Guide
- Storage Foundation and High Availability Configuration and Upgrade Guide
- Storage Foundation Cluster File System High Availability Configuration and Upgrade Guide
- Storage Foundation for Oracle RAC Configuration and Upgrade Guide
- Storage Foundation for Sybase ASE CE Configuration and Upgrade Guide
- Cluster Server Configuration and Upgrade Guide

Legal Notices
- Veritas InfoScale Third-party Software License Agreements

For the complete Veritas InfoScale documentation set, see the SORT documentation page.

InfoScale 7 for Linux 7.1 is not configured

Hi, I am trying to install and configure InfoScale 7 on RHEL 7.1 on two physical servers. The product is installed on both nodes, but when I try to configure fencing I get an error saying "Volume Manager is not running". When I try to start the VxVM module it will not start, and I get:

VxVM vxdisk ERROR V-5-1-684 IPC failure: Configuration daemon is not accessible

On a virtual machine the same configuration works fine. Does anyone have any idea?
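A minimal troubleshooting sketch for this symptom, assuming the InfoScale packages installed cleanly and using only standard VxVM administration commands; the exact recovery steps on RHEL 7.1 may differ:

# Check that the VxVM kernel modules actually loaded on this kernel
lsmod | egrep 'vxio|vxdmp|vxspec'

# Check the state of the VxVM configuration daemon (vxconfigd)
vxdctl mode

# If vxconfigd is not running, restart it and re-enable VxVM
vxconfigd -k
vxdctl enable

# Verify that the daemon is reachable again
vxdisk list

If vxconfigd still refuses to start, the module load errors for this kernel are the next thing to check before reattempting the fencing configuration.
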
NVMe drives disappear after upgrade to the RHEL7.7 kernel

Hi, I'm using InfoScale 7.4.1.1300 on RHEL 7.x. Tonight, as I was running RHEL 7.7 with the latest RHEL 7.6 kernel, I decided to upgrade to the RHEL 7.7 kernel (the only part of 7.7 which was missing). This had the nasty side effect of making the NVMe drives disappear.

1) Before the upgrade:

# modinfo vxio
filename:       /lib/modules/3.10.0-957.27.2.el7.x86_64/veritas/vxvm/vxio.ko
license:        VERITAS
retpoline:      Y
supported:      external
version:        7.4.1.1300
license:        Proprietary. Send bug reports to enterprise_technical_support@veritas.com
retpoline:      Y
rhelversion:    7.6
depends:        veki
vermagic:       3.10.0-957.el7.x86_64 SMP mod_unload modversions

# vxdmpadm listctlr
CTLR_NAME  ENCLR_TYPE    STATE    ENCLR_NAME               PATH_COUNT
=========================================================================
c515       Samsung_NVMe  ENABLED  daltigoth_samsung_nvme1  1
c0         Disk          ENABLED  disk                     3

# vxdisk list
DEVICE    TYPE          DISK      GROUP       STATUS
nvme0n1   auto:cdsdisk  -         (nvm01dg)   online ssdtrim
sda       auto:LVM      -         -           LVM
sdb       auto:cdsdisk  loc01d00  local01dg   online
sdc       auto:cdsdisk  -         (ssd01dg)   online

2) After the upgrade:

# modinfo vxio
filename:       /lib/modules/3.10.0-1062.1.1.el7.x86_64/veritas/vxvm/vxio.ko
license:        VERITAS
retpoline:      Y
supported:      external
version:        7.4.1.1300
license:        Proprietary. Send bug reports to enterprise_technical_support@veritas.com
retpoline:      Y
rhelversion:    7.7
depends:        veki
vermagic:       3.10.0-1062.el7.x86_64 SMP mod_unload modversions

# vxdmpadm listctlr
CTLR_NAME  ENCLR_TYPE  STATE    ENCLR_NAME  PATH_COUNT
=========================================================================
c0         Disk        ENABLED  disk        3

# vxdisk list
DEVICE    TYPE          DISK      GROUP       STATUS
sda       auto:LVM      -         -           LVM
sdb       auto:cdsdisk  loc01d00  local01dg   online
sdc       auto:cdsdisk  -         (ssd01dg)   online

I've reverted to the latest z-stream RHEL 7.6 kernel (3.10.0-957.27.2.el7) while I research this issue. Has this been reported already?
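A hedged set of checks for this situation, assuming the drive itself is healthy and only device discovery is affected (standard OS and VxVM/DMP commands; if the 7.7 kernel is simply not yet supported by this VxVM level, only an InfoScale patch or the kernel rollback described above will help):

# Confirm the OS itself still presents the NVMe device on the new kernel
lsblk | grep nvme

# Ask VxVM/DMP to rediscover devices
vxdisk scandisks

# Re-check whether the NVMe controller and disk reappear
vxdmpadm listctlr
vxdisk -e list
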
InfoScale command information

Hi, I need to know if there is a way to track which disk (identified by its UDID) maps to which drive/mount point. I have an existing environment, an Oracle server with a storage cluster using InfoScale Storage Foundation, but I don't have any information about the configuration or disk assignment. Can anyone help?
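A sketch of one way to walk from a UDID to a mount point using standard Storage Foundation commands; <disk> and <diskgroup> below are placeholders, and on Solaris the mount check would go through /etc/vfstab rather than the Linux mount output:

# Disk access names, disk media names, disk groups and OS device names side by side
vxdisk -e list

# The UDID of a particular disk is shown in its detailed listing
vxdisk list <disk> | grep -i udid

# Map disks to subdisks, plexes and volumes inside a disk group
vxprint -g <diskgroup> -ht

# Match VxFS volumes to mounted file systems
mount | grep vxfs
df -h
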
Failed to install EAT on system

I am evaluating the InfoScale Availability product. While installing it on RHEL 6.4 (on VMware), I get an error saying "Failed to install EAT on system". I have tried to install several times after a proper uninstall and get the same result; I also tried the single-server option and hit the same error. Can anyone point out how to resolve this issue?
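Not a fix, but a first step one might take, assuming a default installation layout: the CPI installer normally keeps per-run logs, and the underlying reason for the EAT (authentication service) failure is usually recorded there. The directory below is the conventional location and may differ by release:

# List the most recent installer log directories
ls -lrt /opt/VRTS/install/logs/

# Search the latest run for the authentication/EAT error details
grep -ri "EAT" /opt/VRTS/install/logs/ | tail
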
New Infoscale v7 Installation - Can't add Hosts

This is a new installation of InfoScale v7 on RHEL v6. Bidirectional port 5634 is open between the InfoScale Management Server (RHEL) and the host (Solaris 10 SPARC). One-way port 22 is also open from the management server to the managed host.

The host has VRTSsfmh running and listening on port 5634:

solvcstst01:/etc/ssh {root}: ps -ef|grep xprtld
    root  3893     1  0   Mar 01 ?      0:47 /opt/VRTSsfmh/bin/xprtld -X 1 /etc/opt/VRTSsfmh/xprtld.conf
    root  7477 24284  0 08:28:34 pts/1  0:00 grep xprtld

I've temporarily allowed direct root login from the management server to the managed host and entered those credentials.

Error when adding the host from the InfoScale server: "Registration with Management Server failed"

Error log:

Add Host Log
------------
Started [04/12/2016 08:30:23]
[04/12/2016 08:30:23] [solvcstst01.vch.ca] type rh solvcstst01.vch.ca cms
[04/12/2016 08:30:23] [solvcstst01.vch.ca] creating task for Add host
[04/12/2016 08:30:24] [solvcstst01.vch.ca] Check if MH is pingable from MS and get vital information from MH
[04/12/2016 08:30:24] [solvcstst01.vch.ca] Output: { "XPRTLD_VERSION" : "5.0.196.0", "LOCAL_NAME" : "solvcstst01.vch.ca", "LOCAL_ADDR" : "139.173.8.6", "PEER_NAME" : "UNKNOWN", "PEER_ADDR" : "10.248.224.116", "LOCAL_TIME" : "1460475024", "LOCALE" : "UNKNOWN", "DOMAIN_MODE" : "FALSE", "QUIESCE_MODE" : "RUNNING", "OSNAME" : "SunOS", "OSRELEASE" : "5.10", "CPUTYPE" : "sparc", "OSUUID" : "{00020014-4ffa-b092-0000-000084fbfc3f}", "DOMAINS" : { } }
[04/12/2016 08:30:24] [solvcstst01.vch.ca] Return code: 0
[04/12/2016 08:30:24] [solvcstst01.vch.ca] Checking if MH version [5.0.196.0] is same or greater than as that of least supported MH version [5.0.0.0]
[04/12/2016 08:30:24] [solvcstst01.vch.ca] ADD_HOST_PRECONFIG_CHK
[04/12/2016 08:30:24] [solvcstst01.vch.ca] {"JOB":{"RETURNCODE":0,"ERROR":"Success","NAME":"add_host","OUTPUT":"ADD_HOST_PRECONFIG_CHK","STATE":"SUCCESS","PROGRESS":1}}
[04/12/2016 08:30:24] [solvcstst01.vch.ca] retrieving Agent password
[04/12/2016 08:30:24] [solvcstst01.vch.ca] ADD_HOST_INPUT_PARAM_CHK
[04/12/2016 08:30:24] [solvcstst01.vch.ca] {"JOB":{"RETURNCODE":0,"ERROR":"Success","NAME":"add_host","OUTPUT":"ADD_HOST_INPUT_PARAM_CHK","STATE":"SUCCESS","PROGRESS":6}}
[04/12/2016 08:30:24] [solvcstst01.vch.ca] user name is "root"
[04/12/2016 08:30:24] [solvcstst01.vch.ca] ADD_HOST_CONTACTING_MH
[04/12/2016 08:30:24] [solvcstst01.vch.ca] {"JOB":{"RETURNCODE":0,"ERROR":"Success","NAME":"add_host","OUTPUT":"ADD_HOST_CONTACTING_MH","STATE":"SUCCESS","PROGRESS":20}}
[04/12/2016 08:30:25] [solvcstst01.vch.ca] Output: HTTP/1.1 302 OK Status: 307 Moved Location: /admin/htdocs/cs_config.htm
[04/12/2016 08:30:25] [solvcstst01.vch.ca] Return code: 768
[04/12/2016 08:30:25] [solvcstst01.vch.ca] Checking to see if CS is reachable from MH
[04/12/2016 08:30:25] [solvcstst01.vch.ca] Output: { "XPRTLD_VERSION" : "7.0.0.0", "LOCAL_NAME" : "lvmvom01.healthbc.org", "LOCAL_ADDR" : "10.248.224.116", "PEER_NAME" : "solvcstst01.vch.ca", "PEER_ADDR" : "139.173.8.6", "LOCAL_TIME" : "1460475025", "LOCALE" : "en_US.UTF-8", "DOMAIN_MODE" : "TRUE", "QUIESCE_MODE" : "RUNNING", "OSNAME" : "Linux", "OSRELEASE" : "2.6.32-573.22.1.el6.x86_64", "CPUTYPE" : "x86_64", "OSUUID" : "{00010050-56ad-1e25-0000-000000000000}", "DOMAINS" : { "sfm://lvmvom01.healthbc.org:5634/" : { "admin_url" : "vxss://lvmvom01.healthbc.org:14545/sfm_admin/sfm_domain/vx", "primary_broker" : "vxss://lvmvom01.healthbc.org:14545/sfm_agent/sfm_domain/vx" } } }
[04/12/2016 08:30:25] [solvcstst01.vch.ca] Return code: 0
[04/12/2016 08:30:25] [solvcstst01.vch.ca] CS host (lvmvom01.healthbc.org) is resolvable
[04/12/2016 08:30:25] [solvcstst01.vch.ca] Trying to figure out if host is already part of the domain
[04/12/2016 08:30:25] [solvcstst01.vch.ca] ADD_HOST_SEND_CRED_MH
[04/12/2016 08:30:25] [solvcstst01.vch.ca] {"JOB":{"RETURNCODE":0,"ERROR":"Success","NAME":"add_host","OUTPUT":"ADD_HOST_SEND_CRED_MH","STATE":"SUCCESS","PROGRESS":30}}
[04/12/2016 08:30:26] [solvcstst01.vch.ca] Output: SUCCESS
[04/12/2016 08:30:26] [solvcstst01.vch.ca] Return code: 0
[04/12/2016 08:30:28] [solvcstst01.vch.ca] push_exec command succeeded [/opt/VRTSsfmh/bin/getvmid_script]
[04/12/2016 08:30:29] [solvcstst01.vch.ca] ADD_HOST_INIT_DISCOVERY
[04/12/2016 08:30:29] [solvcstst01.vch.ca] {"JOB":{"RETURNCODE":0,"ERROR":"Success","NAME":"add_host","OUTPUT":"ADD_HOST_INIT_DISCOVERY","STATE":"SUCCESS","PROGRESS":75}}
[04/12/2016 08:30:29] [solvcstst01.vch.ca] Executing /opt/VRTSsfmh/bin/xprtlc -u "root" -t 1200 -j /var/opt/VRTSsfmh/xprtlc-payload-x2s4xFEb -l https://solvcstst01.vch.ca:5634/admin/cgi-bin/sfme.pl operation=configure_mh&cs-hostname=lvmvom01.healthbc.org&cs-ip=10.248.224.116&mh-hostname=solvcstst01.vch.ca&agent-password=******
[04/12/2016 08:30:32] [solvcstst01.vch.ca] Waiting for output from configure_mh----
[04/12/2016 08:30:32] [solvcstst01.vch.ca] ADD_HOST_INIT_DISCOVERY
[04/12/2016 08:30:32] [solvcstst01.vch.ca] {"JOB":{"RETURNCODE":0,"ERROR":"Success","NAME":"add_host","OUTPUT":"ADD_HOST_INIT_DISCOVERY","STATE":"SUCCESS","PROGRESS":80}}
[04/12/2016 08:30:33] [solvcstst01.vch.ca] Waiting for output from configure_mh----
[04/12/2016 08:30:33] [solvcstst01.vch.ca] ADD_HOST_INIT_DISCOVERY
[04/12/2016 08:30:33] [solvcstst01.vch.ca] {"JOB":{"RETURNCODE":0,"ERROR":"Success","NAME":"add_host","OUTPUT":"ADD_HOST_INIT_DISCOVERY","STATE":"SUCCESS","PROGRESS":80}}
[04/12/2016 08:30:45] [solvcstst01.vch.ca] Waiting for output from configure_mh----
[04/12/2016 08:30:45] [solvcstst01.vch.ca] ADD_HOST_INIT_DISCOVERY
[04/12/2016 08:30:45] [solvcstst01.vch.ca] {"JOB":{"RETURNCODE":0,"ERROR":"Success","NAME":"add_host","OUTPUT":"ADD_HOST_INIT_DISCOVERY","STATE":"SUCCESS","PROGRESS":80}}
[04/12/2016 08:30:56] [solvcstst01.vch.ca] Waiting for output from configure_mh----
[04/12/2016 08:30:56] [solvcstst01.vch.ca] ADD_HOST_INIT_DISCOVERY
[04/12/2016 08:30:56] [solvcstst01.vch.ca] {"JOB":{"RETURNCODE":0,"ERROR":"Success","NAME":"add_host","OUTPUT":"ADD_HOST_INIT_DISCOVERY","STATE":"SUCCESS","PROGRESS":80}}
[04/12/2016 08:30:56] [solvcstst01.vch.ca] Waiting for output from configure_mh----
[04/12/2016 08:30:56] [solvcstst01.vch.ca] ADD_HOST_INIT_DISCOVERY
[04/12/2016 08:30:56] [solvcstst01.vch.ca] {"JOB":{"RETURNCODE":0,"ERROR":"Success","NAME":"add_host","OUTPUT":"ADD_HOST_INIT_DISCOVERY","STATE":"SUCCESS","PROGRESS":80}}
[04/12/2016 08:30:57] [solvcstst01.vch.ca] Waiting for output from configure_mh----
[04/12/2016 08:30:57] [solvcstst01.vch.ca] ADD_HOST_INIT_DISCOVERY
[04/12/2016 08:30:57] [solvcstst01.vch.ca] {"JOB":{"RETURNCODE":0,"ERROR":"Success","NAME":"add_host","OUTPUT":"ADD_HOST_INIT_DISCOVERY","STATE":"SUCCESS","PROGRESS":80}}
[04/12/2016 08:30:57] [solvcstst01.vch.ca] Waiting for output from configure_mh----
[04/12/2016 08:30:57] [solvcstst01.vch.ca] ADD_HOST_INIT_DISCOVERY
[04/12/2016 08:30:57] [solvcstst01.vch.ca] {"JOB":{"RETURNCODE":0,"ERROR":"Success","NAME":"add_host","OUTPUT":"ADD_HOST_INIT_DISCOVERY","STATE":"SUCCESS","PROGRESS":80}}
[04/12/2016 08:30:57] [solvcstst01.vch.ca] Waiting for output from configure_mh----
[04/12/2016 08:30:57] [solvcstst01.vch.ca] ADD_HOST_INIT_DISCOVERY
[04/12/2016 08:30:57] [solvcstst01.vch.ca] {"JOB":{"RETURNCODE":0,"ERROR":"Success","NAME":"add_host","OUTPUT":"ADD_HOST_INIT_DISCOVERY","STATE":"SUCCESS","PROGRESS":80}}
[04/12/2016 08:30:58] [solvcstst01.vch.ca] Waiting for output from configure_mh----
[04/12/2016 08:30:58] [solvcstst01.vch.ca] ADD_HOST_INIT_DISCOVERY
[04/12/2016 08:30:58] [solvcstst01.vch.ca] {"JOB":{"RETURNCODE":0,"ERROR":"Success","NAME":"add_host","OUTPUT":"ADD_HOST_INIT_DISCOVERY","STATE":"SUCCESS","PROGRESS":80}}
[04/12/2016 08:30:58] [solvcstst01.vch.ca] fancy_die
[04/12/2016 08:30:58] [solvcstst01.vch.ca] CONFIGURE_MH_REG_FAILED
[04/12/2016 08:30:58] [solvcstst01.vch.ca] {"JOB":{"RETURNCODE":-1,"ERROR":"CONFIGURE_MH_REG_FAILED","NAME":"job_add_host","OUTPUT":"","STATE":"FAILED","PROGRESS":100}}{"RESULT":{"RETURNCODE":-1,"UMI":"V-383-50513-5760","ERROR":"CONFIGURE_MH_REG_FAILED","NAME":"add_host","TASKID":"{iHUXu2IK1ZRkTo7H}"}}
[04/12/2016 08:30:58] [solvcstst01.vch.ca] fancy_dead
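The job reaches the managed host, and the host can see the Central Server, but the final registration step fails (CONFIGURE_MH_REG_FAILED). A minimal sketch of checks one might run, assuming the hostnames shown in the log above; it only verifies that port 5634 answers in both directions and that name resolution and clocks are consistent, which are common culprits for this error, not a guaranteed diagnosis:

# From the management server: does the managed host's xprtld answer on 5634?
nc -zv solvcstst01.vch.ca 5634

# From the Solaris 10 managed host: does the management server answer on 5634?
telnet lvmvom01.healthbc.org 5634

# On both sides: do forward lookups agree with the names used in the log?
nslookup solvcstst01.vch.ca
nslookup lvmvom01.healthbc.org

# Large clock skew between MS and MH can also break credential registration
date
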
InfoScale cluster software installation aborted due to NetBackup client

Hello All, I am facing an issue with the InfoScale 7.0 Veritas RAC software installation. The installation aborted with the following error:

CPI ERROR V-9-40-6501 Entered systems have different products installed:
Product Installed - Product Version - System Name
None - None - hostname
InfoScale Enterprise - 7.0.0.000 - hostname
Systems running different products must be operated independently

The following warnings were discovered on the systems:
CPI WARNING V-9-40-3861 NetBackup 7.6.0.4 was installed on hostname. The VRTSpbx rpms on hostname will not be uninstalled

Has anybody faced this issue before? We have a Linux 6.6 host where we get this error. I need help to get this resolved.
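The V-9-40-6501 error means the CPI installer sees a different (or partial) set of Veritas packages on the two systems, and the NetBackup client's VRTSpbx package is a common leftover that confuses it. A sketch of how one might compare what each node actually has installed; node1/node2 and the installer path are placeholders for your environment:

# Run on every node and compare the results; leftover or partial VRTS packages
# (for example from NetBackup or an earlier install) explain V-9-40-6501
rpm -qa | grep '^VRTS' | sort

# Re-run the installer's precheck to see exactly what it detects per system
./installer -precheck node1 node2
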
Adding New Node Veritas Cluster Server with different hardware specification

Dear Experts, I need your suggestion on the following. Currently we have a two-node Veritas Cluster 6.2 running Windows 2008 R2, hosted on HPE DL380 G7 servers. We are planning to refresh the hardware and move all workloads to new HPE DL380 G9/G10 servers, with Veritas Cluster 6.2 again deployed on Windows 2008 R2. It will be a hardware refresh only, without any application or OS upgrade. Oracle 10gR2 is currently configured in failover cluster mode, and the application binaries are installed on the C:\ drive on all cluster nodes.

I would like to know whether I can deploy a new VCS 6.2 node on a new HPE DL380 G9/G10 server and add it to the existing cluster. If that is possible, what is the procedure, or will it not work? I have tried to search for articles, but with no luck. Since the hardware will be different, what will be the consequences when we fail over manually, or if we shut down the resource group and start it on the newly deployed server?

I appreciate your feedback, answers, and any ideas for a new approach. Thanks, Rane
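Before planning the node swap, it helps to capture what the existing cluster expects of its members. A small sketch using standard VCS commands (the same commands exist on VCS for Windows); <group> is a placeholder for the Oracle service group name:

# Current cluster members and overall state
hastatus -sum
hasys -list

# Which systems each service group is allowed to run on; a newly added node must be
# added to SystemList (and usually AutoStartList) before it can host the group
hagrp -list
hagrp -value <group> SystemList
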
Clone disk group

Greetings, I need to migrate disk groups between hosts. The current aging server runs VxVM 5.x on Solaris 10. The proposed workloads are to be taken on by a combination of Solaris 11 and Solaris 10 logical domains, split between application and database. I'm not using VCS, only VxVM. The VxVM version on the new platform is 7.1.

Due to limitations on the storage arrays, I cannot create clones on the array and map them to the new host. Does VxVM have a cloning mechanism? Is there a better approach to migrate the data across different VxVM versions while maintaining a point of failback? I would like to keep the disk groups separate until the cutover. (A deport/import sketch is outlined after the disk group configs below.)

The DG configs:

#
# app-dg
#
# Lun                               Veritas Disk    Veritas DiskGroup
6000144000000010A00CB5581BC5169F    app-disk0       app-dg
6000144000000010A00CB5581BC51699    app-disk1       app-dg
6000144000000010A00CB5581BC516A6    app-disk2       app-dg

#
# applocal-dg
#
# Lun                               Veritas Disk      Veritas DiskGroup
6000144000000010A00CB5581BC5161A    applocal-disk0    applocal-dg
6000144000000010A00CB5581BC51626    applocal-disk1    applocal-dg
6000144000000010A00CB5581BC51627    applocal-disk2    applocal-dg
6000144000000010A00CB5581BC51619    applocal-disk3    applocal-dg

#
# db_ora-dg
#
# Lun                               Veritas Disk    Veritas DiskGroup
6000144000000010A00CB5581BC5161D    db_ora-disk0    db_ora-dg
6000144000000010A00CB5581BC5161C    db_ora-disk1    db_ora-dg
6000144000000010A00CB5581BC5161B    db_ora-disk2    db_ora-dg
6000144000000010A00CB5581BC515F7    db_ora-disk3    db_ora-dg

#
# db_ora02-dg
#
# Lun                               Veritas Disk     Veritas DiskGroup
6000144000000010A00CB5581BC51714    db_ora02-disk1   db_ora02-dg
6000144000000010A00CB5581BC51719    db_ora02-disk2   db_ora02-dg
6000144000000010A00CB5581BC5170D    db_ora02-disk3   db_ora02-dg

cheers MB
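VxVM has no array-independent "clone disk group" command as such; the usual host-based path, assuming the new logical domains can be zoned to see the same LUNs, is to deport each disk group on the old host and import it on the new one. A sketch for app-dg follows. The key failback point is to leave the disk group version alone until the cutover is final: a newer VxVM can import an older-version group, but once the group is upgraded it can no longer be imported by VxVM 5.x. If the new hosts cannot see the same LUNs, a host-based copy (mirror the volumes onto new LUNs, then split them off) would be needed instead.

# On the old Solaris 10 / VxVM 5.x host: stop the applications, unmount, then deport
vxdg deport app-dg

# On the new Solaris 11 / VxVM 7.1 host: rescan, import and start the volumes
vxdisk scandisks
vxdg import app-dg
vxvol -g app-dg startall

# Failback safety: confirm the disk group is still at its old version
vxdg list app-dg | grep version

# Only once there is no need to fail back, upgrade the group to the current version
vxdg upgrade app-dg
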