Where to install VIOM: on Windows or on Linux?
Hi everyone, I want to install VIOM and I have never done it before. My first question is: where should I install it, in a Windows environment or in a Linux environment? All my clusters run in a Linux environment: InfoScale 8.0 on Red Hat 8.9.

Another question: is it possible to install the Coordination Point service on the same server? All my clusters have two nodes: node one in one data center (site 1) and node two in another data center (site 2). I think the new server (VIOM + CP) should be in a third data center (site 3). Is that correct?

Thanks, Jaime.
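On the Coordination Point question, note that the CP server configuration is independent of where VIOM runs; the cluster nodes simply point at it through /etc/vxfenmode. Below is a minimal sketch of a client-side fencing configuration mixing one CP server on a third site with two coordinator disks; cps-site3.example.com is a hypothetical host name, not something from the original post, and this is not a verified configuration:

    # /etc/vxfenmode (sketch only)
    vxfen_mode=customized
    vxfen_mechanism=cps
    # one CP server plus two coordinator disks = three coordination points
    cps1=[cps-site3.example.com]:443
    vxfendg=vxfencoorddg
    security=1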
Apache doesn't start

Hi! I have VCS configured with the Apache web server agent. The web server starts fine from the command line, but when I try to bring the resource online in the cluster I see the following in Apache_A.log:

2016/06/16 12:45:52 VCS ERROR V-16-10061-20318 Apache:ApacheDMZ:online:<Apache::Start> Could not determine Apache Version.
2016/06/16 12:45:52 VCS ERROR V-16-10061-20133 Apache:ApacheDMZ:online:Failed to online resource

Can anyone tell me why? Running "httpd -v" from the command line shows the Apache version: 2.2.15. Thanks!
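For what it's worth, the agent determines the version by running the httpd binary itself (as the configured user, with the configured environment), so this error usually points at the resource's httpdDir, User, or EnvFile attributes rather than at Apache itself. A minimal diagnostic sketch, assuming typical RHEL paths; the paths are assumptions, not taken from the post:

    # Check what the resource currently points at
    hares -display ApacheDMZ -attribute httpdDir EnvFile User

    # Point httpdDir at the directory containing the httpd binary
    # (/usr/sbin is an assumption; confirm with "which httpd")
    hares -modify ApacheDMZ httpdDir /usr/sbin

    # If httpd needs environment variables to run, have the agent source them
    hares -modify ApacheDMZ EnvFile /etc/sysconfig/httpd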
VCS on Linux using NPAR and VLAN tagging

I'm trying to enable a new configuration using NPAR and VLAN tagging to create private heartbeat networks. Which tool should I use to create my heartbeat interfaces, which need to be VLAN tagged? Does anyone know whether the tagged interface should share the MAC of the real interface, or should I try to create a unique MAC through software? Thanks
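On recent RHEL releases the usual tool for this is iproute2 (or persistent ifcfg-/NetworkManager profiles so the interfaces survive reboot), and by default a tagged sub-interface inherits the MAC of its parent, which is normally fine for LLT. A minimal sketch, assuming hypothetical parent interfaces eth1 and eth2 and VLAN IDs 100 and 101:

    # Create VLAN-tagged sub-interfaces for the two LLT heartbeat links
    ip link add link eth1 name eth1.100 type vlan id 100
    ip link add link eth2 name eth2.101 type vlan id 101
    ip link set eth1.100 up
    ip link set eth2.101 up

The tagged devices can then be referenced in /etc/llttab like any other NIC, for example (node and cluster values are placeholders):

    set-node node1
    set-cluster 1042
    link eth1.100 eth1.100 - ether - -
    link eth2.101 eth2.101 - ether - -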
vxlicrep ERROR V-21-3-1015 Failed to prepare report for key

Dear all, we got an INFOSCALE FOUNDATION LNX 1 CORE ONPREMISE STANDARD PERPETUAL LICENSE CORPORATE. I installed the key using the vxlicinst -k <key> command, but when I check it using vxlicrep I get this error for the given key:

vxlicrep ERROR V-21-3-1015 Failed to prepare report for key = <key>

We have Veritas Volume Manager 5.1 (VRTSvxvm-5.1.100.000-SP1_RHEL5 and VRTSvlic-3.02.51.010-0) running on RHEL 5.7, 64-bit. I've read that the next step is to run vxkeyless set NONE, but I'm afraid to run this while I cannot see the license reported correctly by vxlicrep. What can I do to fix it? Thank you in advance. Kind regards, Laszlo
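Before running vxkeyless set NONE, one cautious way to narrow this down is to check what is actually installed on disk. The commands below are standard, but removing a key file is only an assumption-based cleanup for a corrupt key, so keep a backup; <key-file> and <key> are placeholders:

    # List any keyless licenses currently enabled
    vxkeyless display

    # Installed key files live here; a corrupt file can make vxlicrep fail
    ls -l /etc/vx/licenses/lic

    # Back up and remove the suspect key file, then re-install the key
    cp /etc/vx/licenses/lic/<key-file> /root/
    rm /etc/vx/licenses/lic/<key-file>
    vxlicinst -k <key>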
InfoScale 7.1 and large disks (8Tb) with FSS

Hi everyone, I had been successfully running FSS with (thin) 8Tb disk drives on SFCFSHA 6.1 and 6.2.1 (see: http://vcojot.blogspot.ca/2015/01/storage-foundation-ha-61-and-flexible.html). I am trying to reproduce the same kind of setup with InfoScale 7.1 and it seems to have issues with 8Tb drives.

Here's the full setup description: 2 x RHEL 6.8 hosts with 16GB RAM, and 4 LSI virtual adapters, each with 15 drives. c0* and c1* have 2Tb drives; c2* and c3* have 8Tb drives. Both the 2Tb and 8Tb drives are 'exported' and the cluster is stable.

Here's what I noticed: creating an FSS DG works on the 2Tb drives but not on the 8Tb drives (it used to on 6.1 and 6.2.1):

[root@vcs18 ~]# /usr/sbin/vxdg -s -o fss=on init FSS00dg ssd_2T_00
[root@vcs18 ~]# vxdg list FSS00dg
Group:     FSS00dg
dgid:      1466522672.427.vcs18
import-id: 33792.426
flags:     shared cds
version:   220
alignment: 8192 (bytes)
local-activation: shared-write
cluster-actv-modes: vcs18=sw vcs19=sw
ssb:            on
autotagging:    on
detach-policy:  local
dg-fail-policy: obsolete
ioship:         on
fss:            on
storage-sources: vcs18
copies:    nconfig=default nlog=default
config:    seqno=0.1027 permlen=51360 free=51357 templen=2 loglen=4096
config disk ssd_2T_00 copy 1 len=51360 state=clean online
log disk ssd_2T_00 copy 1 len=4096

On the 8Tb drives, it fails with:

[root@vcs18 ~]# vxdg destroy FSS00dg
[root@vcs18 ~]# /usr/sbin/vxdg -s -o fss=on init FSS00dg ssd_8T_00
VxVM vxdg ERROR V-5-1-585 Disk group FSS00dg: cannot create: Record not in disk group

One thing I noticed is that the 8Tb drives, even though exported, do -not- show up on the remote machine:

[root@vcs18 ~]# vxdisk list | grep _00
ssd_2T_00    auto:cdsdisk    -    -    online exported
ssd_2T_00_1  auto:cdsdisk    -    -    online remote
ssd_8T_00    auto:cdsdisk    -    -    online exported

Another thing to note is that the 'connectivity' seems a bit messed up on the 8Tb drives:

[root@vcs18 ~]# vxdisk list ssd_2T_00 | grep conn
connectivity: vcs18
[root@vcs18 ~]# vxdisk list ssd_2T_00_1 | grep conn
connectivity: vcs19
[root@vcs18 ~]# vxdisk list ssd_8T_00 | grep conn
connectivity: vcs18 vcs19

That is (IMHO) an error, since those 'virtual' drives are local to each of the nodes and the SCSI buses aren't shared; vcs18 and vcs19 are two fully independent VMware machines. This looks like a bug to me, but since I no longer work for a company with a Veritas software support contract, I cannot report the issue. Thanks for reading, Vincent
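One thing worth trying before anything else (offered as a hedge; these are standard FSS commands, but there is no guarantee this clears the state): re-export the 8Tb drive on its owning node, then rescan on the other node and check whether the remote copy finally appears:

    # On vcs18 (the node that owns the drive): re-export it
    vxdisk unexport ssd_8T_00
    vxdisk export ssd_8T_00

    # On vcs19: rescan and look for the remote copy of the disk
    vxdisk scandisks
    vxdisk -o alldgs list | grep ssd_8T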
Building NetBackup Global Cluster with VVR Option... need main.cf

Hello, I am trying to build a global cluster with the VVR option, where I have one NetBackup cluster with two nodes in site A and one NetBackup cluster with a single node in site B. I know how to replicate the catalog manually, but I would like to add it to the global cluster configuration to automate everything. I would appreciate it if anyone could share the details of which service groups need to be modified, or which new service group needs to be created, to accommodate the replication part. A working main.cf for the above solution would be really helpful. Best Regards
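For reference on the usual shape of such a configuration: the replicated volume group (RVG) typically lives in its own replication service group, and the global application group sits on top of it through an RVGPrimary resource, which handles takeover of the Primary role during a cross-site failover. The sketch below is only a rough outline under those assumptions; every name in it (nbu_rep_sg, nbu_sg, nbu_rvg, nbudg, the node and cluster names) is hypothetical, and it is not a verified NetBackup main.cf:

    group nbu_rep_sg (
        SystemList = { nodeA1 = 0, nodeA2 = 1 }
        )

        RVG nbu_rvg_res (
            RVG = nbu_rvg
            DiskGroup = nbudg
            )

    group nbu_sg (
        SystemList = { nodeA1 = 0, nodeA2 = 1 }
        ClusterList = { clusterA = 0, clusterB = 1 }
        Authority = 1
        )

        RVGPrimary nbu_rvg_primary (
            RvgResourceName = nbu_rvg_res
            )

        requires group nbu_rep_sg online local hard

The existing NetBackup resources (virtual IP, mounts, the NetBackup server itself) would then depend on nbu_rvg_primary rather than mounting the replicated volumes directly.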
Grow a CVM_CFS on Linux 6.5 VCS 6.2

Hello, I'm new to Veritas Cluster, so I'll appreciate your valuable help!

I have this scenario:
1) a 3-node cluster with CVM/CFS sharing the same file systems at the same time
2) a resource group with NFS, sharing the CFS of the cluster to a couple of clients

Requirement: grow one of the CFS file systems (they are VxFS as well) from 1Tb to 2Tb.

I built a lab and it works, but I want to know if I'm missing some important step that I obviously don't know about.

====================================
Grow CFS Lab
====================================

1) Make Linux recognize the new LUNs:

more /proc/scsi/scsi
echo "- - -" > /sys/class/scsi_host/host3/scan
echo "1" > /sys/class/fc_host/host3/issue_lip
more /proc/scsi/scsi

2) Show the new disks:

[root@cen-tlg-bil-01 ~]# vxdisk list
DEVICE        TYPE          DISK              GROUP        STATUS
ams_21000_23  auto:cdsdisk  DGWORK_CRI_DSK01  DG_WORK_CRI  online shared
ams_21000_24  auto:cdsdisk  DGWORK_CRI_DSK02  DG_WORK_CRI  online shared
ams_21000_26  auto:cdsdisk  DGWORK_CRI_DSK03  DG_WORK_CRI  online shared
ams_21000_27  auto:cdsdisk  DGWORK_CRI_DSK06  DG_WORK_CRI  online shared
ams_21000_28  auto:cdsdisk  DGWORK_CRI_DSK07  DG_WORK_CRI  online shared
ams_21000_29  auto:cdsdisk  DGWORK_CRI_DSK08  DG_WORK_CRI  online shared
ams_21000_30  auto:cdsdisk  DGWORK_CRI_DSK09  DG_WORK_CRI  online shared
ams_21000_31  auto:cdsdisk  DGWORK_CRI_DSK10  DG_WORK_CRI  online shared
ams_21000_42  auto:cdsdisk  DGWORK_CRI_DSK04  DG_WORK_CRI  online shared
ams_21000_43  auto:cdsdisk  DGWORK_CRI_DSK05  DG_WORK_CRI  online shared
ams_21000_44  auto:cdsdisk  DGDATA_CRI_DSK01  DG_DATA_CRI  online shared
ams_21000_45  auto:cdsdisk  DGDATA_CRI_DSK02  DG_DATA_CRI  online shared
ams_21000_46  auto:cdsdisk  DGDATA_CRI_DSK03  DG_DATA_CRI  online shared
ams_21000_52  auto:cdsdisk  -                 -            online
ams_21000_53  auto:cdsdisk  -                 -            online
ams_21000_54  auto:cdsdisk  -                 -            online
sda           auto:LVM      -                 -            online invalid

3) Initialize the new disks:

vxdisksetup -i ams_21000_52   <------- new disks
vxdisksetup -i ams_21000_53
vxdisksetup -i ams_21000_54

4) Verify the free space in the DG before adding the disks:

[root@cen-tlg-bil-01 ~]# vxassist -g DG_DATA_CRI maxsize
VxVM vxassist ERROR V-5-1-15809 No free space remaining in diskgroup DG_DATA_CRI with given constraints

5) Add the disks to the DG:

[root@cen-tlg-bil-01 ~]# vxdg -g DG_DATA_CRI adddisk DGDATA_CRI_DSK04=ams_21000_52 DGDATA_CRI_DSK05=ams_21000_53 DGDATA_CRI_DSK06=ams_21000_54

6) Verify the new maxsize of the DG:

[root@cen-tlg-bil-01 ~]# vxassist -g DG_DATA_CRI maxsize
Maximum volume size: 125626368 (61341Mb)

7) Verify the FS size before the grow:

df -h /DATA_CRI
/dev/vx/dsk/DG_DATA_CRI/VOL_DATA_CRI  60G  221M  56G  1%  /DATA_CRI   <---- 60Gb original size

8) Grow the FS:

[root@cen-tlg-bil-01 ~]# vxassist -g DG_DATA_CRI maxsize
Maximum volume size: 125626368 (61341Mb)

Doing the grow with the Mb size fails :(

[root@cen-tlg-bil-01 ~]# vxresize -b -F vxfs -g DG_DATA_CRI VOL_DATA_CRI +61341M
VxVM vxassist ERROR V-5-1-436 Cannot allocate space to grow volume to 313573376 blocks
VxVM vxresize ERROR V-5-1-4703 Problem running vxassist command for volume VOL_DATA_CRI, in diskgroup DG_DATA_CRI

So I did it with the other value instead, which works:

[root@cen-tlg-bil-01 ~]# vxresize -b -F vxfs -g DG_DATA_CRI VOL_DATA_CRI +125626368

9) Verify the cluster and FS status:

df -h
/dev/vx/dsk/DG_DATA_CRI/VOL_DATA_CRI  120G  236M  113G  1%  /DATA_CRI   <----- grown OK to 120Gb

[root@cen-tlg-bil-01 ~]# hastatus -sum

-- SYSTEM STATE
-- System          State    Frozen
A  cen-tlg-bil-01  RUNNING  0
A  cen-tlg-bil-02  RUNNING  0
A  cen-tlg-rtg-01  RUNNING  0

-- GROUP STATE
-- Group           System          Probed  AutoDisabled  State
B  ClusterService  cen-tlg-bil-01  Y       N             ONLINE
B  ClusterService  cen-tlg-bil-02  Y       N             OFFLINE
B  ClusterService  cen-tlg-rtg-01  Y       N             OFFLINE
B  DATA_CRI-sg     cen-tlg-bil-01  Y       N             ONLINE
B  DATA_CRI-sg     cen-tlg-bil-02  Y       N             ONLINE
B  DATA_CRI-sg     cen-tlg-rtg-01  Y       N             ONLINE
B  WORK_CRI-sg     cen-tlg-bil-01  Y       N             ONLINE   <------ resource group is OK
B  WORK_CRI-sg     cen-tlg-bil-02  Y       N             ONLINE
B  WORK_CRI-sg     cen-tlg-rtg-01  Y       N             ONLINE
B  cvm             cen-tlg-bil-01  Y       N             ONLINE
B  cvm             cen-tlg-bil-02  Y       N             ONLINE
B  cvm             cen-tlg-rtg-01  Y       N             ONLINE
[root@cen-tlg-bil-01 ~]#

Could you please help me check whether my steps are fine, and answer my question of why the grow fails when I give the value in Mb? Thank you in advance.
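One detail that may be relevant to the Mb-size failure (offered as a pointer, not a confirmed diagnosis): vxassist maxsize reports the largest new volume the free space could hold, while vxassist maxgrow reports how much an existing volume can be extended given its own layout; for a resize, maxgrow is the right question to ask. A sketch of the grow driven by that output, where <sectors-from-maxgrow> is a placeholder:

    # Ask how much this specific volume can be extended (reported in sectors)
    vxassist -g DG_DATA_CRI maxgrow VOL_DATA_CRI

    # Grow the volume and file system together; a bare number is taken as
    # 512-byte sectors, avoiding any Mb-suffix conversion surprises
    vxresize -b -F vxfs -g DG_DATA_CRI VOL_DATA_CRI +<sectors-from-maxgrow>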
Configuring VCS with main.cf and types.cf files

From release 7.0 onwards, VCS is a component that is bundled with the InfoScale Availability and InfoScale Enterprise products. When you configure VCS, the Veritas High Availability Engine needs to know the definitions of the cluster, service groups, resources, and the dependencies among service groups and resources. VCS uses the main.cf and types.cf configuration files to convey the cluster, service group, and resource definitions.

The main.cf file comprises include clauses and definitions for the cluster, systems, service groups, and resources. The SystemList attribute designates the priority order and the list of systems where a service group can come online. The types.cf file defines the standard resource types for the VCS engine and the data type that can be set for an attribute; it also defines the parameters that are passed to the VCS engine. These configuration files can be generated in a variety of ways. For more information, see VCS configuration language.

By default, both of these files reside in the /etc/VRTSvcs/conf/config directory. Only the first system to come online in the cluster reads the configuration files and keeps them in memory. Systems that are brought online after the first system derive their configuration information from the existing systems in the cluster. You can also define environment variables to further configure VCS. For more information, see VCS environment variables. You can find other versions of Cluster Server on the SORT documentation page.
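To make the structure described above concrete, here is a minimal sketch of a main.cf; all the names in it (demo_clus, node1, node2, app_sg, app_ip, app_nic, and the eth0 device) are illustrative placeholders rather than defaults:

    include "types.cf"

    cluster demo_clus (
        )

    system node1 (
        )

    system node2 (
        )

    group app_sg (
        SystemList = { node1 = 0, node2 = 1 }
        AutoStartList = { node1 }
        )

        IP app_ip (
            Device = eth0
            Address = "192.168.1.10"
            NetMask = "255.255.255.0"
            )

        NIC app_nic (
            Device = eth0
            )

        app_ip requires app_nic

Here SystemList makes node1 the preferred system (a lower number means higher priority), and the "app_ip requires app_nic" line expresses a resource dependency within the group.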