Where to install VIOM: on Windows or on Linux?
Hi everyone: I want to install VIOM and I have never done it before. My first question is: where should I install it, in a Windows environment or in a Linux environment? All my clusters are in a Linux environment: Infoscale 8.0 on Red Hat 8.9. Another question: is it possible to install the Coordination Point server on the same server? All my clusters have 2 nodes: node one in one CPD (data center, site 1) and node two in another CPD (site 2). I think the new server (VIOM+CP) should be in another CPD (site 3). Correct? Thanks, Jaime.

DRL not working on mirrored volumes in VVR - RVG (Mirrored volume doing full resync of plexes)
I think I am hitting a major issue here with a mirrored volume in an RVG. The SRL is supposed to provide the DRL functionality, which is why DRL logging is explicitly disabled when a volume is added to an RVG. However, my testing shows that DRL is not working: when a mirror plex is out of sync due to a server crash etc., a full resync of the mirror plexes happens, not just a resync of the dirty regions.

Here is a quick and easy way to recreate the issue.

My configuration: Infoscale 8, Redhat 8.7

I have a mirrored volume sourcevol2 (2 plexes) which I created like below:
#vxassist -g dg1 make sourcevol2 1g logtype=dco drl=on dcoversion=20 ndcomirror=1 regionsz=256 init=active
#vxassist -b -g dg1 mirror sourcevol2

I wait for the synchronization to complete, then create and mount a file system:
#/opt/VRTS/bin/mkfs -t vxfs -o nomaxlink /dev/vx/rdsk/dg1/sourcevol2
#mount /dev/vx/dsk/dg1/sourcevol2 /sourcevol2

I create the SRL as below:
#vxassist -g dg1 make dg1_srl 1g layout=concat init=active

I create the primary RVG as below:
#vradmin -g dg1 createpri dg1_rvg sourcevol2 dg1_srl

Verified the dcm_in_dco flag is on:
#vxprint -g dg1 -VPl dg1_rvg | grep flag
flags: closed primary enabled attached bulktransfer dcm_in_dco

Added the secondary:
#vradmin -g dg1 addsec dg1_rvg primarynode1 primarynode2

Started initial replication:
#vradmin -g dg1 -a startrep dg1_rvg primarynode2

Verified replication is up to date:
#vxrlink -g dg1 -T status rlk_primarynode2_dg1_rvg
VxVM VVR vxrlink INFO V-5-1-4467 Rlink rlk_primarynode2_dg1_rvg is up to date

Here is the actual scenario to simulate mirror plexes going out of sync. On the primary, run a dd command to put some I/O on sourcevol2:
#dd if=/dev/zero of=/sourcevol2/8krandomreads.0.0 bs=512 count=1000 oflag=direct

In another terminal, force-stop sourcevol2 while the dd is going on:
#vxvol -g dg1 -f stop sourcevol2
#umount /sourcevol2

Start sourcevol2 again:
#vxvol -g dg1 start sourcevol2
#vxtask -g dg1 list -l
Task: 160 RUNNING
Type: RDWRBACK
Operation: VOLSTART Vol sourcevol2 Dg dg1

Even though I changed only a few regions on sourcevol2 (sequential writes of 512 bytes), the volume goes through a full plex resync (as indicated by the time taken to start the volume).

Summary: DRL on a volume added to an RVG is not working, so mirrored volumes go through a full plex resync as opposed to a resync of only the dirty regions.
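For anyone reproducing this, a rough way to quantify the difference (reusing the dg1/sourcevol2 names from above; the test file name is just an example) is to time the volume start and watch the recovery task. A DRL/DCO-driven recovery of a few dirty regions should complete almost instantly, while a full read-writeback (the RDWRBACK task shown above) scales with the volume size:

Dirty a few regions, then crash-stop the volume while I/O is in flight:
#dd if=/dev/zero of=/sourcevol2/drltest.dat bs=512 count=1000 oflag=direct
#vxvol -g dg1 -f stop sourcevol2
#umount /sourcevol2

Time the restart and, in another terminal, check the type and progress of the recovery task:
#time vxvol -g dg1 start sourcevol2
#vxtask -g dg1 list -l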
Does Infoscale Storage (VVR) support cascaded space-optimized snapshots?

Configuration: Infoscale Storage 8.0 on Linux

Infoscale Storage Foundation supports cascaded snapshots using the vxsnap infrontof= attribute. The Infoscale Storage (with Volume Replicator) documentation does not describe cascaded snapshots, and I checked the man page for vxrvg; it does not have an infrontof attribute. Does that mean cascaded space-optimized snapshots are not supported/permitted on an RVG?
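For comparison, the non-VVR cascaded space-optimized snapshot syntax referred to above looks roughly like the sketch below (dg1, datavol, snap1, snap2 and the cache object names are illustrative; check the vxsnap(1M) man page for your release). Whether the same infrontof= attribute is honoured for a volume under an RVG is exactly the open question here.

Create a cache volume and cache object, then a space-optimized snapshot, then a second snapshot placed in front of the first:
#vxassist -g dg1 make cachevol 1g init=active
#vxmake -g dg1 cache snapcache cachevolname=cachevol
#vxcache -g dg1 start snapcache
#vxsnap -g dg1 make source=datavol/newvol=snap1/cache=snapcache
#vxsnap -g dg1 make source=datavol/newvol=snap2/cache=snapcache/infrontof=snap1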
Fencing driver fails after storage firmware upgrade

On an EMC Unity 380, the fencing disks behave abnormally after a firmware upgrade, so the B port does not come up. Do you have any idea why this happens?
https://sort.veritas.com/public/documents/sf/5.0/solaris64/html/sf_rac_install/sfrac_error_messages5.html
Does this problem occur when the serial number of the disk has changed?
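A first check worth doing is whether the coordinator disks changed identity (serial number/UDID) with the firmware upgrade, since fencing identifies them by their SCSI inquiry data. The device name below is a placeholder; substitute your coordinator disk paths and verify the vxfenadm options against your release:

Show the current fencing mode and state on each node:
#vxfenadm -d
Read the SCSI inquiry data (vendor, product, serial number) of a coordinator disk and compare it with the pre-upgrade value:
#vxfenadm -i /dev/vx/rdmp/<coordinator_disk>
Read the registration keys from all coordinator disks listed in /etc/vxfentab:
#vxfenadm -s all -f /etc/vxfentab

If the serial numbers reported after the upgrade differ from the ones recorded before it, fencing could be treating the disks as new devices, which would match the symptom described.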
VCS Simulator 7.0

Hello all. The VCS Simulator has been around for quite a while, and it is a really useful tool both for testing cluster configuration files before putting them into production and for demo purposes, which is my case and why I'm writing this post. Is there any user guide or document explaining how to use, or how to configure, the included cluster configuration examples? I've been able to understand some of the cluster examples and use some features, but other cluster examples seem to require additional configuration to make them work, and as there is no information (to my knowledge) on how they are configured, I haven't been able to find a way to make these cluster examples work. I'd really appreciate any guidance that points me in the right direction.
NVMe drives disappear after upgrade to the RHEL7.7 kernel.
Hi, I'm using Infoscale 7.4.1.1300 on RHEL 7.x.

Tonight, as I was running RHEL 7.7 with the latest RHEL 7.6 kernel, I decided to upgrade to the RHEL 7.7 kernel (the only part of 7.7 which was missing). This had the nasty side effect of making the NVMe drives disappear.

1) before upgrade:

# modinfo vxio
filename: /lib/modules/3.10.0-957.27.2.el7.x86_64/veritas/vxvm/vxio.ko
license: VERITAS
retpoline: Y
supported: external
version: 7.4.1.1300
license: Proprietary. Send bug reports to enterprise_technical_support@veritas.com
retpoline: Y
rhelversion: 7.6
depends: veki
vermagic: 3.10.0-957.el7.x86_64 SMP mod_unload modversions

# vxdmpadm listctlr
CTLR_NAME ENCLR_TYPE STATE ENCLR_NAME PATH_COUNT
=========================================================================
c515 Samsung_NVMe ENABLED daltigoth_samsung_nvme1 1
c0 Disk ENABLED disk 3

# vxdisk list
DEVICE TYPE DISK GROUP STATUS
nvme0n1 auto:cdsdisk - (nvm01dg) online ssdtrim
sda auto:LVM - - LVM
sdb auto:cdsdisk loc01d00 local01dg online
sdc auto:cdsdisk - (ssd01dg) online

2) after upgrade:

# modinfo vxio
filename: /lib/modules/3.10.0-1062.1.1.el7.x86_64/veritas/vxvm/vxio.ko
license: VERITAS
retpoline: Y
supported: external
version: 7.4.1.1300
license: Proprietary. Send bug reports to enterprise_technical_support@veritas.com
retpoline: Y
rhelversion: 7.7
depends: veki
vermagic: 3.10.0-1062.el7.x86_64 SMP mod_unload modversions

# vxdmpadm listctlr
CTLR_NAME ENCLR_TYPE STATE ENCLR_NAME PATH_COUNT
=========================================================================
c0 Disk ENABLED disk 3

# vxdisk list
DEVICE TYPE DISK GROUP STATUS
sda auto:LVM - - LVM
sdb auto:cdsdisk loc01d00 local01dg online
sdc auto:cdsdisk - (ssd01dg) online

I've reverted to the latest z-stream RHEL 7.6 kernel (3.10.0-957.27.2.el7) while I research this issue. Has this been reported already?
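One thing worth checking after booting the 7.7 kernel (standard VxVM commands, though whether they bring the device back depends on module support for that kernel) is whether the OS still sees the NVMe namespace and whether a rescan restores the DMP view:

#ls -l /dev/nvme0n1
#vxdisk scandisks
#vxdctl enable
#vxdmpadm listctlr
#vxdisk list

If the device is still missing after the rescan, it may be that NVMe support at this InfoScale level needs an updated patch or VRTSaslapm package for the 1062-series kernel, but that is only a guess; SORT would be the place to confirm.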
Infoscale command information

Hi, I need to know whether there is a way to track which disk (by UDID) corresponds to which drive/mount point. I have an existing environment with an Oracle server in a storage cluster using Infoscale Storage Foundation, but I don't have any information about the configuration or disk assignment. Can anyone help?
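A rough way to walk the chain from UDID to mount point with standard commands (the disk group and disk names below are placeholders):

List all disks with their OS device names and disk groups:
#vxdisk -e list
Show the details of one disk, including its udid field:
#vxdisk list <diskname> | grep -i udid
Show which disks, plexes and volumes make up a disk group:
#vxprint -g <dgname> -ht
Map each VxVM volume device to its mount point:
#mount | grep vxfs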
vxconfigd core dumps at vxdisk scandisks after zpool removed from ldom

Hi, I'm testing InfoScale 7.0 on Solaris with LDoms. Creating a ZPOOL in the LDom works, but it seems something is not working properly. On the LDom console I see:

May 23 16:19:45 g0102 vxdmp: [ID 557473 kern.warning] WARNING: VxVM vxdmp V-5-3-2065 dmp_devno_to_devidstr ldi_get_devid failed for devno 0x11500000000
May 23 16:19:45 g0102 vxdmp: [ID 423856 kern.warning] WARNING: VxVM vxdmp V-5-0-2046 : Failed to get devid for device 0x20928e88

After I destroy the ZPOOL, I would like to remove the disk from the LDom. To be able to do that, I disable the path and remove the disk:

/usr/sbin/vxdmpadm -f disable path=c1d1s2
/usr/sbin/vxdisk rm c1d1s2

After this I'm able to remove the disk from the LDom using ldm remove-vdisk. However, the DMP configuration is not cleaned up:

# /usr/sbin/vxdmpadm getsubpaths ctlr=c1
NAME STATE[A] PATH-TYPE[M] DMPNODENAME ENCLR-TYPE ENCLR-NAME ATTRS
================================================================================
NONAME DISABLED(M) - NONAME OTHER_DISKS other_disks STANDBY
c1d0s2 ENABLED(A) - c1d0s2 OTHER_DISKS other_disks -
#

If I run vxdisk scandisks at this stage, the vxdisk command hangs and vxconfigd core dumps:

# file core
core: ELF 32-bit MSB core file SPARC Version 1, from 'vxconfigd'
# pstack core
core 'core' of 378: vxconfigd -x syslog -m boot
------------ lwp# 1 / thread# 1 ---------------
001dc018 ddl_get_disk_given_path (0, 0, 0, 0, 66e140, 0)
001d4230 ddl_reconfigure_all (49c00, 0, 400790, 3b68e8, 404424, 404420) + 690
001b0bfc ddl_find_devices_in_system (492e4, 3b68e8, 42fbec, 4007b4, 4db34, 0) + 67c
0013ac90 find_devices_in_system (2, 3db000, 3c00, 50000, 0, 3d9400) + 38
000ae630 ddl_scan_devices (3fc688, 654210, 0, 0, 0, 3fc400) + 128
000ae4f4 req_scan_disks (660d68, 44fde8, 0, 654210, ffffffec, 3fc400) + 18
00167958 request_loop (1, 44fde8, 3eb2e8, 1800, 19bc, 1940) + bfc
0012e1e8 main (3d8000, ffbffcd4, ffffffff, 42b610, 0, 33bb7c) + f2c
00059028 _start (0, 0, 0, 0, 0, 0) + 108

Thanks,
Marcel
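Not a fix, but a diagnostic sketch that may help narrow this down (exact behaviour will depend on the release and patch level): confirm whether the stale NONAME path is still present after the ldm remove-vdisk, and try restarting vxconfigd so it rebuilds its device tree before rescanning:

#/usr/sbin/vxdmpadm getsubpaths ctlr=c1
#vxconfigd -k -x syslog
#/usr/sbin/vxdisk scandisks

If vxconfigd still dumps core at the same ddl_get_disk_given_path frame, that stack trace would be worth attaching to a support case.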