Forum Discussion

Home_224
Level 6
4 years ago

VCS status shows "starting"

Dear All,

The cluster is active/active. I tried to start VCS with hastart, but the status shows "starting". Checking the log, I found the messages below:

2020/12/24 15:19:18 VCS NOTICE V-16-1-10232 Clearing Restart attribute for group vrts_vea_cfs_int_cfsmount1 on node oemapp41
2020/12/24 15:19:18 VCS NOTICE V-16-1-10460 Clearing start attribute for resource cfsmount1 of group vrts_vea_cfs_int_cfsmount1 on node oemapp42
2020/12/24 15:19:18 VCS NOTICE V-16-1-10460 Clearing start attribute for resource cvmvoldg1 of group vrts_vea_cfs_int_cfsmount1 on node oemapp42
2020/12/24 15:19:18 VCS NOTICE V-16-1-10232 Clearing Restart attribute for group vrts_vea_cfs_int_cfsmount1 on node oemapp42
2020/12/24 15:19:18 VCS WARNING V-16-1-10294 Faulted resource cvmvoldg1 is part of the online dependency;clear the faulted resource
2020/12/24 15:19:30 VCS INFO V-16-1-50135 User root fired command: hagrp -online vrts_vea_cfs_int_cfsmount1 devuaeapp32 from localhost
2020/12/24 15:19:30 VCS NOTICE V-16-1-10166 Initiating manual online of group vrts_vea_cfs_int_cfsmount1 on system oemapp42
2020/12/24 15:19:30 VCS NOTICE V-16-1-10232 Clearing Restart attribute for group vrts_vea_cfs_int_cfsmount1 on node oemapp41
2020/12/24 15:19:30 VCS NOTICE V-16-1-10232 Clearing Restart attribute for group vrts_vea_cfs_int_cfsmount1 on node oemapp42
2020/12/24 15:19:30 VCS WARNING V-16-1-10294 Faulted resource cvmvoldg1 is part of the online dependency;clear the faulted resource

I tried stopping and starting, and even rebooted the server, but the status stays the same. I then stopped VCS with hastop and tried to mount the file system manually, without using VCS to control it, but it returns the error "unable to get disk layout version". I can see the disk group and the disks on the node. I really don't know what has happened. Is there a solution to fix this problem? Thank you.
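For reference, this is roughly the sequence that was attempted (the group and resource names are taken from the log above; the device path, mount point, and hastop flags are placeholders, assuming a Linux host):

    # start VCS and check system, group, and resource state
    hastart
    hastatus -sum
    hares -state cvmvoldg1

    # stop VCS before attempting anything outside cluster control
    hastop -all -force

    # manual mount attempt - this is where "unable to get disk layout version" appears
    mount -t vxfs /dev/vx/dsk/<dg_name>/<vol_name> /<mount_point>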

  • Here is what you can do to determine the root cause of the issue:

    1. Stop VCS and manually import the DG as a local DG by running

    vxdg -Cf import <dg_name>  (run the command on one node only)

    2. Start up the volumes by running

    vxvol -g <dg_name> startall 

    3. Manually mount the file system in question with the mount command

    mount -v /dev/vx/dsk/dg/vol /mnt_point     (https://sort.veritas.com/public/documents/vie/7.1/linux/productguides/html/sfcfs_admin/ch09s07.htm)

    4. If the mount command fails, run the command below

    fstyp -v /dev/vx/dsk/dg/vol | head -50   (post the output here)

    5. If the output of step 4 looks OK but the file system still cannot be mounted, create a new temporary mount point and mount the file system on it.

     

    PS - Make sure that the file system disk layout version of the volumes is supported by the Veritas version running on the cluster (that is, the storage was not newly allocated to this cluster from another system that runs a newer version of Veritas).

    For the disk layout version / Storage Foundation version support matrix, please search the Veritas online KB; a quick way to check the versions involved is sketched below.
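    A rough check of both sides, assuming a Linux node as in the guide linked above (package names and paths may differ on other platforms):

    # disk layout version of the file system on the volume (look for the "version" field)
    fstyp -v /dev/vx/dsk/<dg_name>/<vol_name> | grep -i version

    # version of the VxFS package installed on this cluster node
    rpm -q VRTSvxfs

    # for a file system that is already mounted, vxupgrade with no options
    # reports its current disk layout version
    vxupgrade /<mount_point>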

     

  • In general, for anything VCS, run all the commands manually first. If they don't work at the command line, don't bother adding them to the VCS configuration.

    In this case, import the disks at the command line.

    It looks like you are trying to mount a CFS disk group. Using the commands from the Cluster File System Admin Guide, manually start each of the CFS daemons on each node, then import the disk group on one server. That will help isolate any problems. (A sketch of that sequence is below.)
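
    A minimal sketch of that sequence, using the SFCFS administration commands (exact commands vary by release, so check the admin guide for your version; <dg_name> and <mount_point> are placeholders):

    # confirm CVM is running in cluster mode and identify the master node
    vxdctl -c mode

    # check the cluster file system framework status on the nodes
    cfscluster status

    # import the shared disk group - run this on the CVM master only
    vxdg -s import <dg_name>

    # then bring the cluster mount point online through the cluster framework
    cfsmount /<mount_point>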

    Cheers