Forum Discussion

enzo68
Level 4
11 years ago

Fencing and Reservation Conflict

Hi to all

 

I have Red Hat Enterprise Linux 5.9 64-bit with SFHA 5.1 SP1 RP4 with fencing enabled (our storage device is an IBM Storwize V3700 SFF, which is SCSI-3 compliant).

[root@mitoora1 ~]# vxfenadm -d

I/O Fencing Cluster Information:
================================

 Fencing Protocol Version: 201
 Fencing Mode: SCSI3
 Fencing SCSI3 Disk Policy: dmp
 Cluster Members: 

        * 0 (mitoora1)
          1 (mitoora2)

 RFSM State Information:
        node   0 in state  8 (running)
        node   1 in state  8 (running)
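
To double-check that both nodes hold registrations on the coordinator disks, the keys can be dumped with vxfenadm (assuming the standard 5.1 syntax; each coordinator disk should show one key per cluster node):

vxfenadm -s all -f /etc/vxfentab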

 

********************************************

In /etc/vxfenmode: scsi3_disk_policy=dmp and vxfen_mode=scsi3

vxdctl scsi3pr
scsi3pr: on

 [root@mitoora1 etc]# more /etc/vxfentab
#
# /etc/vxfentab:
# DO NOT MODIFY this file as it is generated by the
# VXFEN rc script from the file /etc/vxfendg.
#
/dev/vx/rdmp/storwizev70000_000007
/dev/vx/rdmp/storwizev70000_000008
/dev/vx/rdmp/storwizev70000_000009
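
If in doubt, the coordinator LUNs can be re-verified for SCSI-3 PR support with the vxfentsthdw utility (a sketch, assuming the coordinator disk group is named vxfencoorddg; the test touches reservations, so it is best run in a maintenance window):

/opt/VRTSvcs/vxfen/bin/vxfentsthdw -c vxfencoorddg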

******************************************

 [root@mitoora1 etc]# vxdmpadm listctlr all
CTLR-NAME       ENCLR-TYPE      STATE      ENCLR-NAME
=====================================================
c0              Disk            ENABLED      disk
c10             StorwizeV7000   ENABLED      storwizev70000
c7              StorwizeV7000   ENABLED      storwizev70000
c8              StorwizeV7000   ENABLED      storwizev70000
c9              StorwizeV7000   ENABLED      storwizev70000
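
The individual paths behind each controller can be inspected as well, e.g. for c7 (same idea for the other controllers):

vxdmpadm getsubpaths ctlr=c7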

main.cf

 

cluster drdbonesales (
        UserNames = { admin = hlmElgLimHmmKumGlj }
        ClusterAddress = "10.90.15.30"
        Administrators = { admin }
        UseFence = SCSI3
        )
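
As a quick cross-check, the fencing attribute can also be read from the running cluster; it should return SCSI3, matching the main.cf above:

haclus -value UseFence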

**********************************************

I configured coordinator fencing, so I have 3 LUNs in a Veritas disk group (DMP coordinator).
All seems to work fine, but I noticed a lot of reservation conflict messages on both nodes.

I constantly see these messages in the server log (/var/log/messages):

Nov 26 15:14:09 mitoora2 kernel: sd 7:0:1:1: reservation conflict
Nov 26 15:14:09 mitoora2 kernel: sd 8:0:0:1: reservation conflict
Nov 26 15:14:09 mitoora2 kernel: sd 8:0:1:1: reservation conflict
Nov 26 15:14:09 mitoora2 kernel: sd 10:0:0:1: reservation conflict
Nov 26 15:14:09 mitoora2 kernel: sd 10:0:1:1: reservation conflict
Nov 26 15:14:09 mitoora2 kernel: sd 9:0:1:1: reservation conflict
Nov 26 15:14:09 mitoora2 kernel: sd 9:0:0:1: reservation conflict
Nov 26 15:14:09 mitoora2 kernel: sd 7:0:1:3: reservation conflict
Nov 26 15:14:09 mitoora2 kernel: sd 8:0:0:3: reservation conflict
Nov 26 15:14:09 mitoora2 kernel: sd 8:0:1:3: reservation conflict
Nov 26 15:14:09 mitoora2 kernel: sd 10:0:1:3: reservation conflict
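
To see which LUNs the sd H:C:T:L numbers in those messages refer to, something like this works (assuming the lsscsi package is installed; /proc/scsi/scsi carries the same information):

lsscsi | grep "7:0:1:1"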

 

 

 

Do you have any idea?

 

Best Regards

Vincenzo

  • All outputs look OK, and the ASL is claiming the devices too .. nothing wrong here ..

    Do you know which failover mode is set on the array? DMP is recommended to run in ALUA mode with these arrays (worth looking into this as well)

     

    G

     

  • Hi Gaurav,

    The vendor IBM has confirmed to me that the Storwize V3700 is an ALUA-type array.

    Could the problem be the ASL library?

     

    rpm -qa|grep VRTSaslapm
    VRTSaslapm-5.1.134.000-SP1_RHEL5

     

    vxddladm listsupport all | grep -i alua    (I don't see an IBM ALUA library)
    libvxhdsalua.so     HITACHI             DF600, DF600-V, DF600F, DF600F-V
    libvxhpalua.so      HP, COMPAQ          HSV101, HSV111 (C)COMPAQ, HSV111, HSV200, HSV210, HSV300, HSV400, HSV450, HSV340, HSV360

     

    vxdmpadm list dmpnode all |grep array-type
    array-type      = Disk
    array-type      = A/A-A-IBMSVC
    array-type      = A/A-A-IBMSVC
    array-type      = A/A-A-IBMSVC
    array-type      = A/A-A-IBMSVC
    array-type      = A/A-A-IBMSVC
    array-type      = A/A-A-IBMSVC
    array-type      = A/A-A-IBMSVC
    array-type      = A/A-A-IBMSVC
    array-type      = A/A-A-IBMSVC
    array-type      = A/A-A-IBMSVC

     vxdmpadm listenclosure all
    ENCLR_NAME        ENCLR_TYPE     ENCLR_SNO      STATUS       ARRAY_TYPE     LUN_COUNT
    =======================================================================================
    disk              Disk           DISKS                CONNECTED    Disk        1
    storwizev70000    StorwizeV7000  00c020207110XX00     CONNECTED    A/A-A-IBMSVC  10
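
    The details of the ASL that is claiming the array can also be dumped directly (assuming the IBM SVC library is libvxibmsvc.so, as named in the next reply):

    vxddladm listsupport libname=libvxibmsvc.so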

     

    Best Regards

    Vincenzo

  • Hi ,

    Yep, it's worth asking support about this .. as per Symantec in the article below

    http://www.symantec.com/business/support/index?page=content&id=TECH47728

    page 35 says Storwize arrays are best supported by DMP in ALUA mode ..

     

    and as per the article below

    http://www.symantec.com/business/support/index?page=content&id=TECH77062 .. there is no sign of ALUA support being added in the change log

    and unfortunately that is the last updated ASL/APM software package for Linux .. support or the backend teams can answer whether there is an upcoming plan to update libvxibmsvc.so for ALUA support ..
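
    To see which APMs are actually loaded on the nodes, and which array types they handle, something like this can be checked (vxdmpadm listapm all lists every APM and its state):

    vxdmpadm listapm all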

    also, support can answer whether there are any (recently found) known issues ..

     

    All the best

     

    G

  • Hi,

    this is the answer from Symantec support:

     

    "......As per the discussion with you because of these messages there will be no impact on the functionality of the product.


    You may also refer :

    http://www.symantec.com/docs/TECH170352

    However will try to give the feedback internally so it get addressed in the newer releases."

     

    thank you for the support
     
    have a nice weekend
     

    Vincenzo

  • Hi

    As I mentioned in my first post on this thread, I was of the same opinion that these messages are ignorable (if there are no operational issues) .. I was expecting that support would say the same .. however, it's good to have confirmation that it's an identified bug and will be fixed...

     

    thanks for the update ..

     

    G