Forum Discussion

watsonbp
Level 0
12 months ago

Still receiving email alerts for fixed issues

Hi all

Every day I get an Alert Summary email (shown below) listing my Unresolved Alerts. However, this issue has long since been fixed: the log filesystem on the NetBackup virtual appliance is only 27% full (see the df output below), yet I keep receiving this email anyway. How can I mark the alert as resolved so the emails stop? The virtual appliance version is 5.0.0.1. Thanks in advance.

 

  • The partition usage has exceeded warning threshold and will soon reach full capacity. Cleanup the partition and re-check status. If the issue is not resolved, contact Veritas Technical Support for assistance.
    • Time of event: 2023-10-10 14:49:26 (+13:00)
    • UMI Event code: V-475-103-1001
    • Component Type: Partition
    • Component: Log
    • Status: 88%
    • State: WARNING
    • Additional information about this error is available at following link:

 

Filesystem                 Size  Used  Avail Use% Mounted on
devtmpfs                    63G     0    63G   0% /dev
tmpfs                       63G   76K    63G   1% /dev/shm
tmpfs                       63G  4.1G    59G   7% /run
tmpfs                       63G     0    63G   0% /sys/fs/cgroup
/dev/mapper/system-root     49G   17G    30G  36% /
tmpfs                       63G  764K    63G   1% /tmp
/dev/sda1                  477M   56M   392M  13% /boot
/dev/mapper/system-home    4.8G  122M   4.5G   3% /home
/dev/mapper/system-var     9.8G  6.2G   3.1G  67% /var
/dev/mapper/system-rep     4.9G   33M   4.6G   1% /repository
/dev/mapper/system-log      98G   25G    68G  27% /log
/dev/mapper/system-inst     49G  2.8G    44G   6% /inst
/dev/mapper/system-audit   4.8G  2.9G   1.7G  64% /var/log/audit
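
If it helps anyone sanity-check the numbers outside of the email, the alert is essentially just comparing a partition's Use% against a warning threshold. A minimal sketch of that check is below - the 80% threshold is an assumption on my part (the alert doesn't state the appliance's actual warning level), and the percentage is computed a little differently from df, so it can differ by a point or two because of reserved blocks.

# Sketch only: WARNING_THRESHOLD is assumed, not the appliance's documented value.
import shutil

WARNING_THRESHOLD = 80.0

def usage_percent(mount_point: str) -> float:
    # Used space as a percentage of total capacity. df's Use% uses slightly
    # different math because of reserved blocks, so expect small differences.
    usage = shutil.disk_usage(mount_point)
    return usage.used / usage.total * 100

for mount in ("/log", "/var", "/var/log/audit"):
    pct = usage_percent(mount)
    state = "WARNING" if pct >= WARNING_THRESHOLD else "OK"
    print(f"{mount}: {pct:.0f}% used -> {state}")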

 

  • Support couldn't figure this out? I'm finding that hard to believe - stuck alerts are hardly an unknown issue with mongod.

    For example, here's a technote from 2+ years ago about cleaning this kind of thing out. This particular fix may or may not clear your issue, but it's worth a shot at the least. 

    NetBackup 3.x/4.x/5.x NetBackup (non-Flex) appliance autosupport database may contain corrupt, incomplete or erroneous information that cannot be removed. An alternative solution to reimaging an appliance.
    https://www.veritas.com/content/support/en_US/article.100045217

    If that doesn't work, reopen your case and tell support you need help purging the MongoDB instance and rebuilding it because of a stuck alert. Once you do that, it'll take an hour or so to finish rescanning everything, but if the issue is truly resolved, that should take care of your problem. This ought to take you maybe 5 minutes, counting the time to elevate. =)
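
    In case it helps to picture what "purging and rebuilding" means: the stale alert is just a leftover document sitting in the appliance's local mongod data, and the fix is to clear it out and let the monitor rescan. Purely as an illustration of the idea - the connection details, database, collection and field names below are guesses on my part, since the real autosupport schema is Veritas-internal, which is exactly why support should drive the actual purge:

    # Illustration only: every name below (database, collection, fields) is a
    # guess at what a stale alert document might look like. Do not run this
    # against a production appliance without Veritas support's guidance.
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")  # assumes a local mongod
    alerts = client["autosupport"]["alerts"]           # hypothetical names

    # Show anything still sitting in a WARNING state for the Log partition.
    for doc in alerts.find({"component": "Log", "state": "WARNING"}):
        print(doc["_id"], doc.get("status"), doc.get("time_of_event"))

    # The cleanup step would then be something like:
    # alerts.delete_many({"component": "Log", "state": "WARNING"})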

     

    • VGilmore
      Level 2

      You're a legend, thanks for replying. We had come across the KB you linked previously, but unfortunately the cleanmoninvdata command didn't resolve it. Funnily enough, we had an issue where a different partition filled up and we received a fresh alert for that. Once we rectified it and received the resolved email, we noticed we were no longer receiving the stale alert emails either. Very odd, but we're happy that it's worked itself out. If the issue ever does return, I'll be sure to put your suggestions to Veritas support. Thanks again!

  • I realise this is an old thread, but I'm experiencing the same issue and this is the first post I've seen about it. The virtual appliance version is 5.1.1, and I have this alert come through daily:

    • The partition usage has exceeded warning threshold and will soon reach full capacity. Cleanup the partition and re-check status. If the issue is not resolved, contact Veritas Technical Support for assistance.
      • Time of event: 2024-07-18 12:20:22 (+10:00)
      • UMI Event code: V-475-103-1001
      • Component Type: Partition
      • Component: Log
      • Status: 80%
      • State: WARNING

        This has since been resolved, and as you can see from the timestamp, it's been almost a month. There does not appear to be any way to acknowledge or suppress this stale alert. I raised a case with Veritas support, who were unable to resolve the issue; they suggested a reboot of the appliance, which we've done several times. If anyone knows how to fix this, I'd be eternally grateful.
  • I came here with the exact same issue and was happy to find the answer in jnardello's response.