Get-BEBackupDefinition not showing all results
Giving BEMCLI a test with a view to automating the switching of duplicate jobs to new S3 buckets every 3 months. Get-BEBackupDefinition in conjunction with Set-BEDuplicateStageBackupTask seems to be exactly what is required, based on the examples in the help guide:

    Get-BEBackupDefinition "Backup Definition 01" | Set-BEDuplicateStageBackupTask -Name "Duplicate 2" -Storage "Any disk storage" | Save-BEBackupDefinition

However, running Get-BEBackupDefinition only returns a few results (from what I can see, only agent-based jobs). None of the VM-based jobs show up. Running Get-BEJob shows everything as expected. Any pointers on how to use BEMCLI/PowerShell to automate changing jobs to use the new S3 bucket?
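
For the bulk change itself, a minimal sketch built only from the cmdlets above, assuming the new cloud device already exists in Backup Exec under a hypothetical name ("S3 Bucket 2") and that every definition carries a duplicate stage named "Duplicate 2":

    Import-Module BEMCLI

    # Hypothetical device name for the new S3 bucket; substitute the real one.
    $newStorage = "S3 Bucket 2"

    # Repoint the "Duplicate 2" stage of every visible backup definition.
    Get-BEBackupDefinition | ForEach-Object {
        $_ | Set-BEDuplicateStageBackupTask -Name "Duplicate 2" -Storage $newStorage |
            Save-BEBackupDefinition
    }

This still depends on Get-BEBackupDefinition returning the VM-based definitions, which is the open question here.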

Back up to Local Disk Storage and then Duplicate to Cloud Deduplication Storage

We would like to have a local backup of our servers to a normal disk storage device in Backup Exec. This allows for fast restore times. But we would also like the ransomware protection that cloud deduplication with immutable storage provides. So we created a job that backs up to the local disk storage device and then runs a Duplicate job with the cloud deduplication storage device as the destination. There are no errors from the job configured this way, and we verified that the retention lock is being enabled properly in the immutable cloud storage. The problem is that the Duplicate job log shows this:

    Deduplication stats: scanned: 0 KB, CR sent: 0 KB, CR sent over FC: 0 KB, dedup: 0.0%, cache hits: 0, rebased: 0, where dedup space saving:0.0%, compression space saving:0.0%

It seems that with this method we are getting immutable backups but no deduplicated data. Is the log incorrect, or does this method really not deduplicate anything? I don't know if it makes a difference, but the cloud storage is in Azure. We properly created the local deduplication volume and the cloud deduplication device with immutability support. I'm not asking for help in setting that up, and I have verified that the deduplication part is working if we back up straight from the server to the cloud deduplication device, as shown here:

    Deduplication stats: scanned: 1129257857 KB, CR sent: 11590580 KB, CR sent over FC: 0 KB, dedup: 98.0%, cache hits: 8887711, rebased: 2994, where dedup space saving:98.0%, compression space saving:0.0%

Restore BEMCLI

Hi. I'm trying to automate the restore of files using the BEMCLI PowerShell cmdlets. However, since some of our job history has been cleared, we're unable to find the files using BEMCLI; they are only found through the UI's file search in the restore panel. Is there any way to search for a file using BEMCLI, the same way you do when you use Restore -> Search for file in the UI, and restore based on that "selection"? Thanks
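
For sets that are still cataloged, the BEMCLI route would look roughly like the sketch below. It leans on the Get-BEAgentServer / Get-BEBackupSet / Submit-BEFileSystemRestoreJob cmdlets from the BEMCLI help; the server name and file path are hypothetical, and parameter names should be checked against your version:

    Import-Module BEMCLI

    # Hypothetical server; list its backup sets that still have catalog entries.
    $sets = Get-BEAgentServer "FILESERVER01" | Get-BEBackupSet

    # Restore a single (hypothetical) file from one of those sets.
    $sets | Select-Object -First 1 |
        Submit-BEFileSystemRestoreJob -FileSystemSelection "C:\Data\Report.xlsx"

That still does not cover the case in this post, where the history has been cleared; those sets would presumably have to be recataloged before BEMCLI can see them.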

Alerts and Notifications

Good day. I am demonstrating a trial for a client, using Backup Exec 22.2 1193. I am trying to demonstrate how alerts can be automatically answered, but I am not allowed to change many of the settings from enabled to disabled, or to change the time and the action to take. This trial is supposed to be a full-blown version. How do I make it possible to change any of the information in this section? Attached is an example. Thanks, Sig

Network File Share Backup Not Using Proper Credential

License: BE 23.0 1250 Simple Core (Agent for Windows, Agent for VMware, Agent for Linux, tape, etc.)
Host OS: Windows Server 2022 Standard 20348.2402
Target: Synology NAS with CIFS3

Problem: The BE agent on the host is connecting to the NAS using the default credential instead of the credential that I assign during the job setup. I know this is happening because the logs on the Synology show the user being used as "Administrator", which is not the user I set up for the credential used in the job. This results in the job failing every time. Note that when I use the "Test Run" job that was set up along with the backup job, the test run seems to use the correct credential. It's the backup job that uses the wrong credential.

Secondary annoyance: The Synology shows up as a "Windows" computer in BE, despite it being a CIFS target (even if I tell it to use Unix, it still says Windows).
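
Not a fix, but a quick way to confirm from the media server which account actually reaches the share, outside Backup Exec entirely. This is plain PowerShell; the UNC path is hypothetical:

    # Prompt for the NAS account that was assigned to the backup job.
    $cred = Get-Credential

    # Map the share with that credential; the Synology log should then show this user.
    New-PSDrive -Name NASTest -PSProvider FileSystem -Root "\\synology\backups" -Credential $cred
    Get-ChildItem NASTest:\

    # Clean up afterwards.
    Remove-PSDrive -Name NASTest

If the Synology log shows the right user here but still shows "Administrator" during the backup job, that points at how the job (rather than the test run) resolves its credential.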

V23 and Office 365 Tenant

Actually version 23. I've set up the Office 365 tenant, and the backup has been running for 24 hours but has backed up only 2.54 MB of data. Is there a step I'm missing? I deleted the tenant and recreated it; same behavior. Finding help on this is kind of difficult. Thanks, Kelly

Final error: 0xe00095a7 - The operation failed because the vCenter or ESX server reported that the virtual machine's configuration is invalid

The product version is 23, with vSphere 8.x. Error in the job log:

    Completed status: Failed
    Final error: 0xe00095a7 - The operation failed because the vCenter or ESX server reported that the virtual machine's configuration is invalid.
    Final error category: Resource Errors
    For additional information regarding this error refer to link V-79-57344-38311

vSphere error message: Invalid virtual machine configuration. Virtual NUMA cannot be configured when CPU hotadd is enabled.

We are able to reproduce this issue with other servers, and we are able to back up/restore as long as CPU hot add is not enabled. The issue is that we are unable to restore the VM server because CPU hot add is enabled on the VM in vSphere. https://www.veritas.com/support/en_US/article.100058556 says there is a hotfix, but that it only applies to Product(s): NetBackup & Alta Data Protection. We have had a case open with Veritas for a week now, and they have not been able to produce a hotfix or any fix. They were able to provide a workaround of restoring the VM files on our backup server and importing them into vSphere, but that is not a solution.
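
Since the error only fires when CPU hot add is enabled, one avenue is to disable hot add on the target VM (while it is powered off) before running the restore, then re-enable it afterwards. A rough PowerCLI sketch, assuming VMware PowerCLI is installed; the vCenter address and VM name are hypothetical:

    # Requires VMware PowerCLI; connect to vCenter first.
    Connect-VIServer -Server "vcenter.example.com"

    # Disable CPU hot add on the powered-off VM via the vSphere API.
    $vm = Get-VM -Name "RestoreTarget01"
    $spec = New-Object VMware.Vim.VirtualMachineConfigSpec
    $spec.CpuHotAddEnabled = $false
    $vm.ExtensionData.ReconfigVM($spec)

This only helps for restores that overwrite an existing VM; it does not address the underlying Backup Exec behavior described above.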

Catalog job paused

Hi, does anyone know why a catalog job gets paused, and how to resolve the issue on a B2D disk? It says the device is paused, which is false, because the device is not paused. Restarting the Backup Exec services does not help; the catalog job is still paused after I start a new catalog job. I can cancel the job and start a new one, and it will pause again at some point. Please advise! Thank you!

Veritas Backup Exec Job Failures (network unstable)

Hi, is there any way to improve the reliability of backups over slow/unstable/wide-area networks? I have a Veritas BE server in the cloud that pulls backups from multiple sites around the world and saves them to an S3 bucket. Some of these backups can be up to 200-300 GB and take several hours (up to 24) to complete. Quite often they end up failing after several hours with error codes like E00084F8, E000848C, or E00084EC. Most likely there are occasional network issues, which is not unexpected when a backup runs for that long. The question is: are there any settings or configs in Veritas BE that I can use to improve reliability? Maybe some better buffering, etc.?
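
One blunt mitigation, short of any buffering setting, is to resubmit failed jobs automatically via BEMCLI. A rough sketch only; the JobType, StartTime, and JobStatus names are assumptions to verify against Get-BEJob and Get-BEJobHistory output in your version:

    Import-Module BEMCLI

    # For each backup job, look at its most recent run and resubmit it if it failed.
    Get-BEJob -JobType Backup | ForEach-Object {
        $last = $_ | Get-BEJobHistory |
            Sort-Object -Property StartTime -Descending | Select-Object -First 1
        if ($last -and $last.JobStatus -eq "Error") {
            $_ | Start-BEJob -Confirm:$false
        }
    }

Scheduled shortly after the backup window, something like this retries transient network failures without waiting for the next scheduled run.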

Veritas Backup Exec and Hyper-V VM's

Hi all, I hope someone can help me clarify this. I have a Hyper-V host with several VMs on it. I am backing them all up using Veritas Backup Exec v22. What I am unsure of is the correct way to select/deselect vhdx files so as to a) ensure a proper backup that can be used to restore the VM, and b) avoid duplicating a very large amount of data. My VM <xxxx03> has 2 HDDs defined (hence 2 vhdx files). Within the VM I have disks C: and D: defined, using their separate vhdx files. When I select my VM to be backed up under Microsoft Hyper-V -> Virtual Machines, it shows me both vhdx files. Let's say I want to back up the entire VM and both disks; I suspect I check all the boxes here. So far, all clear. But then I realise that both these vhdx files (and some extras) are also shown under my Hyper-V host itself (drive E:). So, do I also need to check everything in there? Or will that double the size of the backup?