Unable to select and add drives of Quantum tape library
Hello, I recently installed a new Quantum Scalar i3 tape library with one robot, 3 drives, and 50 slots (library configuration images attached). The problem is that when I run "Define New Storage Devices", NetBackup finds the robot and the drives under it, but I am unable to select them and I get this error: "The following robot(s) are enabled but do not have enabled drives configured." I have tried different drive topology connections (loop preferred, point to point, loop, and point preferred).
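A hedged first check that often narrows this down (not from the thread): confirm the operating system itself presents all three drives before re-running the device wizard, since NetBackup can only enable drives the OS can see. The paths below are the UNIX defaults; on Windows, check Device Manager instead.

# Run on the media server zoned to the Scalar i3 (a sketch; UNIX default paths):
/usr/openv/volmgr/bin/scan -changer   # should report the library's robot
/usr/openv/volmgr/bin/scan -tape      # should report all three drives with serial numbers
# If the drives are missing here, the problem is FC topology/zoning,
# not the NetBackup device configuration.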
Certificate on master server expired

I have a problem with a NetBackup 8.0 master server that was not used for more than a year. Now I cannot connect, since the security certificate has expired; it is the master server's own certificate. I tried to renew it on the server with "nbcertcmd -renewCertificate", but without success. I then tried to revoke the certificate and reissue it with a generated token, but to do so I have to log in to the Web Management service with "bpnbat -login -logintype WEB", and this also fails. (Login to the Authentication Broker alone succeeds, but does not help.) Any help is appreciated. Thanks, Kai
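For reference, a sketch of the usual token-based reissue sequence; it assumes "bpnbat -login -logintype WEB" succeeds, which is exactly the step failing here, so the web management console service has to be healthy first. The token name is an example.

bpnbat -login -logintype WEB                 # authenticate to the web management service
nbcertcmd -createToken -name reissue-token   # create a one-time reissue token
nbcertcmd -getCertificate -token -force      # fetch a fresh host certificate using the token
nbcertcmd -listCertDetails                   # verify the new validity dates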
Migration of data from Veeam to NetBackup

Hello Team, in one of my projects Veeam backup software is running, and we are now planning to migrate all clients from Veeam to Veritas NetBackup. I have some queries about this:
1. How do we migrate the existing data from Veeam to Veritas NetBackup?
2. How do we restore existing backed-up data, taken by Veeam, through Veritas NetBackup?
3. We have IBM and Oracle tape libraries that are currently used by Veeam. Can we use those tape libraries with Veritas NetBackup, and what are the prerequisites? (A configuration sketch follows this post.)
4. Through Veeam, some data is backed up on disk and some on tape. Once the NetBackup infrastructure is up, can we back up the disk data through NetBackup to tape, and if yes, can we restore it if a disaster happens? And how can we migrate the data that is already on tape?
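On question 3: once a library is unzoned from the Veeam server and presented to a NetBackup media server, bringing it under NetBackup control is ordinary device configuration. A minimal sketch follows; it covers reusing the hardware only, not reading the existing Veeam-written media, and the robot type/number and hostname are examples (verify the exact flags for your version):

/usr/openv/volmgr/bin/tpautoconf -r            # report the robots the OS presents
/usr/openv/volmgr/bin/tpautoconf -t            # report the tape drives
vmupdate -rt tld -rn 0 -h media1.example.com   # inventory the library into the volume database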
Unable to retrieve snapshot parameters for method vmware_v2

Hi to all, after an upgrade from 7.7.2 to 8.0 I have a problem with VMware policies. When I try to edit a backup policy, I get the following error: "Unable to retrieve snapshot parameters for method vmware_v2". I can close this alert, but in the VMware tab I cannot edit the following fields:
- Primary VM Identifier
- Existing snapshot handling
Both fields are empty. Thanks in advance. Regards
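One hedged thing to check (an assumption, not a confirmed fix): those fields are populated from the virtualization server credentials, so it is worth confirming the vCenter entry survived the upgrade. A quick sketch:

tpconfig -dvirtualmachines   # list the VMware server credentials NetBackup has stored
nbemmcmd -listhosts          # confirm the virtual machine server is still registered in EMM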
NbServerMigrator Tool

Hello NBU world! I've just completed two master server migrations using the NbServerMigrator tool (HP-UX > RHEL), and I was curious to see if anyone else has experience with this tool. Veritas has a pretty comprehensive document on using it, but I still encountered issues that were not documented; aside from this guide, there is a lack of documentation related to its usage. As NBU 8.1.1 nears its EOL, I think there will be an influx of administrators using this tool to migrate off of HP-UX and AIX servers. I've documented the issues I experienced and how they were resolved, but I wanted input from other admins who have used this tool before compiling a TechNote or forum post. I'll be posting more detailed info later; a general overview of the issues I encountered is below.

Issues I encountered:
1. In both migrations, the pre-check did not flag that the root/sys user/group combo is required. Once we began the migration, the transfer failed and told us to fix that.
2. The data transfer takes a LONG time (see the sizing sketch after this post). There is some room for improvement in how I performed the migration: I used default compression and other settings, and better tuning could have reduced the transfer time. One domain had a catalog ~250 GB in size, and the other was ~1 TB.
3. The documentation says that the -clean_up switch removes ALL temporary data, including the image data that had already been transferred. I had to perform this twice due to interrupts in the transfer, and neither time was the tempdb directory (which contains the image information) removed.
4. Both migrations required quite a bit of manual intervention to get the certificates straightened out on the target servers. Once you perform the final -overwrite step, the source sends its certificates to the new master, which already has certificates configured as part of the initial NBU install. This part, coming after a 12+ hour migration, is quite painful, as it is one of the last steps before the migration concludes.
5. During the second migration, the terminal session between the target and source was terminated by an ISP outage on the side of the person helping me with the migration (we were sharing screens on his computer). This caused quite a lot of trouble with the migration (I'll detail it in my in-depth write-up).

There are a few more things I observed during the process, but I was curious whether anyone else has observations or experience with this tool.
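On issue 2, transfer time tracks catalog size, so a rough pre-migration sizing check on the source master helps set expectations. A sketch, using the UNIX default paths (which match an HP-UX source):

du -sh /usr/openv/netbackup/db   # image catalog - usually the bulk of the transfer
du -sh /usr/openv/db             # NBDB relational database files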
Backup image expiration cannot be modified because its SLP processing is not yet complete

Hi there, we found a really old catalog backup in the Catalog which should long since have expired. When we try to expire it in the GUI, it says: "Backup image expiration cannot be modified because its SLP processing is not yet complete (1573)". Doing this from the command line says about the same (of course). Trying to cancel its replication also gives an error:

# nbstlutil cancel -backupid <Imagename>
Operation not successful: status = 220 (database system error)

According to Marianne in a previous issue, 220 means 'no image found'. Yet the image is still active in a way:

# nbstlutil list -backupid <Imagename>
V7.6.0 I <Masterserverhostname> <Imagename> <Masterserverhostname> 1384237218 prd-cat-netbackup_catalog 7 0 zm_catalog_slp 3 false 1384242929 *NULL* 1 {00000000-0000-0000-0000-000000000000} 0 0 1384237225
V7.6.0 C <Masterserverhostname> <Imagename> 1 2147483647 1385360418 <stuname> 1 0 0 0 0 *NULL* 2147483647 0 2147483647 0 0 0 1 1
V7.6.0 F <Masterserverhostname> <Imagename> 1 1 0 aaaa2 <Mediaserverhostname> *NULL* 0 6 1 3488240640 0 aaaa2 *NULL* 1;DataDomain;<DDhostname>;<diskpoolname>;<lsuname>;0

Now I am kind of stuck.
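For comparison, the sequence that normally clears an SLP-stuck image looks like the sketch below; here the cancel itself fails with status 220, which is the real anomaly. The flags are believed to exist on 7.6, but verify on your exact version:

nbstlutil cancel -backupid <Imagename>    # mark the outstanding SLP copies as cancelled
nbstlutil list -backupid <Imagename> -U   # confirm the copies no longer show as in progress
bpexpdate -backupid <Imagename> -d 0      # then expire the image normally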
NetBackup Tape writing behaviour

Hello All, I have a question about tape retention. The question may be stupid, and I think I know the answer, but I am not sure. In a situation where NetBackup is writing a front-end backup (client to tape) or an SLP-driven deduplication backup (disk to tape), and for some reason the backup fails: what happens to the data that has already been written to the tape? Is it retained per the retention defined on the policy schedule, or is it discarded? Also, what happens to a tape where the backup writing to it failed before it could complete? Is there a way of identifying such tapes, perhaps to erase them and put them in the scratch pool? Any help is fully appreciated. Kind Regards, Jay
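A hedged way to answer the "how do I identify such tapes" part for any particular media ID (the media ID below is an example):

bpimmedia -mediaid A00001 -L   # list the images and fragments actually on this tape
bpexpdate -m A00001 -d 0       # expire all images on the media if they are unwanted
vmquery -m A00001              # check pool/status; fully expired media return to scratch
                               # only per your scratch pool rules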
Active Directory Granular suddenly fails

Hi all, we have 4 MS-Windows policies that back up 5 different AD domain controllers. All these policies write to the same media server. For a few days (or weeks) we have been seeing status 1 warnings on 3 of the policies. I've checked the prerequisites on the client, I've created the AD location on the agent, and I've tested NFS communication between the media server and the client successfully. Here are the job details:

20 nov. 2020 10:17:22 - Info bpbrm (pid=8444) from client xxxx: TRV - Starting granular backup processing for (System State\Active Directory). This may take a while...
20 nov. 2020 10:17:40 - Info bpbrm (pid=8444) from client xxxx: TRV - Granular processing failed!
20 nov. 2020 10:17:40 - Error bpbrm (pid=8444) from client xxxx: ERR - Error encountered while attempting to get additional files for System State:\
20 nov. 2020 10:17:48 - Info bptm (pid=4320) waited for full buffer 2530 times, delayed 46540 times
20 nov. 2020 10:17:51 - Info bptm (pid=4320) EXITING with status 0 <----------
20 nov. 2020 10:17:51 - Info xxxx (pid=4320) StorageServer=PureDisk:xxxx; Report=PDDO Stats (multi-threaded stream used) for (xxxx): scanned: 2710754 KB, CR sent: 179845 KB, CR sent over FC: 0 KB, dedup: 93.4%, cache disabled
20 nov. 2020 10:17:52 - Info bpbrm (pid=8444) validating image for client xxxx
20 nov. 2020 10:17:53 - Error bpbrm (pid=8444) cannot send mail to xxxx,xxxx
20 nov. 2020 10:17:53 - Info bpbkar32 (pid=2288) done. status: 1: the requested operation was partially successful
20 nov. 2020 10:17:53 - end writing; write time: 0:12:21

So I've activated the bpbkar log on the client:

10:16:37.985 [2288.4168] <4> tar_backup_tfi::backup_send_chkp_data_state: INF - checkpoint message: CPR - 5366272 2288 0 0 1193 0 0 0 4 600047616 2 2 512 0 1 810000 0 0 0 1184 59 /System State/Active Directory/C:_windows_NTDS/pdi_strm.bin
10:17:22.613 [2288.4168] <2> tar_base::V_vTarMsgW: TRV - Starting granular backup processing for (System State\Active Directory). This may take a while...
10:17:35.301 [2288.4168] <2> _nbfs_view_lock(): INF - ACE_OS::hostname: xxxx
10:17:40.192 [2288.4168] <2> ov_log::V_GlobalLog: ERR - raiPdiEnumerate():start() failed, error = 17
10:17:40.192 [2288.4168] <2> beds_pdi::getReplayedFile(): ERR - raiPdiEnumrerate() failed
10:17:40.192 [2288.4168] <2> tar_base::V_vTarMsgW: TRV - Granular processing failed!
10:17:40.192 [2288.4168] <2> tar_base::V_vTarMsgW: ERR - Error encountered while attempting to get additional files for System State:\

Do you have any idea how to help me? Which log could help?
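Granular (GRT) processing for AD mounts the backup image over NFS, so a couple of hedged client-side checks are worth running on the failing domain controllers before digging further into logs. The service name below is the standard Windows "Client for NFS" one, assumed here; verify it for your OS version:

sc query NfsClnt        REM Client for NFS service should be RUNNING
vssadmin list writers   REM the NTDS writer should be listed and in a stable state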
Unable to configure disk pool for cloud storage server

Greetings! I have trouble configuring a disk pool for a cloud storage server. I run the disk pool configuration wizard, but in the volume section I can't see the created buckets. If I click "Add new volume" and try to create a bucket, it fails with "RDSM has encountered an STS error: getDiskVolumeInfo"; however, the NEWLY CREATED BUCKET does appear in the S3 GUI. The cloud is Red Hat Ceph v3.
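A hedged way to test whether NetBackup itself can enumerate the buckets outside the wizard; the storage server name is an example, and the stype string varies by cloud provider, so read the real one from the first command:

nbdevquery -liststs -U   # shows the storage server and its exact stype string
nbdevconfig -previewdv -storage_servers cephs3.example.com -stype amazon_raw
# -previewdv lists the disk volumes (buckets) the storage server exposes;
# if this fails too, the problem is credentials/connectivity rather than the wizard.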