Restoring a file from a Linux source to a Windows target; is it possible?
Hi people: We are using NBU 10.x on a Windows 2022 media/master server (both roles on the same server). We have a mix of Windows and Linux clients. We need to restore a file from /wrk/app/app-baks.tar to a Windows server under e:\wrk\app. My co-worker running this restore tells me that the NBU GUI reports the restore path as invalid. We tried e:/wrk/app but it doesn't work, and it seems this isn't supported. Any idea how to accomplish this task? Maybe a UNC path, or an NFS share on the same server? Hope this is clear.
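
For reference, the CLI equivalent of this redirected restore would be a bprestore call with a rename file. This is only a minimal sketch under two assumptions: that the Windows target can be addressed with NetBackup's forward-slash drive notation (/E/ for e:\), and that a cross-platform restore from a Linux client to a Windows client is allowed at all in this configuration (the destination client must also permit redirected restores). The host names are placeholders.

    # Create a rename file that redirects /wrk/app to drive E: on the target
    # (rename-file directive syntax: change <old_path> to <new_path>)
    echo "change /wrk/app to /E/wrk/app" > rename.txt

    # Restore the file from the Linux client's backup, redirected to the
    # Windows host: -C = client that was backed up, -D = destination client,
    # -R = rename file
    bprestore -C linuxclient.example.net -D winserver.example.net \
        -R rename.txt /wrk/app/app-baks.tar

If NetBackup rejects the cross-platform redirect outright, the fallback is to restore to a Linux client and copy the extracted file to e:\wrk\app over SMB or NFS.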

NBU 10.3.0.1 Protection Plan doesn't let you add asset

When trying to add protection to a Microsoft Azure cloud VM asset, the protection plans are greyed out. I tried checking the guides, FAQs, etc. for any reason why it won't let me add the protection and could not find anything. It seems to be arbitrary.

NBU restore VMs from air-gapped DR site into main site

We are checking the possibility of restoring a VM from the air-gap replication location back into the main site's Vcenter1.

Main site: NBU1 + Vcenter1
DR site: NBU2 + Vcenter2

The VM initially resides on Vcenter1 at the main site; NBU 10.2.0.1 (Flex instance) takes a backup of this VM and replicates its backup image to WORM storage in the DR site (NBU2). If a disaster hits the main site and NBU1 crashes, can we restore the VM from the WORM/air-gap copy (NBU2) into Vcenter1? Is it just a matter of adding Vcenter1 into NBU2? Thanks.
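
Before attempting the restore, it is worth confirming on NBU2 that the replicated image for the VM actually exists in its catalog. A minimal sanity check, with the client name as a placeholder:

    # On the DR primary (NBU2): list catalog entries for the VM's images
    # replicated within the last 30 days (720 hours)
    bpimagelist -client myvm.example.net -hoursago 720 -L

If the images show up there, the question then reduces to whether NBU2 can reach Vcenter1 (credentials added and the network path open) so it can be selected as the restore destination.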

[Snapshot Manager] Inconsistency between Cloud and Storage sections

Hello! Looking for help, please. My situation is the following: I inherited an environment with an old CloudPoint server that failed during an upgrade, resulting in the loss of the images and configuration. After a fresh installation of a new Snapshot Manager 10.3 VM, I configured the Cloud section of the primary server's web UI and added the provider configuration (Azure). All the required permissions in Azure have been granted to the Snapshot Manager. Protection plans were created and the protected assets selected. The problem is that even though the jobs complete with status 0, I am unable to find any recovery points for the assets.

Also, upon investigation, I found in the Storage -> Snapshot Manager section that the primary server is configured as a snapshot server, with the old version (10.0). This was done in the old configuration and I have no idea why it is still present there. Trying to connect does not work (error code 25), and retrieving version information fails as well. Trying to add the new Snapshot Manager results in an "Entity already exists" error message. Could this storage configuration be related? If so, any suggestions on how to fix it? (I am also unable to delete the old CloudPoint from the web UI, but it is disabled.)

Primary server version is 10.3. The new Snapshot Manager is 10.3. The old CloudPoint was 10.0, already decommissioned.

Thank you!
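
One diagnostic worth trying (an assumption on my part; whether Snapshot Manager / CloudPoint hosts are visible this way depends on how they were registered) is to list the hosts the primary server knows about and look for the stale 10.0 entry:

    # On the primary server: list hosts registered in EMM and check for a
    # leftover entry for the old CloudPoint server
    nbemmcmd -listhosts -verbose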

NBU 10.x tape-to-tape copy (inline copy) clarification

Hi people: We have a 4-drive LTO9 library; the master/media server is Windows 2022 with NBU 10.1.2. I have some policies that make a two-tape copy of data to different pools. For example, the mydata policy has two schedules. The foreverfull schedule makes a backup to Mypool01 (primary copy on tape1), with its copy located in Mypool02 (second copy on tape2) at retention level 100. Most of the time the policy makes the two-tape backup and copy successfully. The monthlyfull schedule makes a backup to Monthpool01 (primary copy), with its copy located in Monthpool02 at retention level 10.

Sometimes tape1 or tape2 fails during the copy (mostly HW errors, like a drive needing cleaning), and then I just use bpduplicate to make the copy onto a tape from the respective pool. Today I have some doubts about the correct syntax, because I need to copy a backup that has two copies made with the monthlyfull schedule (the backup ID is server01.net_1705357985) so that it uses the foreverfull pools and a changed retention level. My question/doubts are around -npc; should I use it? Of course, I want the backups in the foreverfull pools to remain the primary copies.

    bpduplicate -v -number_copies 2 -backupid server01.net_1705357985 -client server01.net -dp Mypool1,Mypool2 -dstunit mylibrary-01-hcart3-robot-tld-0 -id 000012 -L copi.wri -policy Respaldo_servr01 -rl 100,100

Can anybody shed some light on this?
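
If I read the bpduplicate reference correctly, -npc is a standalone operation: it does not create copies, it only promotes an existing copy number to primary, so it cannot be combined with the duplication itself. During duplication, -set_primary is the option that marks one of the new copies as primary. A minimal sketch of both routes, trimmed to the essentials; the copy number 3 below is an assumption, so verify the real copy numbers first:

    # Check which copy numbers the image already has
    bpimagelist -backupid server01.net_1705357985 -L

    # Option A: duplicate and promote in one run; -set_primary 1 makes the
    # first NEW copy the primary copy
    bpduplicate -v -number_copies 2 -backupid server01.net_1705357985 \
        -client server01.net -dp Mypool1,Mypool2 \
        -dstunit mylibrary-01-hcart3-robot-tld-0 -rl 100,100 -set_primary 1

    # Option B: duplicate first (without -set_primary), then promote the new
    # copy afterwards; -npc takes the copy NUMBER to make primary
    bpduplicate -npc 3 -backupid server01.net_1705357985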

Oracle to NetBackup Copilot

Hello, I'm trying to implement Copilot for Oracle. I've set up the SLPs and registered the test instance, but NBU is unable to perform a backup, failing with the error: Unable to perform a manual backup with policy "test". The policy does not have a list of files to back up. The setup: Oracle Linux 7.7, NBU 10.2, StoreOnce 5260 (4.3.6), Catalyst 4.4.0. In short, I'm trying to implement NBU Accelerator for faster backups. If there is another way, please point me to the guide. Thank you in advance.
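
When a manual backup fails with "does not have a list of files", the first thing worth checking is whether the policy actually has backup selections; for an Oracle Intelligent Policy that means the selected instances/instance groups rather than a literal file list. A quick way to inspect the policy definition:

    # Dump the policy in human-readable form and check its selections
    bppllist test -U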

LTO cleaning tape definition with two different drives?

Hi all: I need some advice. I have a client with a Quantum i3 tape library with one LTO7 drive, one LTO8 drive, and 50 slots. Both drives are in the same partition, but since the client still has LTO6 media (written with an Oracle SL48 tape library, since decommissioned), the LTO7 drive is hcart and the LTO8 is hcart2. As you know, LTO cleaning tapes are universal: no matter the LTO generation of the drive, the cleaning cartridge is the same. The library was configured to let the application manage all cleaning tapes.

My current issue is how to preserve that universality with the LTO cleaning tapes, or at least how to assign the cleaning tapes to the drives, given that the cleaning tapes all have nearly the same barcode, CLNXYZCU. The current Media ID generation rule is 0,8,5:6:7:8:1:2 (robot number 0, barcode length 8, building the media ID from barcode characters 5:6:7:8:1:2).

I was thinking that maybe I could use 000 to 400 to assign "HC_CLN (1/2-inch cleaning tape)" to the LTO7 drive and 500 to 999 to assign "HC2_CLN (1/2-inch cleaning tape 2)" to the LTO8 drive. But I need to hear options/advice from people with more experience.
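
If splitting the barcode range turns out to be impractical, another possibility is to correct individual cleaning tapes after inventory by forcing their media type. A sketch; the barcode CLN001 is a placeholder, and the media-type token should be double-checked against the vmchange documentation for the installed release:

    # Show how a given cleaning tape is currently catalogued
    vmquery -m CLN001

    # Reassign it as an hcart2-class cleaning tape
    vmchange -new_mt hc2_clean -m CLN001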

The Veritas Flex Appliance and the Game of Leapfrog

It’s my firm belief that we don’t see much that is actually “new” in IT very often. Mostly what we see is either the clever application of existing technologies or the re-application of pre-existing technologies with the understanding that the tradeoffs are different. I include server virtualization and containerization in the “not new” category, with both actually being quite old in terms of invention, but with containers having the more interesting applications in recent history.

The reason I’m going down this path is that I frequently get questions about the Flex appliance: why did we choose to implement it with a containerized architecture instead of using a hypervisor for an HCI data protection appliance like [insert company name here]? And, are you sure there’s no hypervisor? Yes, I’m sure there’s no hypervisor in Flex; it uses containers. Fundamentally there are good applications for both, and for entirely different types of workloads, so let’s look at why we chose containers for the Flex Appliance instead of a hypervisor.

Containers have their roots in FreeBSD “jails”. Jails enabled FreeBSD clusters to be hardened and allowed “microservices” (to use a more modern turn of phrase) to be deployed onto the systems, the big advantage being very high levels of isolation, with each microservice running in its own jail. Containers then versus now are fairly different; FreeBSD jails were relatively primitive compared to something like Docker, but for their time they were state of the art, and they worked quite well.

Which brings us to hypervisors. VMware started largely in test/dev. About the time it was entering production environments, we were also seeing significant uptake of 64-bit x86 processors. Most applications were running on a single box and were 32-bit, single-threaded, single-core, and didn’t need more than 4GB of RAM. Rapidly, the default server became 4 cores and 8GB of RAM, and those numbers were increasing quickly. The hypervisor improved the extremely poor utilization rates of many sets of applications. Today, most new applications are multi-core, multi-threaded, and RAM-hungry by design. 16-32 cores per box is normal, as is 128+ GB of RAM, and modern applications can suck that all up effortlessly, making hypervisors less useful.

Google has run containers at scale since 2004. In fact, they were the ones who contributed back “cgroups”, a key foundational part of Linux containers and hence Docker. This is interesting because:

- Google values performance over convenience
- Google was writing multi-core, multi-threaded, “massive” apps sooner than others
- Google’s apps required relatively large memory footprints before others
- Google’s infrastructure requires application fault isolation

So, although virtualization existed, Google chose a lighter-weight route more in line with their philosophical approach and bleeding-edge needs, essentially “leapfrogging” virtualization.

Here we are today, with the Veritas Flex Appliance and containers. Containers allow us to deliver an HCI platform with “multi-application containerization” on top of “lightweight virtualization”, essentially leapfrogging HCI appliances built on a hypervisor for virtualization.
A comprehensive comparison of virtualization vs. containers is beyond the scope of this blog, but I thought I would briefly touch on some differences that I think are key and that help to highlight why containers are probably the best fit for modern, hyper-converged appliances:

- Isolation: virtualization provides operating-system isolation (you can run different kernels); containers provide application isolation (all share the same OS kernel).
- Hardware: virtualization requires emulated or “virtual” hardware and associated PV drivers inside the guest OS; containers use the host’s hardware resources and drivers (in the shared kernel).
- Packaging: virtual-machine packaging is standardized (mostly; there is variance between hypervisors); container packaging is standardized around Docker or one of the other container technologies.
- Workloads: virtualization is optimized for groups of heterogeneous operating systems; containers are optimized for homogeneous operating-system clusters.

Here’s another way to look at it:

                       Enterprise            Cloud
    Hardware           Custom/proprietary    Commodity
    HA type            Hardware              Software
    SLAs               Five 9s               Always on
    Scaling            Vertical              Horizontal
    Software           Decentralized         Distributed
    Consumption model  Shared service        Self service

What you see here is a fundamentally different approach to solving what might be considered a similar problem. In a world with lots of different 32-bit operating systems running on 64-bit hardware, virtualization is a fantastic solution. In a hyper-converged appliance environment that is homogeneous and running a relatively standardized 64-bit operating system (Linux) with 64-bit applications, only containers will do.

The application services in the hyper-converged Flex Appliance are deployed, added, or changed in a self-service consumption model. They’re isolated from any potential bad actor. The containers and their software services are redundant and highly available. Application services scale horizontally, on demand.

One of the best party tricks of the Flex Appliance that I didn’t touch on above is that containers fundamentally change how data protection services are delivered and updated. With the Flex Appliance, gone are the days of lengthy and risky software updates and patches. Instead, you quickly and safely deploy the latest version in its own container on the same appliance. Put the service into production immediately, or simultaneously run the old and new versions until you’re satisfied with the new one’s functionality (a generic sketch of this side-by-side pattern follows at the end of this post). We couldn’t do any of this with a hypervisor. And this is why the Flex Appliance has leapfrogged other hyper-converged data protection appliances.

I also refer you to this brief blog by Roger Stein for another view on Flex.
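
To make the side-by-side idea concrete, here is a minimal sketch using plain Docker. This is a generic illustration of the pattern, not Flex’s actual update mechanism; the image names and ports are made up:

    # Run the current version of a service
    docker run -d --name app-v1 -p 8081:8080 example/app:1.0

    # Deploy the new version alongside it on a different host port
    docker run -d --name app-v2 -p 8082:8080 example/app:2.0

    # Once satisfied with the new version, retire the old one
    docker stop app-v1 && docker rm app-v1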