Deduplication folder stays offline
Forum,

We upgraded Backup Exec from 2010 R3 to 2012 SP1. We have the dedupe option for both versions. The upgrade was done two weeks ago, and we had been running dedupe jobs successfully. Now the deduplication folder will not come online. We have about 10 TB of backups extending back six months.

Under "Storage", this message is displayed next to the dedupe folder:

This device has not been discovered correctly. Cycle the services on _CoSRV-BEX to retry device discovery.

This alert is also logged:

Backup Exec was unable to initialize and communicate with the device [SYMANTECOstLdr MACF] (The handle is invalid.). Click the link below for more information on how to diagnose the problem. http://www.symantec.com/business/support/index?page=answers&startover=y&question_box=V-275-1017

Note that the above link is for 2010, not 2012.

Actions taken:
- Using the BE Services Manager, the services were recycled (many times) - no improvement.
- The server was rebooted - no improvement.
- Using the BE Services Manager, the services (including the dedupe services) were recycled - no improvement.
- The server and drive array were powered off and powered back on. All is normal. We recycled the services both with and without the dedupe services after this power-up - no improvement.

FYI: we have a Dell PowerEdge R710 and an MD1200 array for the local D: drive, which holds the dedupe folder and nothing else. The server runs Windows Server 2008 R2 and has 64 GB RAM. There are no hardware errors; the physical array is normal, and drive D: can be browsed. LiveUpdate shows we are up to date.

Some Google searches suggest solutions for 2010, not 2012: remove the target servers from the Devices window and disable client-side dedupe. How can that be done in 2012?

I have opened a support ticket with Symantec, but I cannot get them to call me back. Symantec advises that the Deduplication support team will call me back. I was promised a call back 3 ½ hours ago, but that hasn't happened.
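For reference, the service cycling described above can be scripted instead of clicking through the BE Services Manager. This is only a sketch: the service display names below are assumed defaults for a BE 2012 media server with the dedupe option (verify yours with `sc query state= all` first), and the stop/start ordering is an assumption, not documented behavior. It is a dry run by default and only prints the `net` commands; set RUN=net in an elevated prompt on the media server to actually execute them.

```shell
#!/bin/sh
# Dry-run sketch of cycling the Backup Exec services, dedupe services included.
# Service display names are ASSUMED BE 2012 defaults -- verify on your server.
# RUN defaults to "echo" (prints commands only); set RUN=net to execute.
RUN="${RUN:-echo}"

cycle_be_services() {
    # Stop the core services first, the deduplication services last.
    for svc in "Backup Exec Job Engine" \
               "Backup Exec Server" \
               "Backup Exec Device & Media Service" \
               "Backup Exec Deduplication Manager" \
               "Backup Exec Deduplication Engine"; do
        "$RUN" stop "$svc"
    done
    # Start in the reverse order, deduplication engine first.
    for svc in "Backup Exec Deduplication Engine" \
               "Backup Exec Deduplication Manager" \
               "Backup Exec Device & Media Service" \
               "Backup Exec Server" \
               "Backup Exec Job Engine"; do
        "$RUN" start "$svc"
    done
}

cycle_be_services
```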
I have called back twice with the ticket number and been shunted to the voicemail of the engineer who owns the ticket. Is there any hope for this? Should I look for a replacement for Backup Exec? Frustrated, and hoping we don't have to restore anything.

Backup Exec USB Tape Drives and Compression
We typically install HP tape drives with Symantec Backup Exec. In 2010 we installed an HP USB DAT 160 with Backup Exec 12.5. The backups would not compress. Symantec told us that Backup Exec is capable of compression, and HP told us that the tape drive is capable of compression, but we could not get any compression when the two were used together. An engineer in our firm did some digging and came back saying that Backup Exec will not support compression on USB tape devices. We finally gave up and purchased an HP SAS DAT 160, and compression worked.

I am not looking for do-this-do-that suggestions; trust me, we have done it all. I am not concerned about how much compression we get or the factors that affect compression. As far as I am concerned, 1 MB over native capacity is compression.

My question: does anybody (I would love to hear from Symantec) know for a fact whether it is still true for the latest version of Backup Exec, i.e. that it will not support compression on USB tape devices?

How-To: Downgrade Backup Exec 2012 to 2010 R3
Two months ago I decided to upgrade our backup environment to Backup Exec 2012. I have upgraded Backup Exec in the past and always had relatively few issues during the upgrade process. I had tried to find some detailed information about Backup Exec 2012, but all I could find were vague descriptions of what was new. I figured I would give it a shot (totally going against my own "if it ain't broke, don't fix it" policy).

After the relatively smooth upgrade process, my jaw dropped at the interface. I thought it was terrible, but I decided to give it a chance. I eventually got used to it, although after two months I still preferred the old interface. My real problem was the bugs. Maybe it was because I upgraded rather than installed fresh, but over the course of two months I ran into numerous bugs. The application would randomly quit with exception errors. Sometimes in the middle of my jobs it would just stop backing up, and I could not perform any operations (inventory, erase, etc.) without restarting the services. It was terrible. I was having to babysit the thing constantly and still not getting reliable backups. After working with Support, who told me to just "repair" the installation, and having the same problems, I decided to take matters into my own hands and downgrade. Luckily, before I upgraded I made a copy of the Catalogs and Data folders located in the Backup Exec program files folder.

I would like to preface by saying the intro to this article is not meant to start a discussion about the fallacies of BE 2012 - there are plenty of forums for that - just to give a little background on why I chose to downgrade. The following are the steps I used to downgrade my installation.

1. Locate your backed-up Data and Catalogs folders. Hopefully you backed up your Data and Catalogs folders before you upgraded. If you didn't, this article doesn't apply to you.
2. Uninstall BE 2012, choosing to remove everything when asked.
3. After the uninstall is complete, reboot the server.
4. Install BE 2010, select the options you need during the wizard, and choose to use a new SQL Express instance.
5. After installation, run LiveUpdate and update to R3.
6. Reboot the server. It doesn't prompt you to reboot, but if you try opening the application it tells you that you need to, so go ahead.
7. Stop all the BE services.
8. Navigate to C:\Program Files\Symantec\Backup Exec\ and rename the Data folder to Data.new and the Catalogs folder to Catalogs.new.
9. Copy your backed-up Data and Catalogs folders into the directory from step 8.

The remaining steps are VERY IMPORTANT - the restore will not work if they are not performed:

10. Run the BEUtility.exe found in the location from step 8. Ignore the warning message that pops up when you open it.
11. Select All Media Servers in the left pane.
12. Select your media server in the right pane and right-click it.
13. Select Copy Database.
14. Navigate to the .mdf database file located in the Data folder you copied over in step 9.
15. Do the same for the .ldf log file located in the same location.
16. Press OK. It will run through stopping the services, re-attaching the database, and starting the services again. NOTE: The first time I did this I didn't do steps 10-16, and the BE services would not start correctly. Thanks to the owner of this blog I was able to follow their steps and get it going. Big shoutout.
17. You now have an almost fully restored BE 2010 installation. It will be running in trial mode; you will still need to contact Symantec Licensing (1-800-721-3934) to get your license keys downgraded to BE 2010, but at least you can run backups for 60 days.

Now the time-consuming part. This will vary depending on how many servers you are backing up. After I restored my BE 2010 installation, I edited my policies and checked my selection lists. The first thing I noticed was that I could not select anything other than user shares on each server. I then proceeded to check my resource credentials.
They all failed, stating the agent was not installed or the credentials were invalid. I was sure the service account I was using was valid, but just to make sure I logged into another machine successfully with it. After I confirmed it wasn't the service account, I knew it was most likely the newer agents I had installed for BE 2012. I tried to install the BE 2010 agents through the BE 2010 interface. It said they installed correctly, but I was still getting the same error. Here is what I did to get the agents going again.

1. Log into each server you are backing up. (I used RDP, but console would work also.)
2. Navigate to the Backup Exec program files UNC path on your media server, e.g. \\servername\c$\program files\symantec\backupexec\agents
3. You will see RAW32 and RAW64. These are the Windows installation agents for the 32-bit and 64-bit versions.
4. Select the appropriate folder for your server architecture and run setup.exe.
5. Select the Uninstall option (Repair doesn't work - I tried).
6. After the uninstall is complete, run setup.exe again, and this time install. NOTE: You could also just run the uninstall on all your servers and then re-install the agents from the BE 2010 interface. This would actually have been my preferred method, but I didn't think of it until after I had installed all the agents.
7. Most of my servers did not require a reboot, but if you get an error about a missing file after the install: uninstall again, reboot after the uninstall this time, and re-install after the server comes back up.
8. Go back to your media server and test your resource credentials and selection lists. They should all pass, and you should be able to see and select all the drives, SQL databases, and Exchange Information Stores.

You are done! I would say for a ~20-server environment this process might take around two hours to complete. It took me much, much longer, but I was figuring things out as I went.
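The per-server agent steps above can be turned into a small checklist generator so nothing gets missed across ~20 servers. This is only a dry-run sketch: it prints what to run in each RDP/console session (the setup.exe uninstall/install must still be done interactively on each server), and the media server name, the agents share path, and the example server names are all placeholders for your environment.

```shell
#!/bin/sh
# Checklist generator for the agent uninstall/re-install round (steps 1-6).
# MEDIA_SERVER and the server names below are PLACEHOLDERS; the share path
# assumes a default Backup Exec install location on the media server.
MEDIA_SERVER="servername"
AGENT_SHARE="\\\\$MEDIA_SERVER\\c\$\\program files\\symantec\\backupexec\\agents"

agent_reinstall_plan() {
    arch="$1"; shift             # RAW32 or RAW64, matching each server's architecture
    for server in "$@"; do
        # printf (not echo) so the backslashes in UNC paths print literally
        printf '%s\n' "== $server =="
        printf '%s\n' "  1. RDP to $server"
        printf '%s\n' "  2. Run $AGENT_SHARE\\$arch\\setup.exe and choose Uninstall"
        printf '%s\n' "  3. Run $AGENT_SHARE\\$arch\\setup.exe again and choose Install"
    done
}

# Example: three hypothetical 64-bit servers.
agent_reinstall_plan RAW64 fileserver01 sqlserver01 exchange01
```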
Like I said, the most time-consuming part was the agent installs, and this will greatly affect how long the process takes in your environment. I hope this helps someone out. Again, I'm sure BE 2012 is right for someone, but for us, we'll stick with 2010 as long as we can. We have already bought Avamar and will be moving to it in the next 30 days, but we still have to keep BE for at least 7 years.

HOW TO install Backup Exec 2010 agent on Debian (RALUS)
I hope this post will be useful to many people (please vote for it or mark it as a solution if it helps you). Installing RALUS directly on Debian will not always work.

First problem: ../perl/Linux/bin/perl: No such file or directory
Second problem: at the end, "was not successfully installed" and "impossible to add VRTSralus to (server)"
And some others that will get solved by following my solution.

This is a simple way to install it and avoid these (and other) problems:

1. (Optional) Create a folder to keep all RALUS files and copy the archive into it:
mkdir /root/BE
mkdir /root/BE/RALUS2010
mv RALUS_RMALS_RAMS-2896.9.tar.gz /root/BE/RALUS2010/
cd /root/BE/RALUS2010

2. Unpack the archive provided by Symantec:
tar xzf RALUS_RMALS_RAMS-2896.9.tar.gz

3. Stop the RALUS service if it is already installed and running:
/etc/init.d/VRTSralus.init stop

4. Very important: if you are on 64-bit Linux, you have to do this:
Extract the Debian package: tar xzf RALUS64/pkgs/Linux/VRTSralus.tar.gz
Install the Debian package: dpkg -i VRTSralus-13.0.2896-0.x86_64.deb
Start the installation: ./RALUS64/installralus

5. But if you are on 32-bit Linux, you have to do this (I didn't test under 32 bits):
Extract the Debian package: tar xzf pkgs/Linux/VRTSralus.tar.gz
Install the Debian package: dpkg -i VRTSralus-13.0.2896-0.i386.deb
Start the installation: ./RALUSx86/installralus or ./installralus

6. Be sure to answer all questions correctly, especially the one about the host server (XXX.XXX.XXX.XXX): you must give the IP of the Backup Exec server.

7. Restart the RALUS Backup Exec agent; it should say "[ OK ]":
/etc/init.d/VRTSralus.init start

I hope it will help! Send me questions if you have other problems...

Denis

P.S. Tested with Debian 5.0.3

P.P.S. If you still have some problems:

A) If you get "ERROR: VXIF_HOME is invalid. It must point to the root of VxIF. Exiting ...", simply edit ./RALUS64/installralus and change line 3:
from: VXIF_HOME=../;export VXIF_HOME
to: VXIF_HOME=/root/BE/RALUS2010/;export VXIF_HOME

B) If you get "./RALUS64/installralus: line 50: ../perl/Linux/bin/perl: No such file or directory", simply edit ./RALUS64/installralus and change line 50:
from: ../perl/$OS/bin/perl -I.. -I$PATH -I$VXIF_HOME -I../perl/$OS/lib/$PERL_VER ../installralus.pl $*
to: ../perl/$OS/bin/perl -I.. -I$PATH -I$VXIF_HOME -I../perl/$OS/lib/$PERL_VER ./installralus.pl $*
or to: perl -I.. -I$PATH -I$VXIF_HOME ./installralus.pl $*
(To be clear: remove one dot in front of "/installralus.pl", keeping only one dot instead of two.)

C) If the installation is successful but VRTSralus refuses to start, launch /opt/VRTSralus/bin/beremote --log-console to see the error. If you get "error while loading shared libraries: libstdc++.so.5: cannot open shared object file: No such file or directory", you simply need to install the package. Under Debian 6.0.3: apt-get install libstdc++5 (thanks to RockwellMuseum).

Event ID: 1524/1517
This is not an urgent issue. I have a server running Backup Exec 12.5. It backs up a lot of stuff for us from a central file server. In the file server's Application log (Windows 2003), I get Event IDs 1524 and 1517 every night at 9:15 PM.

User: BackupService
Event ID 1524: Windows cannot unload your classes registry file - it is still in use by other applications or services. The file will be unloaded when it is no longer in use.

User: SYSTEM
Event ID 1517: Windows saved user FILESERVER\BackupService registry while an application or service was still using the registry during log off. The memory used by the user's registry has not been freed. The registry will be unloaded when it is no longer in use. This is often caused by services running as a user account; try configuring the services to run in either the LocalService or NetworkService account.

It seems to me like it may be a coding bug on either Microsoft's or Symantec's side. Any pointers?

Edit: I did read about a Microsoft program called UPHClean, which seems to be the answer, but I'd like some feedback first.

Multiple Folders in BEData Folder
Hello everyone. I'm just walking into Symantec Backup Exec, as this is a new position for me, and it's kind of everywhere and out of control; I was hoping I could get a better handle on it. Is it possible to create a folder within the BEData folder for each server? For example:

BEData
  Mailserver
  Fileserver
  DNS
  etc.

I'm just trying to find an easier way to clean things up. Much appreciated.

Backup Job Rate more than halved when backing up Hyper-V VM vs Physical Server
Hi,

We recently P2Ved one of our file servers. Before the conversion we were able to back it up at a job rate of 2 GB+/min with a total backup runtime of around 15 hours. Since P2Ving the server, we only get a job rate of around 900 MB/min, and the total runtime is now around 32 hours. (The data volume is essentially unchanged: roughly 2 GB/min x 15 h ≈ 1.8 TB before, versus 900 MB/min x 32 h ≈ 1.7 TB after, so it is the rate itself that has dropped.) Does anybody know why the job rate would have dropped so significantly?

Environment details:
- Gigabit Ethernet between the VM host and the iSCSI SAN (Dell PowerVault MD3200i); we back up to tape (a Dell PowerVault TL2000).
- The VM is a Hyper-V VM running on a Hyper-V cluster, but we are not using the Hyper-V agent for backups, just the standard RAWS agent; the guest OS is W2K8 R2 Standard.
- The server BE is installed on is physical, with the tape library connected via SAS; BE version is 2010 R3 SP1, OS is W2K8 R2 Enterprise.

Cheers,
Adam.

BE 2014 Slow Hyper-V Incremental Backups
Hi all,

There seem to be a lot of threads regarding this problem; however, I've not found a definitive solution.

Environment: physical backup server - BE 2014, OS Windows 2008 R2, fully patched, 16 GB RAM, 2 x 6-core CPUs - backing up a mixture of physical and Hyper-V virtual Windows servers running Windows 2008/2012.

We recently upgraded to Backup Exec 2014, and it seems that our Hyper-V incremental backups run extremely slowly, at around 200 MB/min. I understand that if I enable Microsoft incremental backups I need to upgrade my server to 2012 to get GRT backups, but I've not read anything that says this resolves the issue, and it may impact server performance due to a snapshot always running.

Any help would be great. Thanks,
Paul.