You don't mention the Primary Server's OS, but the suggestions are the same for both Windows and Linux.
If the catalog consistently sits at ~650 GB, then expiring images are roughly balanced by new backups, so almost doubling in size in just a few days is odd.
The first step is to do a recursive listing of /usr/openv/netbackup/db/images (or ...\veritas\netbackup\db\images on Windows) and look at file sizes: what is 'big' that has appeared in the past few days?
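One way to do that listing on Linux, sketched below. The client name, bucket, and file names are just placeholders so the commands can be demonstrated on a scratch tree; on a real Primary Server, point IMAGES at the actual path instead, and raise the size threshold to something meaningful for your catalog (e.g. +100M).

```shell
# Demo on a scratch tree; on a real Primary Server use:
#   IMAGES=/usr/openv/netbackup/db/images
IMAGES=$(mktemp -d)
mkdir -p "$IMAGES/clientA/1728000000/catstore"
truncate -s 5M "$IMAGES/clientA/1728000000/example.f"   # placeholder image file

# Total size per client directory, largest first
du -sh "$IMAGES"/* | sort -rh | head -20

# Files over 1M modified in the last 7 days (use a larger -size on a real catalog)
find "$IMAGES" -type f -size +1M -mtime -7 -exec ls -lh {} \;
```

Comparing the per-client totals against what you'd expect from each client's backup sizes usually narrows the growth down to one or two clients quickly.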
The directory structure under images will show a subdirectory per client name.
Under each of these you will have subdirectories named by ctime (Unix epoch seconds), spaced 1 million seconds apart.
Example:
/usr/openv/netbackup/db/images/reanbur630-02/1726000000
/usr/openv/netbackup/db/images/reanbur630-02/1727000000
/usr/openv/netbackup/db/images/reanbur630-02/1728000000
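Since those bucket names are just Unix epoch times, you can translate them to dates to see which ~11.6-day window (1,000,000 seconds) the growth falls into. A sketch using GNU date:

```shell
# Each bucket directory name is an epoch timestamp; convert it to a date
date -u -d @1728000000 +%F    # start of the 1728000000 bucket
date -u -d @1729000000 +%F    # start of the next bucket
```

That makes it easy to match a suddenly large bucket against a recent change window.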
Under each of these you should have just .f files and .lck files, as well as a catstore directory. Depending on backup size, catstore will contain around 10 files per image that is greater than a certain size (I forget the exact figure, but if the .f file is larger than, I think, 2GB, it gets truncated to 72 bytes and its contents are split across multiple files in catstore; this makes large backups more efficient to search). The files under catstore contain the ctime of the image as part of their filenames.
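If you want to see which images have gone the catstore route, the truncated .f files are easy to spot by that fixed 72-byte size. A sketch against a scratch tree (names are placeholders; on a real Primary Server point IMAGES at the actual path):

```shell
# Demo on a scratch tree; on a real Primary Server use:
#   IMAGES=/usr/openv/netbackup/db/images
IMAGES=$(mktemp -d)
mkdir -p "$IMAGES/clientA/1728000000/catstore"
truncate -s 72 "$IMAGES/clientA/1728000000/big_image.f"     # truncated stub
truncate -s 4M "$IMAGES/clientA/1728000000/catstore/part1"  # contents live here

# .f files of exactly 72 bytes mark large images whose contents moved to catstore
find "$IMAGES" -type f -name '*.f' -size 72c

# Size of each catstore directory, largest first
find "$IMAGES" -type d -name catstore -exec du -sh {} \; | sort -rh | head -20
```

A few unusually large catstore directories appearing in recent buckets would point straight at the backups responsible for the growth.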
There shouldn't be any other files, although if you have catalog compression enabled, the files will be compressed.
Actually, that's a thought: were you perhaps using catalog compression that has since been disabled, or catalog archiving where images have been unarchived?