Corrupt copied transaction log backups

Gladiatr Posts: 28
edited November 6, 2008 11:43AM in SQL Backup Previous Versions
Hi,

I am having a problem with my log shipping setup and would appreciate some input on this, please.

The details are below:

We have a production server (PS) and a standby server (LS).

On both the PS and the LS we are running SQL Backup 4.6. Every night we take a full backup of our production databases on the PS and mirror it to the LS using the following command:

exec master..sqlbackup N'-SQL "BACKUP DATABASE [DatabaseName] TO DISK = [E:\SQLbackups\DatabaseName_<DATETIME yyyymmdd_hhnnss>] WITH NAME = [<AUTO>], DESCRIPTION = [<AUTO>], PASSWORD = ''<ENCRYPTEDPASSWORD>somepassword</ENCRYPTEDPASSWORD>'', KEYSIZE = 256, COMPRESSION = 3, THREADS = 1, INIT, MIRRORFILE=[\\LS2\SQLBackup\DatabaseName_<DATETIME yyyymmdd_hhnnss>], ERASEFILES=7"', @exitcode OUTPUT, @sqlerrorcode OUTPUT

Once the full backup has completed, we back up the transaction logs every 10 minutes on the PS and mirror them to the LS using the following command:

exec master..sqlbackup N'-sql "BACKUP LOG [DatabaseName] TO DISK = ''E:\SQLbackups\Transaction Logs\DatabaseName_Log_<DATETIME yyyymmdd_hhnnss>.sqb'' WITH NAME = ''<AUTO>'', DESCRIPTION = ''<AUTO>'', PASSWORD = ''<ENCRYPTEDPASSWORD>somepassword</ENCRYPTEDPASSWORD>'', KEYSIZE = 256, MIRRORFILE = ''\\LS2\SQLBackup\DatabaseName_Log_<DATETIME yyyymmdd_hhnnss>.sqb'', COMPRESSION = 3, THREADS = 1"', @exitcode OUTPUT, @sqlerrorcode OUTPUT

The LS then runs a scheduled job to restore the transaction log backups by executing the command below:

exec master..sqlbackup N'-sql "RESTORE LOG [DatabaseName] FROM DISK = ''E:\SQLBackup\DatabaseName_Log_*.sqb'' WITH STANDBY = ''E:\DATABASES\UNDO\UNDO_DatabaseName.DAT'', PASSWORD = ''somepassword'', MOVETO = ''E:\SQLBackup\DatabaseNameTransactionLogArchive\''"', @exitcode OUTPUT, @sqlerrorcode OUTPUT

This setup has been working for the past two years or so, until one day I checked the logs on the LS and noticed they had been sitting there since July without being restored by the restore job above.

On further investigation I noticed that we had a drive space issue on the PS at the time the transaction log backup job was executing. The backup log file on the PS is 0 bytes locally. But the transaction log backup job above, which executes on the PS, is supposed to mirror the same file to the LS as well. So I went to see if I could find the file on the LS, and I found it there, 17 KB in size. When I tried to restore that file manually, it gave me the following error:

Not a SQL Backup nor native backup file: 'E:\SQLBackup\DatabaseName_Log_20080727_032000.sqb'
SQL Backup exit code: 600

I ran the verify command on the transaction log file and it says:

Verifying file:
E:\SQLBackup\DatabaseName_Log_20080727_032000.sqb

Error 570: File is not a SQL Backup file.
Error 715: Backup file is not encrypted.

SQL Backup exit code: 715
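For reference, SQL Backup's RESTORE VERIFYONLY syntax can be used to reproduce this check; the command I ran was along the following lines (treat this as a sketch, as the exact options may have differed; adjust the path and password to your environment):

-- sketch only: verify a single SQL Backup file; path and password are examples
exec master..sqlbackup N'-SQL "RESTORE VERIFYONLY FROM DISK = ''E:\SQLBackup\DatabaseName_Log_20080727_032000.sqb'' WITH PASSWORD = ''somepassword''"', @exitcode OUTPUT, @sqlerrorcode OUTPUT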

I checked the next transaction log file and it gave the same error. I then checked a couple more and they were fine, with nothing wrong with them.

I also tried the SQB2MTF command-line utility and the GUI utility, but this time they said the password was incorrect. I used both utilities with a good transaction log backup and the same password, and they worked like a charm.

So, is there a way to fix these corrupted transaction log files so that I can use them for the restore on my LS, without having to restore a full backup of the database and start from scratch again?

I would appreciate your help in this regard.

Thanks
gladiatr.

Comments

  • Hi,

    Thank you for your post into the forum.

    Unfortunately there isn't a way to fix these corrupt transaction log files, so if you do not have a local copy on the source server your database will need to be reseeded.

    In your job scripts you are using the MIRRORFILE option; if you have now resolved your source server's space issues, I would advise replacing it with COPYTO. You would then have a good transaction log backup file available on your source server even if the copied file does get corrupted.
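    For example, your transaction log backup command with COPYTO in place of MIRRORFILE would look something like the following (a sketch only; note that COPYTO takes a destination folder rather than a file name, so adjust the paths to your environment):

    -- sketch: COPYTO copies the completed backup file to the named folder after the backup finishes
    exec master..sqlbackup N'-sql "BACKUP LOG [DatabaseName] TO DISK = ''E:\SQLbackups\Transaction Logs\DatabaseName_Log_<DATETIME yyyymmdd_hhnnss>.sqb'' WITH NAME = ''<AUTO>'', DESCRIPTION = ''<AUTO>'', PASSWORD = ''<ENCRYPTEDPASSWORD>somepassword</ENCRYPTEDPASSWORD>'', KEYSIZE = 256, COPYTO = ''\\LS2\SQLBackup'', COMPRESSION = 3, THREADS = 1"', @exitcode OUTPUT, @sqlerrorcode OUTPUT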

    You may wish to use a different program to replace this copying process if you get a lot of logs which become corrupted.

    Our CopyTool written by Petey may work for you and is available from Here

    I hope this helps,
    Matthew Flatt
    Redgate Foundry
  • Hi Matthew,

    Thank you, Matthew, for replying to my post. I didn't get a chance to check your reply earlier since we have been busy upgrading our SQL Server 2000 database engine to SQL Server 2005.

    I understand we can't fix the corrupt transaction log files, although it would be nice to be able to do this in the future. However, I have a couple of questions now:

    Firstly, I ran into a space issue on my PS and I'm using the MIRRORFILE option. Shouldn't the job generate a warning message in the SQL Backup event log about running out of space on the PS, but at the same time still mirror the transaction log backup to my standby (LS) server, even if it is unable to write the backup locally on the PS? According to my understanding, this is what the MIRRORFILE option is supposed to do.

    Secondly, the reason we used the MIRRORFILE option is that we would like to keep a transaction log backup on both the PS and the LS. If I use the COPYTO option and the transaction log backup on the PS is corrupt, the job would copy that corrupt backup to the LS as well, and I would end up with two copies of the corrupt transaction log backup file. At least with the MIRRORFILE option I have the advantage of a good file on one of the servers.

    And thirdly, we keep the transaction log files on our PS because we have additional RAID-capable drives on the PS for this purpose, and we mirror the transaction log backups to the LS so that in case of a disaster or a space issue we have at least one fully trusted, working transaction log backup.

    Please shed some light on this.

    Also, thank you for providing the link to the CopyTool written by Petey; I will definitely try it as well.

    I appreciate your help.

    Thanks.
    -gladiatr
  • Hi Gladiatr,

    Your understanding of the MIRRORFILE command is correct.

    I suggested using COPYTO only if your space issues on your PS Server have been resolved.

    I suggested this because getting a corrupt backup when backing up to a local drive is rare; it is generally copying over a network that causes problems.

    If you continually have space issues, using MIRRORFILE might be a good option, although it is not protecting you from corruption at this point, as you have seen.

    Thanks,
    Matthew Flatt
    Redgate Foundry
  • Hi Matthew,

    I have a question now regarding this CopyTool.

    As you suggested in your earlier reply, I am now using the CopyTool to copy the log files; I have already set up my log shipping this way.

    Now, there is a new problem I discovered today when we again ran out of space on the production server, this time on the C: drive. When I investigated the cause, I found a huge file created by this CopyTool on the C: drive, under the FTPTools folder in Documents and Settings\All Users\Application Data\FTPTools. The size of this file was over 1 GB. My question is: is there any way to change the path of this CopyTool error log file to some other drive, using parameters like /use or /filelist etc.? Or is there any way to stop it creating this error log file at all?

    My other question is: since this is a CopyTool, why does it create an FTPTools folder in Documents and Settings\All Users\Application Data? Please provide your feedback on this.

    Also, please let me know exactly what /INITLIST does when used.

    Thanks.
    -Gladiatr.