Log backup continuation as scheduled
deepakontheweb
Posts: 12
Hi,
I am using Red Gate SQL Backup v4 for log shipping a 300 GB+ database. When the index defrag job runs, it generates roughly 20 GB of log across 2 to 3 files (transaction log backups run every 5 minutes).
The standby server is at a remote site with a slow connection, so moving the transaction log backup files takes a long time.
With this setup and the way log shipping is designed in SQL Backup, the next transaction log backup does not start until the current backup file has been moved to the standby server.
I wanted to know if there is any configuration that lets me keep taking a transaction log backup every 5 minutes (like Microsoft's own log shipping design) while another job keeps copying the files to the remote location. Simply put, I don't want the transaction log backup and the copy clubbed together.
Here is the existing code:
DECLARE @exitcode int
DECLARE @sqlerrorcode int

-- Log backup and copy in one operation: COPYTO makes SQL Backup push the file
-- to the standby share as part of the same backup step.
EXEC master..sqlbackup N'-sql "BACKUP LOG [DBXXX] TO DISK = ''B:\DBLogs\LOG_(SQLServername)_DBXXX_<DATETIME yyyymmdd_hhnnss>.sqb'' WITH NAME = ''<AUTO>'', DESCRIPTION = ''<AUTO>'', ERASEFILES = 3, COPYTO = ''\\DRSQLServerName\B$\DBXXXLogShippingLogFiles'', COMPRESSION = 3, THREADS = 1"', @exitcode OUTPUT, @sqlerrorcode OUTPUT

IF (@exitcode <> 0) OR (@sqlerrorcode <> 0)
BEGIN
    RAISERROR ('SQL Backup job failed with exitcode: %d SQL error code: %d', 16, 1, @exitcode, @sqlerrorcode)
END
Thanks in advance..
Comments
This is a very common issue; I am sure it's not happening only in our scenario! Well, does it mean I should:
- remove log shipping
- keep taking transaction log backups
- use my own process to move the transaction log backups to the remote side
- configure a job to restore the transaction log backup files and then move them to another folder?
Ahh, a lot of work, especially if somebody has hundreds of databases. (A rough sketch of that last restore step is below.)
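Just a rough, untested sketch of what that restore-and-move job could look like on the standby server; the file name, folders and undo file are illustrative (based on the COPYTO share above), and it assumes xp_cmdshell is available for moving the processed file:

DECLARE @exitcode int
DECLARE @sqlerrorcode int

-- Restore the next copied log backup in standby mode so the database stays readable.
-- In a real job the file name would be picked up from the copy folder, oldest first.
EXEC master..sqlbackup N'-sql "RESTORE LOG [DBXXX] FROM DISK = ''B:\DBXXXLogShippingLogFiles\LOG_SQLServerName_DBXXX_20060101_120000.sqb'' WITH STANDBY = ''B:\DBXXXLogShippingLogFiles\undo_DBXXX.dat''"', @exitcode OUTPUT, @sqlerrorcode OUTPUT

IF (@exitcode <> 0) OR (@sqlerrorcode <> 0)
BEGIN
    RAISERROR ('Log restore failed with exitcode: %d SQL error code: %d', 16, 1, @exitcode, @sqlerrorcode)
END
ELSE
BEGIN
    -- Move the processed file to another folder so it is not restored twice
    EXEC master..xp_cmdshell 'move "B:\DBXXXLogShippingLogFiles\LOG_SQLServerName_DBXXX_20060101_120000.sqb" "B:\DBXXXLogShippingLogFiles\Processed\"'
END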
Deepak Kumar
deepak.kumar@sqlknowledge.com
In the agent job for defragging/reindexing:
Step 1. Disable the transaction log backup job.
Step 2. Set the database to BULK_LOGGED recovery.
Step 3. Run the customised reindex script. The script cursors through each table in the database and, after each table, checks the current system time; if at least the required interval (5 minutes) has passed, it takes a transaction log backup and then moves on to the next table. (A rough sketch is below.)
Step 4. Set the database back to FULL recovery.
Step 5. Re-enable the transaction log backup job.
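A rough, untested sketch of what those steps could look like, run in the DBXXX database context. The job name is illustrative, it reuses the backup command from the first post minus COPYTO, and it uses SQL 2000-style DBCC DBREINDEX:

-- Steps 1 and 2: pause the scheduled log backup job and switch to bulk-logged recovery
EXEC msdb..sp_update_job @job_name = 'DBXXX transaction log backup', @enabled = 0
ALTER DATABASE [DBXXX] SET RECOVERY BULK_LOGGED

-- Step 3: reindex table by table, taking a log backup whenever at least 5 minutes have passed
DECLARE @table sysname
DECLARE @lastBackup datetime
DECLARE @exitcode int
DECLARE @sqlerrorcode int
SET @lastBackup = GETDATE()

DECLARE tbl CURSOR FAST_FORWARD FOR
    SELECT TABLE_SCHEMA + '.' + TABLE_NAME
    FROM INFORMATION_SCHEMA.TABLES
    WHERE TABLE_TYPE = 'BASE TABLE'
OPEN tbl
FETCH NEXT FROM tbl INTO @table
WHILE @@FETCH_STATUS = 0
BEGIN
    DBCC DBREINDEX (@table, ' ', 0)   -- rebuild all indexes, keep existing fill factor

    IF DATEDIFF(minute, @lastBackup, GETDATE()) >= 5
    BEGIN
        EXEC master..sqlbackup N'-sql "BACKUP LOG [DBXXX] TO DISK = ''B:\DBLogs\LOG_(SQLServername)_DBXXX_<DATETIME yyyymmdd_hhnnss>.sqb'' WITH NAME = ''<AUTO>'', DESCRIPTION = ''<AUTO>'', ERASEFILES = 3, COMPRESSION = 3, THREADS = 1"', @exitcode OUTPUT, @sqlerrorcode OUTPUT
        IF (@exitcode <> 0) OR (@sqlerrorcode <> 0)
            RAISERROR ('Log backup during reindex failed with exitcode: %d SQL error code: %d', 16, 1, @exitcode, @sqlerrorcode)
        SET @lastBackup = GETDATE()
    END

    FETCH NEXT FROM tbl INTO @table
END
CLOSE tbl
DEALLOCATE tbl

-- Steps 4 and 5: back to full recovery and re-enable the scheduled job
ALTER DATABASE [DBXXX] SET RECOVERY FULL
EXEC msdb..sp_update_job @job_name = 'DBXXX transaction log backup', @enabled = 1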
hope this helps
Thanks again :-)
Deepak Kumar
deepak.kumar@sqlknowledge.com
This was not a use case we anticipated, where the copy takes so long that it blocks the scheduled backups.
If we provided an option that allows SQL Backup to pass over the backup file names to a copy utility, so that it runs in an independent process, a different problem would arise. You could have the copy utility copying one set of backup files to the standby server, and during that time, another transaction log backup is made. You then have 2 instances of the copy utility trying to transfer files. If yet another copy is started, the connection would eventually get congested, and one or more transfers could fail.
Or we could provide an option to set the COPYTO function to run only if the backup file size is under a certain size. You would then have to manually copy the large file over, but at least the backups would run as scheduled and the connection won't get saturated, assuming you have another way of moving large files.
Any other suggestions are welcomed. Thanks.
SQL Backup Consultant Developer
Associate, Yohz Software
Beyond compression - SQL Backup goodies under the hood, updated for version 8
My suggestion would be two separate steps:
1. one for taking the transaction log backup
2. one for moving the transaction log backup file to the standby server location.
If the move step is taking a long time, step 1 can still run separately on schedule (like Microsoft's own log shipping architecture). A rough sketch is below.
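Nothing official, just a sketch of how the split could be done today under these assumptions: the backup job keeps only the BACKUP LOG command (no COPYTO), and a second job on its own schedule pushes whatever files are waiting with robocopy, whose /MOV switch deletes each file locally after a successful copy, so an interrupted transfer simply resumes with the remaining files next run. It assumes xp_cmdshell and robocopy are available:

-- Job 1 (every 5 minutes): transaction log backup only, no COPYTO
DECLARE @exitcode int
DECLARE @sqlerrorcode int
EXEC master..sqlbackup N'-sql "BACKUP LOG [DBXXX] TO DISK = ''B:\DBLogs\LOG_(SQLServername)_DBXXX_<DATETIME yyyymmdd_hhnnss>.sqb'' WITH NAME = ''<AUTO>'', DESCRIPTION = ''<AUTO>'', ERASEFILES = 3, COMPRESSION = 3, THREADS = 1"', @exitcode OUTPUT, @sqlerrorcode OUTPUT
IF (@exitcode <> 0) OR (@sqlerrorcode <> 0)
    RAISERROR ('SQL Backup job failed with exitcode: %d SQL error code: %d', 16, 1, @exitcode, @sqlerrorcode)

-- Job 2 (own schedule): move any waiting backup files to the standby share;
-- /MOV removes each file locally after a successful copy, /R and /W control retries
EXEC master..xp_cmdshell 'robocopy B:\DBLogs \\DRSQLServerName\B$\DBXXXLogShippingLogFiles LOG_*.sqb /MOV /R:3 /W:30'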
Deepak Kumar
deepak.kumar@sqlknowledge.com