Log backup continuation as scheduled

deepakontheweb Posts: 12
edited September 1, 2007 6:19AM in SQL Backup Previous Versions
Hi,

I am using Red Gate SQL Backup v4 for log shipping on a database over 300 GB in size. When the index defrag job runs, it generates approximately 20 GB of log across 2 to 3 files (transaction log backups run every 5 minutes).

The standby server is located at a remote site with a slow connection, so moving the transaction log backup files takes a long time.

With this log shipping design in Red Gate, the next transaction log backup does not start until the previous backup file has been moved to the standby server.

I wanted to know if there is any configuration that would let me keep taking transaction log backups every 5 minutes (like Microsoft's log shipping design) while a separate job keeps copying the files to the remote location. (Simply put, I don't want the log backup and the copy coupled together.)

Here is the existing code:

DECLARE @exitcode int
DECLARE @sqlerrorcode int

-- Back up the log, compress it, and copy it to the standby share in a single step
EXEC master..sqlbackup N'-sql "BACKUP LOG [DBXXX] TO DISK = ''B:\DBLogs\LOG_(SQLServername)_DBXXX_<DATETIME yyyymmdd_hhnnss>.sqb'' WITH NAME = ''<AUTO>'', DESCRIPTION = ''<AUTO>'', ERASEFILES = 3, COPYTO = ''\\DRSQLServerName\B$\DBXXXLogShippingLogFiles'', COMPRESSION = 3, THREADS = 1"', @exitcode OUTPUT, @sqlerrorcode OUTPUT

IF (@exitcode <> 0) OR (@sqlerrorcode <> 0)
BEGIN
    RAISERROR('SQL Backup job failed with exitcode: %d SQL error code: %d', 16, 1, @exitcode, @sqlerrorcode)
END



Thanks in advance.

Thanks,
Deepak Kumar

[email protected]

Comments

  • Any reply??

    This is a very common issue; I'm sure it isn't happening only in our scenario! Well, does it mean I should:

    - Remove log shipping
    - Keep taking transaction log backups
    - Use my own process to move the log backups to the remote side
    - Configure a job to restore the log backup files, and then move them to another folder

    Ahh, a lot of work... (and what if somebody has hundreds of databases?)
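
    For the restore side of such a hand-rolled setup, the standby job might look roughly like this (a sketch only: the folder names are made up, and the MOVETO option for relocating processed files should be checked against your SQL Backup version):

    DECLARE @exitcode int
    DECLARE @sqlerrorcode int

    -- Restore all waiting log files in date order, keep the database readable
    -- in standby mode, and move each processed file into an archive folder
    EXEC master..sqlbackup N'-sql "RESTORE LOG [DBXXX] FROM DISK = ''B:\DBXXXLogShippingLogFiles\LOG_*.sqb'' WITH STANDBY = ''B:\DBXXXLogShippingLogFiles\undo_DBXXX.dat'', MOVETO = ''B:\DBXXXLogShippingLogFiles\Processed''"', @exitcode OUTPUT, @sqlerrorcode OUTPUT

    IF (@exitcode <> 0) OR (@sqlerrorcode <> 0)
        RAISERROR('Log restore failed with exitcode: %d SQL error code: %d', 16, 1, @exitcode, @sqlerrorcode)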
    Thanks,
    Deepak Kumar

    [email protected]
  • I had a similar issue at another site I worked at a few years back, and got round it by making the following changes to the defrag/reindex job.

    In the agent job for defragging/reindexing:

    Step 1. Disable the transaction log backup job.
    Step 2. Set the database to BULK-LOGGED recovery.
    Step 3. Run the customised reindex script. This script cursors through each table in the database; after each table it checks the current system time, and if at least the required interval (5 minutes) has passed, it takes a transaction log backup before moving on to the next table.
    Step 4. Set the database back to FULL recovery.
    Step 5. Re-enable the transaction log backup job.
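
    A rough sketch of the step 3 loop (assuming the SQL 2000-era DBCC DBREINDEX command and made-up database/path names; treat it as an outline, not a drop-in script):

    DECLARE @table sysname
    DECLARE @lastBackup datetime
    SET @lastBackup = GETDATE()

    -- Cursor through every user table, rebuilding indexes one table at a time
    DECLARE tbl CURSOR FOR
        SELECT name FROM sysobjects WHERE xtype = 'U'
    OPEN tbl
    FETCH NEXT FROM tbl INTO @table
    WHILE @@FETCH_STATUS = 0
    BEGIN
        DBCC DBREINDEX(@table)

        -- After each table, take a log backup if the interval has elapsed
        IF DATEDIFF(minute, @lastBackup, GETDATE()) >= 5
        BEGIN
            EXEC master..sqlbackup N'-sql "BACKUP LOG [DBXXX] TO DISK = ''B:\DBLogs\LOG_DBXXX_<DATETIME yyyymmdd_hhnnss>.sqb'' WITH NAME = ''<AUTO>'', COMPRESSION = 3"'
            SET @lastBackup = GETDATE()
        END

        FETCH NEXT FROM tbl INTO @table
    END
    CLOSE tbl
    DEALLOCATE tbl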

    Hope this helps.
  • I accept that this could be a way to avoid generating heavy log, but it's a production environment and all activity should be fully logged; I can't play with the database and its core environment. Also, we have a very complicated system architecture, and a lot of things depend on it.

    Thanks again :-)
    Thanks,
    Deepak Kumar

    [email protected]
  • petey Posts: 2,358 New member
    Yes, it does appear that you would need to copy the files to the standby server using some other method, as the backup process will not end until the backup has completed. There isn't any setting to change this behaviour.

    This was not a use case we anticipated, where the copy takes so long that it blocks scheduled backups.

    If we provided an option that allows SQL Backup to pass over the backup file names to a copy utility, so that it runs in an independent process, a different problem would arise. You could have the copy utility copying one set of backup files to the standby server, and during that time, another transaction log backup is made. You then have 2 instances of the copy utility trying to transfer files. If yet another copy is started, the connection would eventually get congested, and one or more transfers could fail.

    Or we could provide an option to set the COPYTO function to run only if the backup file size is under a certain size. You would then have to manually copy the large file over, but at least the backups would run as scheduled and the connection won't get saturated, assuming you have another way of moving large files.

    Any other suggestions are welcome. Thanks.
    Peter Yeoh
    SQL Backup Consultant Developer
    Associate, Yohz Software
    Beyond compression - SQL Backup goodies under the hood, updated for version 8
  • How about creating two steps in the backup job on the primary server (instead of restricting by size):
    1. Take the transaction log backup.
    2. Move the backup file to the standby server location.

    If the move step takes a long time, I could still execute step 1 separately (like Microsoft's own log shipping architecture).
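
    Sketched as two agent job steps (the robocopy flags and paths are assumptions, and the COPYTO option is simply dropped from the original backup command):

    -- Step 1 (T-SQL): log backup only, no COPYTO, so it always finishes on schedule
    DECLARE @exitcode int
    DECLARE @sqlerrorcode int
    EXEC master..sqlbackup N'-sql "BACKUP LOG [DBXXX] TO DISK = ''B:\DBLogs\LOG_(SQLServername)_DBXXX_<DATETIME yyyymmdd_hhnnss>.sqb'' WITH NAME = ''<AUTO>'', DESCRIPTION = ''<AUTO>'', ERASEFILES = 3, COMPRESSION = 3, THREADS = 1"', @exitcode OUTPUT, @sqlerrorcode OUTPUT

    -- Step 2 (CmdExec): copy any .sqb files not yet on the standby share;
    -- robocopy skips files that already exist, so a slow or repeated run is safe:
    --   robocopy B:\DBLogs \\DRSQLServerName\B$\DBXXXLogShippingLogFiles *.sqb /R:3 /W:30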


    :idea:
    Thanks,
    Deepak Kumar

    [email protected]