Backup Copy of multiple databases at end of job

We've been using Redgate SQL Backup for about 6 months now and love it; it makes the backups of our server, which has upwards of 50 databases, much easier to manage and track.
What we do struggle with is the way the copy component is implemented: if we have a job that backs up all databases (except ones we've excluded), it performs the copy after each backup completes. In theory this should be fine; however, we've found quite a few times that network performance issues on our system cause the actual backups to drag on while each one waits for its copy to finish.
We run a data warehouse with a nightly load that runs from 12-7am each day, so we definitely don't want backups running during that window and consuming CPU. Copying backups during that window is less of a concern, since network activity has minimal impact on our builds.

Is there a way to do this easily? We've recently switched to a separate robocopy job that runs at a set time, but it feels quite disconnected compared to doing it all in the same place.

Answers

  • Eddie D Posts: 1,669 Rose Gold 5
    Hi, thank you for your forum post.

    When executing a backup job that performs a backup of multiple databases using the COPYTO keyword, the job will back up each database and copy the backup file to the COPYTO location before moving on to the next database.

    There is no feature within SQL Backup to delay the copying process until all the backups specified in a job have completed, or to run the COPYTO process on its own.  This means creating a separate copy job, as you have already configured, which the SQL Backup GUI has no knowledge of and therefore cannot help you manage.

    An alternative is to use the SQL Backup Cloud Storage Settings; however, this does require that you have your own AWS S3 or Azure storage.  The upload to cloud storage is performed by a separate service outside of the backup job, but you will have visibility within the SQL Backup GUI.

    Many Thanks
    Eddie
    Eddie Davis
    Product Support Engineer
    Redgate Software Ltd
    Email: [email protected]
  • cmthomas68 Posts: 4 New member
    Thanks for replying Eddie. At this stage, we can't use cloud storage for our backups. Being able to delay the copies until all the backups have finished seems like a good feature to have in the backup tool; it would remove the backups' dependence on network performance.
  • Eddie D Posts: 1,669 Rose Gold 5
    edited February 16, 2021 11:23AM
    Hi, thank you for your reply.
    I will pass your suggestion on to the SQL Backup Product Manager and Development Team.

    I have submitted an enhancement request whose internal reference number is SB-5994.

    Sadly, I cannot guarantee that the enhancement request will be successful, or if approval is given, what future version of SQL Backup it will appear in.  I will update this post if I receive any further information.

    Many Thanks
    Eddie
  • Eddie D Posts: 1,669 Rose Gold 5
    edited February 16, 2021 11:55AM
    Hi cmthomas68,
    A colleague has made me aware of a rarely used feature in SQL Backup, first introduced in V7.5.  It requires editing the existing job via SSMS and adding a SQL Backup keyword called USEQUEUEDCOPY.  This keyword allows you to put a non-transaction-log backup with a COPYTO into the Log Copy Queue.

    The Log Copy Queue is primarily used with Transaction Log Backups, where on completion of a backup using the COPYTO keyword, an entry is added to the Log Copy Queue and the copying process takes place outside of the Log Backup job.  This is the default behaviour for Log Backups and is useful in a log shipping configuration to keep retrying the copy process in the event of a longer network outage.
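    For context, a transaction log backup that uses this default queued-copy behaviour might look like the following (the database name and paths here are purely illustrative, mirroring the full backup examples):
    EXECUTE master..sqlbackup '-SQL "BACKUP LOG [AdventureWorks2017] TO DISK = ''D:\Log_Backups\<AUTO>.sqb'' WITH COMPRESSION = 4, COPYTO = ''\\MyServer\Copied_Backups''"'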

    By adding the USEQUEUEDCOPY keyword to a Full or Differential backup that uses the COPYTO keyword, the backup file is not copied on completion of the backup; instead, an entry is added to the Log Copy Queue.  The SQL Backup Agent service will then copy the backup file to the COPYTO location, retrying over a 24 hour period if necessary.

    Below is example syntax you could try.  The USEQUEUEDCOPY keyword is not configurable using the backup wizards in the GUI, which means you will need to add the keyword manually by editing the Job Step in SSMS.

    Example syntax before adding the keyword:
    EXECUTE master..sqlbackup '-SQL "BACKUP DATABASE [AdventureWorks2017] TO DISK = ''D:\Full_Backups\<AUTO>.sqb'' WITH PASSWORD = ''<OBFUSCATEDPASSWORD>gwS5c+4DcmQ=</OBFUSCATEDPASSWORD>'', CHECKSUM, DISKRETRYINTERVAL = 30, DISKRETRYCOUNT = 10, COMPRESSION = 4, COPYTO = ''\\MyServer\Copied_Backups'', KEYSIZE = 256, THREADCOUNT = 3, VERIFY"'


    After adding the USEQUEUEDCOPY keyword:
    EXECUTE master..sqlbackup '-SQL "BACKUP DATABASE [AdventureWorks2017] TO DISK = ''D:\Full_Backups\<AUTO>.sqb'' WITH PASSWORD = ''<OBFUSCATEDPASSWORD>gwS5c+4DcmQ=</OBFUSCATEDPASSWORD>'', CHECKSUM, DISKRETRYINTERVAL = 30, DISKRETRYCOUNT = 10, COMPRESSION = 4, COPYTO = ''\\MyServer\Copied_Backups'', USEQUEUEDCOPY, KEYSIZE = 256, THREADCOUNT = 3, VERIFY"'


    PLEASE NOTE:
    You need to consider the impact the above may have on the Log Copy Queue and the workload of the SQL Backup Agent service.  I recommend that you test carefully in a test environment before applying the change in production.

    If you already perform high-frequency Log Backups (every 10 minutes or less), I suspect that adding the USEQUEUEDCOPY keyword to your Full and Differential backups will greatly increase the workload of the SQL Backup Agent service and may overwhelm it.

    Many Thanks
    Eddie

