Unusually high memory consumption when copying to S3 storage?

To lay the groundwork:
  • SQL Server 11.0.7001.0 (2012 SP4, Enterprise)
  • SQL Backup 9.2.7.543
  • Compression: enabled, level 1
  • Encryption: enabled, 256-bit
  • Thread count: 6
  • Max transfer size and block size: defaults
  • AWS S3, with tagging
We are experiencing high memory usage for long-running jobs during the Copying to Hosted Storage phase.

This seems to be newer behavior; SQL Backup has worked so well that you almost (dangerously) forget it's even there, in a good way. No issues for almost two years, up until now.

If a job is small data-wise, it isn't an issue. For instance, I can have 20+ diffs copying to S3 simultaneously with the same settings, but since they all wrap up in ~20 minutes, no significant memory allocation shows up.

As a test last night, though, I tried copying a single large database's log backup (190 GB data size, 71 GB backup size; yes, I know, but it's an ideal test case). I knew it would lag in the copy phase, and after several hours that single job was consuming over 20 GB of memory. A controlled GUI-based cancellation immediately released the consumed memory.
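
If anyone wants to quantify the growth while replicating, here is a minimal monitoring sketch in Python using psutil; the process name SQBCoreService.exe is my assumption for the SQL Backup Agent service, so adjust it to whatever yours actually runs as:

    import time
    import psutil

    # Assumed process name for the SQL Backup Agent service; confirm in Task Manager.
    TARGET = "SQBCoreService.exe"

    # Log the agent's working set once a minute for the duration of the copy phase.
    while True:
        for p in psutil.process_iter(["name", "memory_info"]):
            if p.info["name"] == TARGET:
                gib = p.info["memory_info"].rss / 1024 ** 3
                print(f"{time.strftime('%H:%M:%S')}  pid {p.pid}: {gib:.2f} GiB")
        time.sleep(60)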

It's fairly easy to test and replicate as needed. It's just unusual how suddenly this became an issue, given minimal data growth and no real change in memory allocation in general. If I queue up several large copies overnight with the same settings, the GUI goes unresponsive from memory pressure, forcing me to hard-reset the Redgate service.

While I agree I could be less aggressive thread-wise, I can't point to any significant change or load besides being on the latest version of SQL Backup.

Thank you for your time.

Comments

  • millennia Posts: 2 New member
    I'm seeing something similar. Since I updated to the latest 9.2 version, the usually bulletproof backup to S3 is unable to copy a 51 GB backup: it takes up all the available RAM on the SQL Server and eventually has to be cancelled, which releases all the RAM. Once beyond 24 hours, the COPYTO fails with a timeout and I have to copy the backup to S3 manually via a CloudBerry-attached drive, which completes in about an hour without issue.

    All the other 319 DB backups (yep, that many!), the largest 21 GB in size, are copied fine.

    Something has gone wrong with the latest update of 9.2, and it would be good to have Redgate look into this, as it's not good to have to copy a backup manually because the software is no longer reliable. (A scripted alternative to the manual copy is sketched below.)
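
    If anyone else needs a scripted stopgap that doesn't depend on CloudBerry, here is a minimal sketch using the AWS SDK for Python (boto3); the bucket name, key, and file path are placeholders:

        import boto3
        from boto3.s3.transfer import TransferConfig

        # Placeholders: substitute your own bucket, key, and backup path.
        BUCKET = "my-backup-bucket"
        KEY = "sqlbackup/MyDB_FULL.sqb"
        LOCAL = r"D:\Backups\MyDB_FULL.sqb"

        # Stream the file in fixed-size multipart chunks so memory stays
        # bounded no matter how large the backup is.
        config = TransferConfig(multipart_chunksize=64 * 1024 * 1024,  # 64 MB parts
                                max_concurrency=4)

        boto3.client("s3").upload_file(LOCAL, BUCKET, KEY, Config=config)

    Uploaded this way, memory use is capped at roughly the chunk size times the concurrency, which may be why the manual copy succeeds where the COPYTO currently doesn't.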
  • Hi @FleetDZ, @millennia -

    Sorry to hear you've been having issues! We have been doing some recent work in this area, which included simplifying how we handle uploads to S3 storage; it seems likely that some of these changes caused this.
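
    To illustrate the class of problem we'll be checking for (this is a sketch of the memory-safe pattern, not our actual upload code): a multipart upload should hold only one part buffer in memory at a time, so usage stays flat regardless of file size. In Python/boto3 terms:

        import boto3

        s3 = boto3.client("s3")
        bucket, key = "example-bucket", "backups/example.sqb"  # placeholders
        PART = 64 * 1024 * 1024  # 64 MB: only one part buffer alive at a time

        mpu = s3.create_multipart_upload(Bucket=bucket, Key=key)
        parts = []
        with open(r"D:\Backups\example.sqb", "rb") as f:  # placeholder path
            num = 1
            while chunk := f.read(PART):  # reassigning chunk frees the previous buffer
                resp = s3.upload_part(Bucket=bucket, Key=key,
                                      UploadId=mpu["UploadId"],
                                      PartNumber=num, Body=chunk)
                parts.append({"PartNumber": num, "ETag": resp["ETag"]})
                num += 1
        s3.complete_multipart_upload(Bucket=bucket, Key=key,
                                     UploadId=mpu["UploadId"],
                                     MultipartUpload={"Parts": parts})

    If parts are instead queued or pinned until the whole upload completes, memory grows with file size, which matches what you are both describing.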

    I have internally logged this issue as bug SB-5898. We will investigate and get back to you with any progress we make.

    Thank you for your detailed posts; they will be exceedingly useful in tracking down and replicating the problem!
  • millennia Posts: 2 New member
    Thanks. Hopefully you'll be able to get a fix into the next v9 update, as it's annoying to have my regular weekly backups to the cloud choke and to have to do them manually.