Unusually high memory consumption when copying to S3 storage?
FleetDZ
To lay the groundwork:
This seems to be newer behavior; SQL Backup has worked so well that you almost dangerously forget it's even there (in a good way). No issues for almost two years, up until now.
If a job is small data-wise, it isn't an issue. For instance, I can have 20+ diffs copying to S3 simultaneously with the same settings, but since they all wrap up in ~20 minutes, no significant memory allocation shows up.
As a test last night, though, I tried copying a single large DB's log backup (190 GB data size, 71 GB backup size; yes, I know, but it's an ideal test case). I knew it would lag in the copy phase, and after several hours that single job was consuming over 20 GB of memory. I did a controlled GUI-based cancellation and immediately confirmed the release of the consumed memory.
It's fairly easy to test and replicate as needed. It's just unusual how suddenly this became an issue, given minimal growth in data or memory usage in general. If I load up several large items to copy overnight with the same settings, the GUI goes unresponsive from memory pressure, forcing me to hard-reset the Redgate service.
While I agree I could go less aggressive thread-wise (see the rough memory math after the settings list below), I can't explain any significant change in load besides being on the latest version of SQL Backup.
Thanks for your time.
- SQL Server 11.0.7001.0 (2012 SP4, Enterprise)
- SQL Backup 9.2.7.543
- Compression: enabled, level 1
- Encryption: enabled, 256-bit
- Threads: 6
- Max transfer size and block size: defaults
- AWS S3, with tagging
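For a sense of scale, here is the rough memory math for a streaming multipart upload. This is only a back-of-the-envelope sketch: the 8 MB part size is an assumption (a common multipart default), since I don't know what part size SQL Backup uses internally at the default settings.

    # Rough upper bound on in-flight buffers for a streaming multipart
    # upload: each worker thread holds about one part in memory at a time.
    threads = 6          # thread count from the settings above
    part_size_mb = 8     # assumed part size; 8 MB is a common multipart default

    expected_mb = threads * part_size_mb
    print(f"Expected in-flight buffers: ~{expected_mb} MB")  # ~48 MB

    # Even a very generous 100 MB part size would only give ~600 MB,
    # nowhere near the 20+ GB observed, which suggests buffers are
    # accumulating over the upload rather than being released per part.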
Comments
All the other 319 DB backups (yep, that many!), the largest 21 GB in size, copy fine.
Something has gone wrong with the latest 9.2 update, and it would be good to have Redgate look into this; it's not good to have to copy a backup manually because the software is no longer reliable.
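In the meantime, for anyone who does end up copying a backup manually, here is a minimal sketch of a bounded-memory upload using Python and boto3. The bucket name and file paths are placeholders, and this is just a stopgap outside SQL Backup, not a fix for the product itself:

    import boto3
    from boto3.s3.transfer import TransferConfig

    # upload_file streams the file from disk in parts, so memory stays
    # near max_concurrency * multipart_chunksize regardless of file size.
    config = TransferConfig(
        multipart_chunksize=8 * 1024 * 1024,  # 8 MB parts
        max_concurrency=6,                    # mirrors the 6-thread setting
    )

    s3 = boto3.client("s3")
    s3.upload_file(
        r"D:\Backups\LOG_MyDatabase.sqb",     # placeholder backup path
        "my-backup-bucket",                   # placeholder bucket
        "sql-backups/LOG_MyDatabase.sqb",     # placeholder object key
        Config=config,
    )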
Sorry to hear you've been having some issues! We have been doing some recent work in this area, which included simplifying how we handle uploads to S3 storage; it seems likely that some of these changes caused this.
I have internally logged this issue as bug SB-5898. We will investigate and get back to you with any progress we make.
Thank you for your detailed posts; they will be exceedingly useful in tracking down and replicating the problem!