Azure integration & blob life-cycle limitation
FleetDZ
Posts: 6 New member
in SQL Backup
We originally hit a similar issue with the S3 integration in SQL Backup, and tagging support was added as a result (old thread: https://forum.red-gate.com/discussion/comment/146163#Comment_146163).
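For context, this is roughly what the S3 side allows: a life-cycle rule can filter on a custom object tag, so per-database retention works without touching the object names. This is just a sketch of an S3 life-cycle configuration using a tag filter; the tag key and value below are hypothetical placeholders, not the actual tags SQL Backup writes.

```json
{
  "Rules": [
    {
      "ID": "expire-file-db-backups",
      "Status": "Enabled",
      "Filter": {
        "Tag": { "Key": "database-type", "Value": "file" }
      },
      "Expiration": { "Days": 7 }
    }
  ]
}
```

Because the filter matches on the tag rather than the key prefix, every backup job can keep the server's default naming and the rule still only touches the tagged objects.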
But now we are facing nearly the same issue, as our org is consolidating to a single cloud on Azure.
So what's happening: we have differing life-cycle/retention rules based upon what a DB contains. An example would be a file-content DB, which can be hundreds of GB in size. We wouldn't age it up the tier structure, because the minimum retention periods for those tiers would be cost-prohibitive at that data volume. A DB with file content we'd want to keep for 7 days and then delete. But for data-content DBs we'd want to life-cycle up through the storage tiers, and potentially delete at 180 days.
The issue is primarily caused by Azure's blob life-cycle rules being based on prefix alone, whereas in S3 it's prefix and custom tags.
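To illustrate the limitation, an Azure life-cycle management rule filters on blob type and a name prefix. A minimal sketch of a policy (rule name and prefix are hypothetical) looks like this:

```json
{
  "rules": [
    {
      "name": "tier-and-expire-data-dbs",
      "enabled": true,
      "type": "Lifecycle",
      "definition": {
        "filters": {
          "blobTypes": [ "blockBlob" ],
          "prefixMatch": [ "sqlbackups/FULL_DataDb" ]
        },
        "actions": {
          "baseBlob": {
            "tierToCool":    { "daysAfterModificationGreaterThan": 30 },
            "tierToArchive": { "daysAfterModificationGreaterThan": 90 },
            "delete":        { "daysAfterModificationGreaterThan": 180 }
          }
        }
      }
    }
  ]
}
```

Since `prefixMatch` is the only name-based filter available here, the rule can only distinguish databases if their blob names (or containers) differ by prefix, which is exactly what the inherited server naming prevents.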
So when a backup job inherits the server's naming, there's no opportunity to denote that DB "A" should be deleted by Azure's life-cycle rules in 10 days, while DB "B" should be tiered up and deleted months later. I guess the only way to get around this would be to make a dedicated job for each DB to allow custom naming, but that just opens a massive can of worms.
Also, the directory structure of the local backup data isn't retained, even though blob containers do support it. In the container everything sits at root level, so prefix mapping off the directory structure isn't an option either. Redgate does a nice job of mapping info into the blob metadata, but those values aren't available to life-cycle management.
With this difficult limitation put forth by MS, I'm guessing the only option would be for Backup to support multiple containers per account. That way the existing job rules and interfaces still work, but we could have one job for the "data" DBs and one for the "file" DBs, each going to a different container, giving us decent life-cycle management.
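If multiple containers were supported, the prefix limitation becomes workable, because `prefixMatch` in an Azure life-cycle policy includes the container name. A hypothetical two-container policy (container names invented for illustration) might look like:

```json
{
  "rules": [
    {
      "name": "expire-file-dbs",
      "enabled": true,
      "type": "Lifecycle",
      "definition": {
        "filters": {
          "blobTypes": [ "blockBlob" ],
          "prefixMatch": [ "file-db-backups/" ]
        },
        "actions": {
          "baseBlob": {
            "delete": { "daysAfterModificationGreaterThan": 7 }
          }
        }
      }
    },
    {
      "name": "tier-and-expire-data-dbs",
      "enabled": true,
      "type": "Lifecycle",
      "definition": {
        "filters": {
          "blobTypes": [ "blockBlob" ],
          "prefixMatch": [ "data-db-backups/" ]
        },
        "actions": {
          "baseBlob": {
            "tierToCool":    { "daysAfterModificationGreaterThan": 30 },
            "tierToArchive": { "daysAfterModificationGreaterThan": 90 },
            "delete":        { "daysAfterModificationGreaterThan": 180 }
          }
        }
      }
    }
  ]
}
```

Each backup job would keep its normal naming inside its own container, and the per-container prefixes do the routing that tags do on the S3 side.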
If I need to further clarify, or if I'm interpreting the possibilities here incorrectly, please let me know!