Automated deployment to large, complex databases
TomN
Posts: 2 New member
How does Redgate's automated deployment handle extremely large database changes? With DACPACs, a change to a table that is too large and complex can lead to a fairly critical failure. How does Redgate mitigate this issue?
Best Answer
way0utwest Posts: 312 Rose Gold 1
I don't mean to distract you or avoid the question, but there are a couple of things in here that I want to point out.
First, there's no magic in how changes are made. When we alter a table, there are really only a couple of things SQL Server can do. All of us are bound by the rules of the database engine for making changes, and we can't avoid those rules, whether the change is metadata-only, rewrites pages, or anything else.
Second, the issues we see with large tables exist with small ones too, but because of the scale, duration, or resource cost of the changes, we don't notice them as much. We can also recover more quickly, simply because a rebuild/copy/etc. of 100 rows is inherently less disruptive than it is with 1,000,000,000 rows.
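To make that concrete, here is a minimal sketch (the table and columns are hypothetical) of two ALTER statements that look similar but behave very differently inside the engine: the first is a metadata-only change, the second is a size-of-data operation.

-- Metadata-only: adding a NULLable column doesn't touch existing rows,
-- so it completes almost instantly regardless of table size.
ALTER TABLE dbo.Orders ADD Notes nvarchar(200) NULL;

-- Size-of-data: widening an existing column rewrites every row and holds
-- a schema-modification lock for the duration, which on a billion-row
-- table can block the application for a long time.
ALTER TABLE dbo.Orders ALTER COLUMN Quantity bigint NOT NULL;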
Now, one of the issues with DACPAC changes is that you lack control over how you would prefer to make these changes. Similar issues can occur with SQL Compare, though I think Redgate does a better job of warning you and giving you a script to review.
With SQL Change Automation (SCA), if you are tracking these changes in Visual Studio as migration scripts, you have complete control over how to make the change. We can't mitigate a bad decision, like locking a 1,000,000,000-row table to add and change data or rebuild an index; you might have valid reasons to make that change. We do suggest ways to make some changes, but when a migration script is imported, you need to review the method it uses to make the change.
The advantage of SCA is that we give you complete control to decide how the change should be deployed. We don't necessarily give you the best method for your environment, since the requirements and restrictions vary by application, but we allow you to write the code that suits you. This means you need to understand how to make large table changes effectively in your world.
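For example, a migration script can add a new column and backfill it in small batches rather than in one long transaction. This is only a sketch under assumptions I'm making up (the table name, the batch size, and the requirement that the column end up NOT NULL), not something SCA writes for you:

-- Add the column as NULLable first: a metadata-only change.
ALTER TABLE dbo.Orders ADD IsArchived bit NULL;
GO

-- Backfill in small batches so no single transaction locks the whole table.
DECLARE @rows int = 1;
WHILE @rows > 0
BEGIN
    UPDATE TOP (5000) dbo.Orders
    SET    IsArchived = 0
    WHERE  IsArchived IS NULL;

    SET @rows = @@ROWCOUNT;
END;
GO

-- Tighten the column once every row is populated. This still scans the
-- table to validate, but it does not rewrite the rows.
ALTER TABLE dbo.Orders ALTER COLUMN IsArchived bit NOT NULL;
ALTER TABLE dbo.Orders ADD CONSTRAINT DF_Orders_IsArchived DEFAULT (0) FOR IsArchived;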
There are numerous ways to approach these problems, and you ought to research which works best for you. A few ideas:
- https://sqlstudies.com/2016/07/14/altering-a-column-in-a-large-table-a-case-study/
- https://www.sqlservercentral.com/Forums/Topic851917-1550-1.aspx
- https://www.red-gate.com/simple-talk/sql/database-administration/changing-data-types-large-tables-int-bigint-conundrum/
- https://serverfault.com/questions/36696/how-to-speed-up-adding-column-to-large-table-in-sql-server
Note that there are other methods, and you might need to experiment with how the different ways of making a change affect your system. Then incorporate the ones that work into your process and use them in your migration scripts. One common pattern is sketched below.
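For instance, the usual answer to a widening data type change (the int-to-bigint case in the links above) is to add a new column, copy the data across in batches, and then swap the columns in a short window, instead of running one giant ALTER COLUMN. A rough sketch with hypothetical names; indexes, constraints, and keeping the two columns in sync while the application writes are left out:

-- Assumes dbo.OrderDetail.Quantity is int NOT NULL today.
ALTER TABLE dbo.OrderDetail ADD QuantityNew bigint NULL;
GO

-- Copy values across in batches (same loop pattern as above).
DECLARE @rows int = 1;
WHILE @rows > 0
BEGIN
    UPDATE TOP (5000) dbo.OrderDetail
    SET    QuantityNew = Quantity
    WHERE  QuantityNew IS NULL;
    SET @rows = @@ROWCOUNT;
END;
GO

-- In a short maintenance window: lock in the new column, drop the old one,
-- and rename the new column into its place.
ALTER TABLE dbo.OrderDetail ALTER COLUMN QuantityNew bigint NOT NULL;
ALTER TABLE dbo.OrderDetail DROP COLUMN Quantity;
EXEC sp_rename 'dbo.OrderDetail.QuantityNew', 'Quantity', 'COLUMN';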
If you are using SQL Source Control, what I would recommend is that you review the scripts for ALTER TABLE changes and perhaps rewrite them. This won't necessarily flow through to your development environments, as the SCA cmdlets will still do what they do, but in smaller environments these long deployments usually don't matter if you rebuild the environment. However, for deployments to QA, Staging, and Production, you'll want a real script used at the moment of the change. In those cases, I'd use the new pre/post-deployment script options for the table changes, and then comment out the ALTER in the regular script if you don't want it.
Editor, SQLServerCentral