So, the _MigrationLog.Status column has the following meanings:
0 = migration was not deployed, e.g. the "Mark as deployed" button was clicked in the UI
1 = migration was deployed
2 = migration was imported to the project and no changes in the database were necessary. The most common scenario is when a developer makes some changes directly in the database, then clicks the Refresh button in SCA VS, followed by an Import and Generate Script.
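These status values can be inspected directly on a database. A minimal sketch, assuming the table and column names mentioned in this thread (_MigrationLog, migration_id, deployed, complete_dt) match your SCA version:

```sql
-- Hedged sketch: list each migration's recorded status on a database.
-- Verify the table/column names against your own _MigrationLog first.
SELECT migration_id,
       deployed,      -- 0 = marked as deployed, 1 = deployed, 2 = imported
       complete_dt    -- when the entry was recorded
FROM   dbo._MigrationLog
ORDER BY complete_dt;
```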
So if I set a script's deployed status to 0 in my source DB, that script would not be run on my target DBs either, but it would show as "Marked as deployed" on the target DBs as well... is that right?
What value should I set in a script's deployed column in my source DB so that it doesn't get executed on my target DB? I tried 0, 1, and 2; none of them worked. I don't want to delete the script from my VS project.
You don't need to manipulate the _MigrationLog table; the tool should be doing that. Click the button in VS for the scripts you do not want to run. Keep in mind, though, that this only applies to that server/DB. In my example (see screenshot), when I deploy to UAT or PROD the script will still run.
Let me put my question a different way. Let's say I have a baseline script and a couple of change scripts in the project, and they were successfully deployed to my target DB. I assume the _MigrationLog table would then show 0 for the baseline script and 1 for the change scripts. Is that right?
Now, let's say my _MigrationLog table got dropped by accident on my target DB. If I run my release pipeline, since there is no _MigrationLog table and it finds a baseline script, it safely skips running the baseline script. But it always tries to run the other two change scripts on my target DB.
The question is, how can I stop the other two change scripts from being run on my target DB? I tried "Mark as deployed" from VS, but the release pipeline still tries to execute them on my target DB.
I agree that the table should not be dropped. But we are still evaluating the tool, so we are weighing possible accidents. The table may not be dropped, but it could be overwritten when we refresh data from higher environments to lower ones. For example, if we refresh our UAT DB from the PROD DB, the _MigrationLog table would also get refreshed, so there is always a chance the data in the table gets out of sync. So I am looking for ways to manipulate it, especially a way to skip a change script from being deployed on a target DB.
I refresh our lower environments from PROD all the time, and you want the _MigrationLog table to be refreshed as well so that it reflects what has already been deployed to PROD.
"Mark it deployed" is for when the change is already made in DEV. You don't want it deployed again in DEV. It will still be deployed in other databases where its not set as deployed in the _MigrationLog table
It's a valid concern. Maybe someone from Redgate could chime in with ideas on how to prevent a disaster if the table were dropped or altered. After your first deployment you could add a check that the table exists and has some rows before the deployment can happen. Depending on your situation, though, there may be valid cases where that table does not exist, like deploying to a new environment.
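A sketch of such a pre-deployment guard, run as an early pipeline step against databases that have already been baselined. The table name follows this thread and the error numbers are arbitrary; adapt both to your environment:

```sql
-- Hedged sketch: fail the pipeline early if _MigrationLog is missing or
-- empty on a database that has already been baselined.
IF OBJECT_ID(N'dbo._MigrationLog', N'U') IS NULL
    THROW 50001, N'_MigrationLog table is missing - aborting deployment.', 1;

IF NOT EXISTS (SELECT 1 FROM dbo._MigrationLog)
    THROW 50002, N'_MigrationLog table is empty - aborting deployment.', 1;
```

As noted above, you would skip this check when deploying to a brand-new environment, where the table legitimately does not exist yet.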
I am not 100% sure that my answer is correct, but I believe that as long as there is an entry in the migration_id column that matches the id shown in the migration itself (and possibly the checksum, to confirm the script hasn't been modified since it was deployed), the script won't be deployed again.
Doing a trial of this, the entry in the database I deployed to is the same except for the complete_dt column value and the deployed value (1 instead of 2, since the script was deployed to the target but imported into the dev database).
If I then delete the _MigrationLog table, the deployment fails because the target needs to be baselined.
Recreating the _MigrationLog table with the same values as in the development database, but changing the deployed value from 2 to 1, resolved this. I used SQL Compare to recreate the schema for _MigrationLog and Data Compare to create a script to deploy the data (in which I then changed the 2 to a 1 for the migration). After building again, the deployment worked, as there was nothing to deploy.
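The status fix-up described above could also be done in one statement rather than hand-editing the Data Compare script. A sketch, assuming the column names used in this thread:

```sql
-- Hedged sketch: after restoring _MigrationLog rows from the dev database,
-- convert status 2 (imported into dev) to 1 (deployed) so the target's
-- log records that the scripts have actually been deployed there.
UPDATE dbo._MigrationLog
SET    deployed = 1
WHERE  deployed = 2;
```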
Many Thanks
Eddie
Eddie Davis
Senior Product Support Engineer
Redgate Software Ltd
Email: support@red-gate.com
Answers
The Status values are documented here: https://documentation.red-gate.com/sca3/developing-databases-using-sql-change-automation/generating-scripts-to-capture-database-changes/migrations/script-status-information-window
If you just want to add some scripts to your project that will never run with deployments, see Additional Scripts; you would leave your scripts folder unchecked. https://documentation.red-gate.com/sca3/developing-databases-using-sql-change-automation/generating-scripts-to-capture-database-changes/programmable-objects/additional-scripts
On the second part: DON'T DROP THAT TABLE, bad things will happen! In all seriousness, yes, the baseline should not be run again, but your other migration scripts will run. You could code each migration script so that it can be re-run without causing issues, for example with an IF EXISTS statement that checks whether your change was already made. Another option might be to re-baseline: https://documentation.red-gate.com/sca3/developing-databases-using-sql-change-automation/advanced-project-manipulation/re-baselining
But the best option is simply: don't drop that table.
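A minimal sketch of such a re-runnable migration script; the table and column used here (dbo.Customer, LoyaltyTier) are purely illustrative:

```sql
-- Hedged sketch: guard the change with an existence check so the script
-- is harmless if _MigrationLog is lost and the script runs a second time.
IF NOT EXISTS (SELECT 1
               FROM   sys.columns
               WHERE  object_id = OBJECT_ID(N'dbo.Customer')
                 AND  name = N'LoyaltyTier')
BEGIN
    ALTER TABLE dbo.Customer ADD LoyaltyTier tinyint NULL;
END
```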