Best practices for applying SCM to databases

poivre Posts: 2
Hi all-

I am in the process of designing a system that will get our databases under version control. The basic idea is to treat our production DBs and their backups as the source of truth for data, and to treat the CREATE scripts in source control as the source of truth for schema, stored procedures, functions, views, etc.

I am evaluating the SQL Toolkit as an API for achieving the following:
- automatically creating scripts for table, SP, function, view creation
- diffing existing databases and creating change scripts to go from one version of the DB to another
- diffing create scripts and creating change scripts to go from one version of the DB to another (not sure if the toolkit can do this)

My problem is that I am essentially making this up as I go. Can anyone shed any light on how well the SQL Toolkit can achieve this? Also, can anyone point me towards ANY books, articles, presentations, ANYTHING on the design of such a system? What are the pitfalls? What are the best practices? Man, I can't find anything but conjecture on this.



  • Hi Josh,

    Sorry for the delay in anyone getting back to you. Your first two points should be perfectly possible using the toolkit. Probably the easiest thing to do is to look at the snapshot API sample and use it to take a snapshot of each of your databases, then check those snapshots in to source control. You can then use the snapshots, either in the SQL Compare GUI or via the API, to create your change scripts against any database.

    As for diffing create scripts, the SQL Compare API does not support this at this time. However, if you were to apply those scripts to a scratch database and then use the SQL Compare API to diff that database against the target database, you can generate change scripts that way.
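    To make the snapshot-and-diff idea above concrete, here is a minimal conceptual sketch in Python. This is NOT the Red Gate API; a "snapshot" here is just a dict mapping object names to their CREATE scripts, and the DROP syntax is a placeholder. It only illustrates the shape of the workflow: snapshot each version, check the snapshots in, diff two snapshots to get a change script.

    ```python
    # Conceptual sketch only: not the SQL Compare API. A "snapshot" is a
    # dict of {object name: CREATE script}; DROP OBJECT is placeholder syntax.

    def diff_snapshots(source, target):
        """Return a naive change script that would turn `target` into `source`."""
        statements = []
        for name, script in source.items():
            if name not in target:
                statements.append(script)                  # object is new
            elif target[name] != script:
                statements.append(f"DROP OBJECT {name};")  # definition changed:
                statements.append(script)                  # drop and recreate
        for name in target:
            if name not in source:
                statements.append(f"DROP OBJECT {name};")  # object was removed
        return "\n".join(statements)

    # Two hypothetical snapshots of the same database, one version apart.
    v1 = {"dbo.Widget": "CREATE TABLE dbo.Widget (Id INT);"}
    v2 = {"dbo.Widget": "CREATE TABLE dbo.Widget (Id INT, Name VARCHAR(50));",
          "dbo.GetWidget": "CREATE PROCEDURE dbo.GetWidget AS "
                           "SELECT * FROM dbo.Widget;"}
    change_script = diff_snapshots(v2, v1)
    ```

    A real tool also has to order statements by dependency and alter tables in place instead of dropping them (so data survives), which is exactly the hard part the SQL Compare engine handles for you.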

    Richard Mitchell has created a handy utility if you are storing snapshots which allows you to diff two snapshots - Have a look here:

    Hope this helps,

    - James
    James Moore
    Head of DBA Tools
    Red Gate Software Ltd
  • fordc03 Posts: 49 Bronze 2
    You can do what you're talking about...

    Basically, you have a "baseline" build, which could be a snapshot or live database.

    Then, you do a compare between your "baseline" and your "production" database.

    You build the differences and create an update package that can then be applied to multiple databases of the same type.

    In our case that didn't work too well, because we had a couple of thousand databases that were each a teeny bit different, so a single update package doesn't work.

    In these cases we have a baseline, we do the compare, then apply the differences script.

    We also generate a change report on what has changed. All of this uses the Red Gate APIs... plus one big super-threader to run the build script across multiple databases at the same time.
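    The "super-threader" part of the setup above can be sketched in a few lines with a thread pool. Everything here is a stand-in: `run_script` would really open a connection (e.g. via a SQL Server driver) and execute the change script, but in this sketch it just reports success so the shape of the fan-out is visible.

    ```python
    # Sketch of running one change script against many databases concurrently.
    # `run_script` is a hypothetical stand-in for real connection/execution code.
    from concurrent.futures import ThreadPoolExecutor, as_completed

    def run_script(database, script):
        # Real version: connect to `database`, execute `script`, capture errors.
        return (database, "OK")

    def apply_to_all(databases, script, max_workers=8):
        """Apply `script` to every database in parallel; return per-DB status."""
        results = {}
        with ThreadPoolExecutor(max_workers=max_workers) as pool:
            futures = [pool.submit(run_script, db, script) for db in databases]
            for future in as_completed(futures):
                db, status = future.result()
                results[db] = status
        return results

    statuses = apply_to_all(["Customer001", "Customer002", "Customer003"],
                            "ALTER TABLE dbo.Widget ADD Name VARCHAR(50);")
    ```

    In practice you would also want per-database error capture and a cap on `max_workers` so thousands of simultaneous connections don't overwhelm the server.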

    Red Gate's schema compare utilities are the fastest on the market, bar none... they are also the most accurate. And it's wonderful for going from SQL 2000 to 2005, and for going the other direction too, provided you didn't implement any SQL 2005-specific code or objects in your database that would prevent you from going backwards.

    Using the data compare utilities, you can compare "seed" data with whatever is in production and generate an update package that way as well...

    Hope the explanation has been helpful with what you can achieve...

    Unfortunately, there isn't any documentation on building this setup for you... well, there is, it's just scattered all over the place. Red Gate's support has been very helpful in helping me piece it all together.

    The only problem I couldn't solve with their APIs was mass distribution: checking each database's version and applying different scripts accordingly.
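    One common pattern for the version-checking problem is to store a schema version number in each database and keep migration scripts keyed by the version they upgrade from. A minimal sketch, with all names and scripts hypothetical:

    ```python
    # Hypothetical migration registry: each entry upgrades FROM that version
    # to the next one. A real system would read current_version from a table
    # in each database (e.g. SELECT Version FROM dbo.SchemaVersion).
    MIGRATIONS = {
        1: "-- change script: v1 -> v2",
        2: "-- change script: v2 -> v3",
    }
    LATEST_VERSION = 3

    def scripts_to_apply(current_version):
        """Return, in order, the scripts a database at current_version needs."""
        return [MIGRATIONS[v] for v in range(current_version, LATEST_VERSION)]
    ```

    A database at version 1 gets both scripts in order; one already at the latest version gets none, so the same driver loop can safely visit every database.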

    I can help point you in the right direction if you want...

    On a side note... why am I answering questions in the Red Gate forum? I don't know... I guess I just don't have enough work to do, so I'm doing freelance on the side for Red Gate?? haha :)