SQL Compare API

I'm currently still testing the evaluation copy of the Compare API. We intend to use it as part of automated release management, which among other things migrates structure and data changes to the staging database. For that it would need to do at least the following:
- find any differences in structure and generate step-by-step scripts
- find any differences on a filtered list of tables

So far I've encountered the following problems:
- There does not seem to be a way to separate User objects from Login objects. We need to create only the users, not the logins as well. If you are migrating security from one database to another on the same instance, only the users need to be created, using the same login credentials.
- When I created a new, blank database (same instance) and tried executing the generated script (first manually, then using BlockExecutor), I first ran into a problem because the logins already existed and the script was trying to recreate them. After filtering out the User object types, it now fails to execute the script because the CREATE TABLE statements are executed before the function creation statements.

I haven't even gotten to comparing the data differences yet, but my confidence is running very thin right now because of these problems.

My questions are:
- Does the comparison engine check for object dependencies at all? And does it order the differences in the Differences class instance accordingly?
- Is there a way to compare only User objects but not Login objects?
- If you run the data comparison (SQL Data Compare) before the structure scripts are executed, would it generate the missing data correctly? It would have to know that a column is missing in, for instance, the target (second) database and generate scripts to populate the column values. It would also have to generate full INSERT statements for any tables missing from the target database. Am I correct in those assumptions?

- This one is more of an enhancement request: currently you can enumerate the Differences collection and call Work.BuildFromDifferences, passing the differences collection and the individual difference at each step, to get the script for a single difference. Is there a way to call BuildFromDifferences on the whole collection and then enumerate the generated blocks individually? At the moment I can only get batches out of it, rather than whole blocks per difference.
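
For reference, the per-difference loop I'm working with at the moment looks roughly like this (server and database names are placeholders, and I'm toggling Difference.Selected rather than passing the individual difference into BuildFromDifferences, but the idea is the same):

    using RedGate.SQL.Shared;
    using RedGate.SQLCompare.Engine;

    static void ScriptEachDifference()
    {
        using (Database source = new Database(), target = new Database())
        {
            source.Register(new ConnectionProperties("DEVSERVER", "DevDb"), Options.Default);
            target.Register(new ConnectionProperties("STAGESERVER", "StagingDb"), Options.Default);

            Differences differences = source.CompareWith(target, Options.Default);

            foreach (Difference current in differences)
            {
                // Select only this one difference, then rebuild to get its script.
                foreach (Difference d in differences)
                    d.Selected = (d == current);

                Work work = new Work();
                work.BuildFromDifferences(differences, Options.Default, true); // true = script is to run on the second database

                using (ExecutionBlock block = work.ExecutionBlock)
                {
                    string scriptForThisDifference = block.GetString();
                    // store scriptForThisDifference (and work.Messages) somewhere for later confirmation
                }
            }
        }
    }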

What we need is a way to inspect all the differences between the development and production databases and store the individual difference scripts in a separate database, together with messages. Someone then needs to confirm the individual changes, after which the actual script is generated from the confirmed parts stored in that database. Ideally, what the person confirming the changes would see would look like:
- create function A
- create table B
- create column X on table C
- remove index Y from table D
and after he confirmed, for instance, the 1st and 3rd items, only the scripts for A and C would be generated and executed.

Comments

  • Michelle T
    We do check for dependencies - I think you have unfortunately just come across the one major case that is currently broken. There is currently a problem with creating functions and tables in the correct order when the table depends on the function through a default constraint (i.e. the table mentions the function as part of such a constraint). We're working on fixing that, but our attempts so far have led to other dependencies becoming broken, so I'm afraid you probably won't see much progress on that before 7.2.

    The dependency relationships are taken into account in ordering the final synchronization script, but not the Difference objects in the Differences class.

    While User objects can't be separated from Login objects, the script should be checking if the login exists before trying to create it. If you could supply me (michelle.taylor@red-gate.com) with an example of synchronization failing because logins already exist (a snapshot of a database that causes this and enough information to recreate the appropriate logins on both sides of the comparison would be excellent, but any further description would be useful), I'll look into it and see if I can work out what the problem is.

    If you use the data comparison and synchronization tools on tables or views where the structure doesn't match, it will attempt to map columns from one table to the other and ignore any that it can't map. It won't generate insert statements for unmapped tables or columns.

    I'm not sure exactly what you want from Work.BuildFromDifferences. Once you've built an ExecutionBlock from a full Differences object, there's no internal concept of 'blocks by difference' - ExecutionBlock is just an object for storing a series of batches to be executed. The 'single difference' block that you can get from BuildFromDifferences is something we put in to support the user interface's display of the synchronization script for a single difference - it doesn't have any meaningful existence as an entity within a full ExecutionBlock.

    Looking at your problem description, I think the way I'd approach it is to take the finished synchronization script from SQL Compare and split it up into sections by looking for the relevant object names in the PRINT statements that are included in the script. You'd also need to fix up the transaction code, and it wouldn't guarantee that your final script (after picking and choosing sections) would run because you might have left out something that later sections which you did choose actually turned out to depend on. But I'm a tester, not a developer, so you might want to take development advice from me with a generous pinch of salt!
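
    As a very rough illustration of the kind of splitting I mean (completely untested, and only a sketch of the idea, not something we ship):

        using System.Collections.Generic;
        using System.Text.RegularExpressions;

        static List<KeyValuePair<string, string>> SplitByPrintStatements(string syncScript)
        {
            // Use the PRINT N'Creating/Altering/Dropping ...' lines in the synchronization
            // script as section markers; everything up to the next marker belongs to that section.
            // Anything before the first PRINT (SET options, transaction start) is deliberately
            // ignored here - that's the plumbing you'd have to fix up separately.
            var marker = new Regex(@"^PRINT\s+N?'(?<description>[^']+)'", RegexOptions.Multiline);
            var sections = new List<KeyValuePair<string, string>>();

            MatchCollection matches = marker.Matches(syncScript);
            for (int i = 0; i < matches.Count; i++)
            {
                int start = matches[i].Index;
                int end = (i + 1 < matches.Count) ? matches[i + 1].Index : syncScript.Length;
                string description = matches[i].Groups["description"].Value;
                sections.Add(new KeyValuePair<string, string>(description, syncScript.Substring(start, end - start)));
            }
            return sections;
        }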

    I'm sorry this answer has been a little rambling, but I hope I've addressed all of your concerns (even if I can't satisfy them). If you need more clarification, please do ask!
    Software Developer
    Redgate Software
  • Thanks for your answers, Michelle - it's not rambling at all.

    The interesting thing that happened after I posted this topic was trying out the synchronization using the visual interface (SQL Compare 6). When trying to execute the script (from the interface), it resulted in exactly the same error messages as through the API, because of ordering. When the interface displays the list of tasks in a tree view, the ordering seems correct - the first object causing the error was a user-defined data type. It showed up as the first item in the tree-view task list, but when the script was generated it was located in a completely different part of the batch.

    Unfortunately, I can't provide you with any screenshots or SQL samples right now as I managed to put the test database into "Suspect" mode using the script generated by another API ... Apart from that, I'm afraid I won't be able to send you a snapshot of even the development database, as the same structure is used for live products. Once our DBA restores the test database, I'll be able to send the relevant bits of generated SQL.

    Just to clarify, in regard to data comparison: does this mean that if a table exists on database A but not on database B, no INSERT statements would be generated when synchronizing from A to B? I'm guessing one would have to run the structure scripts first and then the data comparison tool?

    To clarify what we're trying to do: we have Development, Staging, and Live databases. Development is constantly changing. At the moment, when a developer makes any changes to the development database, he has to script any data or structure changes. When it's time to release the code and database to staging, the merge of all the developers' scripts is run and then the code is deployed (already automated). What we are looking into next is fully automating the deployment process and implementing nightly builds. We already have a utility that can pull all the code from the source control server and create installable builds of all the applications. We will be updating that tool to automatically check for structure differences between the development and staging databases and generate all the necessary scripts.

    The tricky part is filtering the changes being applied to staging. Once the comparison has completed, a partitioned version of the change scripts has to be produced. Each morning the release manager will get a list of scripts to confirm - it must be in a human-readable format (e.g. "Add column A to table B"). Once he has confirmed the desired blocks, a single-file script has to be generated.

    Another tricky part is that the comparison cannot be run again when building the single script file, because by that time the development database might have changed again and those new changes should not be included. The same goes for the table data comparison.

    That is why I need Work.BuildFromDifferences to be as partitioned as possible, or perhaps to achieve the same result by enumerating the Differences class and generating a script for each single difference. The only problem with that for now is that the generated script also contains transaction handling, so each difference becomes its own transaction rather than the whole process being one transaction.

    To answer the suggestion about looking up specific objects and PRINT statements in the generated script: it might work, but it would require a lot of parsing and juggling around, in which case it would be easier to just write the comparison tool manually.

    I hope my explanations describe well enough what I'm trying to achieve.

    Can you recommend a stable release of the Compare API that works correctly in regard to object dependencies?

    Regards
  • Michelle T
    Could you get around the problem of further changes in the development database by taking a database snapshot (using the SaveToDisk method of the Database object) of the development database at an appropriate point and doing further comparisons based on that?
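
    Off the top of my head, that would look something like this (paths are placeholders, and please check the exact method names against the API help file - I'm writing this from memory):

        using RedGate.SQL.Shared;
        using RedGate.SQLCompare.Engine;

        static void SnapshotAndCompare()
        {
            // At the release cut-off point, freeze the development schema to disk.
            using (Database dev = new Database())
            {
                dev.Register(new ConnectionProperties("DEVSERVER", "DevDb"), Options.Default);
                dev.SaveToDisk(@"C:\Snapshots\DevDb_ReleaseCut.snp");
            }

            // Later comparisons run against the frozen snapshot, so changes made to
            // development in the meantime never appear in the generated scripts.
            using (Database snapshot = new Database(), staging = new Database())
            {
                snapshot.LoadFromDisk(@"C:\Snapshots\DevDb_ReleaseCut.snp"); // check the exact load method name in the help file
                staging.Register(new ConnectionProperties("STAGESERVER", "StagingDb"), Options.Default);

                Differences differences = snapshot.CompareWith(staging, Options.Default);
            }
        }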

    SQL Compare isn't really designed for changing which objects you are planning to synchronize at the stage after the synchronization script is generated, but you could produce a report of the available changes to be synchronized and then go back and select just the objects you want to synchronize from the snapshot if you had one of the development database at the time the report was generated.

    SQL Compare 6.2.0.271 is the current stable release, but it does have a couple of dependency issues remaining. You might want to try and get hold of a copy of SQL Compare 5.3 - support@red-gate.com should be able to help you with this - and see if that works better for you. We fixed a number of dependency issues between 5.3 and 6.2, but we did make some tradeoffs in fixing the regressions that those changes caused, so depending on your precise situation you might find 5.3 works better for your purposes.
    Software Developer
    Redgate Software
  • Thanks again Michelle,

    I'm performing further tests using the current release of the API (6.2). I'll try to find a way to separate the Differences found and the script generated for each individual difference - the script already contains regions, and I may be able to filter out just the relevant alter-database region.

    As for the dependencies, it will probably help if the code does several passes over the Differences collection - first user-defined types, then functions, then tables, then indexes, then stored procedures.

    I'll post my findings once the tests are done, in case anyone else finds them useful.

    Regards
  • Finally, after many hours of trial and error, a solution was found - a workaround for the dependency problems. To recap: a dependency has to be created before the dependent object.

    The idea is to execute the scripts generated from the differences between the databases against the actual database - the one that needs to be updated to match the other. The script blocks are executed one by one until the end of the script is reached. Each block either succeeds or fails; if it fails, it is added to a collection. Once the end of the script has been reached, that error collection is executed again - because by now some of the dependencies have been created. The process is repeated until the error collection is the same size at the end of a pass as at the start - meaning no more script blocks were successfully executed, so we can exit.

    The process was not entirely simple. A number of script blocks had to be removed (ignored) by the iteration - for instance the transaction and internal error handling. But at the end I get a script that can be run against the database without any error handling, because it has already been tested against that same database.
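
    Stripped of the housekeeping, the loop boils down to something like this (a simplified sketch - the real code also removes the transaction and error-handling blocks before it starts, and logs which batches were skipped):

        using System.Collections.Generic;
        using System.Data.SqlClient;

        static List<string> OrderByTrialExecution(IEnumerable<string> batches, string connectionString)
        {
            var ordered = new List<string>();          // batches in the order they actually executed
            var pending = new List<string>(batches);   // batches still failing (usually missing dependencies)

            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open();
                while (pending.Count > 0)
                {
                    var stillFailing = new List<string>();
                    foreach (string batch in pending)
                    {
                        try
                        {
                            using (var command = new SqlCommand(batch, connection))
                                command.ExecuteNonQuery();
                            ordered.Add(batch);        // succeeded, so its dependencies must exist by now
                        }
                        catch (SqlException)
                        {
                            stillFailing.Add(batch);   // try again on the next pass
                        }
                    }
                    if (stillFailing.Count == pending.Count)
                        break;                         // no progress this pass - the remainder are genuinely broken
                    pending = stillFailing;
                }
            }
            return ordered;
        }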

    During this process I found out the following things about the generated script (bear in mind I was comparing our development database against a new, empty database):
    - CREATE TYPE statements need to be located outside of the transaction handling. If CREATE TYPE is executed within the transaction itself, the rest of the transaction does not see the user-defined type - hence dependency errors.
    - User objects and Login objects need to be separated. The account used to run the update script does not always see all the security details (it has to be sysadmin to see all of that), and the comparison engine would also script the logins even though they already exist.
    - In the original script generated by the Red Gate API, the full-text catalogs (CREATE FULLTEXT CATALOG) were located at the end of the script, as were the CREATE FULLTEXT INDEX statements. Many tables, stored procedures, and views depend on full-text indexes if they are enabled, and the catalog and indexes need to be applied immediately after the table has been created/altered.
    - The same goes for any indexes, triggers, foreign keys, and primary keys - they need to be scripted immediately after the table create/alter statement.

    I hope this helps you improve the engine. If it would be helpful, I can also send you the two classes (C#) I have written to handle this. You can run them against a complex database to see the differences in ordering between the original script and the script after the "live test".

    One other thing I would like to see on the object representation of database objects (RedGate.SQL.Shared) is a common Dependencies property - it's probably already there internally; it would be nice if it were public.

    Anyway, the final resulting script also takes care of any invalid objects in the source database - they are simply excluded. If everything else goes well, I think we will be going with this API, as it still produced by far the most reliable and viable results of all the similar products.

    However, the next step will be taking a snapshot/backup of the database to be updated prior to executing/testing the scripts against it, so the actual database is not corrupted. Can you advise me on a way to execute scripts against a backup? I've read somewhere that it is possible to compare a snapshot against an actual database. Is it possible to execute SQL against a snapshot as well?

    Regards,
  • Michelle T
    SQL Compare 7 includes the ability to compare directly against a backup - the Beta version is available to download from the relevant forum further down the forum list.

    However, you can't execute SQL against a backup or a snapshot directly - you'd need a separate live database to actually execute SQL against. (You should be able to get such a database by creating a new, empty database and synchronizing the snapshot or backup into it, though.)

    It's unlikely that we'll ever expose a Dependencies property on objects in the SQL Compare Engine (it's not quite 'just there'; it's gathered up from a number of places when it's needed), because we have a whole separate engine for displaying dependencies - the engine behind SQL Dependency Tracker - which would be easier to clean up and release than putting the functionality into the SQL Compare Engine.

    Thanks for all of the information about how you've handled the dependencies. The main problem we've faced in doing similar things is that we want SQL Compare's output to be as fully transactional as possible (so that we can roll back completely if any problems are encountered), and that means not doing things like taking CREATE TYPE statements out of the transaction or doing CREATE FULLTEXT statements (which can't be made transactional) before the main transaction has been successful. While we've been looking for ways around these problems, we don't want to compromise our ability to roll back changes.
    Software Developer
    Redgate Software
  • Thanks for the clarification. A transaction-safe script makes perfect sense and is what you want most of the time. However, there are cases where not everything can be executed within a transaction; as you pointed out, full-text indexes are one of those cases.

    Correct me if I'm wrong, but when a database contains full-text indexing, the indexes are commonly used by functions, stored procedures, and views. Since all of those are created within the transaction, the script will always fail, because the indexes simply won't exist at the time the objects that depend on them are created/changed.

    User-defined types are another example. You could solve this by at least moving those to the top of the transaction, because there is no point in creating a user-defined type if you're not using it anywhere. During my testing it turned out that if the user-defined types are created inside the transaction, they are not visible to anything until the transaction is committed. Since the script fails because of the missing user-defined types, the whole transaction fails and nothing ever gets committed.

    This effectively excludes any objects that can't be accessed correctly inside the transaction from a valid comparison. It makes sense - most of the other engines simply don't generate a script for full-text indexes at all; it's something that has to be maintained manually by the DBA.

    Still, the API removes the need for manual comparison and provides a reliable list of differences; that part is useful. It is also good that the differences generated to script the objects are still accessible as blocks and can be used to regenerate the script. Is it possible to exclude the transaction wrapping from the Work.BuildFromDifferences method? That would at least narrow down the script blocks to process and speed up the parsing.

    I'm currently also working on programmatically creating a backup of the database, running the test scripts (see my previous post), and then restoring the database from the backup.
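
    Roughly along these lines (server, database, and path names are placeholders):

        using System;
        using System.Data.SqlClient;

        static void RunSql(string connectionString, string sql)
        {
            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(sql, connection))
            {
                command.CommandTimeout = 0; // backups and restores can take a while
                connection.Open();
                command.ExecuteNonQuery();
            }
        }

        static void BackupTestRestore(Action runTestScripts)
        {
            // Connect to master - you can't restore a database you're connected to.
            const string master = "Server=STAGESERVER;Database=master;Integrated Security=SSPI;";

            RunSql(master, @"BACKUP DATABASE [StagingDb] TO DISK = N'C:\Backups\StagingDb_pretest.bak' WITH INIT");

            runTestScripts(); // run the trial-execution scripts from the previous post against StagingDb

            // Kick out any remaining connections, then put the database back exactly as it was.
            RunSql(master, @"ALTER DATABASE [StagingDb] SET SINGLE_USER WITH ROLLBACK IMMEDIATE");
            RunSql(master, @"RESTORE DATABASE [StagingDb] FROM DISK = N'C:\Backups\StagingDb_pretest.bak' WITH REPLACE");
            RunSql(master, @"ALTER DATABASE [StagingDb] SET MULTI_USER");
        }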

    I just thought of another useful tool I would like to see, though it may be quite difficult to implement: while testing some other APIs I managed to put the database into "Suspect" mode. Microsoft's recommendation in a case like that is simply to restore the database from a backup. A tool that analyzes the database and suggests "repair" scripts would be useful.
  • Michelle T
    You can use Options.NoSQLPlumbing to exclude the transaction wrapping from Work.BuildFromDifferences.
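
    Roughly like this (from memory, so double-check the exact Options members against the help file):

        static string ScriptWithoutPlumbing(Differences differences)
        {
            Options options = Options.Default.Plus(Options.NoSQLPlumbing);

            Work work = new Work();
            work.BuildFromDifferences(differences, options, true);

            using (ExecutionBlock block = work.ExecutionBlock)
            {
                // No transaction or error-handling wrapping here - just the object scripts.
                return block.GetString();
            }
        }
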
    Software Developer
    Redgate Software
  • Michelle, thanks for all the help.

    Since we managed to put the API to good use, I'll be recommending to our IT director that we buy licences for the Red Gate SQL Toolkit.

    Regards
  • Hi again,

    I've managed to get the structure synchronization part working, and it produces a valid script at the end.

    Now I'm looking into data compare (RedGate.SQLDataCompare.Engine). I managed to find a couple of samples on how to use it, but some of the classes from those examples don't even exist in the namespace, so those samples must be for an older version of the API.

    One of the unpleasant parts is that RedGate.SQLCompare.Engine and RedGate.SQLDataCompare.Engine both reference RedGate.SQL.Shared, but different versions of it, which leads to compilation warnings and possibly to runtime errors. Are you planning to bring the references into line in one of the future releases?

    Can you point me to any code samples? Your walkthrough only contains samples for RedGate.SQLCompare.Engine. I need info on the following (a rough sketch of what I'm picturing follows this list):
    - how to create the two data sources
    - how to configure which tables to look at
    - how to configure which columns in those tables to compare (fine-tuning)
    - how to execute the comparison
    - how to process the results.
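
    For what it's worth, this is roughly the flow I'm picturing, pieced together from the samples I found - so the class and method names here may well belong to an older version of the API and need checking against the current help file:

        using RedGate.SQL.Shared;
        using RedGate.SQLCompare.Engine;
        using RedGate.SQLDataCompare.Engine;

        static void CompareData()
        {
            using (Database source = new Database(), target = new Database())
            {
                // 1. the two data sources
                source.RegisterForDataCompare(new ConnectionProperties("DEVSERVER", "DevDb"), Options.Default);
                target.RegisterForDataCompare(new ConnectionProperties("STAGESERVER", "StagingDb"), Options.Default);

                // 2./3. choose which tables take part and fine-tune the column mappings
                TableMappings mappings = new TableMappings();
                mappings.CreateMappings(source.Tables, target.Tables);

                // 4. run the comparison
                using (ComparisonSession session = new ComparisonSession())
                {
                    session.CompareDatabases(source, target, mappings);

                    // 5. turn the results into a migration script for the target database
                    SqlProvider provider = new SqlProvider();
                    using (ExecutionBlock block = provider.GetMigrationSQL(session, true))
                    {
                        string dataSyncScript = block.GetString();
                    }
                }
            }
        }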

    Thanks and regards
  • Michelle T
    Which version of Data Compare are you using? Data Compare 6 was released after SQL Compare 6, hence the different versions. You should be able to use the RedGate.SQLCompare.Engine.dll file out of the Data Compare directory, which should refer to the same version of RedGate.SQL.Shared as RedGate.SQLDataCompare.Engine.

    I'll dig out the Data Compare examples and see what's wrong with them, then either give you the link to the current set or get them fixed for you.
    Software Developer
    Redgate Software
  • Michelle T
    If you go to http://www.red-gate.com/supportcenter/P ... %20Toolkit and pick 'Download SQL Data Compare API help file' (not the examples link, I've noticed that's broken at the moment), you should be able to get a copy of the current toolkit samples from the first page (the bit where it says 'The SQL Data Compare API provides a number of sample API projects in C# and Visual Basic .NET. To download the sample API projects, click here.').

    The references will need fixing - you'll need to delete and recreate references to RedGate.BackupReader, RedGate.SQL.Shared, RedGate.SQLCompare.BackupReader, RedGate.SQLCompare.Engine and RedGate.SQLDataCompare.Engine. All those dlls should be in the directory you installed SQL Data Compare to.

    When I'd fixed the references, the project then built successfully, so I think the actual examples are still correct.
    Software Developer
    Redgate Software
  • Hi,

    ales.potocnik, would it be possible to get a copy of the C# work you did with the Compare/Data API?

    I understand if not, but you have clearly done a lot of work on this, which is something we are also looking to do, and it would assist us greatly.

    Much appreciated if you can help.

    Tim
  • Hi Tim,

    Unfortunately I can't post just the file, as the whole solution is a bit more complex and consists of a dedicated database for the release/nightly build process and some C# code. The code also handles automated code download from the Vault server. On top of all that, there are also 3 CLR stored procedures, also written in C#, that take care of sorting the scripts.

    I'll try to create some snippets when I have enough time and post them here, or alternatively you can send me an email at ales.potocnik@espares.co.uk and I can respond with some samples.

    Regards