
New-DatabaseReleaseArtifact - is it possible to make it skip deployed scripts

I need to deploy one solution to multiple databases. Even though each deployment is relatively fast, deploying the full solution using a BuildArtifact takes about a minute per database just to check which scripts have already been migrated.

At the moment the target databases can be at different versions, but I know they are all higher than a certain 'baseline' version.
What I wanted to do is build a ReleaseArtifact against this 'baseline' version so that fewer scripts have to be checked during deployment (only the ones added between the 'baseline' and the 'current' version).

The problem, however, is that the ReleaseArtifact produces a different deployment script from the BuildArtifact. With the BuildArtifact script, a migration script that has already been deployed is simply skipped. With the ReleaseArtifact, the deployment fails if it finds that a script has already been deployed. Is it possible to change this behaviour for the ReleaseArtifact somehow?
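
For reference, the workflow I have in mind is roughly this (a minimal sketch using the SQL Change Automation PowerShell cmdlets; server names, database names, paths and versions are placeholders, not our real setup):

```powershell
# Minimal sketch only: names, paths and versions are placeholders.
Import-Module SqlChangeAutomation

# Build the project once and package it as a build artifact.
$buildArtifact = Invoke-DatabaseBuild "C:\Work\MyDatabaseProject" |
    New-DatabaseBuildArtifact -PackageId "MyDb" -PackageVersion "2.5.0"

# For each target database, a release is generated and deployed.
# This per-target step checks the migration scripts against that
# database's migration log, which is what takes about a minute each.
foreach ($db in "CustomerA", "CustomerB", "CustomerC") {
    $target  = New-DatabaseConnection -ServerInstance "myserver" -Database $db
    $release = New-DatabaseReleaseArtifact -Source $buildArtifact -Target $target
    Use-DatabaseReleaseArtifact $release -DeployTo $target
}
```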

Answers

  • Peter_Laws
    Hi RomanPekar,

    Thanks for the question!

    I'm afraid the short answer is no, because a release is generated against a specific target. You could arguably consolidate your existing scripts per environment by rebaselining to expedite the process somewhat (your mileage may vary), but that is still on a per-environment basis.
    The tool isn't intended to use a 'communal' baseline, as that workflow is more brittle than the current model.
    Kind regards
    Peter Laws | Redgate Software
  • RomanPekar
    Thanks for the answer @Peter_Laws

    I was thinking about rebaselining, which would reduce the number of scripts to be deployed. The problem is that most of our scripts are programmable objects, so it's not going to help much.

    I'll try to think about other solutions.
  • Peter_Laws
    Is the difficulty caused by your programmable objects changing repeatedly, or is it a matter of sheer volume?

    We've changed our tooling approach in recent years, but I want to better understand your use case to ensure my advice is applicable.
    Kind regards
    Peter Laws | Redgate Software
  • RomanPekar
    It is indeed a matter of sheer volume. We have over 1,000 procedures/functions, so just checking the list against the migration log table takes some time. It's not much, but with multiple databases it adds up.

    My idea was to prepare a smaller package using some "baseline". Let's say I have versions 2.1, 2.4 and 2.5 deployed on different environments.
    I know that version 2.0 has definitely been deployed everywhere, so I can use version 2.0 as the target to build a release package that can then be deployed to any environment. That way the only changes checked against the migration log are the ones made since version 2.0.
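
    Something along these lines is what I had in mind (a rough sketch only; the package path, the Import-DatabaseBuildArtifact step and the use of the 2.0 package as -Target are my assumptions about how the cmdlets could be combined, not a tested recipe):

    ```powershell
    # Rough sketch: versions, paths and connection details are illustrative,
    # and using the 2.0 package as -Target is an assumption, not a tested recipe.
    Import-Module SqlChangeAutomation

    # Current build of the project (say, version 2.5).
    $current = Invoke-DatabaseBuild "C:\Work\MyDatabaseProject" |
        New-DatabaseBuildArtifact -PackageId "MyDb" -PackageVersion "2.5.0"

    # Use the known common baseline (the 2.0 package) as the comparison target,
    # so the release only contains changes made after 2.0.
    $baseline = Import-DatabaseBuildArtifact "C:\Artifacts\MyDb.2.0.0.nupkg"
    $release  = New-DatabaseReleaseArtifact -Source $current -Target $baseline

    # The same release would then be deployed to every environment at 2.0 or later;
    # whether already-applied post-2.0 scripts get skipped or cause a failure is
    # exactly the open question from the original post.
    foreach ($db in "Env21", "Env24", "Env25") {
        $conn = New-DatabaseConnection -ServerInstance "myserver" -Database $db
        Use-DatabaseReleaseArtifact $release -DeployTo $conn
    }
    ```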
  • Peter_Laws
    What you describe should work. It does place an additional load on you to ensure parity across these environments, and I can't speak to the practical impact, but as you suggested it I anticipate you've already considered this.

    Alternatively, you may find the way our newer tooling, Flyway, handles this more agreeable, as programmable objects are compared like other database objects and can be selected as needed. The handling is comparable, but it may be an easier means of managing them, especially en masse.
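
    For illustration, this is roughly how Flyway treats programmable objects as repeatable migrations (the file names and connection details below are made up; repeatable scripts are re-applied only when their content changes):

    ```powershell
    # Illustrative layout only; file names and connection details are made up.
    # migrations/
    #   V2.1__add_orders_table.sql     <- versioned: run once, recorded in the history table
    #   R__dbo.usp_GetCustomers.sql    <- repeatable: re-applied only when its checksum changes
    #   R__dbo.vw_OpenOrders.sql

    # Unchanged repeatable scripts are skipped, so a large body of stable
    # procedures/functions adds little to each deployment.
    flyway migrate `
        -url="jdbc:sqlserver://myserver;databaseName=MyDb" `
        -user="deploy" -password="..." `
        -locations="filesystem:migrations"
    ```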
    Kind regards
    Peter Laws | Redgate Software