Architecture of Repositories in Multi-User Dedicated Configurations
Kean
I'm new to Source Control in general, so I'm looking for some suggestions on how best to set up the framework of local repositories for the users on my team.
Scenario:
My team is comprised of 10 SQL Developers and DBAs. We have 20 SQL Server Instances, with each instance containing anywhere from 10-30 databases. Everyone on the team has SSMS installed on their own desktop/workstation. Most team members also RDP onto our servers and use SSMS directly on the box they are connected to. In most cases, this is because a developer is working with an SSIS package and needs to update a Stored Procedure that the package calls.
I have identified a few different approaches to creating the local repositories and I'm looking to get some insight from more experienced minds.
Approach A:
Every user has local repositories created on their individual desktops for each instance, for each database. The local repositories would be structured as ...\Instance1-20\DatabaseRepo1-30\.
On every server, there would also be local repositories created for each person, for each instance, for each database. The local repositories on each server would be structured as ...\Person1-10\Instance1-20\DatabaseRepo1-30\.
Pros: This approach covers every combination of a person connecting to any database on any instance from any location (individual desktop plus any of the 20 servers they could be RDP'd to).
Cons: Very complex, and the local repositories for an individual are not in sync, meaning that the local repository ...\Person1\Instance1\Database1\ on Server A could be different from the same repository on Server B.
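(At the upper end, that's 20 instances x 30 databases = 600 repositories on each of the 10 desktops, plus 10 people x 20 instances x 30 databases = 6,000 repositories on each of the 20 servers, for roughly 126,000 local repositories in total.)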
Approach B:
The team's culture/thought process must be changed so that all changes to database objects are made from their individual desktops/workstations. If that becomes the working process, then local repositories would only need to be configured on each individual desktop for each instance, for each database. The local repositories would be structured as ...\Instance1-20\DatabaseRepo1-30\.
Pros: Simple, streamlined configuration.
Cons: Because the team's "culture" will have to change, the barrier to acceptance is higher, making it harder for the team to actually use Source Control effectively.
Approach C:
Create a centralized storage location (either a file share or a mapped network drive) and have one set of local repositories created there. The repositories would be structured as ...\Person1-10\Instance1-20\DatabaseRepo1-30\, but they would only need to exist in the shared storage location, unlike Approach A, where every desktop and each of the 20 servers needs its own full set of repositories.
Pros: The centralized architecture lowers the barrier to acceptance without the complexity of Approach A.
Cons: Increased network bandwidth, as every sync/compare requires a round trip over the network.
There is also a very good chance Approaches D to Z are better than anything I've identified, hence the purpose of this question. I'm open to any ideas/suggestions you have.
Thanks,
Kean
Answers
I'm assuming you are using git, because you referred to "local repositories". If you aren't using git, please let me know. Centralised source control and distributed source control work very differently.
I think you are overcomplicating things. The management of the individual developers' local repos should be up to the individual developers. They can put their local repos wherever they like. It shouldn't need to be centrally managed. It doesn't matter.
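For example (the URL and paths below are just placeholders, not a recommendation of any particular layout), one developer might clone a repo to their desktop while another clones the same repo somewhere completely different on a server they RDP to:

    git clone https://your-git-server/FinanceDatabases.git C:\repos\FinanceDatabases
    git clone https://your-git-server/FinanceDatabases.git D:\work\FinanceDatabases

Because both working copies push to and pull from the same remote, it doesn't matter where they live on disk.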
As for the structure of the repo, perhaps it would be simpler to split this out into multiple repos? If I'm working on database 7 from server 3, why do I need to waste time and effort including database 2 from server 6? A git repo with between 200 and 600 databases in it sounds terrifying. How long is it going to take to clone the repo? All the branches for all of those databases are going to create one hell of a spaghetti soup.
In an ideal world, each database would be in its own little repo. However, in tightly coupled systems, where sets of databases are littered with cross-database dependencies, it can be easier to treat them as a single component in a single repo. Even in this scenario, it would probably make life easier to split the 200-600 databases into groups of tightly coupled databases.
Probably with a structure a bit like this:
Server1
    Database1
        state          << Link SQL Source Control to this directory
        other_stuff    (security scripts, test data, build/deployment scripts, etc.)
    Database2
        state
        test_data
        build_scripts
Server2
    Database1
        state
        test_data
etc.
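As a rough sketch of setting one of these up (the repo and folder names are placeholders, so adjust to your own naming), it might look like:

    git init Server1-group
    cd Server1-group
    mkdir Database1\state Database1\other_stuff
    echo placeholder > Database1\state\.gitkeep
    git add .
    git commit -m "Initial folder structure"

(The .gitkeep placeholder file is only there because git won't track empty folders.) Then link SQL Source Control to the Database1\state folder, as above.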
One final question: Are some of your databases the dev/test/prod versions of the same database? If so, you should treat the dev/test/prod versions as a single database in source control. All your deployments, to all the databases in the pipeline, should come from source control. That way you enforce that dev, test, prod etc. remain in sync with each other.
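As one common pattern (not the only way to do it, and the tag name below is just an example), you'd mark the exact version you've tested in dev and deploy that same version to test and then prod:

    git tag v1.4.0
    git push origin v1.4.0
    git checkout v1.4.0

so every environment is built from the same source-controlled version rather than being edited directly.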
DevOps Mentor and Coach
Director of DLM Consultants
Creator of Speaking Mentors
Microsoft Data Platform MVP
Friend of Redgate
Twitter / LinkedIn