Best Practices for Databases with HUGE Lookup Tables

We have a legacy database with at least a dozen lookup tables, each containing tens of thousands of records. Most of these tables are managed by third parties, and we synchronize local copies of them. We manage changes to these tables using SQL Data Compare and SQL Source Control.

This often causes developers' machines to hit Out Of Memory errors in SSMS/SQL Source Control when refreshing the Commit or Get Latest tabs in SQL Source Control. Even when they don't run into memory issues, the synchronization process can take minutes, and those unproductive minutes add up quickly when you need to refresh the content several times per day.

Is there a set of Best Practices for dealing with this scenario?

We are considering moving the large lookup tables out of the core database (Database1) into a separate database (Database2), then creating Synonyms in the core database that reference the lookup tables. This seems like it would solve the latency issue, but it would add a little more operational overhead.
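For reference, the synonym setup itself is a short one-time script per table. A minimal sketch, using a hypothetical lookup table name (dbo.CountryCodes); Database1 and Database2 are the core and lookup databases described above:

```sql
-- Run in the core database. dbo.CountryCodes is a hypothetical
-- lookup table name; substitute your own.
USE Database1;
GO

-- After the table's data has been migrated to Database2, drop the
-- local copy and point the old name at the new location, so that
-- existing procedures and queries keep working unchanged.
IF OBJECT_ID(N'dbo.CountryCodes', N'U') IS NOT NULL
    DROP TABLE dbo.CountryCodes;
GO

CREATE SYNONYM dbo.CountryCodes
    FOR Database2.dbo.CountryCodes;
GO
```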

Pros:
* 99+% of the time, developers will have a more responsive experience in SSMS/SQL Source Control

Cons:
* DevOps needs to create and maintain the new database and associated source control repositories
* Devs need to create this new database and sync with SQL Source Control
* Devs need to commit changes to two different databases if they need to make changes to the lookup tables
* It's possible to make a change to a source lookup table (Database2) that breaks objects in the original database (e.g. Stored Procedures) without the breakage being detected (see the dependency-check sketch after this list)
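
On that last con, SQL Server's dependency catalog can help. A sketch, assuming the lookup tables are reached through synonyms in Database1 as described above: it lists every object that references one of the synonyms, so a schema change in Database2 can be reviewed against that list (or the referencing modules re-validated with sys.sp_refreshsqlmodule) before it ships:

```sql
-- Run in Database1: find objects (procs, views, functions) that
-- reference the lookup-table synonyms, so changes made in Database2
-- can be checked against them.
USE Database1;
GO

SELECT
    OBJECT_SCHEMA_NAME(d.referencing_id) AS referencing_schema,
    OBJECT_NAME(d.referencing_id)        AS referencing_object,
    s.name                               AS synonym_name,
    s.base_object_name                   AS resolves_to
FROM sys.sql_expression_dependencies AS d
JOIN sys.synonyms AS s
    -- A NULL referenced_schema_name means the reference was
    -- schema-unqualified; assuming dbo here for simplicity.
    ON  s.name      = d.referenced_entity_name
    AND s.schema_id = SCHEMA_ID(COALESCE(d.referenced_schema_name, N'dbo'))
ORDER BY referencing_object, synonym_name;
```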

Are there other Pros/Cons that I am not considering? Are there other practices we should evaluate?

