Syncing Large Tables
Brian Donahue
Hi Doug,
If you run into a circumstance where a table needs to be rebuilt, SQL
Compare will need to copy all of the data, and you may get a query timeout.
Usually SQL Compare will not do this (it will script an ALTER TABLE instead)
unless you have an option such as 'Force column order to be identical'
turned on.
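
To illustrate (the table and column names here are made up, and the script
SQL Compare actually generates has more detail), the two cases look roughly
like this:

-- Simple case: the new column goes on the end, so no data is copied
ALTER TABLE dbo.TableA ADD NewCol int NULL

-- Rebuild case: a temporary copy of the table is created and every row
-- is copied across, which is fully logged
CREATE TABLE dbo.tmp_TableA
    (Col1 int NOT NULL, NewCol int NULL, Col2 varchar(50) NULL)
INSERT INTO dbo.tmp_TableA (Col1, Col2)
    SELECT Col1, Col2 FROM dbo.TableA
DROP TABLE dbo.TableA
EXEC sp_rename 'dbo.tmp_TableA', 'TableA'
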
Regards,
Brian Donahue
Red Gate Technical Support
"DougW" <doug.webster@exony.com> wrote in message
news:JonJFX0NEHA.1508@server53...
> We are having a problem using SQL Compare 3.15 when syncing *large*
> tables between databases.
>
> For example, Table A has a new additional column. The SQL sync script
> produced creates a temporary table with the extra column and attempts an
> INSERT into it from the original table. As the original table contains 8+
> million rows, this grows the transaction log from 100 MB to 8.2 GB. As
> the transaction log growth is somewhat geometric and some of our tables
> contain 50+ million rows, this solution is not viable on our production
> servers, which have relatively small (9 GB) RAID 0+1 log files. (Note:
> the BCPed data is approximately 1.2 GB uncompressed.) Adding extra files
> to the log, located on the data and index disks, for the duration of the
> transfer and then removing them afterwards is a workaround, but it is
> clumsy and slow.
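>
> For reference, that workaround looks roughly like this (the database
> name, file name, path, and size are illustrative):
>
> -- Add a temporary log file on a disk with free space
> ALTER DATABASE OurDb ADD LOG FILE
>     (NAME = tmp_log, FILENAME = 'E:\logs\tmp_log.ldf', SIZE = 10GB)
> -- ... run the sync script, then back up the log ...
> -- Remove the extra file once it no longer holds active log records
> ALTER DATABASE OurDb REMOVE FILE tmp_log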
>
> Red Gate's FAQs state:
> > Q: Can SQL Compare and SQL Data Compare cope with very large databases?
> > A: Yes. Both products have been designed to work with the very largest
> > commercial databases (e.g. SAP in Fortune 100 companies). For example,
> > we've successfully tested the SQL Bundle on databases of 100s of GB in
> > size with 150,000+ database objects.
>
> Are we therefore missing an option somewhere? For instance, I might have
> expected the Red Gate tool, in the above scenario, to BCP the contents of
> the original table to a specified location and then BULK INSERT it (with
> a new format file) back into the temporary table, thus bypassing the
> transaction log limits and theoretically increasing the speed of the
> transfer. In fact, the temporary table would not be required at all in
> this scenario, as the original could be dropped and recreated with the
> new column after the bcp export was successfully completed.
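>
> Something along these lines is what I had in mind (the server name,
> paths, and format file are illustrative):
>
> -- 1. Export the existing rows (run from the command line):
> --      bcp OurDb.dbo.TableA out D:\transfer\TableA.dat -n -T -S ourserver
> -- 2. Drop and recreate dbo.TableA with the new column, then reload it
> --    with minimal logging (TABLOCK plus bulk-logged or simple recovery);
> --    the format file maps the exported columns onto the new layout
> BULK INSERT dbo.TableA
>     FROM 'D:\transfer\TableA.dat'
>     WITH (FORMATFILE = 'D:\transfer\TableA.fmt', TABLOCK)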
>
> Best regards
> Doug Webster