insufficient system memory error during synchronisation
ChrisClark
Posts: 7
Hi,
I'm using SQL Data Compare to produce a 10 MB synchronisation script. When I attempt to apply this script using SQL Server Management Studio or sqlcmd, the script runs for approximately 20 minutes and then fails with the error 'There is insufficient system memory to run this query' or 'There is insufficient memory available in the buffer pool.' The server I'm running this against has 1.25 GB of memory and a 3 GB swap file. The last time I ran it I watched Task Manager for the duration of the process and the page file never went over 2 GB, yet the script still failed.
I suspect this problem is caused by the fact that the generated script is a single batch; from other posts on this forum it seems it's not possible to instruct SQL Data Compare to split the script into several batches, and I'm wary of writing code to do this automatically from a data integrity point of view. I was wondering if anyone could suggest anything to make the script more memory efficient - I'm using the command line to generate the script (based on saved SDC packages) and perhaps I'm using the wrong switches?
Comments
Thank you for your post.
You may want to try using the WHERE filter to split the tables and generate smaller comparison and synchronization scripts. Using the GUI, edit your SQL Data Compare project. On the 'Tables & Views' tab, right-click the tables with large amounts of data in them and choose 'Open WHERE clause editor'. This will open a dialog box where you can specify a WHERE clause and split the large table into parts.
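For example, if one large table had an integer key, you could point two copies of the project at complementary ranges; the text entered in the editor is just the predicate, something along these lines (the names and boundaries here are placeholders, not from your database):

[tableID] BETWEEN 1 AND 500000   -- first project/script
[tableID] > 500000               -- second project/script

Each project then produces its own, smaller synchronization script.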
Best Regards
Eddie
Senior Product Support Engineer
Redgate Software Ltd
Email: support@red-gate.com
Thanks for your reply. I'm currently using a WHERE clause to synchronise a very specific subset of the data. The only other thing I could do is use a WHERE clause to horizontally partition the data and generate a separate script for each partition; ideally I'd like to avoid this as it means I'd have to manually create a transaction to wrap the scripts in.
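If it came to that, I suppose the wrapper could be a single sqlcmd script (run in SQLCMD mode) that pulls each partition's script into one transaction - a rough sketch, assuming the partition scripts are generated without their own transaction wrappers, and with placeholder file names:

BEGIN TRANSACTION
GO
:r C:\sync\partition1.sql
:r C:\sync\partition2.sql
GO
COMMIT TRANSACTION
GO

Running sqlcmd with the -b switch would make it stop at the first error, so the still-open transaction rolls back when the connection closes.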
Cheers.
Chris
This seems to be a conflict in automation. The approach should be to specify a SET ROWCOUNT value, e.g. 10000, so that each statement touches only a bounded number of rows. Red Gate Data Compare would then generate multiple insert, update or delete scripts. Likewise, a schema compare could populate a rebuilt table like this, with no begin/commit transactions, assuming a backup is available and disk space optimization is preferred:
SET ROWCOUNT 10000
WHILE 0 = 0
BEGIN
    -- populate the rebuilt table, at most 10000 rows per pass
    INSERT INTO [dbo].[tmpxxx_table] ([tableID], [field1], [dbaDateCreated], [dbaDateUpdated], [dbaIsActive], [dbaIsDeleted])
    SELECT s.[tableID], s.[field1], s.[dbaDateCreated], s.[dbaDateUpdated], s.[dbaIsActive], s.[dbaIsDeleted]
    FROM [dbo].[xxx_table] s  -- source name was cut off in the original post; [xxx_table] is a placeholder
    WHERE NOT EXISTS (SELECT 1 FROM [dbo].[tmpxxx_table] t WHERE t.[tableID] = s.[tableID])  -- skip rows already copied, so the loop terminates
    IF (@@ROWCOUNT = 0) BREAK
END
SET ROWCOUNT 0  -- restore the default (no row limit)
GO
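One caveat with the sketch above: SET ROWCOUNT is deprecated for INSERT/UPDATE/DELETE, and on SQL Server 2005 and later the same chunking is usually written with TOP instead - for example, reusing the placeholder names from the snippet:

WHILE 0 = 0
BEGIN
    -- copy at most 10000 not-yet-copied rows per pass
    INSERT INTO [dbo].[tmpxxx_table] ([tableID], [field1], [dbaDateCreated], [dbaDateUpdated], [dbaIsActive], [dbaIsDeleted])
    SELECT TOP (10000) s.[tableID], s.[field1], s.[dbaDateCreated], s.[dbaDateUpdated], s.[dbaIsActive], s.[dbaIsDeleted]
    FROM [dbo].[xxx_table] s
    WHERE NOT EXISTS (SELECT 1 FROM [dbo].[tmpxxx_table] t WHERE t.[tableID] = s.[tableID])
    IF (@@ROWCOUNT = 0) BREAK
END
GO

Keeping each statement to a bounded number of rows also keeps transaction log and buffer pool pressure down, which is what the original 'insufficient memory' error points at.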