Overhead from SSC SPIDs
anna.p
Posts: 34 New member
Hello,
I have 20 databases on a particular server and 14 of them are actively linked to a repository on my machine through SSMS.
We've had some odd performance issues since I linked them, and they only seem to occur when I'm connected to the server through SSMS. SQL Server raises "out of memory" exceptions, sometimes causing one or more databases to go into shutdown mode, rendering them inaccessible. A SQL service restart clears things up. I figured there was no way Red Gate tools could be contributing to the problem, but it got worse when two of my colleagues set up active links on their machines. We added 4GB of RAM to the server, which didn't help.
Recently I noticed that sp_who is showing a whole lot of Red Gate SPIDs sitting in tempdb, and they seem to run periodically because their CPU usage keeps climbing over time. I'm guessing they're associated with SQL Source Control for two reasons: the input buffer shows code using #RG_NewSysObjects and #RG_LastSysObjects (which are apt names for source control), and when I click on a database in the SSMS Object Explorer tree for the first time after connecting to the server, at least one additional Red Gate SPID is created.
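In case anyone wants to see what I'm looking at, this is roughly how I'm spotting the suspect sessions. It's just a sketch using standard system procedures; the specific SPID value is obviously whatever sp_who reports on your server:

```sql
-- List all sessions; sp_who2 adds CPUTime and DiskIO columns over plain sp_who.
EXEC sp_who2;

-- For a suspect session (e.g. SPID 57), show its last input buffer.
-- The Red Gate sessions show code referencing #RG_NewSysObjects / #RG_LastSysObjects.
DBCC INPUTBUFFER (57);
```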
How much overhead do these processes create? Have you ever heard of a situation where a bunch of these processes spinning over a long period cause serious performance issues? Should I have fewer databases actively linked at any given time? Am I nuts?
Comments
The queries you have noticed are part of SQL Source Control and SQL Compare's "check for new objects" functionality. Since this is a "nice-to-have" rather than a business-critical feature, you can turn it off by editing the SQL Source Control config file.
If you add the "PollingEnabled" element to the options file and set it to false, as described here: http://www.red-gate.com/messageboard/vi ... ingenabled
you can see whether this alleviates some of the load on your server.
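As a rough sketch, the edited options file might look something like the fragment below. The exact nesting and attributes are an assumption on my part; only the PollingEnabled element name is definite, so please check the linked post for the authoritative layout:

```xml
<!-- RedGate_SQLSourceControl_Engine_EngineOptions.xml (hypothetical layout) -->
<EngineOptions>
  <!-- Disables the periodic "check for new objects" polling -->
  <PollingEnabled>false</PollingEnabled>
</EngineOptions>
```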
Thanks for your reply. The change didn't work; I'm still getting the blue dot as soon as I click on the database in the object explorer and the SPIDs' IOs keep rising.
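For reference, this is how I'm watching the IO figures climb. It's a hedged sketch against the standard sessions DMV; I just rerun it every so often and compare the cumulative counters:

```sql
-- Snapshot cumulative CPU and IO per session from the DMV;
-- run repeatedly and compare to see which SPIDs keep climbing.
SELECT session_id, cpu_time, reads, writes, logical_reads
FROM sys.dm_exec_sessions
WHERE is_user_process = 1
ORDER BY logical_reads DESC;
```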
These are the current contents of my RedGate_SQLSourceControl_Engine_EngineOptions.xml file:
Is it possibly a bug that the version in that file is set to 2 rather than 3?
I had to go through the source code to work out what's happening, but I will log a bug to make sure the skeleton of the options file is correct in a future update.