'Timeout Expired' Error Message
rsg2
Posts: 4
Just installed SQL Log Rescue. After specifying the database, getting the list of .bak/.trn files, and selecting the ones to view, I click 'Next' and then (after several minutes) get this error message:
"Database Error (title bar).
"Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding."
I had selected 4 .BAK files and their corresponding 4 .TRN files.
Nothing shows to view after this error.
What is wrong? I can find nothing in the menus to set anything related to timeout periods or whatever.
"Database Error (title bar).
"Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding."
I had selected 4 .BAK files and their corresponding 4 .TRN files.
Nothing shows to view after this error.
What is wrong? I can find nothing in the menus to set anything related to timeout periods or whatever.
There is no option to set a connection or query timeout, as this should be handled 'seamlessly' by the .NET Framework.
The thing to do would be to try to track down the cause -- can you tell us if the software gets to the point where it says it has validated the backup files, or does the error crop up before this?
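For what it's worth, here is a rough sketch of what a configurable query timeout looks like in a typical client library -- Python's pyodbc in this case. This is an illustration only, not something Log Rescue exposes; the server and database names are placeholders.

    # Illustration only: a configurable query timeout in pyodbc.
    # Server/database names below are placeholders, not Log Rescue settings.
    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes;",
        timeout=30,      # login timeout, in seconds
    )
    conn.timeout = 600   # query timeout, in seconds; 0 means wait forever

    cursor = conn.cursor()
    cursor.execute("SELECT COUNT(*) FROM sys.objects")  # raises if > 600 s
    print(cursor.fetchone()[0])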
The problem occurs during the "choose data backups" step. No matter how many backup files we choose, after hitting "Next", the "analyzing transaction log" box pops up for probably 10 minutes, and then we receive the timeout/server-not-responding error.
The error occurs after I select a backup set and click the 'Next' button. The screen shows the 'gas gauge' form labeled 'Analyzing files', but the gauge never displays any bars, so it is as if it never gets started.
When you select the backup files, you should get a screen informing you that the files were OK. You then have to click the Finish button to continue and have the log analysed. The same applies if you don't select any backup files; you just receive a warning.
If you are having a problem while the files are being analysed: where are they located, and do they all still exist? You may have to deselect some of them, or add your own selection.
If you leave the files to be analysed for some time, does the application run out of memory?
Regards
Dan
Red Gate Software Ltd
My impression is that SQL Log Rescue is not working correctly. I am disappointed in the support we have received from this vendor. Last week I tried a $50 program from another vendor and received several responses from them while troubleshooting it, but so far not a thing from the vendor of this program. I don't know whether those replying to my posts are vendor support people or other users.
In our case, the analyze process ate through so much temp disk space that the server naturally couldn't go on, hence the timeout. The C: drive had 2 GB free, but that appears inadequate...
Perhaps the way that Log Rescue handles its analysis phase needs a look under the hood (bonnet). Does it really need to chew up all that space and keep it? Can it estimate the space it needs, tell me when the drive it's going to use doesn't have enough, and offer me the choice of another? If I tried this on a large DB, am I likely to wait an hour while it "analyses", during which time my ear is being sorely bent by managers who need the DB fixed? I noticed posts/comments on other sites that cited analysis times far too long for practical use with large DBs - what is the truth?
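A pre-flight space check needn't be complicated, either. A rough sketch in Python; the 1.5x "temp space vs. log size" margin is my assumption, not a vendor figure:

    # Sketch: warn before analysis if the temp drive looks too small.
    # The 1.5x multiplier is a guessed safety margin, not a vendor figure.
    import shutil
    import tempfile

    def check_temp_space(log_file_bytes, margin=1.5):
        temp_dir = tempfile.gettempdir()          # honours the TMP variable
        free = shutil.disk_usage(temp_dir).free   # free bytes on that drive
        needed = int(log_file_bytes * margin)
        if free < needed:
            raise RuntimeError(
                f"Only {free / 2**30:.1f} GB free in {temp_dir}; "
                f"roughly {needed / 2**30:.1f} GB may be needed."
            )

    check_temp_space(int(1.4 * 2**30))  # e.g. a 1.4 GB log file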
I'd hate to give up on the product as, on the face of it, it looks to have a very good feature set, one that will save me gobs of development effort to get the same kind of undo-transaction functionality and "who's the culprit?" detection capabilities.
Decide wisely...
If you do not receive an answer from Red Gate within 24 hours, chances are very good that we never got your email. If you suspect this, we're not shy about giving you our telephone number -- in fact, I believe it is on every page of our website at www.red-gate.com. Go ahead and give us a ring if you want!
There are some distinct differences in the way Log Rescue works when compared to other vendors' products. First, log analysis incurs a performance penalty, and you can pay it either at the server end or at the client end. I believe we have chosen wisely by designing the software so that it pulls all of the relevant log data off the server and stores it in temporary files on the client running Log Rescue. This is done for performance reasons and to prevent contention between Log Rescue and SQL Server over the log file.
Second, for a better user experience, a lot of data, particularly BLOBs, is cached at the client. To produce the most accurate results, we ask for your log files, which also have information parsed from them and stored in memory and on disk.
I'm sorry that you weren't happy with the software, but I believe that the software design prevents many more problems than it causes. You will need to ensure that you have ample computer power for analyzing your logs.
So it is entirely conceivable that 2 GB of free space may not be enough to analyse a 1.4 GB log file.
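To illustrate the spooling pattern described above (a sketch only, not our actual code -- fetch_log_records is a made-up placeholder): log data is read from the server once, spooled to a client-side temp file, and all later browsing reads from that file rather than from the server.

    # Sketch of client-side spooling: read log data from the server once,
    # write it to a local temp file, then serve all later reads from disk.
    # fetch_log_records is a hypothetical placeholder for the server read.
    import json
    import tempfile

    def spool_log(fetch_log_records):
        spool = tempfile.NamedTemporaryFile(
            mode="w+", suffix=".spool", delete=False
        )
        for record in fetch_log_records():   # one pass over the server's log
            spool.write(json.dumps(record) + "\n")
        spool.flush()
        return spool.name                    # browse this file, not the server

    def browse(spool_path):
        with open(spool_path) as f:
            for line in f:
                yield json.loads(line)       # no further server contention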
I still have some questions/points:
1. Could you develop a "space required" algorithm and apply it to the "default" temp file drive to see if you can complete the "analyze" stage?
2. Give us a configuration option to tell Log Rescue where to put its temp files.
3. Can you give some kind of "Estimated Time Until Ready" on the analyze progress meter? You should be able to do this given the file sizes you are asked to analyze and your software's progress through them...
4. If we know the approximate times when the transactions we need to analyze started and ended, can you not "skip read" through the logs and only save data to temp files for the period we need, rather than go wholesale when we only need nibbles? If we, the users, find we need to expand our timeslice, incurring another parse is our problem - it still stands a chance of being much faster than saving everything, especially when dealing with high-volume, large databases. Even if you only started saving data from the timestamp we request, we'd likely be able to get into recovery before our bosses go spare! Better still, if some kind of async process spooled to temp files in the background while the UI satisfied "walk through time" requests on data already spooled, we could at least get started rather than wait until the last log byte has been spooled (a rough sketch of both ideas follows this list).
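A rough sketch of what I mean in point 4, assuming each log record carries a timestamp; read_records and the record format are made up for illustration, not the product's internals:

    # Sketch of point 4: spool only the requested time slice, in a background
    # thread, so the UI can page through records already on disk.
    # read_records is a hypothetical iterator over (timestamp, record) pairs.
    import threading

    def spool_window(read_records, start, end, out_path):
        def worker():
            with open(out_path, "w") as out:
                for ts, record in read_records():
                    if ts < start:
                        continue              # skip-read: before the window
                    if ts > end:
                        break                 # past the window: stop early
                    out.write(record + "\n")  # spool only what we need

        t = threading.Thread(target=worker, daemon=True)
        t.start()
        return t  # the UI can tail out_path while this thread is running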
Cheers.
Decide wisely...
I'll certainly suggest these ideas and we'll see if it's possible to do them.
You can already change the location for temp files. Like all Red Gate products, we ask the operating system to furnish us with temporary storage, so you can change the location by setting the TMP environment variable in your system properties. Change this to point at a larger drive, log off and on again, and Log Rescue will store its working data set in the new physical location.
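You can verify where temp files will land with a few lines of script. The sketch below uses Python, whose lookup order (TMPDIR, then TEMP, then TMP) differs slightly from what I believe the .NET Framework does on Windows (TMP first), so setting both variables is the safe option; the D:\BigTemp path is just an example:

    # Quick check of where temp files will land after changing TMP.
    # Python consults TMPDIR, TEMP and TMP (in that order); the .NET
    # Framework on Windows checks TMP first, so set both to be safe.
    import os
    import tempfile

    os.environ["TMP"] = r"D:\BigTemp"    # example path, an assumption
    os.environ["TEMP"] = r"D:\BigTemp"
    print(tempfile.gettempdir())         # -> D:\BigTemp (if the dir exists)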
Perhaps you should also consider making the temp location your own configurable parameter...?
Decide wisely...
If the package used its own environment variable for temp files, e.g., LR_TMP, configurable as a system variable, the issue would be solved very simply (a sketch of the fallback logic follows below).
The "larger drive" the server has access to is actually a NETWORK DRIVE and to use that as the Windows Temp folder is flat out suicidal.
Decide wisely...