Log Copy Queue Pending
mp
Posts: 13
Hello,
I just installed SQL Backup 6 and ran a full backup w/network copy last night, and everything worked perfectly. I tried to set up transaction log backups this morning and the dumps ran fine; however, the logs are stuck in the Log Copy Queue and the "Last Attempt to Copy" column is empty. The log file for an individual database says:
"No attempts have been made to copy this file."
I am running backups every 15 minutes now and they are stacking up in the queue. Any thoughts/assistance is greatly appreciated. The server is a 64-bit cluster, in case that helps.
-Mike
Comments
C:\Documents and Settings\All Users\Application Data\Red Gate\SQL Backup\Log\<instance>
Also, could you send the script you used that causes the problem?
7/21/2009 1:30:24 PM: Backing up TestDatabase (transaction log) to:
7/21/2009 1:30:24 PM: X:\Backups\LOG_(local)_TestDatabase_20090721_133024.sqb
7/21/2009 1:30:24 PM: BACKUP LOG [TestDatabase] TO VIRTUAL_DEVICE = 'SQLBACKUP_A905F3BC-11EA-4ED5-A3BC-8910C5F68D3B', VIRTUAL_DEVICE = 'SQLBACKUP_A905F3BC-11EA-4ED5-A3BC-8910C5F68D3B01', VIRTUAL_DEVICE = 'SQLBACKUP_A905F3BC-11EA-4ED5-A3BC-8910C5F68D3B02', VIRTUAL_DEVICE = 'SQLBACKUP_A905F3BC-11EA-4ED5-A3BC-8910C5F68D3B03', VIRTUAL_DEVICE = 'SQLBACKUP_A905F3BC-11EA-4ED5-A3BC-8910C5F68D3B04', VIRTUAL_DEVICE = 'SQLBACKUP_A905F3BC-11EA-4ED5-A3BC-8910C5F68D3B05', VIRTUAL_DEVICE = 'SQLBACKUP_A905F3BC-11EA-4ED5-A3BC-8910C5F68D3B06' WITH BUFFERCOUNT = 28, BLOCKSIZE = 65536, MAXTRANSFERSIZE = 1048576, NAME = N'Database (TestDatabase), 7/21/2009 1:30:24 PM', DESCRIPTION = N'Backup on 7/21/2009 1:30:24 PM Server: sqlcluster Database: TestDatabase', FORMAT
7/21/2009 1:30:24 PM: Backup data size : 4.625 MB
7/21/2009 1:30:24 PM: Compressed data size: 38.500 KB
7/21/2009 1:30:24 PM: Compression rate : 99.19%
7/21/2009 1:30:24 PM: Processed 12 pages for database 'TestDatabase', file 'TestDatabase_log' on file 1.
7/21/2009 1:30:24 PM: BACKUP LOG successfully processed 12 pages in 0.152 seconds (0.606 MB/sec).
7/21/2009 1:30:24 PM:
7/21/2009 1:30:24 PM: Verifying files:
7/21/2009 1:30:24 PM: C:\Backups\LOG_(local)_TestDatabase_20090721_133024.sqb
7/21/2009 1:30:24 PM:
7/21/2009 1:30:25 PM: The backup set on file 1 is valid.
run this first:
EXEC MASTER..sqbdata 'select * from backupfiles_copylist'
see what the status of the jobs is (we think it's going to be 'A'), then run:
EXEC MASTER..sqbdata 'delete from backupfiles_copylist where status = ''A'''
(if necessary, change 'A' to the correct status)
Were the 'name' and 'copyto' values correct? Also, were there any results returned when you ran the following:
EXEC MASTER..sqbdata 'select * from backupfiles_copylist_log'
Thanks.
SQL Backup Consultant Developer
Associate, Yohz Software
Beyond compression - SQL Backup goodies under the hood, updated for version 8
This returned nothing:
EXEC MASTER..sqbdata 'select * from backupfiles_copylist_log'
Any ideas?
This is the query used by SQL Backup to pick up files to copy.
I should have answered your other question re copied files. In SQL Backup version 6, transaction log files that need to be copied to one or more other locations are put into a queue. The copying is thus done in a separate process, not as part of the backup process. This serves two purposes:
- in the event that the copying process takes longer than expected, the current and subsequent backup processes are not blocked
- a queued copy task can be re-attempted multiple times in the event of a failure
Note that only transaction log backups are queued for copying. Other types of backups are copied immediately i.e. as part of the backup task.
I tried several times to create a new transaction log backup w/copy (both with and without 'verify'), and it always resulted in the log files being queued but with no attempts to deliver, as indicated by the 'Last Attempt to Copy' column.
Then run the following to check if the copy task has been queued:
and run the following to check if SQL Backup will pick up that task:
I should also have mentioned that if you don't want the copy task to be queued, you can use the USESIMPLECOPY option so that the backup file is copied immediately following the backup, and not placed in a queue.
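For example, here is a sketch based on the COPYTO syntax used elsewhere in this thread (the database name and paths are placeholders; adjust them for your environment):

```sql
-- USESIMPLECOPY makes SQL Backup copy the file immediately after the backup
-- completes, instead of queuing it for the background copy process.
-- [C:\Backups\<AUTO>] and [\\myserver\myshare] are placeholder paths.
EXEC master..sqlbackup '-sql "BACKUP LOG [TestDatabase] TO DISK = [C:\Backups\<AUTO>] WITH COPYTO = [\\myserver\myshare], USESIMPLECOPY"'
```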
Thanks.
EXEC master..sqlbackup '-sql "BACKUP LOG [TestDatabase] TO DISK = [C:\Backups\<AUTO>] WITH COPYTO = [\\myserver\myshare]"'
Backing up TestDatabase (transaction log) to:
C:\Backups\LOG_(local)_TestDatabase_20090722_112057.sqb
Backup data size : 512.000 KB
Compressed data size: 3.500 KB
Compression rate : 99.32%
Processed 0 pages for database 'TestDatabase', file 'TestDatabase_log' on file 1.
BACKUP LOG successfully processed 0 pages in 0.002 seconds (0.000 MB/sec).
SQL Backup process ended.
name value
exitcode 0
sqlerrorcode 0
filename001 C:\Backups\LOG_(local)_TestDatabase_20090722_112057.sqb
EXEC MASTER..sqbdata 'select * from backupfiles_copylist'
id backup_id name copyto overwrite status count retrycount diskretrycount diskretryinterval mailto mailto_onerror created lastattempt lastupdated
420 3082 C:\Backups\LOG_(local)_TestDatabase_20090722_112057.sqb \\myserver\myshare\LOG_(local)_TestDatabase_20090722_112057.sqb 0 P 0 0 10 30 2009-07-22 11:20:58.790 NULL 2009-07-22 15:20:58.000
EXEC master..sqbdata 'SELECT id, name, copyto, overwrite, count, diskretryinterval, diskretrycount, mailto, mailto_onerror, created, lastattempt FROM backupfiles_copylist WHERE status = ''P'' ORDER BY count DESC, created'
id name copyto overwrite count diskretryinterval diskretrycount mailto mailto_onerror created lastattempt
420 C:\Backups\LOG_(local)_TestDatabase_20090722_112057.sqb \\myserver\myshare\LOG_(local)_TestDatabase_20090722_112057.sqb 0 0 30 10 2009-07-22 11:20:58.790 NULL
Thanks.
id name copyto overwrite count diskretryinterval diskretrycount mailto mailto_onerror created lastattempt
420 C:\Backups\LOG_(local)_TestDatabase_20090722_112057.sqb \\myserver\myshare\LOG_(local)_TestDatabase_20090722_112057.sqb 0 0 30 10 2009-07-22 11:20:58.790 NULL
Are we out of ideas on this one?
ftp://support.red-gate.com/patches/SQL_ ... 0_1001.zip
Stop the SQL Backup Agent service using the Windows Service manager, after ensuring that no SQL Backup processes are running. Rename the existing SQL Backup Agent service file (SQBCoreService.exe). Extract the debug version of the executable from the above archive, and place it in the folder where the original SQBCoreService.exe file was located.
Open the registry, and look for the SQL Backup Agent service definition. For the default instance, this would be
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\SQLBackupAgent
For a named instance, this would be
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\SQLBackupAgent_<instance name>
Look for the ImagePath value and edit its data: add -sqbdebug to the end of the startup command, e.g.
C:\Program Files\Red Gate\SQL Backup\(LOCAL)\SQBCoreService.exe -sqbdebug
Start the SQL Backup Agent service. Let it run for a minute. This should generate a log file named SQBCoreService_log.txt in the folder where SQBCoreService.exe is located. Please send me that file.
Stop the SQL Backup Agent service after that, remove the -sqbdebug parameter from the startup command, and restart the service. This is done because the log file (SQBCoreService_log.txt) can grow very fast, and, if you are not debugging, it just adds unnecessary overhead.
Thanks.
From reading the previous back and forth on this, I ran this:
EXEC master..sqbdata 'SELECT id, name, copyto, overwrite, count, diskretryinterval, diskretrycount, mailto, mailto_onerror, created, lastattempt FROM backupfiles_copylist WHERE status = ''P'' ORDER BY count DESC, created'
And I have a lot of rows returned, the paths are correct, but the "lastattempt" column is null. It is like something is not running to even attempt to perform the copy.
I can confirm that the Diff and Full backups are not impacted by this issue in our environment (Fulls and Diffs copy over to our network share after a local backup is performed with no issue, and I assume this is using the network resiliency features). Our current interim solution is to change all tlog backup jobs to use the "USESIMPLECOPY" param which basically disables network resiliency, but does work most of the time.
Is there anything in the works to resolve this issue, or is there anything I should be looking at to further diagnose the problem?
I'm sure that whatever problem you're having does not have one singular, uniquely identifiable cause. Network resiliency itself only appeared because so much support time was spent troubleshooting unreliable networks, misconfigured routers, NETBIOS sessions timing out, and the like.
I suppose the problem, if anything, is that SQL Backup does not give us much to go on when diagnosing these sorts of problems. I think it's unfair to say we didn't try, though; as you said in your previous post, we tried everything we could think of to find the issue.
This will be my last post on this thread.
Nobody tried any network troubleshooting on my network, as you can see from the forum transcript, and the reliability of a simple NIC/ethernet/switch connection was never brought into question.
The problem for me was that your team just stopped trying without any resolution, and, to Red Gate's credit, I was able to obtain a full refund with little issue.
EXEC MASTER..sqbdata 'delete from backupfiles_copylist_log'
EXEC MASTER..sqbdata 'delete from backupfiles_copylist'
The entries are:
COPYTO:CopyIntervalInMinutes
COPYTO:ThreadCount
The max for thread count is 10, so I set that to 10 and set the copy interval to 1 minute. This was just enough to allow our backups to copy over before the next transaction log backup job launched.
In testing, the copying appeared to use a fairly naïve algorithm: roughly, copy <COPYTO:ThreadCount> files once every <COPYTO:CopyIntervalInMinutes> minutes, or something that performs similarly. So copying a large number of files takes a long time, even if the files are very small. In practice it is a little more sophisticated than that, since it seems to copy more than <COPYTO:ThreadCount> files at a time, but it appears to copy 20 or so files and then do nothing for <COPYTO:CopyIntervalInMinutes> minutes.
At any rate, creating those registry entries seems to have sped things up considerably!
I hope that helps someone.
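For reference, the two entries can be created with a .reg file along these lines. Note that the key path below is my assumption for illustration (an instance-specific SQL Backup settings key); verify the correct location for your own instance before importing:

```
Windows Registry Editor Version 5.00

; Key path is an assumption for illustration only -- locate the SQL Backup
; settings key for your own instance before importing this file.
[HKEY_LOCAL_MACHINE\SOFTWARE\Red Gate\SQL Backup\BackupSettingsGlobal\(local)]
; Run the copy process every minute
"COPYTO:CopyIntervalInMinutes"=dword:00000001
; Use the maximum of 10 copy threads
"COPYTO:ThreadCount"=dword:0000000a
```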
Thanks.
Could you please try a patched version of the SQL Backup Agent service (6.5.2.3)? That patch recognises a new setting (COPYTO:ThreadMultiplier, DWORD type), which is the multiplier used to determine the number of files to process during each run. So if you used a value of 5 for COPYTO:ThreadMultiplier and a value of 10 for COPYTO:ThreadCount, SQL Backup will attempt to copy 50 files during each 'copy process' run.
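With ThreadCount = 10 and ThreadMultiplier = 5, each run should pick up 50 files, so the queue should drain quickly. A query along these lines (against the backupfiles_copylist table shown earlier in this thread) can be re-run between copy intervals to watch it shrink:

```sql
-- Count copy tasks still pending (status 'P'); re-run periodically
-- to confirm the queue is draining between copy-process runs.
EXEC master..sqbdata 'select count(*) from backupfiles_copylist where status = ''P'''
```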
You can download the patch from here:
ftp://support.red-gate.com/Patches/sql_ ... _5_2_3.zip
That archive contains the SQL Backup Agent executable file (SQBCoreService.exe). To replace the existing service executable file, do the following:
- ensure that no SQL Backup processes are running
- stop the SQL Backup Agent service, or disable the cluster resource if on a cluster
- rename the existing executable file (SQBCoreService.exe) in the SQL Backup installation folder
- extract and place the patched executable file into the same folder
- restart the SQL Backup Agent service/cluster resource
Let me know if this works for you.
Thank you.
Thank you for the fast response! I installed both of the patched files (SQBCoreService.exe and SQLBackupC.exe), since both were in the zip, created the new registry entry, and set the value to 10.
The log backup copying process now copies all of my log backups each minute, keeping the queue small!