Warning 445: Breaks Log Shipping Resiliency

DonFerguson Posts: 196 Silver 5
edited April 2, 2010 4:29PM in SQL Backup Previous Versions
I had a CopyTo failure in log shipping that required manual intervention. I tracked the issue down through the log, which shows that at that specific time SQL Backup failed to write the entry into the data.sdf file.

Here is the scrubbed error from the log:
8/18/2009 12:20:03 AM: Backing up DBName (transaction log) on InstanceName instance to:
8/18/2009 12:20:03 AM: F:\Log\HostName\InstanceName\DBName\LOG_DBName_20090818002003.sqb

8/18/2009 12:20:06 AM: BACKUP LOG [DBName] TO VIRTUAL_DEVICE = 'SQLBACKUP_BA3D96CD-AC21-4474-9671-F7FA806C28A8' WITH BUFFERCOUNT = 6, BLOCKSIZE = 65536, MAXTRANSFERSIZE = 1048576, NAME = N'Database (DBName), 8/18/2009 12:20:03 AM', DESCRIPTION = N'Backup on 8/18/2009 12:20:03 AM Server: NodeName\InstanceName Database: DBName', FORMAT

8/18/2009 12:20:06 AM: Backup data size : 1.000 MB
8/18/2009 12:20:06 AM: Compressed data size: 22.000 KB
8/18/2009 12:20:06 AM: Compression rate : 97.85%

8/18/2009 12:20:06 AM: Processed 17 pages for database 'DBName', file 'DBName_Log' on file 1.
8/18/2009 12:20:06 AM: BACKUP LOG successfully processed 17 pages in 0.071 seconds (1.867 MB/sec).

8/18/2009 12:20:21 AM: Warning 445: Get exclusive local data store access failed - timeout.

Since it wasn't able to append the log backup record to the copy queue, the file never got copied to its intended target, and I had to copy it manually to resolve the problem. I am fairly certain this was caused by a timing issue where the data.sdf file was being backed up at that moment. Is there a way to prevent this error beyond the obvious solution of excluding the data.sdf files from the OS drive backups? For example, is there an option to increase the timeout, or to have it retry the append into the local data store upon failure?

Comments

  • Anu D Posts: 876 Silver 3
    edited August 21, 2009 8:20AM
    Many thanks for your post.

    The data.sdf file is the local data store for SQL Backup; it records the backup/restore history. When you install SQL Backup, we also install SQL Server Compact Edition, and data.sdf is the data file for SQL Server Compact Edition.

    SQL Server Compact Edition only allows a single connection at a time.

    Can you please check to see if you have Anti-Virus software running on the SQL Server with an on-access scanner?

    By an on-access scanner I mean AV software that checks a file as soon as it is opened on the server.

    Can you please ensure that the on-access scanner is disabled for this file location:

    C:\Documents and settings\All users\Application Data\Red Gate\SQL Backup\Data\(LOCAL) or SQL Instance Name\Data.sdf

    If you do not have AV software with an on-access scanner, can you please run an executable called oh.exe, which is available in the Windows Server Resource Kit. It is an open-handle checker: when run against a file, it shows which handles are open on that file. This will help us identify which application has a lock on the file.
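
    A rough example of the kind of command to run is below, assuming the tool's default output can simply be filtered for the file name (please check the tool's own documentation first, as its exact switches and requirements vary between Resource Kit versions):

        REM Assumed usage: list all open handles and filter for the data store file.
        REM Consult the Resource Kit documentation for oh.exe before relying on this.
        oh | findstr /i "data.sdf"
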
    Anuradha Deshpande
    Product Support
    Redgate Software Ltd.
    E-mail: support@red-gate.com
  • DonFerguson Posts: 196 Silver 5
    The cause has already been definitively identified: it was the file system backup that was running, not AV. The specific data.sdf file in question was backed up one minute before the error was logged. Prior and subsequent database backups were logged into the SQL CE tables just fine; it was simply an unfortunate matter of timing.

    I am looking for a solution more elegant than excluding the Data directory from the file backups. If you don't have a better solution, then I would consider this a defect that needs to be addressed by Red Gate.

    Let me propose two potential solutions:

    1. (Simple) If it fails to open the local cache file, then it needs to retry.
    2. (Better) Migrate the SQL CE tables to the SQL Server instance being backed up. This would not only resolve the issue in this thread but would also resolve ongoing corruption problems.
  • We've logged this to be addressed in a future release (reference SB_4341). Ideally we'll fix the blocking issue with a more elegant solution than a retry. Thanks for reporting the issue to us.
    Helen Joyce
    SQL Backup Project Manager
    Red Gate Software
  • DonFerguson Posts: 196 Silver 5
    This is still an issue in the 6.4 release. The problem is that backups of the source database sometimes don't get recorded in the "Server Cache", a.k.a. the SQL CE database, usually during times of heavy activity. Even excluding the SQL CE data files from the file system backups doesn't prevent an occasional failure to record a SQL Backup transaction log backup in the CE database. When it fails to record in CE, the file doesn't get shipped and log shipping breaks. I then need to manually identify the missed file and copy it over to repair log shipping.

    I am considering coding a workaround to this problem. The workaround would be along the lines of adding a SQL Agent job step that compares the output of a sqbdata query against the msdb..backup* tables. If a log backup is recorded in msdb but not in SQL CE, I would then perform sqbdata inserts of the missing data.
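
    Roughly, the comparison step I have in mind would look something like the sketch below. The backupfiles table and its name column are guesses at the local data store schema; I would verify them first with EXEC master..sqbdata 'SELECT * FROM backupfiles'.

        -- Sketch: find transaction log backups recorded in msdb over the last day
        -- that are missing from the SQL Backup local data store (SQL CE).
        -- Assumption: backupfiles has a column (called name here) holding the backup
        -- file path; verify the real schema before using this.
        CREATE TABLE #sqb_files (physical_name NVARCHAR(520));

        INSERT INTO #sqb_files (physical_name)
        EXEC master..sqbdata 'SELECT name FROM backupfiles';

        SELECT bmf.physical_device_name, bs.backup_finish_date
        FROM msdb.dbo.backupset AS bs
        JOIN msdb.dbo.backupmediafamily AS bmf
          ON bmf.media_set_id = bs.media_set_id
        WHERE bs.type = 'L'                                   -- log backups only
          AND bs.database_name = 'DBName'
          AND bs.backup_finish_date > DATEADD(HOUR, -24, GETDATE())
          AND NOT EXISTS (SELECT 1 FROM #sqb_files AS f
                          WHERE f.physical_name = bmf.physical_device_name);

        DROP TABLE #sqb_files;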

    My question is: for the sqbdata insert, would it be sufficient to just insert a row into the backupfiles_copylist table? Or would I need to perform inserts into the other tables as well in order to get the missed backup file onto the copy queue? Any guidance along those lines would be appreciated. Thanks.
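
    For reference, the kind of insert I am contemplating is sketched below. The column names (backupfile_id, copyto) and values are purely hypothetical; I do not know the real backupfiles_copylist schema, which is exactly what I am asking about.

        -- Hypothetical sketch only: queue a missed log backup for copying by adding
        -- a row to the copy list. Column names and values are guesses, not the real
        -- schema of the SQL Backup local data store.
        EXEC master..sqbdata
          'INSERT INTO backupfiles_copylist (backupfile_id, copyto)
           VALUES (12345, ''\\StandbyServer\LogShip\DBName'')';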