DBInsight’s Blogs

Moving a SQL Cluster using Log Shipping

Posted by Rob Risetto on June 20, 2014

I recently had a requirement to configure log shipping in a SQL Server 2008 R2 Standard Edition environment. In my scenario, we needed to move a two node cluster to another floor, attach it to a new SAN, keep downtime to a minimum, lose no data, and reuse the current server hardware.

Database mirroring was not an option since the very high transaction rate had previously caused mirroring to stall.

OK, so my approach was:

  1. Remove SQL from the passive server and evict the passive server from the cluster
  2. Move the evicted server to the new location and create a new single node SQL Cluster using the new SAN partitions. Configure all other items (e.g. logins, user jobs, etc.)
  3. Set up log shipping between the original and the new single node SQL Clusters
  4. At cut over, stop all transactions, run a final transaction log backup, then restore the final transaction log backup files onto the Secondary SQL Server and make the databases operational.
  5. Run on the new single node SQL Cluster for several days, then drop the original single node SQL Cluster, move its server to the new location and add it to the new SQL Cluster.

In this scenario each single node SQL Cluster had no local redundancy, but if there was an issue we could cut over to the log shipped databases.

Now, I could have used the built-in log shipping feature, but the original SQL Cluster already had transaction log backup jobs configured, and I just wanted a lightweight way to apply logs to the secondary SQL Server temporarily.

So I went hunting for a PowerShell script that was straightforward and easy to implement; I found one on the Net and customised/enhanced it to work in my environment. BTW, compliments to the original author.

The updated script is shown below; dbserver01 is the Primary server and dbserver02 is the Secondary server.

# Continue on errors, to prevent the script from failing when a TLog backup is too early to apply
$ErrorActionPreference = "Continue"

# Backup share on Primary server (example UNC path - substitute your own share name)
$LogBackupFolder = "\\dbserver01\SQLBackups"

$1HourOld = [DateTime]::Now.AddHours(-1)

## Retrieve database folders and TLog backup files
foreach ($dbName in (Get-ChildItem $LogBackupFolder | Where-Object { $_.PsIsContainer }))
{
    $databaseName = $dbName.Name
    Write-Output $databaseName

    # Get TLog backup files created within the last hour
    foreach ($logName in (Get-ChildItem $LogBackupFolder\$databaseName | Where-Object { ($_.CreationTime -ge $1HourOld) -and ($_.Extension -eq ".trn") }))
    {
        # Restore using the UNC path to the TLog backup file
        $logBackupPath = "$LogBackupFolder\$databaseName\$logName"
        $restoreSQL = "RESTORE LOG [$databaseName] FROM DISK='$logBackupPath' WITH NORECOVERY"

        Write-Output $restoreSQL

        # Run the restore for the TLog backup file on the secondary server
        & sqlcmd.exe -S dbserver02 -Q "$restoreSQL" -E
    }
}

The implementation included:

  1. Manually restore the database on the secondary SQL Server using the NORECOVERY option
  2. Create a file share on the Primary SQL Server at the root of the backup directory, the one immediately above the database sub-directories that hold the TLog backup files. The premise here is that a backup sub-directory is created for each database as part of the transaction log backup maintenance plan. In my case the transaction log backup job ran every 5 minutes on the Primary SQL Server.
  3. Make sure that the secondary SQL Server service account has read access to the file share located on the Primary server and underlying directories.
  4. Create a SQL Agent job on the Secondary SQL Server to run the RestoreLogs PowerShell script. I set it up to run every 30 minutes.
  5. The script checks for all TLog backup files created within the last hour, then builds and executes a RESTORE LOG statement using the file share, the database folder name and the transaction log backup file name.
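For step 1, the initial restore on the Secondary looks something like the statement below. The backup file path and share name are examples only; substitute your own:

```sql
-- Example: restore the most recent full backup of Northwind on dbserver02,
-- leaving the database in Restoring mode so TLog backups can be applied
RESTORE DATABASE [Northwind]
FROM DISK = N'\\dbserver01\SQLBackups\Northwind\Northwind_full.bak'
WITH NORECOVERY, STATS = 10
```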

The script would run every 30 minutes and look for files created in the last hour, so it will attempt to restore TLog backups that are too early to apply. No problem, since the command

$ErrorActionPreference = "Continue"

allows the script to continue without stopping. I could have been really tricky and used sp_can_tlog_be_applied to check whether a TLog backup can be applied, or kept track of what was restored in a table, but again I wanted to keep it simple; this was a temporary log shipping implementation and the customer did not want to expend time and money on elegant solutions.
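For reference, that check would look something like the sketch below. sp_can_tlog_be_applied is an undocumented system procedure used internally by log shipping, so treat the parameter names as an assumption and verify them against your SQL Server version; the file path is an example:

```sql
-- Sketch: ask SQL Server whether a TLog backup can be applied before restoring it
DECLARE @result bit;
EXEC master..sp_can_tlog_be_applied
    @backup_file_name = N'\\dbserver01\SQLBackups\Northwind\Northwind_tlog.trn',
    @database_name    = N'Northwind',
    @result           = @result OUTPUT;
SELECT @result AS CanApply;  -- 1 = the restore would succeed, 0 = skip this file
```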

Now, the customer wanted to run on the new SQL Cluster for several days but also wanted the option to fail back to the original SQL Cluster if something went wrong. Therefore I needed to reverse the direction of the log shipping at cut over, i.e. the original Primary SQL Server Cluster would assume the role of the Secondary at cut over.

At cut over, I stopped the transaction log backup job on the Primary SQL Server, then ran a tail log backup using NORECOVERY to generate the final transaction log to apply to the Secondary SQL Server databases and to set the databases on the Primary SQL Server into Restoring mode. An example of the statement for the Northwind database is shown below:

-- Need exclusive access to Northwind

BACKUP LOG [Northwind] TO DISK = N'E:\sqlbackuptrnlog\lastlog.trn' WITH NO_TRUNCATE, NORECOVERY, INIT, STATS = 10

I then ran the Restore Logs job on the Secondary SQL Server to apply the final transaction log backup created by the above statement.

I then ran the following RESTORE statement to make the databases operational on the Secondary SQL Server. Here's an example for the Northwind database:

RESTORE DATABASE [Northwind] WITH RECOVERY
So now the Secondary SQL Server had operational databases and the original Primary SQL Server had its databases in Restoring mode. I then enabled a Restore Logs job, previously created on the original Primary SQL Server, to apply transaction log backup files created by the Secondary SQL Server (now the Primary).
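Concretely, the reversed job was just the same PowerShell script with the server roles swapped; something like the fragment below, where the share name is an example:

```powershell
# Reversed log shipping (sketch): pull TLog backups from the new Primary's share
$LogBackupFolder = "\\dbserver02\SQLBackups"   # example share on dbserver02, now the Primary
$secondaryServer = "dbserver01"                # original cluster, now the Secondary
# ...then run the same foreach/RESTORE LOG loop as above,
# with sqlcmd pointed at $secondaryServer instead of dbserver02
```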
