Jul 31 13

Keep your bottlenecks in place!!

by Dan Thompson

It’s great when your business decides it wants to invest in your database platform; as an architect it means you can design all those things you’ve always wanted to and build a proper data tier, the way all the books and articles intended. That being said, consider your current data tier and the waits/bottlenecks you see day to day (TempDB, memory, I/O, CPU, network) and ask what actually happens if you remove them. Bottlenecks never really go away, they just move somewhere else.

Take I/O as an example. If you put a Fusion-io card and/or SSD drives in place and move all your filegroups onto them, what results would you get? Yes, you’ve more than likely removed the I/O bottleneck you had before, but have you just moved the bottleneck to the CPU, which now has more data to sort and join coming at it at faster rates than it has ever seen? Did you invest all that money in improving the I/O just because you wanted to clear that wait type? Did you save any of it to invest in some faster CPUs (…or a new server, as mostly seems to be the case for CPU upgrades!)? Just because data comes off the disks quicker, does it necessarily mean your server can now perform quicker?

Looking at TempDB contention: is this caused, again, by a badly performing I/O subsystem, or perhaps by a less-than-ideal number of TempDB data files? Obviously, the first thing we should consider here is why TempDB is getting hammered so much, but I’m presuming that angle has already been crossed off as not reasonably possible. Again though, what happens if we move the bottleneck? Let’s say we put in some high-IOPS storage to remove it; now you can create more and more temp tables than before and push data into them quicker, and the developers who keep writing that bad code which leans so heavily on TempDB can rest assured they can remain shoddy coders for a little while longer. What’s your next option when TempDB consumes all that I/O and you need something faster than those high-IOPS disks/cards? Did you just move the finish line a little further into the distance rather than turn it into a circuit track? And again, if TempDB can move data in and out quicker, does that mean it can necessarily sort and join data quicker? You could well be moving your bottleneck to the CPUs.
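
As an aside, if the worry is classic TempDB allocation contention rather than raw I/O, a quick check is to look for page latch waits on TempDB pages. A minimal sketch (nothing here is specific to any one system):

-- Sessions currently waiting on TempDB (database_id 2) pages.
-- PAGELATCH_* waits on resources like 2:1:1 (a PFS page) or 2:1:3 (an SGAM page)
-- point at allocation contention, which more data files can help with;
-- PAGEIOLATCH_* waits point back at the I/O subsystem.
SELECT session_id, wait_type, wait_duration_ms, resource_description
FROM sys.dm_os_waiting_tasks
WHERE wait_type LIKE 'PAGE%LATCH_%'
  AND resource_description LIKE '2:%';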

One concern here is that your newly invested resources can actually mask problems. What if you put that epic table which gets used all the time on your fast SSD/IO cards? Yes, it can read the pages it needs into memory quicker, and it’s likely to reduce the chances of deadlocks or the duration of any blocking, but what happens when that nice plan with its seek operation goes bad and starts scanning the data? Your bottleneck is removed, so it’s able to scan the data at a huge rate which may go unnoticed, but then what does it do with it? Does it pull all of those pages into memory? Does it start to sort your whole table? Again, you might find your CPU or TempDB takes a knock.

The same thing happens with the network: more bandwidth and lower latency just mean increased traffic and more data heading out of the door. Do your switches become the bottleneck, or does the application tier now have more knocking on its door, leaving you with ASYNC_NETWORK_IO waits instead?

I think the correct way to use these new high-IOPS devices is to add them to your solution and migrate carefully selected objects over to them as and when they become a problem. Not only do you look like a hero when there is a live problem, or when the developers have been trying to import the contents of the Internet into a heavily indexed table, but it also lets you think about your storage in a tiered manner: performance storage, mid-line storage and archive storage. Now apply that to those very large tables and tell those pesky developers to consider archiving strategies for that data!
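
To make that concrete, here is a minimal sketch of moving one carefully selected table onto a faster tier by adding a filegroup on the new storage and rebuilding its clustered index onto it. All of the names, paths and sizes below are made up for the example, and it assumes the table already has a clustered index called IX_BigTable on OrderDate:

ALTER DATABASE Sales ADD FILEGROUP FastStorage;
ALTER DATABASE Sales ADD FILE
    (NAME = 'Sales_Fast01', FILENAME = 'F:\FusionIO\Sales_Fast01.ndf', SIZE = 10GB)
    TO FILEGROUP FastStorage;

-- Rebuilding the existing clustered index with DROP_EXISTING onto the new
-- filegroup moves the table's data pages with it.
CREATE CLUSTERED INDEX IX_BigTable ON dbo.BigTable (OrderDate)
    WITH (DROP_EXISTING = ON) ON FastStorage;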

There are ways around some of these issues. Resource Governor, for instance, can help reduce CPU saturation by placing an artificial limit on that resource for general processes, which allows your important processes through even when the server is really busy.
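
As a rough illustration of that idea (the pool, group and application names here are made up), capping the CPU available to general sessions with Resource Governor might look something like this:

USE master;
GO
-- A capped pool and workload group for 'general' sessions
CREATE RESOURCE POOL GeneralPool WITH (MAX_CPU_PERCENT = 60);
GO
CREATE WORKLOAD GROUP GeneralGroup USING GeneralPool;
GO
-- Classifier: route everything except a named important application into the capped group
CREATE FUNCTION dbo.rg_classifier() RETURNS sysname WITH SCHEMABINDING
AS
BEGIN
    IF APP_NAME() <> N'ImportantApp' RETURN N'GeneralGroup';
    RETURN N'default';
END;
GO
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.rg_classifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;

Note that MAX_CPU_PERCENT is only enforced when there is CPU contention, so the important workload still gets the whole box when the server is quiet.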

I think bottlenecks and waits are one of the most important (if not THE most important) considerations when assessing the performance of a server, but it’s really important to think about those “what if I did this…” questions. Are you just postponing your server’s impending doom to a day later down the line? Are you removing the obstacles in your way right now, only to find more obstacles behind them? I’m sure a lot of people jump at the chance of spending those hardware budgets on those annoying wait types, but think about the bigger picture, or you could end up explaining why the business has invested all that money in expensive kit only to see no real return on the investment.

Jun 29 13

Windows 2012 R2 & SQL 2014 CTP 1 on the cloud (Azure)

by Dan Thompson

So I was really chuffed to find that I could take a look around a pre-built Windows Server 2012 R2 and SQL Server 2014 CTP 1 box today when I looked in the Azure gallery. This is a really great way to trial the new R2 edition of Windows Server 2012 and get used to its new features. The virtual machines up on Azure are so easy to get going with: you literally click New…VM…From Gallery and pick the one which says SQL Server 2014 CTP1…a couple of minutes later your server is ready to Remote Desktop to!

 

Windows Server 2012 R2

It looks like Microsoft listened! They have brought the start menu back:

(Screenshot: the Start button is back in Windows Server 2012 R2)

Presumably this means it’s coming back for Windows 8.1 too? I hope so!

(Screenshot: the Start button hover animation)

It’s got a cool hover animation; I’m really happy with it! This alone is a business case to upgrade all the servers to R2!!

One of the other features I wanted to take a look at was the I/O throttling in Hyper-V, but I can’t look at that on the cloud, so I’ll have to take their word for it! I can’t find much in the way of documentation about it yet either. Obviously the main gripe for anyone using Hyper-V is when you want to extend a virtual hard disk. At the moment I have to power down the machine so I can access the drive exclusively, then resize it (which could take a while on a big disk), and meanwhile your server is offline… Hyper-V in Windows Server 2012 R2 now includes online resizing for virtual hard disks: you can expand or shrink a virtual disk while the VM is still powered up.

Another cool feature is live migration compression, which boasts roughly a twofold improvement in live migration performance. That might make it a more viable HA/DR solution for larger VMs now.

I’m not a virtualisation expert by any stretch of the imagination, but I really am excited about the technology. I really feel that Microsoft could (over time) become the dominant provider of cloud solutions and virtualisation. I often have discussions about this with our VMware-focused infrastructure guys who dislike any mention of Hyper-V at the moment!

 

SQL Server 2014

SQL 2014 then… So, having read all the product documentation for CTP1, is there anything in there as a DBA lead that makes me want to upgrade our platforms from 2012 when it comes out? In short, no. When rumours of 2012 came out, then the CTPs and eventually the unveiling, everyone knew it was something big. In contrast, 2014 doesn’t really offer us much in the way of features so far in CTP1. There are in-memory tables, granted, and the maximum number of secondaries on your AGs goes up from 4 to 8. But does that really appeal to the masses? Certainly the idea of scaling out your read servers is a good one: using AlwaysOn to create read-only secondaries would be a quick scalability route, rather than, say, replicating out to a bunch of servers, but there are obvious licensing implications to scaling out your reads.

I’m not sure we will get up to using 4 read-only secondaries, let alone 8! That being said, at least it’s a nice option for those budget-rich companies with money burning a hole in their corporate pockets!

May 13 13

SQL Server 2012 – Bug – CACHESTORE_OBJCP Memory Leak sp_trace_getdata

by Dan Thompson

This is to document a bug we have uncovered in SQL Server 2012. Microsoft have acknowledged this bug and are releasing a fix for it in SQL Server 2012 SP1 CU3.

Background

We have monitoring software which registers a trace and reads data from it in order to provide metrics. We noticed that, after moving to our new SQL 2012 platform, we had performance problems every few days during our peak/busy season. At one point they were happening every other day. The symptoms we saw were an increase in CPU and an increase in recompiles. We felt that the CPU increase was due to the recompiles, but the key thing was that the busier the system was, the sooner we knew the problem was going to happen: recompiles went up, CPU maxed out and the system ground to a halt.

As we couldn’t initially figure out what was going on, we failed over to our HA node to resolve the situation. Our solution uses AlwaysOn and we had a synchronous HA node on the local site, so this was easy and took a matter of seconds. Failing over to our secondary replica instantly fixed the problem, but it would recur within the next day or so and we would have to flip back. This went on for a week or so, and we raised support tickets with Microsoft to get to the bottom of the issue.
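
For reference, a planned failover of this kind is a single statement run on the synchronous secondary that should become the new primary (the availability group name below is just a placeholder):

-- Run on the synchronous secondary replica that should take over as primary
ALTER AVAILABILITY GROUP [MyAG] FAILOVER;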

To cut a long story short, by the time we were through to the right department and had run all the standard diagnostics, we had already determined for ourselves that the issue was down to the CACHESTORE_OBJCP entry in sys.dm_os_memory_cache_counters, and specifically that its pages_kb value was excessively high. We recorded a value of 8.5GB during one of the incidents, which is a lot even for our 140GB RAM server.

CACHESTORE_OBJCP

CACHESTORE_OBJCP is the in-memory cache store for object plans (stored procedures, triggers and functions). As with any of the cache stores, if it reaches 75% of the calculated ‘Cache Plan Pressure Limit’ (see the TechNet article at the end) an internal ‘memory pressure’ event is triggered, alerting SQL Server that there is memory pressure so it can start reacting to it.
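
If you want to keep an eye on this yourself, the check amounts to nothing more than reading the DMV mentioned above; a minimal sketch:

-- Current size of the plan cache stores; watch pages_kb for CACHESTORE_OBJCP
SELECT name, type, pages_kb, entries_count
FROM sys.dm_os_memory_cache_counters
WHERE type IN ('CACHESTORE_OBJCP', 'CACHESTORE_SQLCP', 'CACHESTORE_PHDR');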

Tracking down the leak!

We were able to track down the statement which caused the object plan cache store to grow by monitoring the sys.dm_exec_cached_plans DMV after failover (ordering by size_in_bytes DESC). The other field to look at here is the objtype column: we don’t care about prepared types when we’re looking at the OBJCP cache store, which only deals with plans for procs, triggers and functions.

We collected readings from dm_exec_cached_plans every 3 minutes and noticed one compiled plan which rose to the top of the list every time. It grew to the maximum size_in_bytes this DMV can report, the column being an int (2147483647).

We can only presume it continues to grow in the background beyond what the DMV can report, which would account for the overall size of the OBJCP cache store.

We can figure out which SQL statement the plan belongs to by CROSS APPLYing sys.dm_exec_sql_text:

SELECT TOP 10 *
FROM sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
ORDER BY cp.size_in_bytes DESC

For us this showed the statement sp_trace_getdata, which we tracked down to the trace that SQL Sentry (our performance monitoring tool) uses to collect data.

That’s not to say that SQL Sentry is the cause here. We brought it to SQL Sentry’s attention, however, and they are as keen to learn about this as we are.

Resolution

We found that restarting the SQL Sentry service resolved the issue during peak times; the pages_kb value for CACHESTORE_OBJCP would then reset and start growing slowly again. We then put a job in place which kills the trace (SQL Sentry restarts the trace automatically), and this stopped the issue from getting out of control and causing us problems at peak times.
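
As a rough sketch of what a job step like that can look like (the filter on sys.traces is illustrative and would need adjusting to pick out the right trace on your system):

DECLARE @traceid int;

-- Find the non-default (monitoring) trace; the monitoring tool is expected
-- to re-register its trace automatically once this one is gone.
SELECT @traceid = id
FROM sys.traces
WHERE is_default = 0;

EXEC sp_trace_setstatus @traceid, 0;   -- stop the trace
EXEC sp_trace_setstatus @traceid, 2;   -- close it and delete its definition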

We raised these facts with Microsoft and asked them if there were any known bugs around sp_trace_getdata or OPENROWSET which might cause the object plan cache to bloat. They used the data we had collected to confirm this was a bug, and a fix has been scheduled for inclusion in SP1 CU3, which is scheduled for March.

Conclusion

Obviously, this bug is not down to our monitoring product being SQL Sentry; it could quite happily have surfaced with any product which uses the same method, and if we had had SQL Sentry pointing at a different version of SQL Server there would have been no problem.

Keep an eye on http://blogs.msdn.com/b/sqlreleaseservices/ for the release of SQL Server 2012 SP1 CU3 to resolve this issue.

References:

Technet – Cache Plan Internals

May 13 13

Random WAITFOR DELAY time

by Dan Thompson

A Random WAITFOR DELAY – Shortest straw wins!

Introduction

With the advent of AlwaysOn, we now have a dilemma when it comes to backups. Potentially (depending on your site configuration) any node in an AlwaysOn Availability Group could be the primary. That’s fine, as you can push your backup job out to all the servers (via MSX/TSX if you like) and have it determine whether the instance it is executing on is the primary or not.

If it is, it can execute the backup routine against the server, so you have backups on your primary DC. That’s great for the primary DC, but how do you take a backup of one of the secondaries on the secondary DC? You can’t say “if you’re a secondary and you’re on the secondary DC, take a backup”, because if you have multiple secondaries they will all take backups. So I decided I would let them pull straws. Shortest straw wins.
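
For the “am I the primary?” part, a minimal sketch using the AlwaysOn DMVs might look like the query below (‘MyAG’ is a placeholder availability group name; the fuller, generalised check is mentioned at the end of this post):

-- Returns PRIMARY or SECONDARY for the local replica of the named AG
SELECT ars.role_desc
FROM sys.dm_hadr_availability_replica_states ars
JOIN sys.availability_groups ag ON ag.group_id = ars.group_id
WHERE ars.is_local = 1
  AND ag.name = N'MyAG';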

With a WAITFOR DELAY @RandomTime they can all execute the backup procedure, and when the WAITFOR DELAY @RandomTime completes, each one first checks whether one of the other secondaries has already started a backup. If it has, there’s nothing to do: you didn’t get the shortest straw (this time). If there’s no backup there…start the regular backup routine.

Simples.

Here’s the code to generate a random WAITFOR DELAY:

DECLARE @RandomTime datetime = '1900-01-01 00:00:00.000'
DECLARE @RandomNumber int = 0
DECLARE @MaxRandomNumber int = 20

SET @RandomNumber = @MaxRandomNumber*RAND()

SET @RandomTime = DATEADD(second, @RandomNumber,@RandomTime)

WAITFOR DELAY @RandomTime

SELECT @RandomNumber

 


 

It’s important to note here that my entire solution takes backups from our secondaries (of which we have several, and they could be on any DC). With SQL Server, the per-core licensing model covers you for the primary server and one secondary:

“The secondary server used for failover support does not need to be separately licensed for SQL Server as long as it is truly passive. If it is serving data, such as reports to clients running active SQL Server workloads, or performing any “work” such as additional backups being made from secondary servers, then it must be licensed for SQL Server.

Primary server licenses include support for one secondary server only, and any additional secondary servers must be licensed for SQL Server. Note: The rights to run a passive instance of SQL Server for temporary support are not transferable to other licensed servers for purposes of providing multiple passive secondary servers to a single primary server.”

Taken From: (Microsoft) SQL Server 2012 Quick Reference Licensing Guide

If anyone is interested in the full solution to check for secondary/primary let me know and I’ll post up a generalised one.

May 13 13

Restore History SQL by database name and type

by Dan Thompson

;with restorehistory 
as (

	SELECT 
		 d.name [database_name]
		,case restore_type  
 			when 'D' then 'Database (Full)'
 			when 'I' then 'Differential database'
 			when 'L' then 'Log'
 			when 'F' then 'File or filegroup'
			when 'G' then 'Differential file'
			when 'P' then 'Partial'
			when 'Q' then 'Differential partial'
		 end as [restore_type]
		,restore_date
		,row_number() over (partition by destination_database_name,restore_type  order by restore_date desc) as restorerank
	FROM sys.databases d
	LEFT OUTER JOIN msdb.dbo.restorehistory rh on d.name = rh.destination_database_name
)

select 
	 [database_name]
	,[restore_type]
	,restore_date
from restorehistory
WHERE restorerank = 1 OR [restore_type] IS NULL
ORDER BY [database_name] ASC, [restore_date] DESC


May 13 13

LEN() function returning wrong number!

by Dan Thompson

LEN() Function has caught me out again!

LEN() doesn’t take trailing spaces into consideration. I guessed this must be something to do with the varchar type, but actually it applies to any string you enter manually. Take the following code as an example:

SELECT LEN('5     ')

There are 5 spaces after the number 5, which is 6 characters in total. The result returned from this query is 1, as LEN() trims the trailing spaces.

In order to count the full length of the string in this case, we must use the DATALENGTH() function.

So why do so many people reach for the LEN() function automatically? Well, DATALENGTH() returns the number of bytes an expression takes up, whereas LEN() returns the number of characters. But then, for a plain varchar string, DATALENGTH happens to look like a character count because each character is one byte!

Clear? As mud!
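
To make it a little clearer, here’s a quick illustration of the difference (the nvarchar literal is only there to show the byte-versus-character distinction):

SELECT LEN('5     ')          AS len_varchar,         -- 1  (trailing spaces trimmed)
       DATALENGTH('5     ')   AS datalength_varchar,  -- 6  (bytes; 1 byte per character)
       DATALENGTH(N'5     ')  AS datalength_nvarchar  -- 12 (bytes; 2 bytes per character)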

More Information on MSDN:
DATALENGTH (Transact-SQL)
LEN (Transact-SQL)

May 13 13

Capacity Planning – Calculate SQL data file sizes remotely

by Dan Thompson

This has been a very handy script recently. I created it so I can calculate the surface area of data and log files on disk. It can be executed remotely as long as you have administrative permissions. Using the script below I can, for example, calculate the total size of the log files on a server, excluding TempDB, as follows:

Get-SQLDataFileSize -ServerInstance DBS01\DEV -ExcludeTempDB -FileType "LOGS"

Or I can find the size of the DATA files (again excluding TempDB):

Get-SQLDataFileSize -ServerInstance DBS01\DEV -ExcludeTempDB -FileType "DATA"

I was asked recently to split a bunch of databases from one server onto two servers. I used the function to get a list of all the files on the server (data and logs), and then I could figure out how much disk space would be needed by each side of the split (I took the figures into Excel for this bit…I’m not an animal!).

So in order to find out the totals for data AND logs I would execute it as follows:

Get-SQLDataFileSize -ServerInstance DBS01\DEV -ExcludeTempDB

It’s been a handy one for the toolbelt recently. The full code listing is below.

Enjoy.

<#
    .SYNOPSIS
        Get-SQLDataFileSize function returns a file size for each SQL file and a total
    .DESCRIPTION
        Get-SQLDataFileSize function returns a file size for each SQL file and a total. 
	This can be limited to calculating just one database, or all databases and to just the log files, or the data files.
    .PARAMETER ServerInstance
        SQL Instance of the server to query. This must be on the local machine, unless you have administrative access to the remote machine.
    .PARAMETER ExcludeTempDB
        If you pass this switch, the calculations will not take into account the TempDB files.
    .PARAMETER DatabaseName
        The name of the database you require to calculate. If you include this option it will only calculate for this database, rather than all the databases on the server.
    .PARAMETER FileType
	Options: ROWS, DATA, LOG, LOGS, LDF, MDF, NDF, ALL. The Default is ALL. 
	
	ROWS/DATA	: all data files (excluding logs)
	LOG/LOGS	: all log files
	MDF		: only files with the MDF extension
	NDF		: only files with the NDF extension
	LDF		: only files with the LDF extension

	ALL		: includes all files regardless of extension and whether it is LOG or DATA.


    .EXAMPLE
	Get-SQLDataFileSize -ServerInstance DBS01\DEV -ExcludeTempDB -FileType "DATA"
    .EXAMPLE
	Get-SQLDataFileSize -ServerInstance DBS01\DEV -DatabaseName "ReportServer"
#>
function Get-SQLDataFileSize () 
{
	Param
	(
		[parameter(Mandatory=$True)][string] $ServerInstance,
		[switch] $ExcludeTempDB,
		[parameter(Mandatory=$False)][string] $DatabaseName=$null,
		[parameter(Mandatory=$False)][ValidateSet('ROWS','DATA','LOG','LOGS','LDF','MDF','NDF','ALL')][string]$FileType="ALL",
		[switch] $ShowFullPaths
	)


	[string] $type_desc = "";
	[string] $ext = "";

	switch ($FileType)
		{
			"ROWS" { $type_desc = "ROWS"; $ext = $null}
			"DATA" { $type_desc = "ROWS"; $ext = $null}
			"LOG" { $type_desc = "LOG"; $ext = $null}
			"LOGS" { $type_desc = "LOG"; $ext = $null}
			"LDF" { $type_desc = $null; $ext = ".ldf"}
			"NDF" { $type_desc = $null; $ext = ".ndf"}
			"MDF" { $type_desc = $null; $ext = ".mdf"}
			"ALL" { $type_desc = $null; $ext = $null}
		}

	$sql = "
			SELECT 
				SERVERPROPERTY('ComputerNamePhysicalNetBIOS') [Hostname], 
				physical_name, 
				CASE WHEN SUBSTRING(physical_name,2,1) = ':' 
					THEN '\\' + CAST(SERVERPROPERTY('ComputerNamePhysicalNetBIOS') as varchar(255)) + '\' + REPLACE(physical_name,':','$') 
				END [physical_name_urn]
			FROM sys.master_files

			WHERE (type_desc = '$type_desc' OR '$type_desc' ='')
			AND (RIGHT(physical_name,4) = '$ext' OR '$ext' = '')
			AND (database_id = db_id('$DatabaseName') OR '$DatabaseName' = '')
			AND (database_id != db_id('tempdb') OR '$ExcludeTempDB'='False')
		";

	$logFiles = Invoke-SqlCmd -ServerInstance $ServerInstance -Database master -Query $sql `
			| ForEach-Object { Get-Item $_.physical_name_urn }

	if($ShowFullPaths) {
		$logFiles = $logFiles | Select FullName, Length
	} else {
		$logFiles = $logFiles | Select Name, Length
	}

	
		
	$logFilesSum = ($logFiles | Measure-Object -Property Length -Sum).Sum / 1GB
	
	$logFiles	
	"         {0:N2}" -f $logFilesSum + " GB"
}
May 13 13

Log Shipping via PowerShell Invoke-LSRestore

by Dan Thompson

A while ago at work we were given a requirement that an external company was going to log ship a database over to us. I was asked to design a robust PowerShell function which we could use to regularly check for and apply transaction log backups to a database backup they had supplied; in essence, Log Shipping via PowerShell, or at least the LSRestore portion of it.

Knowing how many problems I’ve had over the years with Log Shipping, I wanted to make the function as robust as possible within the time I had, as well as allowing enough flexibility for the function to be reused if need be.

I looked into how the built-in LSRestore job works; specifically, how does it check that a log backup file can be applied to the database?

My findings led me to use the stored proc [master].[sys].[sp_can_tlog_be_applied] within the function to check whether each file is applicable. If it is, the file is restored with the various options passed in as parameters. Finally, a check is done against the restorehistory tables to ensure the transaction log backup was applied successfully. If there are any problems, the function throws an error, which will also cause any job step executing it to fail.

The end product was Invoke-LSRestore, below, which we have now had running for a few months. Of course it comes nowhere near the built-in SQL Server LSRestore, but it is a PowerShell alternative if you find yourself in a similar situation.

We have had it running regularly via a SQL Agent job on one of our production environments for the last few months, but as with any post I make – always test, test and test again on your own environment.

I created another function “Handle-Output” which was just a nicety really for running it in the console.

Handle-Output :

function Handle-Output($Object,$ForegroundColor="white") {
		if ($host.Name -eq "ConsoleHost") {
			Write-Host -ForegroundColor $ForegroundColor -Object $Object
		} else {
			Write-Output $Object
		}
}

Invoke-LSRestore :

function Invoke-LSRestore($ServerInstance, $DatabaseName,$BackupLocation,[switch]$Debug,[switch]$Standby,[switch]$NoRecovery)
{
	$now = Get-Date -Format u
	Handle-Output -Object "$now - Restore Started" -ForegroundColor "Green"
	
	$now = Get-Date -Format u
	Handle-Output -Object "$now - Settings:"	-ForegroundColor DarkYellow
	Handle-Output "`tServer: $ServerInstance"	-ForegroundColor DarkYellow
	Handle-Output "`tDatabase: $DatabaseName"	-ForegroundColor DarkYellow
	Handle-Output "`tBackup Source: $BackupLocation"	-ForegroundColor DarkYellow

	$FilesRestored = 0

	$BackupLocSQL = "
		DECLARE @backuploc as TABLE (Value varchar(255), Data varchar(255))

		INSERT INTO @backuploc(Value,Data)
		EXEC master..xp_instance_regread N'HKEY_LOCAL_MACHINE', N'Software\Microsoft\MSSQLServer\MSSQLServer', N'BackupDirectory'

		SELECT TOP 1 Data
		FROM @backuploc"

	$DefaultBackupLocation = Invoke-SqlCmd -ServerInstance $ServerInstance -Query $BackupLocSQL | ForEach-Object {$_.Data}
	if ($debug -eq $TRUE) {Handle-Output -Object $DefaultBackupLocation -ForegroundColor "DarkYellow"}

	if ($debug) { get-childitem -path $BackupLocation\$DatabaseName*.trn | Select-Object Name | Sort-Object Name }
	
	get-childitem -path $BackupLocation\$DatabaseName*.trn | Sort-Object Name | ForEach-Object {
		$trn_path = $_
		$sql = "
			declare @p3 bit
			set @p3=1

			exec master.sys.sp_can_tlog_be_applied 
				 @backup_file_name=N'$trn_path'
				,@database_name=N'$DatabaseName'
				,@result=@p3 output

			select @p3 [Result]"
			
		if ($debug -eq $TRUE) {Handle-Output -Object $sql -ForegroundColor "DarkYellow"}
		
		$ret = Invoke-SqlCmd -ServerInstance $ServerInstance -Query $sql | ForEach-Object {$_.Result}
		
		if ($debug -eq $TRUE) {Handle-Output -Object $ret -ForegroundColor "DarkYellow"}
		
		if ($ret -eq $TRUE)
		{
			$RestoreSQL = "
				RESTORE LOG [$DatabaseName] 
				FROM  DISK = N'$trn_path'
				WITH  NOUNLOAD,  STATS = 10"
				
				if($Standby) {
					$RestoreSQL = $RestoreSQL + ", STANDBY =N'$DefaultBackupLocation\ROLLBACK_UNDO_$DatabaseName.TUF'";
				}
				if($NoRecovery) {
					$RestoreSQL = $RestoreSQL + ", NORECOVERY";
				}
			
			if ($debug) {$RestoreSQL}
			
			$now = Get-Date -Format u
			Handle-Output -Object "$now - Attempting to restore: `'$trn_path`'" -ForeGroundColor "Yellow"
			
			$RestoreResult = Invoke-SqlCmd -ServerInstance $ServerInstance -Query $RestoreSQL -Database Master -ErrorAction Stop -QueryTimeout 600
			
			if ($debug -eq $TRUE) {Handle-Output -Object $RestoreResult -ForegroundColor "DarkYellow"}
			
			$CheckRestoreSQL = "
				SELECT CAST(1 as bit) [IsRestored]
				FROM msdb.dbo.restorehistory h
				INNER JOIN msdb.dbo.backupset bs ON h.backup_set_id = bs.backup_set_id
				INNER JOIN msdb.dbo.backupmediafamily bmf ON bs.media_set_id = bmf.media_set_id
				WHERE 
					h.destination_database_name = '$DatabaseName' 
				AND bmf.physical_device_name = '$trn_path'
				AND bs.[type]='L'
			"
			
			if ($debug) {$checkRestoreSQL}
			
			$RestoreResult = $FALSE
			$RestoreResult = Invoke-SqlCmd -ServerInstance $ServerInstance -Query $CheckRestoreSQL | ForEach-Object {$_.IsRestored}
			
			if ($RestoreResult -eq $TRUE)
			{
				Handle-Output -Object "Restored Successfully`n" -ForegroundColor "Green"
			} else {
				Throw "Restore Failed!"
			}
			
			$FilesRestored ++;
		} else {
			Handle-Output -Object "`'$trn_path`' skipped." -ForegroundColor yellow
		}
	}
	
	Handle-Output -Object "Number of files restored $FilesRestored." -ForegroundColor "Green"
}

# Invoke-LSRestore -ServerInstance "UATDBS01" -DatabaseName "Accounts" -BackupLocation "Y:\MSSQL\Backups" -NoRecovery
# Invoke-LSRestore -ServerInstance "INTDBS01" -DatabaseName "Sales" -BackupLocation "Y:\MSSQL\Backups" -NoRecovery
May 13 13

Microsoft SQL Server 10.50.1757

by Dan Thompson

I came across one of our SQL Servers which had the product version 10.50.1757. There didn’t seem to be much in the way of documentation for this version number, but we came across a KB article whose file versions matched this build:

http://support.microsoft.com/kb/2492381

FIX: SQL Server 2008 R2 performs poorly when most threads wait for the CMEMTHREAD wait type if the threads use table variables or temp tables to load or update rowsets

May 13 13

PowerShell SSRS function Export-SSRSReports() to RDL files

by Dan Thompson

I was asked to consolidate two 2005 SSRS servers with over three hundred reports on them and export the contents onto one 2012 SSRS server. I figured I had two options: either take my phone off the hook for a week and resign myself to the fact that I was going to be doing a lot of mouse clicking, or see how PowerShell could be used to script objects out of SSRS. …I chose the latter.

Fortunately for me, I stumbled upon an article by Donabel Santos (link here) which showed how to achieve half of what I was trying to do. I figured if I could script the objects out of SSRS into RDL files, I would be able to upload the RDL files to the new server. This is the way you would do it manually after all.

The script worked a charm, and with a few modifications I got it to read from a 2005 SSRS server as well as later versions. There are a few differences between the '2010' and '2005' versions of the SSRS web service you must interact with in order to obtain these files. In 2010 they seem to have moved away from the Report object to a more generic model, so instead of getting a report’s definition you now get an item’s definition. Makes sense. I also put some comments into the final function so that others can use it.

I think that because of the comment-based help block it now requires PowerShell v2.0, but that could probably be avoided by taking the block out if you are on an older version. The script can also be executed from any server that has PowerShell v2.0 and access to the report server URL, if that makes things easier.

PowerShell Function

function Export-SSRSReports()
{
<#
    .SYNOPSIS
        Downloads report definitions from SSRS into a file structure. Should work with 2005, 2008, 2008 R2 and 2012.
    .DESCRIPTION
        Uses the ReportService2005 or ReportService2010 web service (depending on the SSRSVersion parameter) to return the definition of every report on a Reporting Services instance. It then writes these definitions to a file structure. This function is recursive from SourceFolderPath.
		The next step for development of this, would be to allow the reports to be piped into another function/cmdlet.
    .PARAMETER ReportServerName
		The Reporting Services server name (not the instance) from which to download the report definitions.
	.PARAMETER DestinationDirectory
		Destination directory (preferably empty!) to save the definitions to. If the folder does not exist it will be created.
	.PARAMETER SourceFolderPath
		The folder path to the location which reports need obtaining from (e.g. "/Financial Reports"). The function will recurse this location for reports.
	.PARAMETER reportServiceWebServiceURL
		The URL for the 'ReportServer' Web service. This is usually http://SERVERNAME/ReportServer, but this parameter allows you to override that if needs be.

    .EXAMPLE
        C:\PS> Export-SSRSReports -ReportServerName "localhost" `
									-DestinationDirectory "C:\SSRS_Reports\" `
									-SourceFolderPath "/Standard Reports"

#>
	Param 
	(
		[parameter(Mandatory=$True)][string] $ReportServerName,
		[parameter(Mandatory=$True)][string] $DestinationDirectory, 
		[parameter(Mandatory=$False)][string] $SourceFolderPath = "/",
		[parameter(Mandatory=$False)][string] $ReportServiceWebServiceURL = "http://$reportServerName/ReportServer/",
		[parameter(Mandatory=$False)][int] $SSRSVersion = 2008
	)

	# initialise variables
	$items = '';
	$WebServiceProxy = '';
	[string] $ReportServerUri = '';
	$item = '';

    [string] $status = "Get definition of reports from {0} to {1}`n`n" -f $reportServerName, $DestinationDirectory
    Write-Host $status
    Write-Progress -activity "Connecting" -status $status -percentComplete -1

    [void][System.Reflection.Assembly]::LoadWithPartialName("System.Xml.XmlDocument");
    [void][System.Reflection.Assembly]::LoadWithPartialName("System.IO");

	if ($ssrsVersion -gt 2005) 
	{
		# SSRS 2008 and later expose the generic ReportService2010 endpoint
		$reportServerUri = $ReportServiceWebServiceURL.TrimEnd('/') + "/ReportService2010.asmx"
	} else {
		# SSRS 2005
		$reportServerUri = $ReportServiceWebServiceURL.TrimEnd('/') + "/ReportService2005.asmx"
	}
	$WebServiceProxy = New-WebServiceProxy -Uri $reportServerUri -Namespace SSRS.ReportingService2005 -UseDefaultCredential

    #second parameter in ListChildren declares whether recursive or not.
	#2010 service uses the property TypeName rather than Type which the 2005 service uses.
	if ($ssrsVersion -gt 2005) {		 
		$items = $WebServiceProxy.ListChildren($sourceFolderPath, $true) | `
				 Select-Object TypeName, Path, ID, Name | `
				 Where-Object {$_.typeName -eq "Report"};
	} else {
		$items = $WebServiceProxy.ListChildren($sourceFolderPath, $true) | `
             Select-Object Type, Path, ID, Name | `
             Where-Object {$_.type -eq "Report"};
	}

	# If the local (destination) directory does not exist - create it.
    if(-not(Test-Path $destinationDirectory))
    {
        [System.IO.Directory]::CreateDirectory($destinationDirectory) | out-null
    }

	# Hold a count of items downloaded for our progress bar
    $downloadedCount = 0

    foreach($item in $items)
    {    
        #need to figure out if it has a folder name
        $subfolderName = split-path $item.Path;
        $reportName = split-path $item.Path -Leaf;
        $fullSubfolderName = $destinationDirectory + $subfolderName;

        $percentDone = (($downloadedCount/$items.Count) * 100)        
        Write-Progress -activity ("Downloading from {0}{1}" -f $reportServerName, $subFolderName) -status $reportName -percentComplete $percentDone

        if(-not(Test-Path $fullSubfolderName))
        {
            #note this will create the full folder hierarchy
            [System.IO.Directory]::CreateDirectory($fullSubfolderName) | out-null
        }

        $rdlFile = New-Object System.Xml.XmlDocument;
        [byte[]] $reportDefinition = $null;
		if ($ssrsVersion -gt 2005) {
			$reportDefinition = $WebServiceProxy.GetItemDefinition($item.Path);
		} else {
			$reportDefinition = $WebServiceProxy.GetReportDefinition($item.Path);
		}

        [System.IO.MemoryStream] $memStream = New-Object System.IO.MemoryStream(@(,$reportDefinition));
        $rdlFile.Load($memStream);

        $fullReportFileName = $fullSubfolderName + "\" + $item.Name +  ".rdl";

        $rdlFile.Save( $fullReportFileName);
        Write-Host " *`t$subfolderName\$reportName.rdl" -foregroundColor White
        $downloadedCount += 1
    }

	Write-Host "`n`nDownloaded $downloadedCount reports from $reportServerName $subfolderName to $fullSubfolderName" -foreground green
}

Next Steps

The next step is to create a function which can follow a folder structure of RDL files and upload those definitions to a report server via the report server service – obviously creating the necessary folders as it goes. I’ve already written this PowerShell script so I’ll update this page with a link when I publish the post.

Also, we would need to hook up all the data sources, which would otherwise point to invalid references.

The final step would be to obtain all the subscriptions which existed in the previous system and export/import those somehow.

For anyone requiring the reference materials from MSDN: