When I Drop a Database, Why Do One or More Files Still Exist?

This is a quick post about an issue we have encountered a couple of times: when dropping a database, one or more of its files appear to be left behind for some time afterwards. This time, we found the culprit!

Essentially, dropping a database is a relatively fast process – unless, of course, you have elected to delete the backup history and there’s a lot of it. However, we’ve noticed that sometimes one or more of the database files hangs around for a while afterwards, and that’s not something that’s supposed to happen! As I understand it, the only reason a file remains after you have dropped the database is that something is holding it open and preventing the deletion from the file system; and in our case there were only three possible culprits: –

  1. SQL Server
  2. Anti-Virus (I know that it should be excluding the SQL Server data files and so on – and it is – but at this point I did not want to make any assumptions).
  3. Diskeeper 2010

After checking with our anti-virus administrator, I confirmed that the anti-virus software was indeed excluding these database files. A quick check on Google then showed me that either no-one else was experiencing the problem, or no-one else had noticed it yet.

So I started looking at Diskeeper. The best way for me to confirm that the Diskeeper service had the file open was to stop the service and see if the file disappeared. Sure enough, it did.

There is one thing to note though – this is not something that happens all the time, but we have seen it happen from time to time. Reproducing the issue takes a fair amount of time and effort, and a lot of patience!

Posted in SQL Server Issues

A Little Fun With DateTime

A colleague asked me how to get the last day of the current month using T-SQL. It’s an interesting problem, and another colleague and I each gave a different answer to his question (which just goes to show that there is always more than one solution to a problem).

It also raised an interesting point: this isn’t the first time we’ve had to solve this problem, and in other languages there is usually a little function that does it for you. In T-SQL there is no elegant solution; in fact, I would go as far as saying that the only elegant solution is to write a .Net CLR function.

Okay, I’m going to show you the two T-SQL examples that we came up with for our colleague. The first example takes the current datetime apart and then puts it back together as a string before re-casting it as a datetime. It’s the less elegant of the two solutions, and it’s certainly something that should at the very least be put into a function that you can call from other pieces of code when you need it. Now, let’s have a look…

SET DATEFORMAT ymd

SELECT
	--Build the first day of *next* month as a string, then subtract one day.
	--Note: the year is taken from the date one month ahead, so that December
	--rolls over into January of the following year correctly.
	DATEADD(dd,-1,
	CAST(
	CAST(DATEPART(YEAR,DATEADD(MONTH,1,GETDATE())) AS CHAR(4)) +
	RIGHT('0' + CAST(DATEPART(MONTH,DATEADD(MONTH,1,GETDATE())) AS VARCHAR(2)),2) +
	'01'
	AS DATETIME)
	) AS LastDayOfCurrentMonth

As you can see, it’s messy and difficult to read, so I’ll break it down a bit. The outermost part of the SELECT subtracts one day from the date we build inside it. Inside the CAST, we take the year and the month from the current date plus one month – taking both parts from the shifted date means that December rolls over into January of the following year correctly. The month number has a ‘0’ prepended to it, and we then keep only the right-most two characters (hence the RIGHT(‘value’,length) bit) – so month 10 becomes ‘010’ and then ‘10’, while month 1 simply becomes ‘01’. Finally, we append the day number, ‘01’. Starting from a current date of 15/02/2011, the resulting string is ‘20110301’; this is why the outermost DATEADD subtracts one day, which gives us 2011-02-28.
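If you want to see the intermediate string for yourself, here is a small sketch that runs just the string-building part against a fixed date (the @testDate variable and its value are purely for illustration):

DECLARE @testDate DATETIME
SET @testDate = '20110215'

--The string that will be cast to DATETIME: '20110301'
SELECT
	CAST(DATEPART(YEAR,DATEADD(MONTH,1,@testDate)) AS CHAR(4)) +
	RIGHT('0' + CAST(DATEPART(MONTH,DATEADD(MONTH,1,@testDate)) AS VARCHAR(2)),2) +
	'01' AS FirstDayOfNextMonth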

Like I said, not the most elegant way of doing it. So, now for the next solution.

SET DATEFORMAT ymd

SELECT
	DATEADD(DAY,
		DATEPART(DAY,DATEADD(MONTH,1,GETDATE()))*-1
		, DATEADD(MONTH,1,GETDATE())
		) AS LastDayOfCurrentMonth

As you can see, it’s a lot easier to read than the previous one, and a lot simpler too – none of that messy string handling! All this one does is add a month to the current date and then subtract that date’s day number from it, giving you the last day of the current month. One small difference worth noting is that this version keeps the current time of day, whereas the string-based version returns midnight.

You may be wondering what happens when you are in a month that has 31 days and the next month has only 28 (e.g. February 2011). The good news is that DATEADD compensates for this automatically: when adding a month would land on a day that does not exist, it returns the last day of the target month instead, so the calculation still comes out right.
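If you want to satisfy yourself that the month-end rollover really is handled, you can swap GETDATE() for a fixed date and check the result. A quick sketch (the @testDate variable is only there for illustration):

DECLARE @testDate DATETIME
SET @testDate = '20110131'	--31 January 2011

SELECT
	DATEADD(DAY,
		DATEPART(DAY,DATEADD(MONTH,1,@testDate))*-1
		, DATEADD(MONTH,1,@testDate)
		) AS LastDayOfJanuary	--returns 2011-01-31 00:00:00.000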

Once again, this is something that would be better suited to a scalar function than being copied around multiple pieces of code. Happy coding!

Posted in T-SQL Code Examples

SQL Server 2011 on a Windows 2008 R2 Core Server

I have to admit, I had a little trouble with the title here; everything I wanted to use just seemed a little too long, and to be honest, I think the one I settled on may be a little vague. Such is the way of life.

Anyway, ever since I first saw NetWare servers in operation with nothing but a command line interface, I have loved the idea. Think about it: on Windows, a significant amount of resource goes into displaying things in a pretty way, because it was designed to have a graphical user interface that anyone could use. That’s all fine, but I strongly believe that we should not be wasting resources, nor adding complications, on a server product. I want a server with a command line interface and that’s it. Microsoft’s own studies have shown many times that the majority of problems in Windows have been caused by badly written 3rd party drivers. Think about it: do you really need Windows Explorer to see the files and directories on a SQL Server? Do you need a sound card driver?

With the arrival of Windows Server 2008, Microsoft introduced the “core” installation option – a command line interface and no Windows Explorer. Fantastic! (Although technically you could achieve something similar yourself from Windows 2000 onwards, because you were able to change which shell you wanted to use.)

There was, however, a slight snag: SQL Server was not supported on a Windows Server Core installation.

Until now…

With SQL Server 2011 “Denali”, it looks like that has all changed; it has even been stated that running on Server Core will reduce the need for server reboots by 50%–60%. Why? Because far fewer components need to be installed and patched when all your server has to show you is a command line. Internet Explorer patches? There’s no Internet Explorer! So, not only are you reducing the amount of resources consumed (SQL Server Management Studio starts at about 126MB of memory and works its way up to around 700MB, depending on what you’re doing), you are also reducing your server’s attack surface. The processor is no longer being used to display a list of folders and files in Windows Explorer, nor to display your query results in SSMS.

I’ll be honest though: these days memory is a lot cheaper, processors are a lot more powerful, and disk space… well, there’s masses of it. Running on Core will perhaps make things a little more difficult when you need to jump onto the server and fire up SSMS to fix an issue, but from my point of view SSMS is a client application and should therefore be installed on something other than your SQL Server. I have the same opinion about Visual Studio.

So, is SQL Server running on a Windows Core server a good thing?

Absolutely, because although it may make things a little more complicated for you, you are in fact: –

  • Giving SQL Server the use of resources that would otherwise have been allocated to things like SSMS, Visual Studio or Windows Explorer.
  • Reducing your server’s attack surface.
  • Increasing the stability of your server.
  • Reducing the amount of downtime needed whilst patching your server.

For more information, have a look at the product page.

Posted in SQL Server 2011 "Denali"

Changes To Exception Handling In SQL Server 2011

When SQL Server 2005 was released, we saw the introduction of structured exception handling using BEGIN TRY / BEGIN CATCH blocks. This fantastic feature allowed us to trap and handle exceptions encountered in the course of our code execution, in much the same way that you handle exceptions in .Net applications. This was fantastic news for DBAs and developers, because it meant that we were able to write code to cope with certain exceptions in a much more efficient manner – a vast improvement over the old ‘IF @@ERROR <> 0 do something’ statements.

However, there was a catch. Once inside the CATCH block, if you encountered an exception that your code was not designed to handle, it was not possible to pass the original exception back to the caller without mucking about with the message string. This meant that any application that used the exception information to decide whether it could handle the exception gracefully had to parse the message string to find the information it needed, or rely on the stored procedure’s return value to indicate the error. Either way, not a pretty sight, and certainly not very efficient!
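To illustrate, here is a sketch of the sort of workaround we had to use (this is just the common pattern, not code from any particular application): re-raise inside the CATCH block with RAISERROR, packing the original error details into the message text – which means the caller receives error 50000 and has to go digging in the message for the real error number.

BEGIN TRY
	--Divide by 0 error
	SELECT 1/0
END TRY
BEGIN CATCH
	--Pack the original error number into the message text before re-raising
	DECLARE @chvMessage NVARCHAR(2048)
	SET @chvMessage = 'Original error ' + CAST(ERROR_NUMBER() AS VARCHAR(10))
		+ ': ' + ERROR_MESSAGE()

	RAISERROR(@chvMessage, 16, 1)
END CATCH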

With the advent of SQL Server 2011, it looks like all that may change! The first thing to note is that the RAISERROR statement has been deprecated; the replacement is something that all .Net developers should be familiar with – the THROW statement.

The THROW statement has been implemented with two goals in mind. The first is that you can THROW your own errors (anything with an error number of 50,000 or above – same as RAISERROR) by supplying the THROW statement with a set of parameters: –

THROW { error_number | @variable }, { message | @variable }, { state | @variable };

  • error_number or @variable – any number between 50,000 and 2,147,483,647 (INT).
  • message or @variable – an NVARCHAR(2048) string. It does not support printf-style formatting, but you can build a formatted string yourself, for example with the FORMATMESSAGE function.
  • state or @variable – any number between 0 and 255 (TINYINT).

As you can see, it’s not that different from using the RAISERROR statement, with one notable difference – you can no longer supply the severity as a parameter. The severity is always 16.
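Since the message itself doesn’t support printf-style formatting, one approach is to format the string before you THROW it. Here’s a minimal sketch using FORMATMESSAGE against a user-defined message (the message number 60001, its text and the parameter values are all made up for illustration):

--Add the user-defined message once (this will error if 60001 already exists)
EXEC sp_addmessage
	@msgnum = 60001,
	@severity = 16,
	@msgtext = N'Order %d could not be processed for customer %s.';

DECLARE @chvMessage NVARCHAR(2048)
SET @chvMessage = FORMATMESSAGE(60001, 42, N'ACME');

THROW 60001, @chvMessage, 1;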

Passing your custom error to the caller: –

BEGIN TRY
	--custom error
	THROW 50000,N'My custom exception.',1;

END TRY
BEGIN CATCH
	PRINT 'In CATCH block. Passing my custom exception to the caller..';
	THROW;

END CATCH

The second way to use the THROW statement is without supplying any parameters at all – this form is only valid inside a CATCH block.

BEGIN TRY
	--Divide by 0 error
	SELECT 1/0

END TRY
BEGIN CATCH
	PRINT 'In CATCH block. Pass the exception to the caller.';
	THROW

END CATCH

This allows you to pass the original exception back to the caller, letting the caller decide whether it needs the user to do anything or whether the exception can be handled gracefully in the application itself, without any of the complications of parsing message strings.

It’s a feature that has been lacking for a long time, but now it looks like it’s finally here – thank you Microsoft! Hopefully this feature will remain when the product is released to manufacturing, and if it does, I suggest you start using it as soon as possible.

Posted in SQL Server 2011 "Denali", T-SQL Code Examples

What I’ve Been Doing Lately

Well, it’s been a busy period of late and this is the first time I’ve been able to sit down and write a blog entry. Christmas has come and gone, and we’ve all welcomed in the new year with the hope that the situation in this country improves for all of us.

Speaking for myself (or perhaps I should say writing for myself), I’ve been busy recruiting since October and I’ve also been making the transition from DBA to manager, which for the moment feels a bit like being in two worlds at the same time.

Now that I feel like I’m finally starting to find my feet in this new role, I’m starting to catch up on the things I’ve had on hold, like trying out some of the cool new features in SQL Server 2011 and, of course, creating my report scheduling solution – all of which you will see on this blog.

One of the nice new features in Denali that’s proving very useful to me is the ability to ‘throw’ errors in the same manner that you can in .Net. It was the one thing lacking in SQL Server 2005/2008 CATCH blocks: you were not able to pass the original error back to the caller. Thank you Microsoft!

Well, that’s all I’ve got time for, folks. Keep an eye out for my new posts about SQL Server 2011 over the next few days, and for the overview of my solution to scheduling reports in SSRS.

Posted in Uncategorized

iTunes Drives Me Nuts!!!

iTunes drives me nuts sometimes. For example, I needed to bring my laptop into sync with the music on my iPod. I didn’t have another source available at the time, so I did some searching around and found some information on how to do it. Then I downloaded and installed iTunes, whereupon it asked me if I wanted to import my music from the iPod to the laptop. Cool, I thought, let’s do it! I watched the program do its stuff and everything seemed to go well… right up to the point where I realised that what it had actually done was import all the music that had been purchased from iTunes, and ignore everything that had been ripped from CDs! Arghhhhhhhhh!

Thanks, iTunes.

Posted in Personal

Bulk Scheduled Reporting Using SSRS

We are migrating from Crystal Enterprise to SSRS, but we have found one major shortcoming in SSRS – you can’t bulk-schedule reports, nor can you control how many reports are executed at the same time. I know that you can limit resource usage in SQL Server, but all that does is slow things down to prevent someone killing your server (it certainly doesn’t stop the blocking/deadlocking problems that can arise).

The problem that we have is that we routinely schedule around 900 reports at month-end for users; and because of the security model we use, the scheduled reports must run as the user, since our data is secured at a user/group level. Normally you would just tell the users to go ahead and schedule the reports themselves, but that brings us back to the initial problem – how do we stop them overrunning the server with, say, 900 reports at 3am? This is something SSRS just does not support, because the scheduling tool used by SSRS is SQL Server Agent.

We have found some products out there that can do what we want, but they are very expensive, which leaves creating something in-house. I know that it is possible to create your own interface for SSRS using the API provided by Microsoft to give you improved scheduling capabilities, and you could probably control how many reports are allowed to run at the same time that way too. But do you think it would be possible to do this in T-SQL?

We do! Over the next two weeks, I’ll be posting the T-SQL code that gives you that functionality: it not only limits how many reports are allowed to run at the same time, but also allows us to run each report as the user that scheduled it. This will still require a front-end for users, but by creating the mechanism in T-SQL, the front-end can be a simple SharePoint Web Part.
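To give a flavour of that idea before the full posts arrive (the table and column names here are hypothetical and are not the actual solution): queue the report requests in a table, and only dispatch the next one when fewer than a set number are already running.

--A hypothetical request queue
CREATE TABLE dbo.ReportQueue
(
	ReportQueueID	INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
	ReportPath	NVARCHAR(425) NOT NULL,		--path of the report on the report server
	RequestedBy	SYSNAME NOT NULL,		--the user the report must run as
	Status		TINYINT NOT NULL DEFAULT(0)	--0 = queued, 1 = running, 2 = complete
);

DECLARE @intMaxConcurrentReports INT
SET @intMaxConcurrentReports = 5

--Only pick up the next queued report if we are below the concurrency limit
IF (SELECT COUNT(*) FROM dbo.ReportQueue WHERE Status = 1) < @intMaxConcurrentReports
BEGIN
	SELECT TOP (1) ReportQueueID, ReportPath, RequestedBy
	FROM dbo.ReportQueue
	WHERE Status = 0
	ORDER BY ReportQueueID;
END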

Posted in SQL Server 2008, SSRS

Contained Databases

Contained databases are a new feature that currently exists only in SQL Server 2011 “Denali” (currently in CTP 1). According to Microsoft’s documentation, contained databases remove any dependency on the SQL Server management level: all objects, settings and so on are contained in the database, and that includes user authentication (instead of being authenticated by the database engine, users are authenticated at the database level).

The first benefit I thought of when I read about this feature was: goodbye, orphaned logins! The other, obvious benefit is that the database is self-contained – there are no server-level settings to keep in mind when moving a database from one instance to another; it’s all there inside the database.

There are some restrictions. Currently there are three levels of containment. The first is ‘none’ – no containment at all. The second is ‘partial’, which is the hybrid: it allows some objects to reference other databases whilst still giving you the benefit of self-contained users. The final one is ‘full’, meaning the database is entirely self-contained. In a contained database you cannot use replication, change data capture or change tracking.
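Here’s a minimal sketch of what the partial containment syntax looks like (the database name, user name and password are made up, and the exact syntax may change between the CTP builds and RTM):

--Allow contained databases on the instance
EXEC sp_configure 'contained database authentication', 1;
RECONFIGURE;

--Create a partially contained database
CREATE DATABASE ContainedDemo
	CONTAINMENT = PARTIAL;
GO

USE ContainedDemo;
GO

--A contained user is authenticated by the database itself - no server login required
CREATE USER ContainedUser WITH PASSWORD = N'Str0ng_P@ssw0rd!';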

Contained databases are divided into two parts: the application model and the management model. The application model is the part that holds all the information pertinent to the application – the tables, users, stored procedures and so on that the application depends on; it also includes another new little feature – application-level agent jobs! I’m not quite sure what those are yet, because I haven’t looked at them (I’ll check that one next). The management model is the part that holds things like your TCP endpoints and users mapped to logins – basically, everything that exists outside the database.

So, are contained databases useful to us? I think so; it looks like a feature that’s going to prove very useful to DBAs and developers alike.

To see the Microsoft documentation on this new feature, follow this link: http://msdn.microsoft.com/en-us/library/ff929071(v=SQL.110).aspx.

Posted in Contained Databases, SQL Server 2011 "Denali"

Beta Testing Software

I don’t know how many of you try out beta software. I know that it’s not always without its headaches, but in the end you get to help shape a product that you might be using in the future. I’ve been testing beta software for years, mainly out of curiosity, to see what the next version will look like. Not all the features are enabled to begin with, but it is nice to see the product taking shape and to have some input before it’s released.

Beta testing is incredibly useful: you get to see the new features of a product, it allows you to start planning for your upgrade, and sometimes it comes with some fringe benefits, like a gift voucher or maybe, if you’re really lucky, a free product.

Today, I was one of the lucky ones: I tested some software for an hour and got a nice gift voucher out of it – a nice thank-you from a company that you are helping to test new versions of its products before they are released. Especially welcome now that we are getting close to Christmas!

If you do decide to start beta testing, make sure that you do it in a disposable environment – like a virtual machine. Whatever you do, do not install or use beta software on your production systems.

Posted in Testing

Fun With SET Based Operations 2

Okay, now for a little more fun with SET based operations. This is a quick post showing how to combine the values of one column across multiple rows into a single row and column, as a comma-delimited list. It’s a very simple piece of code and it works in exactly the same way as my previous post: the variable assignment in the SELECT is applied for every row the statement processes, which is what lets us build the string up as we go.

First, let’s create a new temporary table with some rows to play with.

SET NOCOUNT ON	--suppress the row count messages from the insert loop
GO

--Create our test table
CREATE TABLE #TestTable
(
TestColumn VARCHAR(100) NOT NULL
);

DECLARE @intCount INT
DECLARE @chvString VARCHAR(100)

SET @intCount = 0

--Insert some test data
WHILE @intCount <= 30
BEGIN
	SET @chvString = 'Test string number ' + RIGHT('0' + CAST(@intCount AS VARCHAR(2)),2)

	INSERT INTO #TestTable
	(TestColumn)
	VALUES(@chvString)

	SET @intCount = @intCount + 1
END

--This is how our data looks in the table
SELECT * FROM #TestTable

Now that we have our table, I’ll show you how to create a comma-delimited list in a SELECT statement. The way it works is very simple: although a SELECT would normally return a result set with one or more rows, we’re going to combine it with a variable, which we use to concatenate the rows together.

--This is how our data looks in the table
SELECT * FROM #TestTable

DECLARE @chvCommaDelimitedList VARCHAR(MAX)
SET @chvCommaDelimitedList = ''

SELECT
	@chvCommaDelimitedList = @chvCommaDelimitedList + ', ' + TestColumn
FROM
	#TestTable

--Let's see our new comma delimited list
SELECT @chvCommaDelimitedList

As you can see from the SELECT, it’s not quite perfect – we have to trim off the leading comma to make it useful to us, but that’s not exactly hard to do.
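For example, running in the same batch as the code above, STUFF can replace the leading ', ' with an empty string – a one-line sketch:

--Trim the leading comma and space
SELECT STUFF(@chvCommaDelimitedList, 1, 2, '') AS CommaDelimitedList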

Posted in T-SQL Code Examples