Wayne Sheffield

My blog about SQL Server


Well, here it is again: the second Tuesday of the month, which means that it’s T-SQL Tuesday. This month, Dev Nambi (blog | twitter) is hosting the party, and he has decided that the topic is Assumptions. Specifically, he wants us to write about:

A big assumption you encounter at work, one that people are uncomfortable talking about. Every team has an elephant in the room.

One assumption that I have seen repeatedly is “if there is a table with a clustered index, I can select all of the rows out of that table and they will be in the order of the clustered index without having to specify an ORDER BY clause”.

So, let’s test this out. Remember, to prove that this assumption is false, all that is needed is to get the results out of order.

First up for this test: create and populate a table.

 

-- let's use a safe database
USE tempdb;
GO
-- create a table to play around with
IF OBJECT_ID('tempdb..temp') IS NOT NULL DROP TABLE temp;
CREATE TABLE temp (
    RowID INTEGER PRIMARY KEY CLUSTERED,
    MyGUID UNIQUEIDENTIFIER DEFAULT (NEWID()));
 
-- populate the table with an ascending number
-- let's try a million rows, generated with a dynamic tally table
WITH Tens    (N) AS (SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL
                     SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL
                     SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1),
     Hundreds(N) AS (SELECT 1 FROM Tens t1, Tens t2),
     Millions(N) AS (SELECT 1 FROM Hundreds t1, Hundreds t2, Hundreds t3),
     Tally   (N) AS (SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) FROM Millions)
INSERT INTO temp (RowID)
SELECT  N
FROM    Tally;
 
 
-- now, checkpoint the database (it's in simple, so all pages will now be "clean")
CHECKPOINT;
-- and let's remove the clean pages from memory
DBCC DROPCLEANBUFFERS;

 

Now let’s just do a few random searches on this data.

-- let's select some data from the table
SELECT  RowID, MyGUID
FROM    temp
WHERE   RowID BETWEEN 100 AND 200;
 
-- and some more data
SELECT  RowID, MyGUID
FROM    temp
WHERE   RowID BETWEEN 1000 AND 1200;
 
-- and some more data
SELECT  RowID, MyGUID
FROM    temp
WHERE   RowID BETWEEN 5000 AND 5100;
 
-- notice that all of these are ordered?

Now let’s run our query against the entire table.

-- now, let's get some more data
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT  RowID, MyGUID
FROM    temp;

On my system, I get the following results:

[Screenshot: clustered index scan results not in RowID order]

As you can see, the results jump from RowID 1953 to RowID 867133. The assumption is plainly false.

So, what happened? And are you complaining about that READ UNCOMMITTED transaction isolation level? Well, it turns out that this isolation level isn’t necessary to reproduce this – this query will also reproduce the same result set:

SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
SELECT  RowID, MyGUID
FROM    temp
WITH (TABLOCKX);

Are you wondering what is happening here? Anytime that you are in the read uncommitted isolation level (you might be familiar with it through the NOLOCK query hint), you are telling SQL Server that you don’t care about data consistency, so it will read the data as fast as it can. That happens to be an Allocation Order Scan: it reads the pages from the data file in the order in which they were allocated for this table. For higher isolation levels, anytime SQL Server can be sure that the data won’t change underneath it (such as when you have an exclusive lock on the table, or are reading from a read-only filegroup), it will also perform an Allocation Order Scan.

Can we see that this is what is happening? Sure we can!

First off, re-run this query with the Actual Execution Plan turned on. You’ll get an execution plan like this:

[Screenshot: execution plan showing a non-ordered clustered index scan]

Notice that the scan is not ordered. (If you re-run any of the previous queries, you will see that they are ordered.) With an unordered scan, SQL will get the data as fast as it can since it doesn’t need to follow an index… and in this case, it is an Allocation Order Scan (if you run this query without the TABLOCKX query hint in the READ COMMITTED isolation level, it will still be unordered, but it can’t do an Allocation Order Scan since there isn’t a guarantee that the data can’t be changed). This can include reading pages that are already in memory prior to others (I’ve seen this happen before, and that is why I included those previous queries).
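For contrast, here is the same scan without the exclusive lock (a quick sketch of the variation described in the parenthetical above; check the execution plan on your own system):

-- still an unordered scan, but not an allocation order scan: without the
-- exclusive lock, there is no guarantee the data won't change mid-scan
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
SELECT  RowID, MyGUID
FROM    temp;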

But let’s continue to see if these results are indeed in allocation order. In a separate query window, run this query (undocumented command, but Paul Randal blogged about it here) to see the pages that are allocated for this table:

DBCC IND (tempdb, 'temp', -1)

The results that I get are (unnecessary columns are removed for the sake of brevity):

[Screenshot: DBCC IND output]

And it continues, but this is enough. We are interested in the rows where PageType = 1; these are the pages where the data is stored.

Side note: Do you see a few rows where the next logical page (NextPagePID) is not the next physical page? What we have here is a fragmented index.
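If you want to put a number on that fragmentation, the documented DMF sys.dm_db_index_physical_stats will report it; a quick check for our test table:

-- average fragmentation of the clustered index (index_id = 1)
SELECT  index_id, index_level, avg_fragmentation_in_percent, page_count
FROM    sys.dm_db_index_physical_stats(
            DB_ID('tempdb'), OBJECT_ID('tempdb..temp'), 1, NULL, 'DETAILED');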

So, let’s modify the query to return the file, page and slot that each row is being returned from:

SELECT  RowID, MyGUID, sys.fn_PhysLocFormatter(%%physloc%%)
FROM    temp
WITH (TABLOCKX);

Here we added a function to the result. The %%physloc%% virtual column returns the file/page/slot that a row is on, but in a binary format. The sys.fn_PhysLocFormatter function transforms that into something easier to read, in the format “File:Page:Slot”. (Again, this is undocumented; Paul Randal blogged about it here as well.) When this query is run, browse through the results, and you will see the pages being returned in the same order in which they were allocated (per the DBCC IND output earlier). On my system, the results that I get are:

[Screenshot: query results with File:Page:Slot, matching the allocation order]

Browse through this result set and compare it with the DBCC IND output: the earlier rows come back in the same order as the physical file allocation of these pages.
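As an aside, there is a related (also undocumented) table-valued function, sys.fn_PhysLocCracker, that returns the same information as separate columns, which is easier to filter or join on. A quick sketch:

-- same file/page/slot information, but split into separate columns
SELECT  t.RowID, t.MyGUID, plc.file_id, plc.page_id, plc.slot_id
FROM    temp t WITH (TABLOCKX)
        CROSS APPLY sys.fn_PhysLocCracker(%%physloc%%) plc;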

In summary: under certain conditions, SQL will choose to perform an Allocation Order Scan of the data. This is not the same as the physical order of the pages in the file, but rather the order in which the pages were allocated for this data. The results will be in the order of those pages, and an assumed order is not guaranteed without an ORDER BY clause.
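The takeaway: if the order of the results matters, say so. An ORDER BY on the clustering key guarantees the order, and in a case like this one the optimizer can typically satisfy it from the clustered index without an extra sort:

-- the only way to guarantee the order is to ask for it
SELECT  RowID, MyGUID
FROM    temp
ORDER BY RowID;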

You’re utilizing the database mirroring high-availability feature, configured in High Safety mode with automatic failover. This means that the mirroring configuration includes a witness server, so that the system can automatically fail over to the partner if the principal fails.
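For reference, a high-safety mirroring session with a witness is established along these lines (a minimal sketch; the database name, server names and port are hypothetical, and the endpoint creation and the WITH NORECOVERY restore of the mirror database are omitted):

-- on the principal, point to the mirror's endpoint (after first running
-- SET PARTNER on the mirror, pointing back at the principal):
ALTER DATABASE MirrorTest SET PARTNER = 'TCP://mirror.example.com:5022';
-- high safety (synchronous) mode; this is also the default:
ALTER DATABASE MirrorTest SET PARTNER SAFETY FULL;
-- adding the witness is what enables automatic failover:
ALTER DATABASE MirrorTest SET WITNESS = 'TCP://witness.example.com:5022';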

Under this scenario, if the partner server loses its connection to the principal, it will contact the witness server to figure out what to do. In order for automatic failover to occur, both the partner and witness servers have to agree that the principal server is out of commission. There are three possible outcomes:

  1. If the witness is still in contact with the principal, then an automatic failover will not occur. The principal server will remain online, serving the application. However, the transaction log will start to accumulate transactions that cannot be removed until the connection has been re-established and these transactions have been sent to the partner.
  2. If the witness has also lost its connection with the principal, then the partner and witness servers will reach a quorum and the partner database will be brought online, becoming the new principal (specifically, it will undo any uncommitted transactions and then bring the database online).
  3. If the partner server is also disconnected from the witness server, then automatic failover will not occur.

The purpose of this last outcome is to prevent what is known as a split brain – a situation where both of the servers are online, serving up results to application requests. If this were to occur, then there would be two different versions of the database, and reconciling these differences would become problematic.

Now let’s consider what would happen if the principal server were to lose its connection to both the partner and witness servers while remaining online itself (say the partner and witness servers are each in separate remote locations that have had a service interruption, such as a change to a network switch that has isolated these servers from the principal). With the principal server still online, and potentially able to service at least some users, what does it need to do to ensure that the mirror does not end up in a split brain situation?

Let’s first look at what the possible scenarios are now:

  1. The partner server is in communication with the witness server, and they have reached a quorum where the partner server has been brought online.
  2. The partner server is not in communication with the witness server, and thus it is still in a partner state.

Since the principal server has lost communication with both servers, it can’t know in which situation the mirror is in. Therefore, it needs to protect the mirror from potentially being in a split brain situation. It does this by assuming that the partner server has been elevated to become the mirror’s principal, and so it goes into the partner state. Specifically, this means putting the database into recovery in order to be able to receive transactions from the new principal server.

Is this what actually happens? Well, it’s easy enough to test this out. On three different instances of SQL (they can even be on the same server), establish a mirror in high safety mode with a witness server. After it is established, ensure that SQL Agent is shut down (if it is left running, it will restart SQL Server after you kill the process), and use Task Manager to end the sqlservr.exe processes on the partner and witness instances. Now look at the status of the database in SSMS, and you will see that its status is “Principal, Disconnected / In Recovery”. Alternatively, you can run the following query to see what the status is:

SELECT  sd.name, sd.state_desc,
        sdm.mirroring_role_desc,
        sdm.mirroring_state_desc,
        sdm.mirroring_witness_state_desc
FROM    sys.databases sd
        JOIN sys.database_mirroring sdm ON sd.database_id = sdm.database_id;

To wrap up, mirroring prevents a split-brain situation in two ways:

  1. The partner and witness servers must be in communication with each other, and both must have lost communication with the principal, before the partner server can be elevated to become the mirror’s principal.
  2. If the principal server becomes disconnected from both the partner and witness servers, it goes into the partner state and puts its database into recovery.

After installing SSAS in tabular model mode, you receive the error “The service cannot be started: The following system error occurred:  A required privilege is not held by the client.” when trying to start the service.

Examining http://msdn.microsoft.com/en-us/library/ms175371.aspx shows that the only security privilege needed for the SSAS service account is the “Log on as a service” privilege.

Alas, this is not accurate. When running in the tabular model mode, the service account also needs the “Increase a process working set” privilege.

In Windows, under a default installation, the local group “Users” has this privilege (so every user on the system has it), as seen below:

[Screenshot: Local Security Policy showing the “Increase a process working set” privilege assigned to Users]

However, many companies lock down their security policies and remove this privilege. Judging by the number of requests about this issue on the internet, it is a common problem. The advice generally given is to just run SSAS as an administrator, which is not a good choice, and may not be an option in your company. The real fix is simply to grant the service account the “Increase a process working set” privilege.

I’ve filed a connect item to correct the documentation at https://connect.microsoft.com/SQLServer/feedback/details/869677/ssas-privileges-incorrectly-specified – please consider going out there and voting for it.

Wow, what a month. On March 22, 2014, the SQL Saturday train came into Richmond VA (RVA). I was one of the organizers of the event, and what a ride we had.

The rough part of the ride started earlier in the week, when a freak spring snowstorm rolled through the region. Richmond was spared, but points just north of us (Washington D.C., for instance) had six inches of snow. This forced a few speakers to cancel as their plans were rescheduled. As we got closer to the event, a few other sessions were canceled. When you are holding an event where people come to speak, and other people come to see their presentations, this is not what you want to happen. At one point, we had seven vacancies in our speaking schedule. But the fun wasn’t over yet: on Saturday, two additional speakers failed to show up. They didn’t even call or email anyone to say that they would not be attending. We juggled things around, coaxed speakers into doing additional sessions, and continued on.

On Friday, we had our first ever RVA precon. About 50 people showed up for all-day training in two precon events. One of the precons was conducted by Andy Leonard (SSIS Design Patterns), and the second was conducted by Jason Brimhall and myself (Murder They Wrote). Since I was busy with my own precon, I couldn’t drop in to see how things were going in Andy’s session. All I can say about ours is “WOW!” What a day of learning. We have heard from several of our attendees since, and they have been busy implementing various things to fix problems. Some comments that have been made about our precon session are:

Feeling like a newbie with #SQLServer while attending @DBAWayne and @sqlrnnr precon at #SQLSatRVA. Great class!

It was an eye and mind opening experience awesome session!

Friday’s session went beyond my expectations for the day.

The information was very useful.  I sat down with my team yesterday to discuss what I learned.  I am certain that we will see implementation of a few items…

Needless to say, I’m extremely pleased to have this feedback. Now I’m eager to do this precon at other events!

Friday night, we had our speaker / volunteer dinner. It was great to reunite with several of my #sqlfamily, and to meet folks that I had never met before and bring them into my #sqlfamily.

After the precon all day Friday, Saturday morning came way too early. Shortly after we arrived, the attendees started showing up. Even without coffee yet, there was an air of excitement all around! The weather was almost too perfect: a beautiful spring day, with the temperature eventually reaching 70 degrees. The speakers were showing up early, all equipment was functioning, registration / sign-in was humming along, and breakfast was being efficiently served. Changes we had made from last year were really working out. The first sessions started, and we could take a breather. We had a few Microsoft Premier Field Engineers (PFEs) down from DC, and they manned a “SQL Clinic” room to answer questions. I was really curious about how this would work out for us.

The second session of the day, I was presenting a fairly new presentation – “Crazy Things Developers Do”. As is typical of my presentations, it has a lot of demos. And as I was setting up, I ran into a technical glitch. Even though this was the same room that I had presented in all day on Friday, my laptop would not sync up with the video equipment. It had worked just fine for the speaker before. I tried rebooting. The facility tech guys checked it out, and couldn’t find anything wrong. So… I ended up doing my presentation with no props. I talked my way through it. At one point, I had everyone gather around to watch one demo that I just had to do, but the rest was all talking, with no slides or anything. In retrospect, I should have used the whiteboard in the room – I guess that I was too stunned and shell-shocked to even recognize that it was there. The attendees said it was still a good session, and that they learned. But oh, how I wish the tech stuff would have worked. Luckily, everything worked out for all of the other speakers for the rest of the day, and this was the only glitch that we had!

Lunch time came around, and again everything went smoothly. Did I mention that it was a beautiful day outside? Most attendees took advantage of it, and ate their lunch outside. You just couldn’t ask for more. A nice spring day, relaxing outside, chatting with friends.

The rest of the day went by too fast. Before long, we were having the closing, and raffling off the prizes from the vendors. I checked in with the Microsoft PFEs, and found that over 20% of the attendees had stopped in and asked questions that they had been able to answer. AWESOME!

People come to a SQL Saturday to learn. Which means that we need people in rooms presenting topics that people want to see. I want to thank all of the speakers that came to SQLSatRVA to present a session or two. I especially want to thank all of the ones that came from outside of the RVA area! A SQL Saturday just can’t be done without these great folks – you are what the attendees are coming to see. Words alone cannot express my appreciation for the time you have spent preparing the presentation(s) that you have delivered, and the time and expense that you have incurred to come to RVA to do your presentation. Thank you so very much!

There is another recurring theme to this post… smooth functioning. This doesn’t just happen – it takes a lot of effort from a lot of special people – people that are passionate about SQL Server, learning, and helping others to learn. People that give of their time to help ensure that the event functions smoothly. Some do work ahead of time: planning the event, getting sponsors, figuring out the logistics for the location and food, working with the sponsors for advertising and ensuring that their equipment is there ahead of time, and organizing the volunteers. But this isn’t all… there is a lot of work being done the day before and the day of the event, from room monitors who ensure that speakers have what they need, to spouses who come in to help with registration and keep breakfast, lunch and break times running smoothly. So, I’d like to end this post with a special THANK YOU! to all of these wonderful volunteers, and a picture of some of them. Of course, with our community, you know it won’t be a serious picture…

 

There’s a new SQL Server user group starting up over in Lynchburg, VA. Its first meeting is February 27, 2014 (6pm-8pm), and I’m honored to be presenting there for their kickoff meeting. I’ll be presenting “Table Vars & Temp Tables – What you NEED to Know!” Visit http://lynchburg.sqlpass.org/ for more info (abstract, directions) and to sign up so that enough food can be brought in!

Hope to see you there!

 

On the second Tuesday of each month, the SQL Server universe comes together to blog about a topic. Known as T-SQL Tuesday and started by Adam Machanic (Blog | @AdamMachanic), this month’s T-SQL Tuesday topic is hosted by friend and fellow MCM Jason Brimhall (@sqlrnnr), and he wants us to blog about bets. Specifically: “I want to know about the gambles within your databases or not within your databases that you have seen over the years. When has somebody (a CTO, Developer, Business User) placed a bet that was far too risky in your opinion?”

A friend of mine, someone who is a regular speaker at SQL Saturdays, was recently approached by an attendee of one of his sessions. This attendee was looking for help in setting up a cluster. But this would all have to be done by screen sharing and passing control over to my friend – being able to connect directly to the servers wasn’t an option. Further discussion revealed that the person was doing this without the company’s knowledge – and that the company wasn’t to be told. It seems the person was hired as a senior DBA, but isn’t qualified to do the tasks, and doesn’t want the company to know that (s)he doesn’t have the skills necessary to do the job.

Wisely, my friend turned down this “opportunity”. However, this blog is to be about bets, so let’s look at some of the bets that were made, or were being contemplated. First off, the company placed a bet on hiring this person. I will assume that they did due diligence, doing things like performing background checks and conducting an interview. Their bet was that the person they were hiring for a senior DBA position actually has the skills necessary for a senior DBA. At this point, they have no idea of how bad of a bet they made.

What could they have done differently so that they would have had a better outcome? From my point-of-view, their interview process was not thorough enough to adequately determine whether their candidates had the required skill set. If they don’t have on-staff personnel competent to determine this, they should have spent a little bit of money to hire a competent senior DBA to evaluate and assess their candidates. (Small plug: I offer such a service – you tell me what you need, and I’ll grill the candidates (complete with a lab) to ensure that they know and can do what you need them to be able to do. Contact me for details.)

Other bets that were being contemplated:

  • The attendee was willing to let someone whom they had only met at a SQL Saturday do a task that they couldn’t do.
    • How do they know whether the person can do the job and wouldn’t mess things up?
  • The attendee was willing to let an unauthorized person into their company’s systems.
    • How do they know that this person can be trusted while on the systems?
    • They were willing to take this risk with the potential of being fired.
  • If my friend had proceeded with accessing the company’s systems, there are several bets that he would have been taking also:
    • Legal – would the company take legal actions for his unauthorized access to the company’s systems?
    • IMO, it’s a sure bet that there would be security violations occurring. What would the repercussions of those be?
    • If word of his actions were to spread, there would be a loss of reputation – potentially causing future loss of clients.
      • While the attendee might know something of my friend’s background, my friend knows nothing of the attendee’s. Other than that the attendee is willing to circumvent ethics to keep from looking bad. What’s to keep the attendee from telling others of what my friend did?
    • He might face the loss of insurance – this would mean that he couldn’t consult.

There were a lot of bets being placed in this situation. The company lost – even if they don’t know it yet. The attendee lost – and probably knows it. Thankfully, my friend has ethics and didn’t take this bet, so he didn’t lose.

I’ve been using SQL Prompt for years. In September 2013, version 6 was released (quickly followed by 6.1 in late September and 6.2 in December), so I thought I would highlight some of the new features available in it.

So let’s start off with a table showing the new features and the version in which each was implemented.

Version – Feature: Description
6.0 – Tab History: A history of query tabs opened in SSMS
6.0 – Synonym support: Synonyms (including columns from a synonym’s base object) are now suggested
6.0 – CLR support: CLR functions and stored procedures are now suggested
6.0 – XML functions support: XML functions are now suggested and fully supported
6.0 – SQL Azure support: SQL Azure databases are now supported
6.0 – Custom snippets folder: Share snippets with other SQL Prompt users by using a shared snippets folder
6.0 – More snippet placeholders: You can now use $PASTE$, $DBNAME$ and $SERVER$ in SQL Prompt snippets
6.0 – Snippet placeholder format: You can specify custom $DATE$ and $TIME$ formats
6.0 – Indent using tab characters: You can customize SQL Prompt formatting to indent text using tab characters
6.0 – JOIN clause improvements: All relevant tables are now suggested in a join clause
6.0 – Copy from object definition box: You can now press CTRL + C to copy the contents of the object definition box
6.1 – Server/database connection filtering: You can now specify the databases you want SQL Prompt to load suggestions for
6.1 – Major performance improvements
6.1 – The copy button was added back to the object definition box
6.1 – Added an option to disable synonym support
6.1 – ALTER statements are now autocompleted on SQL Azure
6.2 – SQL Server 2014 support: Supports SSMS and SQL Server 2014 CTP2
6.2 – Phrase completion: SQL Prompt now completes entire SQL phrases, for example PRIMARY KEY instead of just the first keyword PRIMARY
6.2 – Insert suggestions by pressing the “;” key
6.2 – Tabs opened by SQL Prompt are no longer marked as unsaved
6.2 – The Options dialog box is now resizable
6.2 – Improved formatting for MERGE statements

And, of course, bug fixes.

So let’s look at some of my favorite new features. First up: new features in v6.0.

The added support for synonyms, CLR functions and XML functions seems to me a natural extension of the product. If you use these features, and you use SQL Prompt, you need SQL Prompt to provide suggestions for them.

The custom snippets folder is all about being able to share snippets. Point all developers to a network share, and they can all use the same set of snippets. Snippet changes will be available to everyone at once (though they might have to refresh suggestions first). If you have different teams with different needs (dev team 1, dev team 2, DBAs, etc.), each team can point to a different share.

In thinking about the needs of multiple teams, I suspect they would have some needs in common. Maybe someday we’ll have the ability to specify a hierarchy of shared folders, so that you can have snippets common to everyone, and then snippets specific to each team. Of course, all the dev teams could need something different from the DBAs, hence the hierarchy. Is this worthwhile? I don’t know… I just now came up with this idea and haven’t really thought about it much. If it gets implemented, the biggest issue I foresee is duplicate-named snippets; there would need to be a mechanism to select which one wins.

The changes for snippet placeholders are just neat. $PASTE$ will insert the contents of the clipboard. $SERVER$ and $DBNAME$ will be replaced with the names of the connected SQL Server instance and database. I can see some potential uses for these. But what I really like are the formatting options available for $DATE$ and $TIME$. These are expressed within parentheses just before the ending $, and follow the .NET formatting conventions for dates and times. I use these in a snippet for creating what I term a remark block – a commented-out section at the top of code for tracking changes to the code. The snippet I use is:

/******************************************************************************
$CURSOR$Description of code
*******************************************************************************
                               MODIFICATION LOG
*******************************************************************************
$DATE(yyyy-MM-dd)$ WGS Initial creation.
******************************************************************************/

 

The resulting code will look like:

/******************************************************************************
Description of code
*******************************************************************************
                               MODIFICATION LOG
*******************************************************************************
2013-12-28 WGS Initial creation.
******************************************************************************/

 

(Can you guess on what day I created this?)

Next up: new features in the point releases. The big one in v6.1 is the ability to control which databases SQL Prompt will load suggestions for. If you are in a small database, and your query uses three-part naming to reference an object in another database, SQL Prompt will load that database. While it’s loading, you have to wait. If it happens to be that database (you know, the one where every customer has their own table and set of procedures, and you have 250,000 customers), this can take a while, and take a serious bite out of the memory on your workstation. Now you can exclude this database from suggestions, and live happily ever after. (Or until Red-Gate figures out how to use a memory-optimized table to hold all of this information.)

The big new feature in v6.2 is phrase completion. In previous versions of SQL Prompt, you would be prompted for keywords one at a time. (Sounds like a cursor… row-by-agonizing-row.) But there are places where, when you use the first word, you will always use the second word. I’m thinking of “FOREIGN KEY” right now – anywhere you use “FOREIGN”, you will also use “KEY”. Selecting them together is just another speed enhancement in writing your code (which is what SQL Prompt is all about in the first place!). The example that Red-Gate gives, “PRIMARY KEY”, isn’t so clear cut – there are places where you will use “PRIMARY” without “KEY”, like when specifying filegroups. However, in the correct context, the phrase is also valid. Since SQL Prompt has already figured out the valid keywords for the current context, phrase completion is, IMO, what should have been there in the first place. (I have to admit that I requested this feature – only to find out that they were already working on it!)


I was recently investigating an issue that stemmed from a database being in the recovery_pending state. A quick investigation through the SQL logs showed that the server had restarted earlier that morning, and that this instance is running on a cluster. Assuming that this is probably just a timing issue, I decided to try to bring the database online.

ALTER DATABASE <database_name> SET ONLINE;

Now, you didn’t really think that this blog post would end so soon, did you? Well, running that little bit of code gave me this wonderful result:

Msg 5591, Level 16, State 5, Line 1
FILESTREAM feature is disabled.
Msg 5105, Level 16, State 14, Line 1
A file activation error occurred. The physical file name 'F:\MSSQL11.MSSQLSERVER\MSSQL\Filestream\<database_name>' may be incorrect. Diagnose and correct additional errors, and retry the operation.
Msg 5181, Level 16, State 5, Line 1
Could not restart database "<database_name>". Reverting to the previous status.
Msg 5069, Level 16, State 1, Line 1
ALTER DATABASE statement failed.

Sigh. Okay, that didn’t work out too well. Looking over the system a little bit more, I could see that the database had been online prior to the shutdown – backups had just happened 2 hours earlier.

I proceeded to check out all of the normal filestream things – things like making sure that the permissions on the specified folder had been granted to the SQL Server service account. But still, no luck – everything that I tried continued to return the same error.

A friend reminded me that filestream needs to be enabled on the service. I’m thinking that surely, this isn’t it… the database had been up. Regardless, I launched SQL Server Configuration Manager, went to the properties of the SQL Server service, and clicked on the FILESTREAM tab. And what did I find? My assumption was wrong, for on this node, the “Enable FILESTREAM for Transact-SQL Access” setting was disabled. I quickly checked the other node (it was enabled there), and assumed that this must be the problem. I enabled this setting on the current node, and ran the alter statement to bring the database online again.

And the same error occurs. (DEEP Sigh!)

A little bit of internet searching brought me to a BOL article on how to Enable and Configure FILESTREAM (why do they always capitalize that anyway? It’s not like it’s an acronym, is it?). In this article, it mentions that we need to set the filestream access level. This is performed with the sp_configure stored procedure, which means that this setting is stored in the master database. Since it was working on the other node, I assumed that the setting was set properly. But a quick check against sys.configurations showed that it wasn’t set, so I ran this bit of code:

EXECUTE sp_configure 'filestream access level', 2;
RECONFIGURE;

and then I run the ALTER DATABASE statement to put the database online. And… success. (Whew)
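For reference, the quick check against sys.configurations that I mentioned looks something like this:

-- 0 = disabled, 1 = T-SQL access enabled, 2 = full access (T-SQL and Win32 streaming)
SELECT  name, value, value_in_use
FROM    sys.configurations
WHERE   name = 'filestream access level';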

So did I really write this blog post just to impress you with my troubleshooting skills? Nope, not at all. The purpose of this post is to remind you not to make assumptions. Don’t assume that just because something was working, that it must be configured properly. And don’t assume that because everything with SQL is working properly on one node of a cluster, that it will work properly on the other node(s). I have seen too many times where nodes are not configured identically (in one case, this extended to the drive mappings, so when the cluster did fail over, it started causing all kinds of errors).

So, don’t assume. Step back, and investigate to ensure that the prerequisites for utilizing a feature are met, and that the settings are properly configured. Just look at all of the faulty assumptions that I made trying to get this situation resolved.

And when you’re working on a cluster, ensure that the nodes are configured identically.

I was recently letting my mind wander. As minds are prone to do, it filled up with all sorts of things. Things like:

  • It’s a wonderful time of the year.
  • Lots of places I’m going to with #sqllearning potential.
  • There’s a lot of resolutions I need to work on.
  • Considering how well I did with last year’s resolutions, it’s a fresh start to those.
  • Perhaps if I had more time, I could tackle some of those resolutions.
  • Maybe if I can automate some of those things that I do, I would have that time.

Hmm. Automation? Interesting. As it happens, Hemanth.D (blog|twitter) is hosting the T-SQL Tuesday blogging party this month, and he wants all of us to post about Automation. Specifically:

You could write about, what options you would consider when automating something? Where do you draw the line? What are our preferred tools for automation? T-SQL, PowerShell, VBScript or Batch files(?) or maybe just share something that you automated in the last couple of years.

Okay, I can do this. So, here goes:

Earlier this year, I was reading SQL Server Microsoft Certified Master (MCM) Robert Davis’ (blog|twitter) blog series on Disaster Recovery. On day 3, he has a nice script for seeing what backup files you need to restore, and the order to restore them, for a database. It starts from the most recent full backup, then the most recent differential backup (if available), and finally all the transaction log backups since the latter of those two.

One thing that I’ve been planning on writing is a script to detect if the files necessary to recover a database are available. With Robert’s script, most of this work is done. However, I wanted this to run for all databases (not just a specified one as Robert’s does). I also want to check for the presence of these backup files.

So, I bring to you my modified version of Robert’s script. The changes to it are:

  1. All databases are checked, not just a single specified database.
  2. Robert’s script stops if a broken transaction log chain is detected. My modification returns all backups, but indicates a broken transaction log chain at the start of the chain.
  3. The modified script checks for the existence of all of the backup files. (If one doesn’t exist, you have a broken transaction log chain, or worse.)
  4. Various performance improvements.

What this modified script doesn’t do is check to see if multiple backup files written to the same file are all present in the file (though I’ll probably make this modification some day – sounds like a resolution?).
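To give a flavor of the existence check, here is a minimal sketch of the idea (this is not the actual script in my Code Library; xp_fileexist is undocumented, and the variable and cursor names here are my own illustration):

-- collect the results of checking each backup file recorded in msdb
DECLARE @files TABLE (
    physical_device_name NVARCHAR(260),
    file_exists          INT,
    is_directory         INT,
    parent_dir_exists    INT);
DECLARE @device NVARCHAR(260);

-- every distinct backup file recorded in msdb for the last week
DECLARE backup_files CURSOR LOCAL FAST_FORWARD FOR
    SELECT  DISTINCT bmf.physical_device_name
    FROM    msdb.dbo.backupmediafamily bmf
            JOIN msdb.dbo.backupset bs ON bs.media_set_id = bmf.media_set_id
    WHERE   bs.backup_start_date > DATEADD(DAY, -7, GETDATE());

OPEN backup_files;
FETCH NEXT FROM backup_files INTO @device;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- xp_fileexist returns one row: File Exists / File is a Directory / Parent Directory Exists
    INSERT INTO @files (file_exists, is_directory, parent_dir_exists)
    EXECUTE master.dbo.xp_fileexist @device;

    -- tag the row we just inserted with the file it was checked for
    UPDATE @files SET physical_device_name = @device
    WHERE  physical_device_name IS NULL;

    FETCH NEXT FROM backup_files INTO @device;
END;
CLOSE backup_files;
DEALLOCATE backup_files;

-- any row returned here is a missing backup file: a broken chain, or worse
SELECT  physical_device_name
FROM    @files
WHERE   file_exists = 0;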

I’ve uploaded this script to my Code Library. I hope that you can find some use for it.

After you have verified the presence of your backup files, it’s time to start testing the backups. See Jason Brimhall’s post for how to do that!

Thanks for the topic!

Previously, I posted a list of interview questions that I ask job candidates. This post is about the questions that I ask when I am the candidate. (Remember that interviews are two-way: you are interviewing the company to see if you want to work for them just as much as they are interviewing you to see if you will fit in working for them.) I’ve gleaned these questions over the years from several sources, including monster.com – Monster also has an entire section to help interviewees do their best at an interview here.

  1. What happened to the person who previously did this job? (If this is a new position, how has this job been performed in the past?)
  2. Where are your challenges, and where do you see me focusing?
  3. What is the first problem the person you hire must attend to?
  4. Can you describe what my first day, week and month would look like?
  5. What's your average employee's work schedule like, including work from home and after hours?
  6. Can you describe the ideal candidate for this position?
  7. How would you describe the company culture?
  8. Why did you choose to work here, and what keeps you here? (Ask this of each person involved in interviewing you.)
  9. What don’t you like about working here? (Ask this of each person; many won't answer unless they are alone with you.)
  10. What one thing would you change about the company?
  11. What has surprised you the most about working here?
  12. What can you tell me about the individual to whom I would report?
  13. Can you give me a breakdown of the number of physical / virtual / clustered computers running SQL Server that you have?
  14. Can you give me a breakdown of the number of SQL Server instances and databases you have installed, by environment (Production/QA/Development)?
  15. What is the size of your largest database? What is the size of the largest table in that database? What is the total size of all databases?
  16. Do you use any form of source control? Do you use continuous integration tools? Do you have any issues with these processes?
  17. Can you give me a birds-eye view of your disaster recovery plan?
  18. Tell me about your coding and release standards.
  19. Is your company required to adhere to any regulations, such as Sarbanes-Oxley (SOX) / HIPAA / PCI? If so, how do you ensure compliance (separate people assigned to review regulations and set policies, IT staff is responsible, etc.)? If not, does your company voluntarily follow any of the regulations?
  20. What are the company’s five-year sales and profit projections? (Or: what are the company’s future plans for new products and services, or any planned market expansion?)
  21. What’s our next step?