Wayne Sheffield

My blog about SQL Server

You know, these one-hour sessions at most SQL Saturdays are just too short sometimes – you just get going and bang! The time’s up. There’s only so much material that can be presented in just one hour. Recognizing this, last year my friend Jason Brimhall and I teamed up and created an all-day workshop for SQL training. We will be chugging into Denver, CO on the SQL Saturday crazy train to deliver this workshop, titled “Murder They Wrote”, at SQL Saturday #331 on Friday, September 19.

The Target Audience and what we cover

If you are a DBA, a database developer, or an application developer who connects to a SQL Server back-end database, then this session is for you. If you are experiencing performance issues, then this session is most definitely for you. If you’re looking for a fun way to spend a day and to get some #sqllearning, then this session is for you. This presentation is geared around decisions made in the development life cycle and the effects that those decisions have on SQL Server, based upon the things that we have seen done to the instances that we have managed (and, in my case, some of the things that I did back when I was a developer and didn’t know better). With no topic off limits, we also cover critical practices that just aren’t that great to do – and that still happen way too often. And demos? Oh yeah! You know that I like to show things instead of just telling them, so in that spirit, this is a demo-heavy presentation. We don’t just tell you that something is bad and something else is good… we’ll show it to you. We might even blow something up!

So, come on out for a day of learning, fun, and SQL camaraderie. We look forward to spending the day chatting with the attendees and helping them overcome some of their issues.

The Official Session Abstract

Join Microsoft Certified Masters Wayne Sheffield and Jason Brimhall as they examine numerous crazy implementations they have seen over the years, and how these implementations can be murder on SQL Server.  No topic is off limits as they cover the effects of these crazy implementations from performance to security, and how the “Default Blame Acceptors” (DBAs) can use alternatives to keep the developers, DBAs, bosses and even the end-users happy.

Presented by:

Wayne Sheffield, a Microsoft Certified Master in SQL Server, started working with xBase databases in the late 80s. With over 20 years in IT, he has worked with SQL Server (since 6.5 in the late 90s) in various dev/admin roles, with an emphasis on performance tuning. He is the author of several articles at www.sqlservercentral.com, a co-author of SQL Server 2012 T-SQL Recipes, and enjoys sharing his knowledge by presenting at SQL PASS events and blogging at http://blog.waynesheffield.com/wayne.

Jason Brimhall has 10+ years of experience with SQL Server, from version 6.5 through SQL 2012, including performance tuning, high-transaction environments, and large environments. Jason also has 18 years of experience in IT working with the hardware, OS, network and even the plunger (ask him sometime about that). He is currently a Consultant and a Microsoft Certified Master (MCM). Jason is the VP of the Las Vegas User Group (SSSOLV).

Course Objectives

  1. Recognize practices that are performance pitfalls
  2. Learn how to remedy the performance pitfalls
  3. Recognize practices that are security pitfalls
  4. Learn how to remedy the security pitfalls
  5. Demos, demos and even more demos (the scripts demonstrating both the pitfalls and their remedies will be provided to attendees!)
  6. Have fun and discuss
  7. We might blow up a database

We aim for a nice mix of real-world examples (and some contrived ones), all of which demonstrate a good and useful point.

Registration – or what to do to join us

If you will be in the area, and you are looking for high-quality content with a good mix of enjoyment (and some self-ridicule), come and join us! You can register for this session at Eventbrite. There is a limit of 30 seats for this murder mystery theater, so sign up now to ensure that a spot is saved for you. The cost is a mere $125.

And then, on Friday September 19th, join us at the Cherry Creek Presbyterian Church – Community Life Center, located at 10150 E. Belleview Ave., Englewood, CO 80111. The train gets underway at 9am.

But wait, there’s more!!!

Come on back Saturday for the main event – Denver’s SQL Saturday is being held at University of Denver – Driscoll Student Center, 2055 East Evans Avenue, Denver, CO 80208. There are 8 tracks going on, so there’s bound to be something there that you just have to see all day long! The complete session schedule is here, and Jason and I are each presenting during the first session of the day, so come on out and join one of us!

Related posts:

The Murder Train chugs into Raleigh

SQL Saturday RVA recap

History of the SQL crazy train

Shameless plug:

Jason and I both present regularly at SQL Saturdays. If you are organizing an event and would like to have us deliver our workshop, please contact Jason, myself, or both of us.

Well, here it is again. The second Tuesday of the month, which means that it’s T-SQL Tuesday… that wonderful monthly blogging party started by Adam Machanic (Blog | @AdamMachanic). The intent is to get the entire SQL community together on a monthly basis to blog about a common theme – to get each person’s perspective on the chosen subject. This month, it is being hosted by Jeffrey Verheul (Blog | @DevJef), and the topic he has chosen is SQL Family and Community. Jeffrey lays out the topic like this:

This month I would like to give everyone the opportunity to write about SQL Family. The first time I heard of SQL Family, was on Twitter where someone mentioned this. At first I didn’t know what to think about this. I wasn’t really active in the community, and I thought it was a little weird. They were just people you meet on the internet, and might meet in person at a conference some day. But I couldn’t be more wrong about that!

Once you start visiting events, forums, or any other involvement with the community, you’ll see I was totally wrong. I want to hear those stories. How do you feel about SQL Family? Did they help you, or did you help someone in the SQL Family? I would love to hear the stories of support, how it helped you grow and evolve, or how you would explain SQL Family to your friends and family (which I find hard). Just write about whatever topic you want, as long as it’s related to SQL Family or community.

I think that this is a wonderful topic. Thank you Jeffrey for selecting this. The #SQLFamily, and the entire SQL community is so fantastic, and this is a wonderful opportunity to shine the light on it. So, what is #SQLFamily? How would you explain it to others? Well, let’s start off with what is family? It’s simply all the people that you are related to. So, #SQLFamily is all the people that you know through SQL Server.

However, #SQLFamily isn’t just about who they are; it’s also about what they are. They are the SQL community. Unlike the communities around other RDBMS platforms, the #SQLFamily openly shares what they know: folks writing online articles at places like sqlservercentral.com, on their own blogs, or in books; presenters at user groups, SQLSaturdays, and larger conferences like the PASS Summit; and, perhaps most unique about our SQL community, the folks that help out on the #SQLhelp Twitter “group”. You see, most of them do this for free (or, in the case of SQLSaturday, out of their own pocket). Okay, those that write books get paid for that, and there might be a pittance for some articles, but almost everything else is all volunteering, of both time and money. But they don’t try to hog the spotlight – they are just trying to help and encourage others in all things SQL Server.

Why would they do this? Some may be craving name recognition, maybe even hoping for a boost in their career. But I’d venture that most are like me, and they like helping others learn. When you get feedback about how you’ve helped someone, or you see the light go on in someone’s eyes while speaking, it is such an incredible rush. When you have taught someone something, you have changed the world. Talk about paying it forward!

My first exposure to #SQLFamily was at my local user group. There, I was encouraged to apply myself – to write and to present. Little did I realize just how much this would make me grow, but I suspect that those encouraging me knew.

But this isn’t all that #SQLFamily is about. It’s also about what else you do away from SQL. The #SQLFamily encourages each other in all kinds of areas. Runners get encouragement, with the #SQLFamily tweeting messages to them while they are running. We sponsor each other for things that are important to us (like polar plunges for charity). Or like Brent and Grant are doing for Summit this year, wearing funky leggings for a good cause. We get together for fun nights of SQLKaraoke. When you are down, or if something unfortunate happens, someone is there to lift you up and to offer support. And like all families, we grow. Spouses get added to the mix, and they become part of our #SQLFamily also, sometimes an active part.

So there you have it… my recap of what SQLFamily is all about. Thanks again Jeffrey for the topic, and I look forward to reading what everyone else has posted.

You know, these one-hour sessions at most SQL Saturdays are just too short sometimes – you just get going and bang! The time’s up. There’s only so much material that can be presented in just one hour. Recognizing this, last year my friend Jason Brimhall and I teamed up and created an all-day workshop for SQL training. We will be chugging into Raleigh, NC on the SQL Saturday crazy train to deliver this workshop, titled “Murder They Wrote”, at SQL Saturday #320 on Friday, September 5.

The Target Audience and what we cover

If you are a DBA, a database developer, or an application developer who connects to a SQL Server back-end database, then this session is for you. If you are experiencing performance issues, then this session is most definitely for you. If you’re looking for a fun way to spend a day and to get some #sqllearning, then this session is for you. This presentation is geared around decisions made in the development life cycle and the effects that those decisions have on SQL Server, based upon the things that we have seen done to the instances that we have managed (and, in my case, some of the things that I did back when I was a developer and didn’t know better). With no topic off limits, we also cover critical practices that just aren’t that great to do – and that still happen way too often. And demos? Oh yeah! You know that I like to show things instead of just telling them, so in that spirit, this is a demo-heavy presentation. We don’t just tell you that something is bad and something else is good… we’ll show it to you. We might even blow something up!

So, come on out for a day of learning, fun, and SQL camaraderie. We look forward to spending the day chatting with the attendees and helping them overcome some of their issues.

The Official Session Abstract

Join Microsoft Certified Masters Wayne Sheffield and Jason Brimhall as they examine numerous crazy implementations they have seen over the years, and how these implementations can be murder on SQL Server.  No topic is off limits as they cover the effects of these crazy implementations from performance to security, and how the “Default Blame Acceptors” (DBAs) can use alternatives to keep the developers, DBAs, bosses and even the end-users happy.

Presented by:

Wayne Sheffield, a Microsoft Certified Master in SQL Server, started working with xBase databases in the late 80s. With over 20 years in IT, he has worked with SQL Server (since 6.5 in the late 90s) in various dev/admin roles, with an emphasis on performance tuning. He is the author of several articles at www.sqlservercentral.com, a co-author of SQL Server 2012 T-SQL Recipes, and enjoys sharing his knowledge by presenting at SQL PASS events and blogging at http://blog.waynesheffield.com/wayne.

Jason Brimhall has 10+ years of experience with SQL Server, from version 6.5 through SQL 2012, including performance tuning, high-transaction environments, and large environments. Jason also has 18 years of experience in IT working with the hardware, OS, network and even the plunger (ask him sometime about that). He is currently a Consultant and a Microsoft Certified Master (MCM). Jason is the VP of the Las Vegas User Group (SSSOLV).

Course Objectives

  1. Recognize practices that are performance pitfalls
  2. Learn how to remedy the performance pitfalls
  3. Recognize practices that are security pitfalls
  4. Learn how to remedy the security pitfalls
  5. Demos, demos and even more demos (the scripts demonstrating both the pitfalls and their remedies will be provided to attendees!)
  6. Have fun and discuss
  7. We might blow up a database

We aim for a nice mix of real-world examples (and some contrived ones), all of which demonstrate a good and useful point.

Registration – or what to do to join us

If you will be in the area, and you are looking for high-quality content with a good mix of enjoyment (and some self-ridicule), come and join us! You can register for this session at Eventbrite. There is a limit of 24 seats available for this murder mystery theater, so sign up now to ensure that a spot is saved for you. The cost is a mere $110.

And then, on Friday September 5th, join us at the Courtyard by Marriott Raleigh North at the Triangle Town Center (3401 Sumner Boulevard, Raleigh, NC 27616). The train gets underway at 9am. Yes, it’s at a hotel, which means that if you’re from out-of-town, you can stay here and just wander downstairs.

But wait, there’s more!!!

Come on back Saturday for the main event – Raleigh’s SQL Saturday is being held at Wake Tech Northern Campus, 6600 Louisburg Road, Raleigh, NC 27616. There are 8 tracks going on, so there’s bound to be something there that you just have to see all day long! The complete session schedule is here. (If you are staying at the Courtyard, it’s only 3 miles away.)

Related posts:

SQL Saturday RVA recap

History of the SQL crazy train

Shameless plug:

Jason and I both present regularly at SQL Saturdays. If you are organizing an event and would like to have us deliver our workshop, please contact Jason, myself, or both of us to make arrangements.

Well, here it is again. The second Tuesday of the month, which means that it’s T-SQL Tuesday. This month, Dev Nambi (blog | twitter) is hosting the party, and he has decided that the topic is to be on Assumptions. Specifically, he wants us to write about:

A big assumption you encounter at work, one that people are uncomfortable talking about. Every team has an elephant in the room.

One assumption that I have seen repeatedly is “if there is a table with a clustered index, I can select all of the rows out of that table and they will be in the order of the clustered index without having to specify an ORDER BY clause”.

So, let’s test this out. Remember, to prove that this assumption is false, all that is needed is to get the results out of order.

First up for this test: create and populate a table.

 

-- let's use a safe database
USE tempdb;
GO
-- create a table to play around with
IF OBJECT_ID('tempdb..temp') IS NOT NULL DROP TABLE temp;
CREATE TABLE temp (
    RowID INTEGER PRIMARY KEY CLUSTERED,
    MyGUID UNIQUEIDENTIFIER DEFAULT (NEWID()));
 
-- populate the table with an ascending number
-- let's try a million rows, generated with a dynamic tally table
WITH Tens    (N) AS (SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL
                     SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL
                     SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1),
     Hundreds(N) AS (SELECT 1 FROM Tens t1, Tens t2),
     Millions(N) AS (SELECT 1 FROM Hundreds t1, Hundreds t2, Hundreds t3),
     Tally   (N) AS (SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) FROM Millions)
INSERT INTO temp (RowID)
SELECT  N
FROM    Tally;
 
 
-- now, checkpoint the database (it's in simple, so all pages will now be "clean")
CHECKPOINT;
-- and let's remove the clean pages from memory
DBCC DROPCLEANBUFFERS;

 

Now let’s just do a few random searches on this data.

-- let's select some data from the table
SELECT  RowID, MyGUID
FROM    temp
WHERE   RowID BETWEEN 100 AND 200;
 
-- and some more data
SELECT  RowID, MyGUID
FROM    temp
WHERE   RowID BETWEEN 1000 AND 1200;
 
-- and some more data
SELECT  RowID, MyGUID
FROM    temp
WHERE   RowID BETWEEN 5000 AND 5100;
 
-- notice that all of these are ordered?

Now let’s run our query against the entire table.

-- now, let's get some more data
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT  RowID, MyGUID
FROM    temp;

On my system, I get the following results:

[Screenshot: query results – clustered index rows not in order]

As you can see, the results jump from RowID 1953 to 867133 – the assumption is plainly false.

So, what happened? And are you complaining about that READ UNCOMMITTED transaction isolation level? Well, it turns out that this isolation level isn’t necessary to reproduce this behavior – the following query produces the same result set:

SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
SELECT  RowID, MyGUID
FROM    temp
WITH (TABLOCKX);

Are you wondering what is happening here? Well, anytime that you are in the read uncommitted isolation level (you might be familiar with it as the NOLOCK query hint), you are telling SQL that you don’t care about data consistency, so SQL will read the data as fast as it can – which happens to be an Allocation Order Scan: it reads the pages from the data file in the order that they were allocated for this table. And, in higher isolation levels, anytime that SQL can be sure that the data won’t change underneath it (such as when you have an exclusive lock on the table, or when reading from a read-only filegroup), it will also perform an Allocation Order Scan.
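For reference, the table-level form of that isolation level is the NOLOCK hint – a minimal sketch of the same query using it:

-- NOLOCK is the per-table equivalent of the read uncommitted isolation level
SELECT  RowID, MyGUID
FROM    temp WITH (NOLOCK);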

Can we see that this is what is happening? Sure we can!

First off, re-run this query with the Actual Execution Plan turned on. You’ll get an execution plan like this:

[Screenshot: execution plan showing an unordered clustered index scan]

Notice that the scan is not ordered. (If you re-run any of the previous queries, you will see that they are ordered.) With an unordered scan, SQL will get the data as fast as it can since it doesn’t need to follow an index… and in this case, it is an Allocation Order Scan (if you run this query without the TABLOCKX query hint in the READ COMMITTED isolation level, it will still be unordered, but it can’t do an Allocation Order Scan since there isn’t a guarantee that the data can’t be changed). This can include reading pages that are already in memory prior to others (I’ve seen this happen before, and that is why I included those previous queries).

But let’s continue to see if these results are indeed in allocation order. In a separate query window, run this query (undocumented command, but Paul Randal blogged about it here) to see the pages that are allocated for this table:

DBCC IND (tempdb, 'temp', -1)

The results that I get are (unnecessary columns are removed for the sake of brevity):

[Screenshot: DBCC IND output]

And it continues, but this is enough. We are interested in the rows where PageType = 1; these are the pages where the data is stored.

Side note: Do you see a few rows where the next logical page (NextPagePID) is not the next physical page? What we have here is a fragmented index.
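If you want to put a number on that fragmentation, here is a quick check – a sketch using sys.dm_db_index_physical_stats (which, unlike DBCC IND, is fully documented):

-- quantify the fragmentation hinted at by DBCC IND
SELECT  index_id, index_level,
        avg_fragmentation_in_percent, page_count
FROM    sys.dm_db_index_physical_stats(
            DB_ID('tempdb'), OBJECT_ID('tempdb..temp'),
            NULL, NULL, 'DETAILED');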

So, let’s modify the query to return what file, page and slot that the row is being returned from:

SELECT  RowID, MyGUID, sys.fn_PhysLocFormatter(%%physloc%%)
FROM    temp
WITH (TABLOCKX);

Here we added a function to the result. The %%physloc%% virtual column returns the file/page/slot that a row is on, but in a binary format. The sys.fn_PhysLocFormatter function transforms that into something easier to read, in the format “File:Page:Slot”. (Again, this is undocumented, and Paul Randal blogged about this also here.) When this query is run, browse through the results, and you will see the pages being returned in the same order as they were allocated (from the DBCC IND statement earlier). On my system, the results that I get are:

[Screenshot: query results with %%physloc%% – pages returned in allocation order]

Browse through this result set for the earlier rows and you will see the pages returned in the same order of the physical file allocation of these pages.

In summary: under certain conditions, SQL will choose to perform an Allocation Order Scan of the data. This is not the same as the physical order of the pages in the file, but the order in which the pages were actually allocated for this data. The results will be returned in that page order, and an assumed order is not guaranteed without an ORDER BY clause.
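The fix, in code form, is trivial – if the order matters, say so. Even when the allocation order scan is available, an explicit ORDER BY forces the rows back into clustered key order:

-- the only guaranteed way to get ordered results: ask for them
SELECT  RowID, MyGUID
FROM    temp WITH (TABLOCKX)
ORDER BY RowID;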

You’re utilizing the database mirroring high-availability feature, configured in High Safety mode with automatic failover. This means that the mirroring configuration includes a witness server, so that the system can automatically fail over to the partner in the event of a failure of the principal.
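(If you want to reproduce such a configuration, here is a minimal sketch – it assumes that the mirroring endpoints already exist, and the server and database names are hypothetical:)

-- run on the mirror (partner) server first:
ALTER DATABASE SalesDB SET PARTNER = 'TCP://principal.contoso.com:5022';
-- then on the principal:
ALTER DATABASE SalesDB SET PARTNER = 'TCP://partner.contoso.com:5022';
ALTER DATABASE SalesDB SET PARTNER SAFETY FULL; -- high safety (synchronous)
ALTER DATABASE SalesDB SET WITNESS = 'TCP://witness.contoso.com:5022';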

Under this scenario, if the partner server loses its connection to the principal, it will contact the witness server to figure out what to do. In order for automatic failover to occur, both the partner and witness servers have to agree that the principal server is out of commission. There are three possible outcomes:

  1. If the witness is still in contact with the principal, then an automatic failover will not occur. The principal server will remain online, serving the application. However, the transaction log will start to accumulate transactions that cannot be removed until the connection has been re-established and these transactions have been sent to the partner.
  2. If the witness has also lost its connection with the principal, then the partner and witness servers will reach a quorum and the partner database will be brought online, becoming the new principal (specifically, it will undo any un-committed transactions and then bring the database online).
  3. If the partner server is also disconnected from the witness server, then automatic failover will not occur.

The purpose of this last outcome is to prevent what is known as a split brain – a situation where both of the servers are online, serving up results to application requests. If this were to occur, then there would be two different versions of the database, and reconciling these differences would become problematic.
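As an aside: if you end up in that last outcome and decide that availability outweighs the split-brain risk, you can force the partner online manually. A sketch, with the same hypothetical database name – run it on the partner, understanding that data loss is possible:

-- forces the partner online when the principal is unreachable (possible data loss)
ALTER DATABASE SalesDB SET PARTNER FORCE_SERVICE_ALLOW_DATA_LOSS;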

Now let’s consider what would happen if the principal server were to lose its connection to both the partner and witness servers, while the principal server itself is still online (let’s just say that the partner and witness servers are each in separate remote locations that have had a service interruption – such as a change to a network switch that has isolated these servers from the principal). With the principal server still being online, and potentially able to service at least some users, what does it need to do to ensure that the mirror does not end up in a split-brain situation?

Let’s first look at what the possible scenarios are now:

  1. The partner server is in communication with the witness server, and they have reached a quorum where the partner server has been brought online.
  2. The partner server is not in communication with the witness server, and thus it is still in a partner state.

Since the principal server has lost communication with both servers, it can’t know which situation the mirror is in. Therefore, it needs to protect the mirror from potentially ending up in a split-brain situation. It does this by assuming that the partner server has been elevated to become the mirror’s principal, and so it goes into the partner state. Specifically, this means putting the database into recovery in order to be able to receive transactions from the new principal server.

Is this what actually happens? Well, it’s easy enough to test this out. On three different instances of SQL (they can even be on the same server), establish a mirror in high safety mode with a witness server. After it is established, ensure that SQL Agent is shut down (if running, it will restart SQL), and use Task Manager to end the sqlservr.exe processes on the partner and witness instances. Now look at the status of the database in SSMS, and you will see that its status is “Principal, Disconnected / In Recovery”. Alternatively, you can run the following query to see what the status is:

SELECT  sd.name, sd.state_desc,
        sdm.mirroring_role_desc,
        sdm.mirroring_state_desc,
        sdm.mirroring_witness_state_desc
FROM    sys.databases sd
        JOIN sys.database_mirroring sdm ON sd.database_id = sdm.database_id;

To wrap up, mirroring prevents a split-brain situation in two ways:

  1. The partner and witness servers must be in communication with each other, and both must have lost communication with the principal, before the partner server can be elevated to principal of the mirror.
  2. If the principal server becomes disconnected from both the partner and witness servers, it goes into the partner state and puts its database into recovery.

After installing SSAS using the tabular model, trying to start the service fails with the error “The service cannot be started: The following system error occurred: A required privilege is not held by the client.”

Examining http://msdn.microsoft.com/en-us/library/ms175371.aspx shows that the only security privilege needed for the SSAS service account is the “Log on as a service” privilege.

Alas, this is not accurate. When running in the tabular model mode, the service account also needs the “Increase a process working set” privilege.

In Windows, under a default installation, the local group “Users” has this privilege (so any user on the system has it), as seen below:

[Screenshot: the Local Security Policy entry for the “Increase a process working set” privilege]

However, many companies lock down their security policies and remove this privilege. This is apparently a common problem, judging by the number of requests about this issue on the internet. The advice generally given is to just run SSAS as an administrator – which is not a good choice, and may not be an option in your company. Now there is a real solution: grant the SSAS service account the “Increase a process working set” privilege.

I’ve filed a connect item to correct the documentation at https://connect.microsoft.com/SQLServer/feedback/details/869677/ssas-privileges-incorrectly-specified – please consider going out there and voting for it.

Wow, what a month. On March 22, 2014, the SQL Saturday train came into Richmond VA (RVA). I was one of the organizers of the event, and what a ride we had.

The rough part of the ride started earlier in the week. A freak spring snowstorm rolled through the region. Richmond was spared, but points just north of us (Washington D.C., for instance) got 6 inches of snow. This caused a few speakers to cancel as their plans got rescheduled. As we got closer to the event, a few other sessions were canceled. Now, when you are having an event with people coming to speak, and other people coming to see their presentations, this is not what you want to happen. At one point, we had seven vacancies in our speaking schedule. But the fun wasn’t over yet – on Saturday, two additional speakers failed to show up. They didn’t even call or email anyone to say that they would not be attending. We juggled things around, coaxed speakers into doing additional sessions, and continued on.

On Friday, we had our first ever RVA precon. About 50 people showed up for all-day training in two precon events. One precon was conducted by Andy Leonard (SSIS Design Patterns), and the second by Jason Brimhall and myself (Murder They Wrote). Since I was busy with our own precon, I couldn’t drop in to see how things were going in Andy’s session. All I can say about ours is “WOW!” What a day of learning. We have heard from several of our attendees since, and they have been busy implementing various things to fix problems. Some comments that have been made about our precon session are:

Feeling like a newbie with #SQLServer while attending @DBAWayne and @sqlrnnr precon at #SQLSatRVA. Great class!

It was an eye and mind opening experience awesome session!

Friday’s session went beyond my expectations for the day.

The information was very useful.  I sat down with my team yesterday to discuss what I learned.  I am certain that we will see implementation of a few items…

Needless to say, I’m extremely pleased to have this feedback. Now I’m eager to do this precon at other events!

Friday night, we had our speaker / volunteer dinner. It was great to reunite with several of my #sqlfamily, and to meet folks that I had never met before and bring them into my #sqlfamily.

After the precon all day Friday, Saturday morning came way too early. Shortly after arriving, the attendees started showing up. Even without coffee yet, there was an air of excitement all around! This was going to be a perfect day! The weather cooperated too – a beautiful spring day, with the temperature eventually reaching 70 degrees. The speakers were showing up early, all equipment was functioning, registration and sign-in were humming along, and breakfast was being efficiently served. Changes we had made from last year were really working out. The first sessions started, and we could take a breather. We had a few Microsoft Premier Field Engineers (PFEs) down from DC, and they manned a “SQL Clinic” room to answer questions. I was really curious about how this would work out for us.

The second session of the day, I was presenting a fairly new presentation – “Crazy Things Developers Do”. As is typical of my presentations, it has a lot of demos. And as I was setting up, I ran into a technical glitch. Even though this was the same room that I had presented in all day on Friday, my laptop would not sync up with the video equipment. It had worked just fine for the speaker before. I tried rebooting. The facility tech guys checked it out, and couldn’t find anything wrong. So… I ended up doing my presentation with no props. I talked my way through it. At one point, I had everyone gather around to watch one demo that I just had to do, but the rest was all talking, with no slides or anything. In retrospect, I should have used the whiteboard in the room – I guess that I was too stunned and shell-shocked to even recognize that it was there. The attendees said it was still a good session, and that they learned. But oh, how I wish the tech stuff would have worked. Luckily, everything worked out for all of the other speakers for the rest of the day, and this was the only glitch that we had!

Lunch time came around, and again everything went smoothly. Did I mention that it was a beautiful day outside? Most attendees took advantage of it, and ate their lunch outside. You just couldn’t ask for more. A nice spring day, relaxing outside, chatting with friends.

The rest of the day went by too fast. Before long, we were having the closing, and raffling off the prizes from the vendors. I checked in with the Microsoft PFEs, and found that over 20% of the attendees had stopped in and asked questions that they had been able to answer. AWESOME!

People come to a SQL Saturday to learn. Which means that we need people in rooms presenting topics that people want to see. I want to thank all of the speakers that came to SQLSatRVA to present a session or two. I especially want to thank all of the ones that came from outside of the RVA area! A SQL Saturday just can’t be done without these great folks – you are what the attendees are coming to see. Words alone cannot express my appreciation for the time you have spent preparing the presentation(s) that you have delivered, and the time and expense that you have incurred to come to RVA to do your presentation. Thank you so very much!

There is another recurring theme to this post… smooth functioning. This doesn’t just happen – it takes a lot of effort from a lot of special people – people who are passionate about SQL Server, learning, and helping others to learn. People who give of their time to help ensure that the event functions smoothly. Some do work ahead of time: planning the event, getting sponsors, figuring out the logistics for the location and food, working with the sponsors on advertising and ensuring that their equipment is there ahead of time, and organizing the volunteers. But this isn’t all… there is a lot of work being done the day before and the day of the event, from room monitors ensuring that speakers have what they need, to spouses who come in to help with registration and with keeping breakfast, lunch, and break times running smoothly. So, I’d like to end this post with a special THANK YOU! to all of these wonderful volunteers, and a picture of some of them. Of course, with our community, you know it won’t be a serious picture…

 

There’s a new SQL Server user group starting up over in Lynchburg, VA. Its first meeting is February 27, 2014 (6pm-8pm), and I’m honored to be presenting there for their kickoff meeting. I’ll be presenting “Table Vars & Temp Tables – What you NEED to Know!” Visit http://lynchburg.sqlpass.org/ for more info (abstract, directions) and to sign up, so that enough food can be brought in!

Hope to see you there!

 

On the second Tuesday of each month, the SQL Server universe all comes together to blog about a topic. Known as T-SQL Tuesday and started by Adam Machanic (Blog | @AdamMachanic), this month’s T-SQL Tuesday topic is hosted by friend and fellow MCM Jason Brimhall (@sqlrnnr), and he wants us to blog about bets. Specifically: “I want to know about the gambles within your databases or not within your databases that you have seen over the years. When has somebody (a CTO, Developer, Business User) placed a bet that was far too risky in your opinion?”

A friend of mine, someone who is a regular speaker at SQL Saturdays, was recently approached by an attendee of one of his sessions. This attendee was looking for help in setting up a cluster. But this would all have to be done by screen sharing and passing control over to my friend – connecting directly to the servers wasn’t an option. Further discussion revealed that the person was doing this without the company’s knowledge – and that the company wasn’t to be told. For, it seems, the person was hired as a senior DBA, but isn’t qualified to do the tasks, and doesn’t want the company to know that (s)he doesn’t have the skills necessary to do the job.

Wisely, my friend turned down this “opportunity”. However, this blog post is to be about bets, so let’s look at some of the bets that were made, or were being contemplated. First off, the company placed a bet on hiring this person. I will assume that they did due diligence, doing things like performing background checks and conducting an interview. Their bet was that the person they were hiring for a senior DBA position actually had the skills of a senior DBA. At this point, they have no idea how bad a bet they made.

What could they have done differently so that they would have had a better outcome? From my point-of-view, their interview process was not thorough enough to adequately determine whether their candidates had the required skill set. If they don’t have on-staff personnel competent to determine this, they should have spent a little bit of money to hire a competent senior DBA to evaluate and assess their candidates. (Small plug: I offer such a service – you tell me what you need, and I’ll grill the candidates (complete with a lab) to ensure that they know and can do what you need them to be able to do. Contact me for details.)

Other bets that were being contemplated:

  • The attendee was willing to let someone whom they had only met at a SQL Saturday do a task that they couldn’t do.
    • How do they know whether the person can do the job and wouldn’t mess things up?
  • The attendee was willing to let an unauthorized person into their company’s systems.
    • How do they know that this person can be trusted while on the systems?
    • They were willing to take this risk with the potential of being fired.
  • If my friend had proceeded with accessing the company’s systems, there are several bets that he would have been taking also:
    • Legal – would the company take legal actions for his unauthorized access to the company’s systems?
    • IMO, it’s a sure bet that there would be security violations occurring. What would the repercussions of those be?
    • If word of his actions were to spread, there would be a loss of reputation – potentially causing future loss of clients.
      • While the attendee might know something of my friend’s background, my friend knows nothing of the attendees. Other than that the attendee is willing to circumvent ethics to keep from looking bad. What’s to keep the attendee from telling others of what my friend did?
    • He might face the loss of insurance – this would mean that he couldn’t consult.

There were a lot of bets being placed in this situation. The company lost – even if they don’t know it yet. The attendee lost – and probably knows it. Thankfully, my friend has ethics and didn’t take this bet, so he didn’t lose.

I’ve been using SQL Prompt for years. In September 2013, version 6 was released (quickly followed by 6.1 in late September and 6.2 in December), so I thought I would highlight some of the new features available in it.

So let’s start off with a table showing the new features and the version in which each was implemented.

| Version | Feature | Description |
|---------|---------|-------------|
| 6.0 | Tab History | A history of query tabs opened in SSMS |
| 6.0 | Synonym support | Synonyms (including columns from a synonym’s base object) are now suggested |
| 6.0 | CLR support | CLR functions and stored procedures are now suggested |
| 6.0 | XML functions support | XML functions are now suggested and fully supported |
| 6.0 | SQL Azure support | SQL Azure databases are now supported |
| 6.0 | Custom snippets folder | Share snippets with other SQL Prompt users by using a shared snippets folder |
| 6.0 | More snippet placeholders | You can now use $PASTE$, $DBNAME$ and $SERVER$ in SQL Prompt snippets |
| 6.0 | Snippet placeholder format | You can specify custom $DATE$ and $TIME$ formats |
| 6.0 | Indent using tab characters | You can customize SQL Prompt formatting to indent text using tab characters |
| 6.0 | JOIN clause improvements | All relevant tables are now suggested in a join clause |
| 6.0 | Copy from object definition box | You can now press CTRL+C to copy the contents of the object definition box |
| 6.1 | Server/database connection filtering | You can now specify the databases you want SQL Prompt to load suggestions for |
| 6.1 | Major performance improvements | |
| 6.1 | Copy button | Added the copy button back to the object definition box |
| 6.1 | Synonym support option | Added an option to disable synonym support |
| 6.1 | SQL Azure ALTER support | Autocompletes ALTER statements with SQL Azure |
| 6.2 | SQL Server 2014 support | SSMS and SQL Server 2014 CTP2 |
| 6.2 | Phrase completion | SQL Prompt now completes entire SQL phrases, for example PRIMARY KEY instead of just the first keyword PRIMARY |
| 6.2 | Insert suggestions with “;” | Suggestions can now be inserted by pressing the “;” key |
| 6.2 | Tab handling | Tabs opened by SQL Prompt are no longer marked as unsaved |
| 6.2 | Resizable options dialog | The Options dialog box is now resizable |
| 6.2 | MERGE formatting | Improved formatting for MERGE statements |

And, of course, bug fixes.

So let’s look at what are some of my favorite new features. First up – new features in v6.0.

The added support for synonyms, CLR functions and XML functions seems to me to be a natural extension of the product. If you use these features, and you use SQL Prompt, you need SQL Prompt to offer suggestions for them.

The custom snippets folder is all about being able to share the snippets. Point all developers to a network share, and they can all use the same set of snippets. Snippet changes made will be available to everyone at once (though they might have to refresh suggestions first). If you have different teams with different needs, (dev team 1, dev team 2, dbas, etc.) they can each point to a different share.

In thinking about the needs of multiple teams, I think that they would have some common needs across teams. Maybe someday we’ll have the ability to specify a hierarchy of shared folders, so that you can have snippets common to everyone, and then snippets common to each team. Of course, all the dev teams could need something different from the DBAs, so there is the hierarchy. Is this worthwhile? I don’t know… I just now came up with this idea and haven’t really thought about it much. If this gets implemented, the biggest issue I foresee is duplicate-named snippets – there would need to be a mechanism to select which one wins.

The changes for snippet placeholders are just neat. $PASTE$ will insert the contents of the clipboard. $SERVER$ and $DBNAME$ will be replaced with the name of the connected SQL Server instance and database. I can see some potential uses for these. But what I really like are the formatting options available for $DATE$ and $TIME$. These are expressed within parentheses just before the ending $, and follow the .NET formatting conventions for dates and times. I use these in a snippet for creating what I term a remark block – a commented-out section at the top of code for tracking changes to the code. The snippet I use is:

/******************************************************************************
$CURSOR$Description of code
*******************************************************************************
                               MODIFICATION LOG
*******************************************************************************
$DATE(yyyy-MM-dd)$ WGS Initial creation.
******************************************************************************/

 

The resulting code will look like:

/******************************************************************************
Description of code
*******************************************************************************
                               MODIFICATION LOG
*******************************************************************************
2013-12-28 WGS Initial creation.
******************************************************************************/

 

(Can you guess on what day I created this?)

Next up are the new features in the point releases. The big one in v6.1 is the ability to control which databases SQL Prompt will load suggestions for. If you are in a small database, and your query uses three-part naming to reference an object in another database, SQL Prompt will load that database. While it’s loading, you have to wait. If it happens to be that database (you know, the one where every customer has their own table and set of procedures, and you have 250,000 customers), this can take a while – and take a serious bite out of the memory on your workstation. So you can exclude this database from suggestions, and live happily ever after. (Or until Red-Gate figures out how to use a memory-optimized table to hold all of this information.)

The big new feature in v6.2 is phrase completion. In previous versions of SQL Prompt, you would be prompted for keywords one at a time. (Sounds like a cursor… row-by-agonizing-row.) But there are places where, when you use the first word, you will always use the second word. I’m thinking of “FOREIGN KEY” right now – anywhere you use “FOREIGN”, you will also use “KEY”. Selecting them together is just another speed enhancement in writing your code (which is what SQL Prompt is all about in the first place!). The example that Red-Gate gives, “PRIMARY KEY”, isn’t so clear cut – there are places where you will use “PRIMARY” without “KEY”, like when specifying filegroups. However, in the correct context, the phrase is also valid. Since SQL Prompt has already figured out the valid keywords for the current context, phrase completion is, IMO, what should have been there in the first place. (I have to admit that I requested this feature – only to find out that they were already working on it!)
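To make that context point concrete, here’s a quick illustration (the object names are made up):

-- here PRIMARY is always followed by KEY, so phrase completion can offer both words:
CREATE TABLE dbo.Orders (OrderID int NOT NULL PRIMARY KEY);
-- but here PRIMARY stands alone, naming the default filegroup:
CREATE DATABASE Sales
ON PRIMARY (NAME = Sales_data, FILENAME = 'C:\SQLData\Sales.mdf');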

A quick table of the SQL Prompt release notes: