Wayne Sheffield

My blog about SQL Server



October… the month of Halloween. Spooky events. SQL Server has all kinds of Halloween-ish themed activities like Split Brains, Zombies, Ghosts, Phantoms, and of course Halloween itself. In this post, we’re going to mill about with Phantoms a bit.

Phantoms in SQL Server are actually called “Phantom Reads”. This ectoplasmic phenomenon manifests itself when an identical query, run multiple times within a single transaction, returns a different result set each time it is run. A Phantom Read is one of the transaction isolation level concurrency events. The read uncommitted, read committed and repeatable read transaction isolation levels may exhibit Phantom Reads; the serializable and snapshot transaction isolation levels are not susceptible to this phenomenon.

Let’s set up a demo to see this in action. First, let’s create our sample data:
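A minimal sketch of such a table (the table and column names here are placeholders, not necessarily the originals):

    CREATE TABLE dbo.PhantomDemo
    (
        ID         INTEGER CONSTRAINT PK_PhantomDemo PRIMARY KEY CLUSTERED,
        SpookLevel INTEGER NOT NULL
    );

    INSERT INTO dbo.PhantomDemo (ID, SpookLevel)
    VALUES (1, 10), (2, 20), (3, 30);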

Now, let’s test phantom reads in the repeatable read transaction isolation level. Open the following 2 queries in separate query windows. Run Code Script 1, and within 10 seconds run Code Script 2.
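Code Script 1, sketched out: a single repeatable read transaction that reads the data twice, ten seconds apart, and then reads once more after committing.

    -- Code Script 1 (sketch)
    SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
    BEGIN TRANSACTION;
        SELECT ID, SpookLevel FROM dbo.PhantomDemo;  -- result set 1
        WAITFOR DELAY '00:00:10';
        SELECT ID, SpookLevel FROM dbo.PhantomDemo;  -- result set 2
    COMMIT TRANSACTION;
    SELECT ID, SpookLevel FROM dbo.PhantomDemo;      -- result set 3

And Code Script 2: an update running from the second connection.

    -- Code Script 2 (sketch)
    UPDATE dbo.PhantomDemo
    SET    SpookLevel = SpookLevel + 5
    WHERE  ID = 2;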


The repeatable read transaction isolation level guarantees that other transactions cannot update or delete the rows that have already been read, so the second query is blocked until the transaction in Code Script 1 has finished. You can see that the second result set is identical to the first one. Once the transaction completes, the update in Code Script 2 can run, and the third result set from Code Script 1 shows that the result set has now changed. However, the repeatable read transaction isolation level still allows inserts, which means that you can still have a phantom read. Let’s see this by changing Code Script 2 to:
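Something along these lines – an insert instead of an update:

    -- Code Script 2, revised (sketch)
    INSERT INTO dbo.PhantomDemo (ID, SpookLevel)
    VALUES (4, 40);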

Run both code scripts again (Code Script 1 first, and Code Script 2 within 10 seconds). This time, you will see that Code Script 2 completes immediately without being blocked, and when Code Script 1 finishes, it has spawned additional data in its result set. A phantom read in action.

If you want to hide from phantom reads completely, then you’ll need to use either the serializable or snapshot transaction isolation levels. Both of these have the same concurrency effects: No dirty reads, non-repeatable reads, or phantom reads. The difference is in how they are implemented: the serializable transaction isolation level will block all other transactions affecting this data, while the snapshot isolation level utilizes row versions to create connection-specific versions of the table for the transaction – all of these row versions will cause increased activity in the tempdb database. Let’s take a look at how the snapshot isolation level will eradicate the phantom reads. First off, we need to modify the database to accept this isolation level.
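Enabling it is a one-line change (substitute the name of your test database):

    ALTER DATABASE SandBox SET ALLOW_SNAPSHOT_ISOLATION ON;  -- database name assumed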

To see this in action, we will once again have two queries. Run the code from Code Script 3 in one query window, and within 10 seconds run the code from Code Script 4 in another query window.
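Sketches of these two scripts, following the same pattern as before:

    -- Code Script 3 (sketch)
    SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
    BEGIN TRANSACTION;
        SELECT ID, SpookLevel FROM dbo.PhantomDemo;  -- result set 1
        WAITFOR DELAY '00:00:10';
        SELECT ID, SpookLevel FROM dbo.PhantomDemo;  -- result set 2: identical to 1
    COMMIT TRANSACTION;
    SELECT ID, SpookLevel FROM dbo.PhantomDemo;      -- result set 3: the inserted row appears

    -- Code Script 4 (sketch) – run from a second connection
    INSERT INTO dbo.PhantomDemo (ID, SpookLevel)
    VALUES (5, 50);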

Notice that Code Script 4 executed immediately; however, the row that it inserted was not returned in the result sets from Code Script 3’s transaction.

And now you know how to dodge phantoms.

This post is re-published from my original post on SQL Solutions Group.

Page Splits 101

In SQL Server, a page split occurs on index pages when a row is required to be on a certain page (because of the index key), and there isn’t enough room on the page for it. This can result from either an insert operation or an update operation. When this occurs, that page is split into two pages, with roughly half of the rows of the original page on each page, and the row is then put into the proper page. It is possible for a page split to cascade, causing pages at higher levels of the index to undergo page splits also. Furthermore, all of the page allocations and the data movement are logged in the transaction log.

Paul Randal has defined two different types of page splits. Paul calls the first type of page split a “good” page split, where the storage engine has to add a new page on the right hand side of the index’s leaf level. If you think of a table with an identity column, where the last page for the index for the identity column is too full to hold a new row, then adding a new row will allocate a new page so that the row can be inserted. Paul calls the second type of page split a “nasty” page split, which is when a row expands and the page doesn’t have enough space to hold the changed data, or if a new row needs to go on the page and there isn’t room for it.

In my book, the “good” page split isn’t really a page split, it’s just a new page allocation. However, this is deemed a page split in SQL Server, therefore this is the type of page split that we want to have happening.

Identity Columns

A recent SQLSkills newsletter has a discussion about a table that uses an integer identity column, and running out of values. Under normal usage (and the default unless otherwise specified), an identity column starts with the value of 1 and increments by 1. If you have enough rows that you exhaust the positive values, you need to do something so that your application will continue to work. Obviously, the best thing to do is to change this integer column into a bigint column. However, with over 2 billion rows in the table, a long maintenance window is needed to perform this conversion. What if you need to do something now, before this maintenance window? There are another 2+ billion negative values available in the integer data type, so we will use those.

In order to do this, the identity column needs to be changed. It can either be changed to start at the most negative value and increment by one, or start at -1 and be decremented by one. In other words, it can either be set to (-2147483648 , 1), or be set to (-1, -1).
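The first option can be done in place with a reseed; for example (table name assumed):

    -- Sketch: new rows will continue from the bottom of the integer range
    DBCC CHECKIDENT (N'dbo.MyBigTable', RESEED, -2147483648);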

Page Splits on Identity Columns

In considering which of these methods is preferred, we need to consider whether page splits impact these methods – especially nasty page splits. Furthermore, how will index maintenance affect each choice? So let’s think this through.

When there are negative values in this column, and the index is rebuilt, there will be a page with both negative and positive values in it. If the identity column is set to (-1, -1), there won’t be a gap (excluding the 0) in the values, and newly added rows will get a new page allocated – a good page split. If the identity column is set to (-2147483648, 1), then there will be a full page containing both the most recently used (most negative) identity values and the values starting with 1 – a rather large gap between them.

When a new row is added, it will need to be added into the gap on this page, which will need to be split so that the new row can fit in before the value of 1. As more rows are added, the page with the value of 1 will again be used, and then need to be split again. As long as there are rows to be added, this cycle will continue. Therefore, considering the page splits, this choice seems to be a bad choice.

Well, that is my reasoning for how things will work. However, it’s best to test out your theory – especially if you are disagreeing with Paul. So, I will create a table with an identity column (set to 1, 1), and insert some rows. I will then change the identity column to the values to be tested, and add some more rows. I’ll throw in an index rebuild during this mix so that there will be a page with both the positive and negative values, and then insert more rows and see how this affects the page splits. With this in mind, the code to set up this test environment is as follows:
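Something along these lines (the schema is an assumption – a wide fixed-width filler column so that only a handful of rows fit on each page):

    -- Sketch: the original plan – one table with a standard (1, 1) identity
    CREATE TABLE dbo.LotsaSplits
    (
        ID     INTEGER    IDENTITY (1, 1),
        Filler CHAR(1100) NOT NULL,
        CONSTRAINT PK_LotsaSplits PRIMARY KEY CLUSTERED (ID)
    );

    INSERT INTO dbo.LotsaSplits (Filler)
    SELECT TOP (100) 'X' FROM sys.all_columns;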

Reset the Identity Column

Next, the identity column needs to be reset to use the new seed / increment value – and this is where I ran into a problem with the test. While I can use DBCC CHECKIDENT to change the seed value, there is no way to change the increment value. Therefore, I can’t test my preferred method by changing the identity column to (-1, -1). However, I can simulate it by creating a table with the initial identity value at (-1, -1) and then using SET IDENTITY_INSERT to put in the positive values. Furthermore, I’ll need a second table to test the positive increment, so I’ll just create it with the most negative seed value and insert the positive values into it. The new code to set up the environment is:
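A sketch of that setup (again, the exact schema is an assumption; note the reseeds after loading the explicit values, since IDENTITY_INSERT can bump the current identity value):

    CREATE TABLE dbo.LotsaSplits1
    (
        ID     INTEGER    IDENTITY (-1, -1),
        Filler CHAR(1100) NOT NULL,
        CONSTRAINT PK_LotsaSplits1 PRIMARY KEY CLUSTERED (ID)
    );
    CREATE TABLE dbo.LotsaSplits2
    (
        ID     INTEGER    IDENTITY (-2147483648, 1),
        Filler CHAR(1100) NOT NULL,
        CONSTRAINT PK_LotsaSplits2 PRIMARY KEY CLUSTERED (ID)
    );

    SET IDENTITY_INSERT dbo.LotsaSplits1 ON;
    INSERT INTO dbo.LotsaSplits1 (ID, Filler)
    SELECT TOP (100) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)), 'X'
    FROM sys.all_columns;
    SET IDENTITY_INSERT dbo.LotsaSplits1 OFF;
    DBCC CHECKIDENT (N'dbo.LotsaSplits1', RESEED, 0);  -- next identity value: -1

    SET IDENTITY_INSERT dbo.LotsaSplits2 ON;
    INSERT INTO dbo.LotsaSplits2 (ID, Filler)
    SELECT TOP (100) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)), 'X'
    FROM sys.all_columns;
    SET IDENTITY_INSERT dbo.LotsaSplits2 OFF;
    DBCC CHECKIDENT (N'dbo.LotsaSplits2', RESEED, -2147483648);  -- continue from the bottom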

The next part of this test is to create a page that has both positive and negative values in it. Since this test is to test out the second option, I’ll do this just for the second table. This code will insert a few rows, and then rebuild the index on this table. Finally it will show how the data on the page looks.
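A sketch of this step – here I lean on the (also undocumented) sys.fn_PhysLocCracker function to crack the %%physloc%% value:

    INSERT INTO dbo.LotsaSplits2 (Filler)
    SELECT TOP (3) 'X' FROM sys.all_columns;

    ALTER INDEX PK_LotsaSplits2 ON dbo.LotsaSplits2 REBUILD;

    -- Rows per page, with the starting and ending ID values on each page
    SELECT plc.file_id, plc.page_id,
           COUNT(*)   AS RowsOnPage,
           MIN(ls.ID) AS StartingValue,
           MAX(ls.ID) AS EndingValue
    FROM dbo.LotsaSplits2 AS ls
    CROSS APPLY sys.fn_PhysLocCracker(%%physloc%%) AS plc
    GROUP BY plc.file_id, plc.page_id
    ORDER BY MIN(ls.ID);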


In this code, I use the undocumented virtual system column %%physloc%% to get the physical file/page/slot that a row is on, and then this binary value is cracked to return the actual file/page/slot. I then get the number of rows on this page, and the starting/ending values, and report by page. It can be seen that a page was created that will need to be split when more rows are inserted.

Checking for a page split

Continuing on, let’s add more rows to each table, and see how many page splits occurred. This will be performed for each table one at a time. This code and the results it produces is:
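A sketch of this step, counting splits from the transaction log (this assumes a dedicated test database in SIMPLE recovery, so that a CHECKPOINT leaves mostly our own activity in the log; LOP_DELETE_SPLIT entries mark page splits):

    CHECKPOINT;

    INSERT INTO dbo.LotsaSplits1 (Filler)
    SELECT TOP (50) 'X' FROM sys.all_columns;

    SELECT COUNT(*) AS PageSplits
    FROM sys.fn_dblog(NULL, NULL)
    WHERE Operation = N'LOP_DELETE_SPLIT';

    -- ... and then the same CHECKPOINT / INSERT / count for dbo.LotsaSplits2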


Now this is unexpected – there is only one page split on the LotsaSplits2 table, where I was expecting many more. Let’s look at what data is on these pages for these tables:
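This is the same per-page report as before, run against each table:

    SELECT plc.file_id, plc.page_id,
           COUNT(*)   AS RowsOnPage,
           MIN(ls.ID) AS StartingValue,
           MAX(ls.ID) AS EndingValue
    FROM dbo.LotsaSplits1 AS ls   -- and again for dbo.LotsaSplits2
    CROSS APPLY sys.fn_PhysLocCracker(%%physloc%%) AS plc
    GROUP BY plc.file_id, plc.page_id
    ORDER BY MIN(ls.ID);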


The second result set (for the identity column set at (-2147483648 , 1) ) shows that the page split occurred at the value of 1. If we can get the page to split where it starts with 1, then inserted rows won’t go onto this page, thus meaning that additional page splits won’t occur. It seems like I really got lucky here. Since the page can hold 7 rows, I repeated this test several times with between 1 and 6 filler rows, and it always split at the value of 1. The page split algorithm seems to be smart enough to realize that this gap exists, and that there will be more page splits occurring unless there is a page that starts with that value of 1. In talking with Paul about this observed behavior, he replied to me:

You’ll get at most two nasty page splits. The Access Methods is smart enough on the second split to split at the +ve/-ve boundary and no more nasty page splits occur. Yes, it seems somewhat counter-intuitive until the observation about the smart page split.

So there we go – the page split algorithm is smart enough to look at the data and to try to prevent future page splits if possible. Very nice!

Did you notice earlier that LotsaSplits1 also showed a page split? My assumption is that since there is still a gap to hold a value of zero, that the page was used and was then split. I tested this by changing the setup code to use “N-1” instead, so that it will start with the value of zero. However, this still incurs a page split. Looking closely at Paul’s definition of a good page split, notice that he states that a good split is when adding a new page on the right hand side of the index’s leaf level. It so happens that we are adding to the left hand side. I still think that all that should be occurring is a new page allocation, yet these tests prove that internally SQL Server is performing a page split.

TCP Chimney Offload transfers network traffic workload processing from the CPU to a network adapter that supports TCP Chimney Offload. This feature was introduced with Windows Server 2003 SP2, where it was called the Microsoft Scalable Networking Pack (SNP). Since Windows Server 2008, these features have been a base part of the operating system, so they no longer go by that name. To utilize this feature, the network adapter (and driver) must support it, and both the operating system and the network adapter must have this setting enabled.

This feature is not suitable for all applications. Microsoft says (at http://technet.microsoft.com/en-us/library/gg162709%28v=WS.10%29.aspx):

Because of the overhead associated with moving TCP/IP processing to the network adapter, TCP Chimney Offload offers the most benefit to applications that have long-lived connections and transfer large amounts of data. Servers that perform database replication, function as file servers, or perform backup functions are examples of computers that may benefit when you enable TCP Chimney Offload.

Default state by Windows Version

Across the different operating system versions, this feature defaults to different states:

Windows OS | Default
Windows Server 2003 | enabled
Windows Server 2008 | disabled
Windows Server 2008 R2 | automatic
Windows Server 2012 | disabled

With all of these changes to the OS, which setting should we use for SQL Server? In general, for all of these operating systems, I recommend that TCP Chimney Offload be disabled – because you can see odd connectivity problems in any other state. Notice in the above quote that Microsoft says that this feature is best used for applications with long-lived connections that transfer large amounts of data – hopefully your OLTP database is performing lots of short-lived connections and they are not transferring large amounts of data (if they are, I can help you with that!). Some of the error messages that you can encounter are:

[Microsoft][ODBC SQL Server Driver][DBNETLIB] General Network error. Check your network documentation

ERROR [08S01] [Microsoft][SQL Native Client]Communication link failure

System.Data.SqlClient.SqlException: A transport-level error has occurred when sending the request to the server. (provider: TCP Provider, error: 0 – An existing connection was forcibly closed by the remote host.)

These errors are not exclusive to having problems with the TCP Chimney Offload. Additionally, they may only occur during times of high network load on the server.

Determining the Current TCP Chimney Offload Setting, and Disabling It

In typical Microsoft fashion, this also varies between different operating system versions.

For Windows Server 2003, you need to check the registry. From a DOS command prompt, run:
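    reg query HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v EnableTCPChimney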

If disabled, this will have a value of 0x0; if enabled, it will have a value of 0x1.

To disable, from an elevated DOS command prompt, run:
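    reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v EnableTCPChimney /t REG_DWORD /d 0 /f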

From Windows Server 2008 on, you can check the setting with this DOS command:
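    netsh int tcp show global

Look for the “Chimney Offload State” line in the output.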

To disable it run the following from an elevated DOS command prompt:
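    netsh int tcp set global chimney=disabled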







This post is re-published from my original post on SQL Solutions Group.
Sure wish we could search this site


SQL Saturdays are a neat idea – a day-long event of free training, encompassing several one-hour sessions. These events normally have 5 or 6 time slots during the day, so an event with 6 time slots and 6 concurrent sessions in each offers 36 hours of training material. There is usually more than one event going on most weekends; if we assume 100 events at 36 sessions each, then there are 3,600 sessions in a year. Since the speakers normally post their presentation materials and demo scripts on the site, the site itself has become a resource for additional training material. While the program is fantastic, the problem is that there isn’t a way to search the site for sessions.

Search the SQL Saturday sessions

Therefore, I’ve developed a PowerShell script that will search the SQL Saturday site. It searches for sessions where the search phrase is in either the session title or the abstract. Without further ado, I introduce Get-SQLSaturdaySessionTopicSearch (which is available in my Code Library).


Name | Required? | What it does
-SearchTopic | Required | The phrase that you are searching for.
-StartDate | Optional | The first event date that you want to search. This defaults to 30 days before the current date.
-EndDate | Optional | The last event date that you want to search. This defaults to 30 days after the current date.
-EventNumber | Optional | The first event number where you want to start searching. Defaults to 500.
-ExportFile | Optional | The path and filename of the file to export the results to.
-DebugLevel | Optional | Controls the display of debugging and progress messages. Defaults to 1.

-DebugLevel values:

1. Displays the SQL Saturday URL as it is being processed.
2. Displays the event name and date if the feed for that SQL Saturday could be opened.
3. Displays session title for matched sessions.
4. Displays all session titles.


.\Get-SQLSaturdaySessionTopicSearch.ps1 -SearchTopic 'Query Store'

.\Get-SQLSaturdaySessionTopicSearch.ps1 -SearchTopic 'Query Store' -ExportFile 'C:\Temp\SQLSatSearchResults.csv'


The output to the screen is the Event #, Speaker, Session Title and URL for the presentation.

The generated export file will also include the event name and session abstract. Additionally, the URL will be encased with the Excel HYPERLINK() function. When the export file is opened up with Excel, clicking the URL will open your browser to the session information, where the presentation material can be downloaded.

I hope that you get a lot of use out of this script.

Free SQL Training

Are you going to the PASS Summit this year? (and if not… why not?)

Are you coming in to Seattle on or before Monday?

Are you not attending any of the wonderful precons at the PASS Summit on Monday?

Then I have a great deal for you.

Free SQL Training!

Several friends have teamed up to run a day of SQL training (while the training is free, the hotel we are using requires us to use their catering for lunch, so we are charging for lunch). Join us on Monday, October 24th, 2016 at the SpringHill Suites located at 1800 Yale Avenue, Seattle, WA 98101. This is about 4-5 blocks (just a bit over a half mile) from the Convention Center, so it’s just a short walk for anyone in the area. The only catch is that it starts promptly at 9am, so no late-night rabble-rousing if you want to spend your day learning!

Among the speakers, we have 4 Microsoft Certified Masters (MCMs), 4 MVPs, 4 Friends of RedGate and 5 Threadizens. These SQL experts are:
Chad Crawford
Gail Shaw
Grant Fritchey
Jason Brimhall
TJay Belt
Wayne Sheffield

Just take a look at these speakers! Wow, what a great list! However, the speakers won’t make it a great day by themselves. While they do come close, this day is really all about the training. Topics for the day include:
An Introduction to SQL 2016’s Temporal Tables
From on-prem to Azure Sql Data Warehouse using Redgate Data Platform Studio
Impact Analysis – Discovering the true impact of database changes
PowerBI, from a DBA
Waits Analysis Deep Dive
The Adhoc XE Showcase: How XEs can help troubleshoot specific things from these other sessions

Are you really eager to go to this now? Well, just jump over to our registration page, and sign on up! We look forward to seeing you there!

Rename SQL Server

Sometimes you make a mistake, and forget to rename a sysprep’d server before installing SQL Server. Or perhaps your corporate naming standard has changed, and you need to rename a server. Maybe you just like to waste the time involved in troubleshooting connection issues after a server rename. In any case, you now find yourself where the name of the SQL Server is different from the physical name of the server itself, and you need to rename SQL Server to match the server’s physical name.

You could always rerun the setup program to rename the server. Fortunately, SQL Server provides an easier way to do this. You just need to run two stored procedures: sp_dropserver and sp_addserver. The following script demonstrates this concept. First, it will get the current SQL Server name, the name of the computer, and the name of the SQL Server instance. Next, if the computer name plus the instance name is not the same as the SQL Server name, it runs the sp_dropserver and sp_addserver stored procedures to rename SQL Server. The “LOCAL” parameter of sp_addserver denotes that this is the name of the local server. Note that you will need to restart the instance for the name change to take effect.
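A minimal sketch of that script:

    -- Sketch: rename SQL Server to match the server's physical name
    DECLARE @SQLServerName sysname = @@SERVERNAME,
            @ComputerName  sysname = CONVERT(sysname, SERVERPROPERTY('ComputerNamePhysicalNetBIOS')),
            @InstanceName  sysname = CONVERT(sysname, SERVERPROPERTY('InstanceName')),
            @NewName       sysname;

    SET @NewName = @ComputerName + ISNULL(N'\' + @InstanceName, N'');

    IF @NewName <> @SQLServerName
    BEGIN
        EXECUTE sys.sp_dropserver @server = @SQLServerName;
        EXECUTE sys.sp_addserver  @server = @NewName, @local = 'LOCAL';
    END;
    -- Remember: restart the instance for the change to take effect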

Before you just run this script, there are a few things to take into consideration:

  • If this is part of a SQL Server failover cluster, then a different process is needed. See this link to rename the cluster’s virtual SQL Server name. To rename the individual nodes, each node must be evicted from the cluster, the instance renamed (per this script), and then the node added back to the cluster.
  • SQL Server does not support renaming a server involved in replication.
  • Renaming a server that runs Reporting Services (SSRS) may result in SSRS not being available after the rename. If this happens, see this link.
  • When using database mirroring, you need to stop the mirroring before the rename, and reestablish it when finished.
  • If Remote Logins, Linked Servers, or Client Alias Names are used, see the “Other Considerations” section in this link.


This post is re-published from my original post on SQL Solutions Group. I hope that you enjoy it.

In my last article, I started off talking about checking various settings that make a performance difference prior to jumping into query tuning. This article continues the investigation into performance-affecting settings by jumping straight into the storage subsystem. How would you like to make a single change to your drives and have IO operations improve by up to 40%?

Just reformat the drives that contain your database data and log files.

If you are running on a version of Windows prior to Windows Server 2008, or if the drives were migrated to a new system without reformatting them, then you really could improve IO performance simply by reformatting the drives. Older versions of Windows will create disk partitions that default to having the boundaries based upon the ancient Cylinder/Head/Sector addressing scheme that was common in disk controllers of that era. Once these boundaries are set, the only way to change them is to reformat the drive, so migrating this drive to a Windows Server 2008 or newer server alone does not fix the issue.

How reformatting the drive affects IO performance

File Allocation Size

On the hard drive, the minimum amount of data that can be read or written is called a sector, which historically was a mere 512 bytes (newer drives may have sector sizes up to 4KB). A group of sectors is a cluster (no, not that type of cluster). A drive with a 64KB cluster size (at 512 bytes per sector) will have 128 sectors per cluster; this cluster size is also known as the file allocation size. The first 63 sectors of a drive are reserved, or hidden, and include the master boot record (MBR). Simply multiplying the 63 sectors by the sector size of 512 bytes shows that this hidden area occupies 32,256 bytes, or 31.5KB. The drive dutifully starts storing data immediately after this.

SQL Server IO operations

Now, consider how SQL Server performs disk IO operations – an extent at a time (an extent is 8 pages, each of which is 8KB, for a total of 64KB). To store this information on disk with the default settings, this 64KB will start immediately after the 31.5KB hidden area and continue for 64KB… and it will span two clusters. When the drive goes to read this data, it will have to read the first cluster, and then the second cluster, in order to get all of the data needed for the extent being read from disk. When the next extent is read, it will re-read the second cluster, and read the third cluster. The wasted IO operations should be obvious.

How to have SQL Server work better with disk

What we need to do is to offset the beginning of the data being stored on disk to a location more conducive to how the program is operating. This offset is known as the “Partition Alignment Offset”. To be in tune with SQL Server, this value should be an increment of 64KB. However, you also need to consider the entire storage subsystem – the disks, controllers and memory. Starting with Windows Server 2008, this offset defaults to 1024KB – a nice increment of 64KB that also works very nicely with most RAID disks/controllers. Prior to Windows Server 2008, partition alignment was not performed explicitly, so it needs to be done manually.

Determining a drive’s partition alignment offset

To determine what the partition alignment offset is for a windows basic disk, there is a very simple wmic command to run:
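    wmic partition get BlockSize, StartingOffset, Name, Index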

The Index value is the same as the disk number reported in the Disk Management tool, and also from the wmic command wmic Volume. However, we tend to think in terms of drive letters, not drive indexes, so the following PowerShell script will return Partition Alignment Offset for all drives on the system that it is run on and perform the mapping from index to drive letter:
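In essence, the script does something like this (a simplified sketch; pass $True as the first parameter to open the results in Notepad, per the note below):

    param ([bool] $OpenInNotepad = $false)

    # Map each logical drive letter to its partition's starting offset
    $results = Get-WmiObject Win32_DiskPartition | ForEach-Object {
        $partition = $_
        $query = "ASSOCIATORS OF {Win32_DiskPartition.DeviceID='$($partition.DeviceID)'} " +
                 "WHERE AssocClass=Win32_LogicalDiskToPartition"
        Get-WmiObject -Query $query | ForEach-Object {
            New-Object PSObject -Property @{
                Drive       = $_.DeviceID
                Partition   = $partition.Name
                OffsetBytes = $partition.StartingOffset
                OffsetKB    = $partition.StartingOffset / 1KB
            }
        }
    }

    $results | Sort-Object Drive |
        Format-Table Drive, Partition, OffsetBytes, OffsetKB -AutoSize

    if ($OpenInNotepad) {
        $file = Join-Path $env:TEMP 'PartitionOffsets.txt'
        $results | Format-Table Drive, Partition, OffsetBytes, OffsetKB -AutoSize |
            Out-String | Set-Content $file
        notepad.exe $file
    }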

If you want to open the results in Notepad, pass in the parameter $True.

You may note that I said that using these wmic commands is for a Windows basic disk. If you are using a dynamic disk, you need to use alternative methods. With Windows Server 2008, you would use the tool diskdiag.exe (in earlier versions of Windows, this was called dmdiag). For either, you need to use the –V flag: dmdiag.exe –V or diskdiag.exe –V.

Changing the Partition Alignment Offset

To change the Partition Alignment Offset on Windows Server 2008 or higher, simply format the drive. On earlier systems, you will need to use the diskpart.exe tool to change the partition alignment offset, and then format the drive. In either case, this is a destructive operation – you will lose all data on the drive.

For more information on disk partition alignment, refer to this TechNet article: http://technet.microsoft.com/en-us/library/dd758814(v=sql.100).aspx.

These steps should yield a noticeable IO performance improvement. Test it for yourself and let me know what you experience.

TSQL Tuesday

It’s that time of the month… the time when all of the T-SQL bloggers have a party and blog about a specific topic. This month, the host is my friend Jason Brimhall (b/t). The topic that he has selected is “Sharpen Something”. Specifically, he wants us to:

I am asking you to not only write a post but to do a little homework – first. In other words, plan to do something, carry out that plan, and then write about the experience.

Immediately, I’m thinking of something that I could write on. Then I read on and see this other part of his post where he has some examples:

You know you are extremely deficient at a certain SQL Skill. Tell me what that skill is and develop a plan to get better at that skill. Report on the implementation of this skill and how you are doing at improving. Maybe that skill is about Extended Events, PoSH or availability groups.

And now I’m thinking that Jason’s picking on me. He knows that there is one particular skill that I’m deficient on, and I’ve been meaning to work on it. So Jason, thank you for prodding me along.

To wrap up, Jason says:

In recap, this is an invite to make a short term goal covering the next two weeks. Tell everybody what that goal is (in your tsql tuesday post of course) and how you went about creating a plan for that goal and how you have progressed during the two week interval.

“Tell me what the skill is”

So what is the skill that I’m deficient on? It’s just the most popular feature released in SQL 2012: Availability Groups (did you notice how Jason even mentioned that above? See, he’s picking on me!) It’s not that I’m completely ignorant about Availability Groups – it’s just that I’ve never had the opportunity to work with this feature. I’ve even tried to work with it before, however higher priority things keep popping up, delaying my getting around to this.

“Develop a plan to get better”

The short and simple plan is to create an environment where I have Availability Groups installed, and utilize it on some databases.

The longer plan goes like this: Since Availability Groups is built on top of Windows Server Failover Clustering, I need to:

  1. Build an environment where I have a domain controller and two servers that have clustering implemented.
  2. Install SQL Server on the two servers.
  3. Create a few databases – with data.
  4. Create an Availability Group with these databases.
  5. Test the secondary replicas being read-only.
  6. Repeat for the following environments to assess differences:
    1. Windows Server 2008R2 with SQL 2012
    2. Windows Server 2012R2 with SQL 2012
    3. Windows Server 2012R2 with SQL 2014
    4. Windows Server 2012R2 with SQL 2016
    5. Windows Server 2016 with SQL 2016

“Report on the implementation of this skill”

This is the section where I humble myself. To date, I have created three virtual machines running Windows Server 2008R2. They aren’t even in a domain yet. A setback that I faced was that I already had a domain set up for this… but since I’m using evaluation editions of Windows, they had expired. Everything had to be rebuilt, and now I’m running into domain setup issues. It seems to me that the Windows setup is harder than the SQL setup will be. But I will persist, and this will be done.

Because I am extremely deficient in this skill, and I need to be working with it.



Service Packs and Updates

Upgrades and Service Packs

Source: http://www.digipas.com/files/photos-firmware/software-btn1.png

One of the things that I always check on client servers is that they are running the latest service packs, for both SQL Server and for the Windows OS itself. Now, sometimes I know what the latest service pack is – but other times I don’t, and I end up spending more time than I would like looking it up. Therefore, I’m making a list of service packs here, along with download links (as I find them – navigating Microsoft’s Download Center isn’t exactly a piece of cake). I tried to get the download links for both 32-bit and 64-bit architectures, where applicable. Hopefully, you can use this chart also.

If you have a link to any that I have missed, please leave me a comment so that I can update this! I’d appreciate your help.

Windows Update Rollups

Windows Version(s) | Date | Update | Prerequisites
8.1, 2012R2 | 2016-07-21 | July 2016 Update Rollup | April 2014 Update Rollup; Servicing Stack Update
8, 2012 | 2012-12 | Dec 2014 Update Rollup | none
7, 2008R2 | 2016-07-21 | August 2016 Update Rollup | SP1; Servicing Stack Update


Windows Server Versions

Operating System | RTM | SP1 | SP2 | SP3 | SP4
Windows 2016 | CTP5
Windows 2012 R2 | 6.3.9200
Windows 2012 | 6.2.9200
Windows 2008 R2 | 6.1.7600.16385 | 6.1.7601
Windows 2008 | 6.0.6000 | 6.0.6001
Windows 2003 R2 | 5.2.3790.1180 | 5.2.3790.?
Windows 2003 | 5.2.3790 | 5.2.3790.1180
Windows 2000 | 5.00.2195


Windows (non-server) Versions

Operating System | RTM | SP1 | SP2 | SP3 | SP4
Windows 10 | 10.0.10240
Windows 8.1 | 6.3.9200 | 6.3.9600
Windows 8 | 6.2.9200
Windows 7 | 6.1.7600 | 6.1.7601
Windows Vista | 6.0.6000 | 6.0.6001
Windows XP x64 | 5.2.2600 | 5.2.2600.1105 | 5.2.2600.2180
Windows XP | 5.1.2600 | 5.1.2600.1105 | 5.1.2600.2180 | 5.1.2600.5512
Windows 2000 | 5.00.2195


SQL Server Versions

Version | RTM | SP1 | SP2 | SP3 | SP4
SQL Server 2016 | Latest CU
SQL Server 2014 | Latest CU
SQL Server 2012 | 11.0.2100.60
SQL Server 2008 R2 | 10.50.1600.1
SQL Server 2008 | 10.0.1600.22
SQL Server 2005 | 9.0.1399.06 | 9.0.2047 | 9.0.3042
SQL Server 2000 | 8.0.194 | 8.0.384 | 8.0.532 | 8.0.760 | 8.0.2039
SQL Server 7 | 7.0.623 | 7.0.699 | 7.0.842 | 7.0.961 | 7.0.1063


SQL Server Tools

Tool Latest Download
SQL Server Management Studio (SSMS) Latest SSMS Release
SQL Server Data Tools (SSDT) Latest SSDT Release


I’m happy to announce that I’ll be going on a Central Texas SQL Tour in mid-August.

First up is the San Antonio SQL Saturday – I’ve been selected to be a speaker. This is really special because this is San Antonio’s first ever SQL Saturday!!! I’ll be presenting what is probably my favorite presentation – “Table Variables & Temp Tables – What You Need to Know!” This session usually runs about 75 minutes, so I’ve arranged to have the session immediately prior to lunch to help stagger the lunch flow. If you are anywhere in the area, come on out and see this presentation – I’ll guarantee that you will learn something, or double your registration fee back! You can register for this wonderful, free event at this link. From that link, you can also see the schedule and gaze in awe and wonder at all of the wonderful free sessions that you can attend all day. The SQL Saturday is on August 13th, 2016, and it is at Texas A&M University, One University Way, San Antonio, Texas, 78224 (map).

Image source: http://bit.ly/29R5yce

The following week, I’ll be at two user group meetings. The first is CACTUSS North on Tuesday, August 16th, in Austin, Texas. The meeting starts at 6pm, and I’ll be doing an encore presentation of “Table Variables & Temp Tables – What You Need to Know!”, so if you couldn’t see it at the SQL Saturday, be sure to come out to see it here. Sign up to attend this user group meeting here; the meeting is at the Microsoft offices at 10900 Stonelake Blvd., Suite 225, Austin, TX, 78759 (map).

Image source: http://salssa.sqlpass.org/portals/404/salssa.gif

The next night, Wednesday, August 17th, I’ll be back in San Antonio, presenting at their SALSSA user group meeting. The meeting starts at 6pm, and I’ll be doing a different presentation – “SSMS Tips and Tricks – Using What You’ve Already Got”. They meet at the New Horizons Computer Learning Center, located at 2727 NW Loop 410, San Antonio, TX, 78230 (map).

Image source: http://www.thesanantonioriverwalk.com/images/site/home_night_banner.jpg

You can read the abstracts for both of these presentations here. And yes, I’m sure I’ll be at the Riverwalk in San Antonio sometime during my free time. I love the Riverwalk, and my favorite restaurant is nearby.