Here we are at T-SQL Tuesday #37, the start of the fourth year. Sebastian Meine (@sqlity / blog) is hosting this month’s T-SQL Tuesday blogging party, and he wants us to “write about topics like the good and the bad patterns of joining you have seen out there”. (Sebastian is also running a blog series on A Join A Day.)
My first T-SQL Tuesday blog post, way back at #23, was also about joins. I wasn’t really sure what to write for this T-SQL Tuesday, and then I read Jason Brimhall’s post Let’s Talk About Joins. He wraps up with a brief discussion of the effect of implicit conversions, so I decided to dig further into this area.
I don’t know about you, but one of the biggest performance issues that I’ve seen when joining tables is a join between columns of different data types, which forces an implicit conversion. Heck, this is such an issue that Brent Ozar’s sp_Blitz script now searches the plan cache for implicit conversions, and others are writing similar queries.
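Just to give a flavor of how those checks work, here’s a simplified sketch of my own (not sp_Blitz itself) that digs through the plan cache looking for the CONVERT_IMPLICIT function:

-- A simplified sketch (not sp_Blitz): search cached plans for CONVERT_IMPLICIT.
-- Crude string matching against the plan XML, but it shows the idea;
-- sp_Blitz and similar scripts do this far more thoroughly.
SELECT TOP (20)
       cp.usecounts,
       st.text,
       qp.query_plan
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE CAST(qp.query_plan AS NVARCHAR(MAX)) LIKE N'%CONVERT_IMPLICIT%'
ORDER BY cp.usecounts DESC;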
Let’s start off by creating four tables: two with an INTEGER column, one with a VARCHAR, and the last with an NVARCHAR:
IF OBJECT_ID('tempdb..#tableA','U') IS NOT NULL DROP TABLE #tableA;
IF OBJECT_ID('tempdb..#tableB','U') IS NOT NULL DROP TABLE #tableB;
IF OBJECT_ID('tempdb..#tableC','U') IS NOT NULL DROP TABLE #tableC;
IF OBJECT_ID('tempdb..#tableD','U') IS NOT NULL DROP TABLE #tableD;

CREATE TABLE #tableA (id INT PRIMARY KEY CLUSTERED);
CREATE TABLE #tableB (id VARCHAR(10) PRIMARY KEY CLUSTERED);
CREATE TABLE #tableC (id INT PRIMARY KEY CLUSTERED);
CREATE TABLE #tableD (id NVARCHAR(10) PRIMARY KEY CLUSTERED);
Let’s populate these tables with the ROW_NUMBER function running against sys.all_columns. On my SQL 2012 instance, this adds 7,403 rows to each table.
INSERT INTO #tableA (id)
SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
FROM sys.all_columns;

INSERT INTO #tableB (id)
SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
FROM sys.all_columns;

INSERT INTO #tableC (id)
SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
FROM sys.all_columns;

INSERT INTO #tableD (id)
SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
FROM sys.all_columns;
As you know, almost any time that you wrap a column in a function, you prevent the query from seeking on any index on that column that might otherwise be useful (the predicate becomes non-sargable). Since these tables all use different data types, one side of each join must be converted to a compatible data type before the comparison can be evaluated. And since you didn’t do this conversion in the query, SQL Server performs it for you, in the form of an internal function known as CONVERT_IMPLICIT, following its data type precedence rules. So let’s examine the effects of the implicit conversions in joins between the different tables.
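You can see the sargability problem without any join at all. As a minimal sketch against #tableA (created above), an explicit CONVERT on the column mimics what CONVERT_IMPLICIT will do to it, and it costs us the seek:

-- Wrapping the column in a function: no seek possible, so the clustered index is scanned.
SELECT id FROM #tableA WHERE CONVERT(VARCHAR(10), id) = '42';

-- The bare column: a clustered index seek.
SELECT id FROM #tableA WHERE id = 42;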
First off is our control join: a join between tableA and tableC, both of which use an INTEGER data type, so the query has no implicit conversions:
SELECT *
FROM #tableA a
JOIN #tableC c ON a.id = c.id
WHERE a.id < 5;
When we look at the actual execution plan, we have:
There are no implicit conversions. Each table has an estimated 5 rows, and an actual 4 rows. This certainly looks good.
Now let’s look at joining tableA to tableB (with a varchar(10)):
SELECT *
FROM #tableA a
JOIN #tableB b ON a.id = b.id
WHERE a.id < 5;
So, here we have a completely different execution plan. The nested loops operator has been replaced with a hash match operator, a compute scalar has been added, and one of the clustered index seeks has been replaced with a clustered index scan, which is scanning all of the rows in the table. The compute scalar is performing the expression: CONVERT_IMPLICIT(int,[tempdb].[dbo].[#tableB].[id] as [b].[id],0).
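As an aside, if the mismatched types truly can’t be fixed, one workaround (a sketch of my own, not part of this demo’s flow) is to make the conversion explicit on tableA’s side instead. The function then sits on a.id rather than b.id, which at least gives the optimizer the option of seeking into tableB. Equality comparisons are safe this way, though a range predicate evaluated as VARCHAR would compare as strings:

-- Convert the INT side explicitly, so b.id is left bare and remains seekable.
SELECT *
FROM #tableA a
JOIN #tableB b ON CONVERT(VARCHAR(10), a.id) = b.id
WHERE a.id < 5;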
Does reversing the order of the tables in the select statement make a difference?
SELECT *
FROM #tableB b
JOIN #tableA a ON b.id = a.id
WHERE b.id < 5;
Holy moly! Now we’ve thrown an expensive sort operator into the mix, in addition to that implicit conversion! And the seeks… they’re all gone. All that is left are two scans. But if we compare the estimated to actual, we have 7403 estimated rows from tableA, but 5 actual rows. We’re off by three orders of magnitude… this isn’t good.
How about comparing the varchar (tableB) to the nvarchar (tableD) instead?
SELECT *
FROM #tableB b
JOIN #tableD d ON b.id = d.id
WHERE b.id < 5;
Varchar to Nvarchar, with sort operator
Yikes! This isn’t any better. The sort operator is still there, and so are the implicit conversion and the scans. In fact, there are actually two implicit conversions going on: one converts b.id to an INT for the WHERE clause, and the other converts b.id to an NVARCHAR(10) for the join, since NVARCHAR has a higher data type precedence than VARCHAR. The estimated versus actual rows aren’t good either: tableD estimates 7403 rows with 3335 actual, and tableB estimates 2221 with 4 actual. But what if we try reversing the tables again?
SELECT *
FROM #tableD d
JOIN #tableB b ON d.id = b.id
WHERE d.id < 5;
Varchar to Nvarchar, no sort operator
Whew… we got rid of the sort operator, but we still have the implicit conversions and the scans. tableD has estimated 2221 with actual 4, and tableB has estimated 7403 with actual 7403. So, just to complete this, let’s compare tableA (int) to tableD (nvarchar):
SELECT *
FROM #tableA a
JOIN #tableD d ON a.id = d.id
WHERE a.id < 5;
Integer to Nvarchar
Here we still have the implicit conversion and the scan of the table. tableA has an estimated 5 rows with 4 actual, and tableD has an estimated 7403 with 7403 actual.
And how do all of these compare when run together?
As you can clearly see, the control (joining between identical data types) is definitely the way to join your tables together. As soon as you start introducing implicit conversions, you start adding scans and potentially sorts to the execution plan: expensive operators that are totally unneeded, and that only slow down the query.
So, design your database properly, with matching data types on the columns you join, and ensure that your queries are written to avoid these hidden conversions. They have adverse repercussions for query performance.
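To close the loop on this demo, here’s a minimal sketch of the real fix: rebuild tableB so its id column matches tableA’s INT, and the join should compile back to the same clean seek plan as the control.

-- Rebuild #tableB with a matching data type (the column is the clustered PK,
-- so altering it in place isn't an option here).
DROP TABLE #tableB;
CREATE TABLE #tableB (id INT PRIMARY KEY CLUSTERED);

INSERT INTO #tableB (id)
SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
FROM sys.all_columns;

-- No CONVERT_IMPLICIT, no scans, no sort: just like the control join.
SELECT *
FROM #tableA a
JOIN #tableB b ON a.id = b.id
WHERE a.id < 5;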