Fully qualified SQL Server query speed changes depending on current DB - sql-server

I have a query that looks essentially like this:
SELECT *
FROM [TheDB].[dbo].[TheTable] [T1]
INNER JOIN [TheDB].[dbo].[TheTable] [T2] ON [T1].[Col] = [T2].[Col]
WHERE
[T1].[ID] > [T2].[ID]
AND [T1].[Type] = 'X'
AND [T2].[Type] = 'X'
A fairly simple query designed to find some cases where we have some duplicate data. (The > is used rather than <> just to keep from having the instances show up twice.)
We're explicitly naming the DB, schema, and table in each case. Seems fine.
The odd thing is that if my SSMS is pointing to the DB in question, TheDB, the query never returns...it just spins and spins. If, on the other hand, I point my SSMS to a different DB (changing it from, say, TheDB to TheOtherDB) or if I prefix my query with a line like USE TheOtherDB, then it returns instantly.
This is very repeatable, across restarts of SSMS, and for different users.
What's going on? I'm guessing there's a wildly wrong optimization plan or something that is being used in some cases but not others...but the whole thing seems very strange...that pointing to the DB I'm actually using makes it slower.
To answer some of the questions raised in the comments:
What's the goal of the query? To find 'duplicate' rows. Do I need to use *? No...and in the real query I'm not. It's just part of the query anonymization. I can change it to [T1].[ID] with the same result.
Could it be a bad cached plan? (Any trivial modification to the query text should rule that out.) I can change the query in various ways (as described above, returning just [T1].[ID], for example) and continue to get the same results.
Perhaps the context databases have different compatibility levels...? In this case, both DBs are set to a compatibility level of "SQL Server 2019 (150)".
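For what it's worth, one way to check the bad-cached-plan theory is to see whether the same statement text has a separate cached plan per database context; a rough sketch (the LIKE filter is purely illustrative):
SELECT st.text, qs.plan_handle, qs.creation_time, pa.value AS context_dbid
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_plan_attributes(qs.plan_handle) AS pa
WHERE pa.attribute = 'dbid'   -- the database context the plan was compiled under
  AND st.text LIKE '%TheTable%';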

Related

SQL Server Change Tracking - dm_tran_commit_table and CHANGETABLE don't match; commit_time is null

The gist of what I'm trying to do: get the commit time for changes in a SQL Server table with change tracking on. Easy, right? Just join with the sys.dm_tran_commit_table DMV and look at the commit_time column. Unfortunately, I'm getting inconsistent results.
Here's my query:
SELECT TOP 100 *
FROM CHANGETABLE(CHANGES [MyDB].[dbo].MyTable, 0) CT
LEFT JOIN [MyDB].[dbo].MyTable C ON C.ID = CT.ID
LEFT JOIN [MyDB].sys.dm_tran_commit_table TCI ON CT.sys_change_creation_version = TCI.commit_ts
LEFT JOIN [MyDB].sys.dm_tran_commit_table TC ON CT.sys_change_version = TC.commit_ts
WHERE TC.commit_time IS NULL
I'd like to get the time a record was initially inserted (sys_change_creation_version) and the time of the latest commit (sys_change_version). But for reasons I can't explain, the first join above to the DMV returns data, but the second does not when sys_change_creation_version and sys_change_version are the same value.
How in the world does a join on the same table for the same value return results for one join but not the other?
Thinking there may be an issue with the DMV changing during my query execution, I tried pulling out all data from sys.dm_tran_commit_table into a temp table and then used that instead in my query above, but I get the same null results.
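For reference, the snapshot attempt looked roughly like this (a sketch, not the exact code):
SELECT commit_ts, commit_time
INTO #CommitSnapshot                      -- one-off copy of the DMV
FROM [MyDB].sys.dm_tran_commit_table;
-- ...and then the two LEFT JOINs above target #CommitSnapshot instead of the live DMV.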
There must be something deeper inside change tracking that I'm not grokking that is causing this. Frankly, I'm not sure how/why the sys.dm_tran_commit_table DMV wouldn't have the commit_ts in it if CHANGETABLE is reporting it exists. Why is there a discrepancy between these two objects, and why does one join work but not the other?
Anyone with expertise here?
After much research I think I'm going to close this one out as being part of the fundamental machinations of MSSQL. A few things that I had not taken into account in my queries:
Change tracking cleanup is happening behind the scenes, regularly, so the record counts returned by the CHANGETABLE function would continue dropping while nothing else was happening. This made it hard to reconcile change counts.
Isolation levels mean data can change out from underneath you. Imagine this code, run on a table with 1 million records:
SELECT Id, CHANGE_TRACKING_CURRENT_VERSION() FROM MyTable
A very simplistic understanding (what I had going into this) would let you think CHANGE_TRACKING_CURRENT_VERSION() should return the same value for all million records, but in fact, records are being added and deleted while this select statement is running, which means CHANGE_TRACKING_CURRENT_VERSION keeps changing. In essence, my isolation level meant data was changing out from underneath me, which obviously makes it hard to get "exact" counts.
I was trying to retrieve ALL changes in the system (by passing in 0 or null in the CHANGETABLE function). When you do this, changes roll up and it becomes very difficult to truly know whether a record was inserted or updated, given my previous two bullet points. MSSQL change tracking is really designed to give you snapshots between change versions, but if you try to view them all at once, you're not going to be happy with the results you get.
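A minimal sketch of the safer pattern, reusing the simplified names from above: capture the current version once into a variable, and ask CHANGETABLE for changes since a known valid baseline rather than 0 or NULL.
-- Pin the version once so every extracted row is tagged with the same value.
DECLARE @ExtractVersion BIGINT = CHANGE_TRACKING_CURRENT_VERSION();
-- Ask only for changes since the oldest version that is still valid for the table,
-- instead of trying to see every change since the beginning of time.
DECLARE @BaselineVersion BIGINT = CHANGE_TRACKING_MIN_VALID_VERSION(OBJECT_ID('dbo.MyTable'));
SELECT CT.ID, CT.SYS_CHANGE_OPERATION, @ExtractVersion AS ExtractVersion
FROM CHANGETABLE(CHANGES dbo.MyTable, @BaselineVersion) AS CT;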
To sum up, if there's anyone out there trying to do deep change tracking analysis like I was, I'd leave this advice: rethink it. Unless you are saving external logs of all the changes between the versions you're looking at, or possibly have the highest isolation level in place, you're going to be very frustrated with results that don't stay consistent. In my case, I ended up writing down in another table the "log" of every extraction operation so that I had a static set of numbers when looking across change versions.
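A rough sketch of such an extraction log (the table and column names are made up):
CREATE TABLE dbo.ChangeExtractionLog   -- hypothetical name
(
    ExtractionTime DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME(),
    FromVersion    BIGINT    NOT NULL,  -- version the extract started from
    ToVersion      BIGINT    NOT NULL,  -- CHANGE_TRACKING_CURRENT_VERSION() captured at extract time
    RowsExtracted  INT       NOT NULL
);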

Large difference in performance of complex SQL query based on initial "Use dB" statement

Why does a complex SQL query run worse with a Use statement and implicit dB references than with a Use master statement and full references to the user dB?
I'm using SQL Server Std 64-bit Version 13.0.4466.4 running on Windows Server 2012 R2. This is an "academic" question raised by one of my users, not an impediment to production.
By "complex" I mean several WITH clauses and a CROSS APPLY; a simplified query structure is below. By "worse" I mean 3 min vs. 1 sec for 239 rows, repeatably. The "plain" Exec Plan for the fast query will not show; however, the Exec Plan w/ Live Query Stats runs for both (analysis further below). Thanks in advance for any light shed on this!
USE Master versus USE <userdb>;
DECLARE @chartID INTEGER = 65;
WITH
with1 AS
( SELECT stuff FROM <userdb>.schema1.userauxtable ),
with2 AS
( SELECT lotsastuff FROM <userdb>.dbo.<views w/ JOINS> ),
with3 AS
( SELECT allstuff FROM with2 WHERE TheDate IN (SELECT MAX(TheDate) FROM with2 GROUP BY <field>, CAST(TheDate AS DATE)) ),
with4 AS
( SELECT morestuff FROM with1 WHERE with1.ChartID = @chartID )
SELECT finalstuff FROM with3
CROSS APPLY ( SELECT littlestuff FROM with4 WHERE
with3.TheDate BETWEEN with4.PreDate AND with4.AfterDate
AND with4.MainID = with3.MainID ) as AvgCross
The Exec Plan w/ Live Query Stats for slow query has ~41% Cost ea. (83% total) in two ops:
a) Deep under the 5th Step (of 15) Hash match (Inner Join) Hash Keys Build ... 41% Cost to Index Scan (non-clustered) of ...
b) Very deep under the 4th Step (of 15) Nested Loops (Left Semi Join) -- 42% Cost to a near-identical Index Scan as in (a), except for the addition of (... AND datediff(day,Date1,getdate()) ) to the Predicate.
Meanwhile, the Exec Plan w/ Live Query Stats for the fast query shows an 83% Cost in a Columnstore Idx Scan (non-clustered), quite deep under the 9th Step (of 12) Hash match (Inner Join) Hash Keys Build.
It would seem that the difference is in the Columnstore Idx, but why does the Use master stmt send the Execution down that road?
There may be several possible reasons for this kind of behaviour; however, in order to identify them all, you will need people like Paul Randal or Kalen Delaney to answer this.
With my limited knowledge and understanding of MS SQL Server, I can think of at least 2 possible causes.
1. (Most plausible one) The queries are actually different
If, as you are saying, the query text is sufficiently lengthy and complex, it is completely possible to miss a single object (table, view, user-defined function, etc.) when adding database qualifiers and leave it with no DB prefix.
Now, if an object by that name somehow ended up in both the master and your UserDB databases, then different objects will be picked up depending on the current database context; the data might be different, as might the indexes and their fragmentation, even the data types... well, you get the idea.
This way, queries become different depending on the database context, and there is no point comparing their performance.
2. Compatibility level of user database
Back in the heyday of the 2005 version, I had a database with its compatibility level set to 80, so that ANSI SQL-89 outer joins generated by some antiquated ORM in legacy client apps would keep working. Most of the tasty new stuff worked too, with one notable exception however: the pivot keyword.
A query with PIVOT, when executed in the context of that database, threw an error saying the keyword is not recognised. However, when I switched the context to master and prefixed everything with user database's name, it ran perfectly fine.
Of course, this is not exactly your case, but it's a good demonstration of what I'm talking about. There are lots of internal SQL Server components, invisible to the naked eye, that affect the execution plan, performance and sometimes even results (or your ability to retrieve them, as in the example above) that depend on settings such as database' compatibility level, trace flags and other similar things.
As a possible cause, I can think of the new cardinality estimator which was introduced in SQL Server 2014. The version of the SQL Server instance you mentioned corresponds to 2016 SP1 CU7, however it is still possible that:
your user database may be in compatibility with 2012 version (for example, if it was restored from 2012 backup and nobody bothered to check its settings after that), or
trace flag 9481 is set either for the session or for the entire SQL Server instance, or
database scoped configuration option LEGACY_CARDINALITY_ESTIMATION is set for the database, etc.
(Thankfully, SQL Server doesn't allow you to change the compatibility level of the master database, so it's always at the latest supported level. Which is probably good, as no one can screw up the database engine itself - not this way, at least.)
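If it helps, those settings can be checked directly; a minimal sketch, with the user database name as a placeholder:
-- Compatibility level of the user database
SELECT name, compatibility_level FROM sys.databases WHERE name = N'UserDB';
-- Database-scoped configuration for the legacy cardinality estimator
SELECT name, value
FROM UserDB.sys.database_scoped_configurations
WHERE name = N'LEGACY_CARDINALITY_ESTIMATION';
-- Globally enabled trace flags (9481 forces the legacy CE)
DBCC TRACESTATUS(-1);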
I'm pretty sure that I have only scratched the surface of the subject, so while checking the aforementioned places definitely wouldn't hurt, what you need to do is to identify the actual cause of the difference (if it's not #1 above, that is). This can be done by looking at actual execution plans of the queries (forget the estimated ones, they are worthless) with a tool other than vanilla SSMS. As an example, SentryOne Plan Explorer might be a good thing to begin with. Even without that, saving plans in .sqlplan files and opening them with any XML-capable viewer/editor will show you much more, including possible leads that might explain the difference you observe.
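Even plain SSMS can return the actual plan as XML to save or inspect elsewhere, for example:
SET STATISTICS XML ON;
-- run the query in question here; an extra result set contains the actual plan as XML
SET STATISTICS XML OFF;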

How to subquery using M in Power Query/Power BI

So I have two queries that I'm working on, one comes from an Oracle DB and the other SQL Server DB. I'm trying to use PowerBI via Power Query as the cross over between the two. Because of the size of the Oracle DB I'm having a problem with running it, so my thought is to use one query as a clause/sub-query of the other to limit the number of results.
Based on the logic of MSFT's M language I'd assume there's a way to do sub-queries of another but I've yet to figure it out. Does anyone have any ideas on how to do this?
What I have learned to do is create connections to the two underlying data sets, but not load them. Then you can merge them, but do not use the default Table.NestedJoin().
After the PQ generates this, change it to:
= Table.Join(dbo_SCADocument, {"VisitID"}, VISIT_KEY, {"VisitID"})
Also remove the trailing name. The reason is that it keeps query folding alive; for some reason, Table.NestedJoin() kills query folding. Note that if the two sources have fields with the same name other than the join key, it will fail.
It also brings in everything from both sources, but that is easy to alter. You will also need to turn off the formula firewall, as it does not allow you to join potentially sensitive data with non-sensitive data; this means setting your privacy level to ignore all.
I would attempt this using the Merge command. I'm more of a UI guy, so I would click the Merge button. This generates a PQL statement for Table.Join
https://msdn.microsoft.com/en-us/library/mt260788.aspx
Setting Join Kind to Inner will restrict the output to matching rows.
I say attempt as your proposed query design will likely slow your query, not improve it. I would expect PQ to run both queries against the 2 servers and download all their data, then attempt the Join in Memory.

inner join Vs scalar Function

Which of the following queries is better? This is just an example; there are numerous situations where I want the user name to be displayed instead of the UserID.
Select EmailDate, B.EmployeeName as [UserName], EmailSubject
from Trn_Misc_Email as A
inner join
Mst_Users as B on A.CreatedUserID = B.EmployeeLoginName
or
Select EmailDate, GetUserName(CreatedUserID) as [UserName], EmailSubject
from Trn_Misc_Email
If there is no performance benefit in using the first, I would prefer using the second... I would have around 2,000 records in the user table and 100k records in the email table...
Thanks
A good question and great to be thinking about SQL performance, etc.
From a pure SQL point of view the first is better. The first statement is able to do everything in a single statement with a join. In the second, for each row in Trn_Misc_Email it has to run a separate SELECT to get the user name. This could cause a performance issue now, or in the future.
It is also easier to read for anyone else coming onto the project, as they can see what is happening. If you had the second one, you'd then have to go and look in the function (I'm guessing that's what it is) to find out what it is doing.
So in reality, two reasons to use the first option.
The inline SQL JOIN will usually be better than the scalar UDF as it can be optimised better.
When testing it though be sure to use SQL Profiler to view the cost of both versions. SET STATISTICS IO ON doesn't report the cost for scalar UDFs in its figures which would make the scalar UDF version appear better than it actually is.
Scalar UDFs are very slow, but the inline ones are much faster, typically as fast as joins and subqueries
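For illustration, an inline (table-valued) version of a lookup like GetUserName could look roughly like this sketch (the function name and parameter type are assumptions):
-- Inline TVF: the optimiser can expand this into the main plan like a join.
CREATE FUNCTION dbo.GetUserNameInline (@LoginName VARCHAR(100))  -- hypothetical name and type
RETURNS TABLE
AS
RETURN
(
    SELECT B.EmployeeName AS [UserName]
    FROM Mst_Users AS B
    WHERE B.EmployeeLoginName = @LoginName
);
GO
SELECT E.EmailDate, U.[UserName], E.EmailSubject
FROM Trn_Misc_Email AS E
OUTER APPLY dbo.GetUserNameInline(E.CreatedUserID) AS U;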
BTW, your query with function calls is equivalent to an outer join, not to an inner one.
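In other words, to match the behaviour of the function version with a join, you would use a LEFT join; a sketch using the names from the question:
-- LEFT JOIN keeps e-mail rows even when no matching user exists,
-- just as the scalar function would return NULL for those rows.
SELECT A.EmailDate, B.EmployeeName AS [UserName], A.EmailSubject
FROM Trn_Misc_Email AS A
LEFT JOIN Mst_Users AS B ON A.CreatedUserID = B.EmployeeLoginName;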
To help you more, just a tip: in SQL Server, using Management Studio you can evaluate the performance via Display Estimated Execution Plan. It shows how the indexes and joins work, and you can select the best way to use them.
Also you can use the DTA (Database Engine Tuning Advisor) for more info and optimization.

Query hangs with INNER JOIN on datetime field

We've got a weird problem with joining tables from SQL Server 2005 and MS Access 2003.
There's a big table on the server and a rather small table locally in Access. The tables are joined via 3 fields, one of them a datetime field containing a day; the idea is to fetch additional data daily from the big server table to add to the local table.
Up until the weekend this ran fine every day. Since yesterday we have experienced strange non-time-outs in Access with this query. Non-time-out means that the query runs forever with rather high network transfer, but no timeout occurs. Access doesn't even show the progress bar. A server trace tells us that the same query is executed over and over on the SQL Server, without error but without result either. We've narrowed it down to the problem seemingly being access to the big server table with either a JOIN or a WHERE containing a date, but we're not really able to pin it down further. We rebuilt indices already and are currently restoring backup data, but maybe someone here has any pointers of things we could try.
Thanks, Mike.
If you join a local table in Access to a linked table in SQL Server, and the query isn't really trivial according to specific limitations of joins to linked data, it's very likely that Access will pull the whole table from SQL Server and perform the join locally against the entire set. It's a known problem.
This doesn't directly address the question you ask, but how far are you from having all the data in one place (SQL Server)? IMHO you can expect the same type of performance problems to haunt you as long as you have some data in each system.
If it were all in SQL Server a pass-through query would optimize and use available indexes, etc.
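For example, a pass-through query is sent to SQL Server verbatim as T-SQL, so a daily fetch along these lines (table, column, and the literal date are placeholders) could use the server's indexes directly:
-- A pass-through query runs entirely on the server, so it can seek on server indexes.
SELECT s.*
FROM dbo.server_table AS s
WHERE s.[date] = '20081117';  -- the day of interest, hard-coded or rebuilt by the client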
Thanks for your quick answer!
The actual query is really huge; you won't be happy with it :)
However, we've narrowed it down to a simple:
SELECT * FROM server_table INNER JOIN local_table ON server_table.date = local_table.date;
Here server_table is a big table (hard to say; we've got 1.5 million rows in it, and test tables with 10 rows or so have worked) and local_table is a table with a single cell containing a date. This runs forever. It's not only slow, it just does nothing besides, it seems, causing network traffic, and no timeout occurs (this is what I find so strange; normally you get a timeout, but this just keeps on running).
We've just found KB article 828169; seems to be our problem, we'll look into that. Thanks for your help!
Use the DATEDIFF function to compare the two dates as follows:
-- DATEDIFF returns 0 if the dates are identical based on the datepart parameter, in this case d
WHERE DATEDIFF(d,Column,OtherColumn) = 0
DATEDIFF is optimized for use with dates. Comparing the result of the CONVERT function on both sides of the equal (=) sign might result in a table scan if either of the dates is NULL.
Hope this helps,
Bill
Try another syntax? Something like:
SELECT * FROM BigServerTable b WHERE b.DateFld in (SELECT DISTINCT s.DateFld FROM SmallLocalTable s)
The strange thing in your problem description is "Up until the weekend this ran fine every day".
That would mean the problem is really somewhere else.
Did you try creating a new blank Access db and importing everything from the old one?
Or just refreshing all your links?
Please post the query that is doing this; just because you have indexes doesn't mean that they will be used. If your WHERE or JOIN clause is not sargable, then the index will not be used.
Take this, for example:
WHERE CONVERT(varchar(49),Column,113) = CONVERT(varchar(49),OtherColumn,113)
that will not use an index
or this
WHERE YEAR(Column) = 2008
Functions on the left side of the operator (meaning on the column itself) will make the optimizer do an index scan instead of a seek because it doesn't know the outcome of that function
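For example, the YEAR() filter above can be rewritten as a sargable range on the bare column (a sketch, assuming Column is a datetime):
-- Range predicate on the unmodified column lets the optimizer use an index seek.
WHERE Column >= '20080101'
  AND Column <  '20090101'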
We rebuilt indices already and are currently restoring backup data, but maybe someone here has any pointers of things we could try.
Access can kill many good things... have you looked into blocking at all?
run
exec sp_who2
look at the BlkBy column and see who is blocking what
Just an idea, but in SQL Server you can attach your Access database and use the table there. You could then create a view on the server to do the join all in SQL Server. The solution proposed in the Knowledge Base article seems problematic to me, as it's a kludge (if LIKE works, then = ought to, also).
If my suggestion works, I'd say that it's a more robust solution in terms of maintainability.
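A rough sketch of that idea (the provider, file path, and table/column names are placeholders, and Ad Hoc Distributed Queries would have to be enabled):
-- View that performs the join entirely on the server, reading the Access table via OPENROWSET.
CREATE VIEW dbo.DailyJoin
AS
SELECT s.*
FROM dbo.server_table AS s
INNER JOIN OPENROWSET('Microsoft.Jet.OLEDB.4.0',
                      'C:\data\local.mdb'; 'Admin'; '',
                      access_table) AS a
        ON s.[date] = a.[date];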
