In what order do test cases run in tSQLt? - sql-server

I have 30 test cases in the 'EmployeeSchema' test class. I want to know in which order the test cases run in tSQLt. If I know the execution order, then I can alter the order in which my test cases run. Kindly advise.
-- Runs all the tests on MyTestClass
EXEC tSQLt.Run 'EmployeeSchema';

When running all tests within a test class, tSQLt.Run executes the tests in "random" order, meaning tSQLt leaves the order up to SQL Server and however it happens to retrieve the rows. That said, the order shouldn't matter, since each test should be independent of the others.
If for some reason you want to run individual tests in a particular order, you will need to call tSQLt.Run separately for each individual test, in whichever order you want. You can't specify an order when you call tSQLt.Run for an entire test class.
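For example, a fixed order can be enforced by running the tests one at a time; the test names below are made up for illustration:

```sql
-- Run individual tests in an explicit order by naming each one.
-- These test names are hypothetical; use your own procedure names.
EXEC tSQLt.Run 'EmployeeSchema.[test employee is inserted]';
EXEC tSQLt.Run 'EmployeeSchema.[test employee is updated]';
EXEC tSQLt.Run 'EmployeeSchema.[test employee is deleted]';
```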
Note: thanks to @Sebastian Meine for correcting me
To confirm that it is random (execution-plan dependent), here is the cursor declaration from tSQLt.Private_RunTestClass that pulls the list of tests within the requested test class. There is no ORDER BY, so SQL Server will return the test names in whichever order it sees fit.
DECLARE testCases CURSOR LOCAL FAST_FORWARD
FOR
    SELECT tSQLt.Private_GetQuotedFullName(object_id)
    FROM sys.procedures
    WHERE schema_id = @TestClassId
      AND LOWER(name) LIKE 'test%';


Top 1 without ordering

I encountered a problem with a query result: the result wasn't the same on the live environment and the test environment.
Table A:
id, name
1, ValueA1

Table B:
id, name
1, ValueB1
2, ValueB2
3, ValueB3
4, ValueB4
On the test environment the result was ValueB1; on the live environment it was ValueB2.
After a quick investigation I found that the problem was that the procedure used TOP 1 without an ORDER BY in a query with joins. OK, it is clear to everybody that if we use TOP 1 in a query with joins, we have to add an ORDER BY clause.
But I tried to explain the problem to a non-technical person.
So I wrote a simple WHILE loop with 1,000,000 iterations. Inside the loop I cleared the cached execution plans with:
DBCC FREEPROCCACHE;
and
EXEC sp_recompile 'ProcedureName';
and then executed the procedure.
Unfortunately the result was always the same on both environments (on test: ValueB1, on live: ValueB2).
I also tried replacing the procedure call with the procedure's body inlined.
The results were still exactly the same as before.
I tried writing a very simple join query with TOP 1 without ordering; that time the result was the same on both servers: ValueB1 in all 1M iterations.
The database procedure is fairly complicated: it uses a CTE and joins to table B multiple times.
I am very frustrated that I didn't manage to demonstrate the problem to the non-technical person.
Can anyone explain what I did wrong, or how it should have been done?
You can't.
If you don't specify an ORDER BY clause, the engine is free to deliver the result to you in whatever order it sees fit.
In practice that mostly means the rows come back in clustered index order, but it doesn't have to be that way!
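A minimal sketch of the fix, using the example tables above: make the TOP 1 deterministic by adding an ORDER BY.

```sql
-- Non-deterministic: without ORDER BY, any row of B may come back,
-- and the choice can change whenever the execution plan changes.
SELECT TOP 1 b.name
FROM B AS b;

-- Deterministic: the ORDER BY pins down exactly which row TOP 1 returns.
SELECT TOP 1 b.name
FROM B AS b
ORDER BY b.id;
```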

In SQL Server query slows dramatically when variable in SELECT clause

I have the following simple code which I run in SQL Server Management Studio:
DECLARE @CurDeviceIndex int;
SET @CurDeviceIndex = 314;

SELECT TOP 1
    DeviceIndex,
    DatetimeOccurred
FROM [dvm_data].[dbo].[CalculatedData]
WHERE DeviceIndex = @CurDeviceIndex
ORDER BY ID DESC;
In some cases this query takes forever to run; I never actually waited for it to finish.
When I put the DeviceIndex value directly in the query instead of the variable, it executes instantly:
SELECT TOP 1
    DeviceIndex,
    DatetimeOccurred
FROM [dvm_data].[dbo].[CalculatedData]
WHERE DeviceIndex = 314
ORDER BY ID DESC;
It seems that execution takes forever when 1) a variable is used, and 2) the DeviceIndex is such that the query returns nothing.
When the query returns something, the version with the variable also runs instantly.
When I use a hard-coded DeviceIndex, the query returns instantly every time, whether there are results or not.
Any ideas what might be causing this strange behavior?
Since you're using a variable in Management Studio, SQL Server doesn't know its value when you run the query and has to optimize the plan for an unknown value.
If you use the same clause inside a procedure, it will know the value and optimize for it -- but this happens with the first call, and the rest of the executions will use the same plan (unless a recompile happens or the plan gets thrown out of the cache).
When you use the value in the SQL directly, the statement will be optimized for that exact value.
If you want to know why it takes so much longer, you'll have to look at the query plans. Most likely there is a scan in one of the plans and a seek in the other. You can see the parameter values used to create the plan in the properties of the leftmost operator in the plan.
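One commonly used workaround (a sketch, not necessarily the best fix for this schema) is OPTION (RECOMPILE), which lets the optimizer see the variable's actual value at each execution, just as it would with the hard-coded literal:

```sql
DECLARE @CurDeviceIndex int = 314;

SELECT TOP 1
    DeviceIndex,
    DatetimeOccurred
FROM [dvm_data].[dbo].[CalculatedData]
WHERE DeviceIndex = @CurDeviceIndex
ORDER BY ID DESC
OPTION (RECOMPILE);  -- plan is compiled with the variable's current value
```

The trade-off is a compilation on every execution, which is usually acceptable for an occasionally run query but not for a hot path.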

Stored Procedure tracking

I have set up a stored-procedure tracker table on our databases with the hope of using it to flush out procedures that we no longer use. I set this up a few months ago and am now ready to start the cleansing. The table is populated from sys.procedures and the sys.dm_exec_procedure_stats DMV in SQL Server 2008 R2, and a job updates the static table every 10 minutes, 24 hours a day.
I have been checking through my list of procedures and have come across a couple that I know for a fact have run very recently. The particular one I found runs as step 2 of a job, but sys.dm_exec_procedure_stats doesn't seem to contain any record of it having run, while the procedure in step 1 appeared at the correct time. I have checked the job history, and both steps 1 and 2 ran without any problems.
The only difference I can see is that the procedure in step 2 produces a "Warning: Null value is eliminated by an aggregate or other SET operation" whereas step 1 doesn't. Does this make a difference as to whether or not the procedure will appear in sys.dm_exec_procedure_stats?
Hope someone can help!
While the reason it doesn't show up in the DMV is likely the one given in the linked/related answer mentioned by @bastos.sergio in a comment on the question, that still leaves the issue of "what can be done to find procs that are not being used?".
The accepted answer in that linked question (the question referenced by @bastos.sergio: Last Run Date on a Stored Procedure in SQL Server) is missing something, so I will add to it here:
The ONLY way to know what is calling it is:
scan all code (app code, other Stored Procs, Job Steps [in msdb.dbo.sysjobsteps], SSRS report definition files, etc.) for references
IF you allow ad hoc access (e.g. someone referenced a Stored Proc in an Access app [or any Microsoft Office "app"]) then you need to do some of the additional steps mentioned in the accepted answer of that linked question, namely:
Add a RAISERROR(N'Deprecated! Please contact YourName.', 16, 1); RETURN; at the top of the proc and leave it there for a month or two.
Add a table to log proc calls and an INSERT into that log table at the top of any of the supposed obsolete code and check once a week to see if anything shows up. If also doing the RAISERROR, put the INSERT prior to the RAISERROR(...); RETURN;.
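The two suggestions above can be combined into a sketch like the following; the proc and table names are hypothetical:

```sql
-- Hypothetical log table for recording calls to suspected-obsolete procs.
CREATE TABLE dbo.ProcCallLog
(
    ProcName sysname  NOT NULL,
    CalledAt datetime NOT NULL DEFAULT (GETDATE())
);
GO

-- Deprecation guard at the top of a proc believed to be unused:
-- log the call, then fail loudly so any remaining caller surfaces.
ALTER PROCEDURE dbo.SuspectedObsoleteProc
AS
BEGIN
    INSERT INTO dbo.ProcCallLog (ProcName)
    VALUES (N'dbo.SuspectedObsoleteProc');

    RAISERROR(N'Deprecated! Please contact YourName.', 16, 1);
    RETURN;

    -- ...original proc body left below, now unreachable...
END;
```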
With regards to ad hoc access (i.e. access outside of the code that you control), be careful to always keep in mind that infrequent access can be just that: infrequent. If there is a code path that is executed monthly, quarterly, bi-annually, annually, when some manager remembers to ask for such and such report, etc., then you could potentially remove valid code if you do not allow a long-enough time frame to capture "highly" infrequent usage (and this is why, even if the DMV data was more reliable, you would still need to be just as cautious).
Again, if all access is within code that you control, just scan your code (most likely using Regular Expressions).
EDIT:
To answer the specific question of:
Does the "Warning: Null value is eliminated by an aggregate or other SET operation" warning, raised by the query in the stored proc that does not show up in the DMV, have something to do with why it is not showing up in the DMV?
do the following test:
CREATE PROCEDURE #NoWarning
AS
SELECT AVG(tmp.col)
FROM (
        SELECT 1.0
        UNION ALL
        SELECT 2
     ) tmp(col);
GO
EXEC #NoWarning;
GO
CREATE PROCEDURE #Warning
AS
SELECT AVG(tmp.col)
FROM (
        SELECT 1.0
        UNION ALL
        SELECT null
     ) tmp(col);
GO
EXEC #Warning;
And then run the following query and you should see both proc names appearing in "tempdb":
SELECT DB_NAME(ps.database_id) AS [DatabaseName],
       OBJECT_NAME(ps.[object_id], ps.database_id) AS [ProcName],
       *
FROM sys.dm_exec_procedure_stats ps
ORDER BY [DatabaseName], [ProcName];

CONTAINSTABLE predicate fails when invoked several times in short time

I've got a pretty curious problem...
I have written a stored procedure with a CONTAINSTABLE predicate; something like
SELECT dbo.MyTable.MyPK
FROM dbo.MyTable INNER JOIN
CONTAINSTABLE(dbo.MyTable, FullTextField, 'mysearch') AS tbl1
ON tbl1.[KEY] = dbo.MyTable.MyPK
If I run this SP with SQL Server Management Studio, it's all ok.
Now I've prepared an automated test suite to test the effectiveness of my work under heavy load.
I call my SP several times, with different parameters, over many iterations, and here's the problem: when I launch my test suite, it fails, returning a wrong result (e.g. 1 result while I'm expecting 3, and so on). But if I launch the test suite in debug mode, stepping through my test code, no errors occur. Moreover, if I catch the wrong result and re-execute the SP that gave it (simply by placing a conditional breakpoint on the error condition and dragging the execution pointer in Visual Studio), the re-execution returns the right result!
What can I do???
Any ideas?
Thank you very much for your help!!
Bye cghersi
Obviously, running the same statement against your database should not yield different results with all else being the same. Something is changing.
Run SQL Server Profiler while you're stepping through your code to confirm that:
The SQL you think you're sending to the database is what is actually hitting the database
No other users are updating the database while you're stepping
Make sure that you can identify your connection in the Profiler trace (an easy way is to alter your connection string by setting the application name). While stepping through your code, watch the trace. Copy the SQL you see there into SSMS and run it directly to confirm the results. At the end of the day you should be able to isolate this to raw T-SQL running in SSMS to find out where the problem is.
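For example, the application name can be set in the connection string (a sketch; the server and database values are placeholders) and then used as an "ApplicationName" column filter in the Profiler trace:

```
Server=myServer;Database=myDb;Integrated Security=SSPI;Application Name=MyTestSuite;
```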

condition for creating a prepared statement using cfqueryparam?

Does a cfquery become a prepared statement as long as there's at least one cfqueryparam? Or are there other conditions?
What happens when the ORDER BY clause or FROM clause is dynamic? Would every unique combination become a prepared statement?
And what happens when we do a cfloop with an INSERT, with every value cfqueryparam'ed, and invoke the cfquery with a different number of iterations?
Are there any potential problems with too many prepared statements?
How does the DB handle prepared statements? Are they converted into something similar to stored procedures?
Under what circumstances should we Not use prepared statements?
Thank you!
I can answer some parts of your question:
A query will become a prepared statement as long as there is at least one <cfqueryparam>. In the past I have added a
WHERE 1 = <cfqueryparam value="1">
to queries which didn't have any dynamic parameters, in order to get them run as prepared statements.
Most DBs handle prepared statements similarly to stored procedures, just held temporarily rather than long-term; however, the details are likely to be DB-specific.
Assuming you are using the drivers supplied with ColdFusion, if you turn on the 'Log Activity' checkbox in the advanced panel of the DataSource setup, then you'll get very detailed information about how CF is interacting with the DB, and when it is creating a new prepared statement versus re-using one. I'd recommend trying this out for yourself, as so many factors are involved (DB setup, driver, CF version, etc.). If you do use the DB logging, restart CF before running your test code, so you can see it creating the prepared statements; otherwise you'll just see it re-using statements by ID, without seeing what those statements are.
In addition, if you are asking about execution plans, then there is more involved than just the number of prepared statements generated. It is a huge topic and very database dependent. I do not have a DBA's grasp on it, but I can answer a few of the questions about MS SQL.
What happen when the ORDER BY clause or FROM clause is dynamic? Would
every unique combination becomes a prepared statement?
The base SQL is different, so you will end up with a separate execution plan for each unique ORDER BY clause.
And what happen when we're doing cfloop with INSERT, with every value
cfqueryparam'ed, and invoke the cfquery with different number of
iterations?
MS SQL should reuse the same plan for all iterations because only the parameters change.
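This is visible on the SQL Server side: a driver-level prepared statement typically arrives as a parameterized sp_executesql call, so every loop iteration shares one statement text. A sketch of what the server sees (table and column names are hypothetical):

```sql
-- Each cfloop iteration sends the same parameterized statement text,
-- so SQL Server compiles one plan and re-uses it; only the values change.
EXEC sp_executesql
    N'INSERT INTO dbo.Orders (CustomerId, Amount) VALUES (@P1, @P2)',
    N'@P1 int, @P2 money',
    @P1 = 42, @P2 = 19.99;
```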
The sys.dm_exec_cached_plans view is very useful for seeing what plans are cached and how often they are reused.
SELECT p.usecounts, p.cacheobjtype, p.objtype, t.text
FROM sys.dm_exec_cached_plans p
CROSS APPLY sys.dm_exec_sql_text( p.plan_handle) t
ORDER BY p.usecounts DESC
To clear the cache first, use DBCC FLUSHPROCINDB. Obviously, do not use it on a production server.
DECLARE @ID int;
SET @ID = DB_ID(N'YourTestDatabaseName');
DBCC FLUSHPROCINDB( @ID );
