I know there are variations of this question out there, but I cannot find one that answers what I am looking for.
I have inherited a database and reports from another programmer who is no longer in the picture.
One of the queries uses this code:
SELECT
    b.HospitalMasterID
    ,b.TxnSite
    ,b.PatientID
    ,b.TxnDate AS KeptDate
FROM
    Billing AS b
    INNER JOIN Patient AS p
        ON b.HospitalMasterID = p.HospitalMasterID
        AND b.PatientID = p.PatientID
WHERE
    b._IsServOrItem = 1
    AND b.TxnDate >= '20131001'
    AND (CASE
             WHEN b.ExtendedAmount > 0 THEN 1
             WHEN (NOT (p.PlanCode IS NULL)) AND (b.listAmount > 0) THEN 1
         END = 1)
When I run the query I get approximately 900,000 rows returned. If I remove the CASE expression, I get over a million rows returned.
Can someone explain why this is so? What exactly is the CASE expression doing? Is there a better way to accomplish the same thing? I really don't like this statement as it stands, and the entire report query is very difficult to read due to lack of structure.
The SQL version is SQL Server 2012 (T-SQL).
Thanks,
Seems to me like it's doing this:
(b.ExtendedAmount > 0 OR (Not(p.PlanCode is null) and (b.listAmount >0)))
Maybe it was copied and pasted from somewhere else and modified? Regardless, it's bizarre.
I think that's someone trying to avoid using the OR operator in order to promote index seeks over scans. It would be worth looking at the plan, but I would be surprised if it differed significantly over the logic in Greg's answer.
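For reference, here is a sketch of the whole query with the CASE folded into plain OR logic (table and column names are taken from the question). It should behave identically, because the CASE has no ELSE branch, so a row matching neither WHEN yields NULL and fails the = 1 comparison:
SELECT
    b.HospitalMasterID
    ,b.TxnSite
    ,b.PatientID
    ,b.TxnDate AS KeptDate
FROM Billing AS b
INNER JOIN Patient AS p
    ON b.HospitalMasterID = p.HospitalMasterID
    AND b.PatientID = p.PatientID
WHERE b._IsServOrItem = 1
    AND b.TxnDate >= '20131001'
    AND (b.ExtendedAmount > 0
         OR (p.PlanCode IS NOT NULL AND b.listAmount > 0))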
The Pact documentation clearly states how to select rows on a single condition in the where clause, but it is not so clear on how to select on multiple conditions, which seems much more general and important for real-world use cases than a single-condition example.
Pact-lang select row function link
For instance, I was trying to select a set of dice throws by room name and current round:
(select 'throws (where (and ('room "somename") ('round 2)))
But this didn't resolve, and the error was not very clear. How do I select across multiple conditions in the select function?
The first thing we tried was to simply select via a single clause which returns a list:
(select 'throws (where (and ('room "somename")))
A: [object{throw-schema},object{throw-schema}]
And then we applied the list operator "filter" to the result:
(filter (= 'with-read-function' 1) (select 'throws (where (and ('room "somename"))))
Keep in mind that we had a further function that read the round and returned the round number, and we filtered for equality on the round value.
This ended up working but it felt very janky.
The second thing we tried was to play around with the and syntax, and we eventually found a nice way to express it, although it is not quite as intuitive as we would have liked. It just took a little elbow grease.
The general syntax is:
(select 'throws (and? (where condition1...) (where condition2...)))
In this case the and clause is lazy, hence the ? operator. We didn't think that we would have to declare where twice, but it's much cleaner than the filter method we first tried.
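For the dice example above, that pattern would look something like this (a sketch only, assuming a table named throws with room and round columns):
(select throws
  (and? (where 'room (= "somename"))
        (where 'round (= 2))))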
The third thing we tried, at the direction of the Kadena team, was a function we had not yet come across: fold-db.
(let* ((qry (lambda (k obj) true)) ;; select all rows
       (f (lambda (x) [(at 'firstName x), (at 'b x)])))
  (fold-db people (qry) (f)))
This is actually the most correct answer, but it was not obvious from an initial scan of the docs, and it would be nearly inscrutable for a new user with no Pact experience to put together.
We suggest adding a simple sentence to the documentation:
"For multiple conditions, use the fold-db function."
This tripped us up because we are so used to SQL syntax that we didn't imagine a nice function like this was lying around, and we got stuck in our ways trying to figure out conditional logic.
I've got a very complex query; here is a simple example of one of the sub-tables I'm having problems with. If you need more information or context, please let me know.
I've posted a CSV file with some sample data here:
https://drive.google.com/open?id=0B4xdnV0LFZI1dzE5S29QSFhQSmM
We make cakes, and 99% of our cakes are made by us. The other 1% are cakes delivered to us by a subcontractor, which we 'Receive' and 'Audit'.
What I wanted to do was to write something like this:
SELECT
    Cake.Cake,
    Instruction.Cake_Instruction_Key,
    Steps
FROM
    Cake
    JOIN Instruction
        ON Cake.Cake_Key = Instruction.Cake_Key
    JOIN Steps
        ON Instruction.Step_Key = Steps.Step_Key
WHERE
    MIN(Steps.Step_Key) = 1
This fails because you can't have an aggregate in the WHERE clause.
The desired results would be:
Cake C 13 Receive
Cake C 14 Audit
Cake D 15 Receive
Cake D 16 Audit
Thank you in advance for your help!
Take a look at the HAVING keyword:
https://msdn.microsoft.com/en-us/library/ms180199.aspx
It works more or less the same as the WHERE clause, but it applies to aggregate functions after the GROUP BY clause.
Beware, however, that this can be slow. You should try to filter down the number of records as much as possible in the WHERE clause, and even consider aggregating the data into a temporary table first.
What you're talking about is the GROUP BY/HAVING clause, so in your case you would need to add something like:
GROUP BY Cake.Cake, Instruction.Cake_Instruction_Key, Steps
HAVING MIN(Steps.Step_Key) = 1
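Depending on the exact schema, one way to get the rows in your desired output (every instruction row for cakes whose first step is Step_Key 1) is to put the GROUP BY/HAVING in a subquery. This is only a sketch; the table and column names come from your query, and the step-name column Steps.Steps is an assumption:
SELECT c.Cake, i.Cake_Instruction_Key, s.Steps
FROM Cake AS c
JOIN Instruction AS i ON c.Cake_Key = i.Cake_Key
JOIN Steps AS s ON i.Step_Key = s.Step_Key
WHERE c.Cake_Key IN (
    SELECT i2.Cake_Key
    FROM Instruction AS i2
    GROUP BY i2.Cake_Key
    HAVING MIN(i2.Step_Key) = 1
)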
I have this very old and SLOW query that I am trying to optimize, but I am not sure I can do anything to it other than add more indexes on the columns involved in the WHERE, JOIN and ORDER BY clauses.
Query:
SELECT TOP 400
    jobticket.jobnumber, jobticket.typeform, jobticket.filename, jobticket.req_number,
    jobticket.reqd_del_date, jobticket.point_of_contact, jobticket.status, jobticket.DapsDate,
    jobticket.elpod, job_info.IDOrderMaskedStatus, job_info.job_status, job_info.SalesID,
    job_info.location, job_info.TOMetadataID
FROM jobticket WITH (NOLOCK)
INNER JOIN job_info WITH (NOLOCK) ON job_info.jobnumber = jobticket.jobnumber
WHERE
(
NOT(
(jobticket.status = 'Complete' OR jobticket.status = 'Completed')
and (job_info.job_status = 'Actualized' OR job_info.job_status = ''
OR job_info.job_status = 'Actualized Credit Billed'
OR job_info.job_status = 'DWAS Actualized' OR job_info.job_status = 'DWAS Actualized Credit Billed'
)
)
or
((SELECT COUNT(job_status) AS Expr1 FROM tblConsolidatedBilling AS tblConsolidatedBilling_1 WITH (NOLOCK)
WHERE (job_status <> 'Actualized'
AND job_status <> 'Actualized Credit Billed')
AND (master_jobnumber = jobticket.jobnumber)) > 0)
)
and (jobticket.status != 'Waiting Approval' or (jobticket.status = 'Waiting Approval' and jobticket.DPGType is null))
and jobticket.typeform <> 'todpg'
and ((job_info.isHidden <> 1 or job_info.isHidden is null) and job_info.isInConcurrentRelease is null)
and job_info.deleted != '1'
and jobticket.status != 'New Job'
and jobticket.status != 'PRFYCLSFD'
ORDER BY
job_info.expediencyLevel DESC,
jobticket.jobnumber DESC
Execution Plan:
In all honesty I don't know what to do with this query.
Should I add individual nonclustered indexes on all the columns involved in the WHERE, JOIN and ORDER BY clauses?
There are many indexes on these tables already, but I am not sure whether they help this query:
Looking at this SQL, I don't really see any clear criteria being used to fetch the rows; it mostly just excludes rows based on many different conditions. My guess is that most tickets end up in one of the excluded states, and only the remaining rows show up in the results?
The problem is that there is no single clear criterion, just a lot of different rules, so the query ends up doing a clustered index scan plus key lookups for all the rows. The scan starts from job_info, but I'm not sure whether it would make any difference if it started from jobticket.
Removing most of the indexes is probably a good place to start, but it won't speed up this SELECT at all.
The query looks quite complex, so my guess is that you can't create an indexed view that would contain this data. That might help if this query is executed often and the data doesn't change much (and it would remove the overhead of maintaining a huge number of indexes), but it might not be possible.
Another idea would be to investigate the rules under which rows can be excluded, and whether clearer rules are possible so that they could be indexed, perhaps by adding a persisted computed column to the table.
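As a sketch of that idea (the column name IsOpenTicket and the exact predicate are hypothetical, chosen only to illustrate turning part of the exclusion logic into an indexable flag):
ALTER TABLE jobticket ADD IsOpenTicket AS
    CASE WHEN status NOT IN ('Complete', 'Completed', 'New Job', 'PRFYCLSFD')
         THEN 1 ELSE 0 END PERSISTED

CREATE NONCLUSTERED INDEX IX_jobticket_IsOpenTicket
    ON jobticket (IsOpenTicket, jobnumber)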
You haven't mentioned how long this actually takes or how many rows there are in the tables, so everything here is basically a guess. Including more data and SET STATISTICS IO output in the question might help.
PS: I don't personally recommend using NOLOCK except in really special cases, because it can cause problems that are really hard to troubleshoot, like reading the same data more than once or skipping rows entirely.
A simple fix would be to make the indexes on job_info and tblConsolidatedBilling covering, because a ton of time is spent in key lookups there. That should give a several-fold speedup. If that's not enough, we need to investigate further.
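For example, something along these lines (only a sketch; the index names are made up and the INCLUDE lists are guesses based on the columns the query references):
CREATE NONCLUSTERED INDEX IX_job_info_jobnumber_covering
    ON job_info (jobnumber)
    INCLUDE (IDOrderMaskedStatus, job_status, SalesID, location, TOMetadataID,
             isHidden, isInConcurrentRelease, deleted, expediencyLevel)

CREATE NONCLUSTERED INDEX IX_tblConsolidatedBilling_master_jobnumber
    ON tblConsolidatedBilling (master_jobnumber, job_status)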
My data model has an entity Person with three related (1:N) entities: Jobs, Tasks and Dates.
My query looks like
var persons = (from x in context.Persons
select new {
PersonId = x.Id,
JobNames = x.Jobs.Select(y => y.Name),
TaskDates = x.Tasks.Select(y => y.Date),
DateInfos = x.Dates.Select(y => y.Info)
}).ToList();
Everything seems to work fine, but the lists JobNames, TaskDates and DateInfos are not all filled.
For example, TaskDates and DateInfos have the correct values, but JobNames stays empty. But when I remove TaskDates from the query, then JobNames is correctly filled.
So it seems that EF can only handle a limited number of these "subqueries"? Is this correct? If so, what is the maximum number of such "subqueries" for a single statement? Is there a way to work around this issue without making more than one call to the database?
(PS: I'm not entirely sure, but I seem to remember that this query worked in LINQ to SQL - could that be?)
UPDATE
I'm going crazy over this. I tried to reproduce the issue from the ground up in a fresh, simple project (so I could post the entire piece of code here, not just an oversimplified example), and I found I wasn't able to reproduce it. It still happens within our existing code base (apparently there's more behind this problem, but unfortunately I cannot share that closed code base).
After hours and hours of playing around I found the weirdest behavior:
It works fine when I don't run SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED; before calling the LINQ statement.
It also works fine (independent of the above) when I don't use .Take() to fetch only the first X rows.
It also works fine when I add additional .Where() clauses that cut down the number of rows returned from SQL Server.
I haven't found any comprehensible reason for this behavior, but I started looking at the SQL: although EF generates the exact same SQL, the execution plan is different when I use READ UNCOMMITTED. It returns more rows from a specific index in the middle of the execution plan, which curiously ends in fewer rows returned for the entire SQL statement - which in turn results in the missing data that prompted my question in the first place.
This sounds very confusing and unbelievable, I know, but this is the behavior I see. I don't know what else to do; I don't even know what to google for at this point ;-).
I can fix my problem (just don't use READ UNCOMMITTED), but I have no idea why it occurs and whether it is a bug or something I don't know about SQL Server. Maybe there's some "magic maximum number of allowed results in sub-queries" in SQL Server? At least, as far as I can see, it's not an issue with EF itself.
A little late, but does calling ToList() on each subquery produce the required effect?
var persons = (from x in context.Persons
select new {
PersonId = x.Id,
JobNames = x.Jobs.Select(y => y.Name).ToList(),
TaskDates = x.Tasks.Select(y => y.Date).ToList(),
DateInfos = x.Dates.Select(y => y.Info).ToList()
}).ToList();
I'm using SQL Server 2008. I have a table with over 3 million records, which is related to another table with a million records.
I have spent a few days experimenting with different ways of querying these tables. I have it down to two radically different queries, both of which take 6s to execute on my laptop.
The first query uses a brute force method of evaluating possibly likely matches, and removes incorrect matches via aggregate summation calculations.
The second gets all possibly likely matches, then removes incorrect matches via an EXCEPT query that uses two dedicated indexes to find the low and high mismatches.
Logically, one would expect the brute-force approach to be slow and the index-based one to be fast. Not so, and I have experimented heavily with indexes until I got the best speed out of each.
Further, the brute-force query doesn't require as many indexes, which means that technically it would yield better overall system performance.
Below are the two execution plans. If you can't see them, please let me know and I'll re-post them in landscape orientation or mail them to you.
Brute-force query:
SELECT ProductID, [Rank]
FROM (
SELECT p.ProductID, ptr.[Rank], SUM(CASE
WHEN p.ParamLo < si.LowMin OR
p.ParamHi > si.HiMax THEN 1
ELSE 0
END) AS Fail
FROM dbo.SearchItemsGet(@SearchID, NULL) AS si
JOIN dbo.ProductDefs AS pd
ON pd.ParamTypeID = si.ParamTypeID
JOIN dbo.Params AS p
ON p.ProductDefID = pd.ProductDefID
JOIN dbo.ProductTypesResultsGet(@SearchID) AS ptr
ON ptr.ProductTypeID = pd.ProductTypeID
WHERE si.Mode IN (1, 2)
GROUP BY p.ProductID, ptr.[Rank]
) AS t
WHERE t.Fail = 0
Index-based exception query:
with si AS (
SELECT DISTINCT pd.ProductDefID, si.LowMin, si.HiMax
FROM dbo.SearchItemsGet(@SearchID, NULL) AS si
JOIN dbo.ProductDefs AS pd
ON pd.ParamTypeID = si.ParamTypeID
JOIN dbo.ProductTypesResultsGet(@SearchID) AS ptr
ON ptr.ProductTypeID = pd.ProductTypeID
WHERE si.Mode IN (1, 2)
)
SELECT p.ProductID
FROM dbo.Params AS p
JOIN si
ON si.ProductDefID = p.ProductDefID
EXCEPT
SELECT p.ProductID
FROM dbo.Params AS p
JOIN si
ON si.ProductDefID = p.ProductDefID
WHERE p.ParamLo < si.LowMin OR p.ParamHi > si.HiMax
My question is: based on the execution plans, which one looks more efficient? I realize that things may change as my data grows.
EDIT:
I have updated the indexes, and now have the following execution plan for the second query:
Trust the optimizer.
Write the query that most simply expresses what you're trying to achieve. If you're having performance problems with that query, then you should look at whether there are any missing indexes. But you still shouldn't have to work with those indexes explicitly.
Don't concern yourself with considerations of how you might implement such a search.
In very rare circumstances you may need to force the query to use particular indexes (via hints), but this is probably < 0.1% of queries.
In your posted plans, your "optimized" version is causing scans against two indexes of your (I presume) Params table (PK_Params_1, IX_Params_1). Without seeing the queries, it's difficult to know why this is happening, but if you're comparing a single scan against a table ("brute force") with two scans, it's easy to see why the second isn't more efficient.
I think I'd try:
SELECT p.ProductID, ptr.[Rank]
FROM dbo.SearchItemsGet(@SearchID, NULL) AS si
JOIN dbo.ProductDefs AS pd
ON pd.ParamTypeID = si.ParamTypeID
JOIN dbo.Params AS p
ON p.ProductDefID = pd.ProductDefID
JOIN dbo.ProductTypesResultsGet(@SearchID) AS ptr
ON ptr.ProductTypeID = pd.ProductTypeID
LEFT JOIN Params p_anti
on p_anti.ProductDefId = pd.ProductDefID and
(p_anti.ParamLo < si.LowMin or p_anti.ParamHi > si.HiMax)
WHERE si.Mode IN (1, 2)
AND p_anti.ProductID is null
GROUP BY p.ProductID, ptr.[Rank]
I.e. introduce an anti-join that eliminates the results you don't want.
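For comparison, here is the same anti-join expressed with NOT EXISTS, in case it reads more clearly or yields a different plan (only a sketch, using the tables and functions from the question):
SELECT p.ProductID, ptr.[Rank]
FROM dbo.SearchItemsGet(@SearchID, NULL) AS si
JOIN dbo.ProductDefs AS pd
  ON pd.ParamTypeID = si.ParamTypeID
JOIN dbo.Params AS p
  ON p.ProductDefID = pd.ProductDefID
JOIN dbo.ProductTypesResultsGet(@SearchID) AS ptr
  ON ptr.ProductTypeID = pd.ProductTypeID
WHERE si.Mode IN (1, 2)
  AND NOT EXISTS (SELECT 1
                  FROM dbo.Params AS p_anti
                  WHERE p_anti.ProductDefID = pd.ProductDefID
                    AND (p_anti.ParamLo < si.LowMin OR p_anti.ParamHi > si.HiMax))
GROUP BY p.ProductID, ptr.[Rank]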
In SQL Server Management Studio, put both queries in the same query window and get the query plan for both at once. It should determine the query plans for both and give you a 'percent of total batch' for each one. The query with the lower percent of the total batch will be the better performing one.
Does 6 seconds on a laptop equal 0.006 seconds on production hardware? The part of your queries that worries me is the clustered index scans shown in the query plan. In my experience, any time a query plan includes a CI scan, it means the query will only get slower as data is added to the table.
What do the two functions yield, as it appears they are the cause of the table scans? Is it possible to persist the data in the database and update LowMin and HiMax as rows are added?
Looking at the two execution plans, neither is very good. Look how far to the left the wide lines are; wide lines mean many rows. We need to reduce the number of rows earlier in the process so we don't end up working with such large hash tables, large sorts and nested loops.
BTW, how many rows does your source have, and how many rows are included in the result set?
Thank you all for your input and help.
From reading what you wrote, experimenting, and digging into the execution plans, I discovered that the answer is the tipping point.
There were too many records being returned to warrant use of the index.
See here (Kimberly Tripp).