Improving SQL query efficiency - sql-server

I have a query which calculates a distance. It works as expected, however it can take a long time to run with lots of data points.
I have tried adding DISTRIBUTION = HASH and it has had some impact, but not a significant one.
CREATE TABLE #POI_PINGS_ALL
WITH
(
    DISTRIBUTION = HASH([UUID])
)
AS
SELECT
    l.[PoiId]
    , v.[UUID]
    , [CreatedOn]
    , v.[Lat]
    , v.[Lon]
FROM #LOCATION_PINGS_COUNCIL_DISTRIBUTED v
INNER JOIN #POI_LOOKUP l
    ON l.[CouncilId] = v.[CouncilId]
INNER JOIN #POI p
    ON p.[PoiId] = l.[PoiId]
WHERE
    dbo.fn_GetDist(v.[Lat], v.[Lon], p.[Lat], p.[Lon]) <= p.[Radius]
Is there a more efficient way to write this query?

Related

Optimise query with count and order by function

I have a problem with the optimization of this query. I have three tables (Products = Catalogo.GTIN, Sales Header = TEDEF.Factura, and Sales Detail = TEDEF.Farmacia).
The query tries to find the mode of the column VPRODEXENIGV_FAR. Without the ORDER BY, the query executes in less than 3 seconds (the detail table has about 30 million rows).
But when I add the ORDER BY clause, the query takes more than 30 minutes to run.
I want to know how I can optimize this query, or which indexes I need to do so.
SELECT *
FROM Catalogo.GTIN G
CROSS APPLY
(SELECT TOP 1
COUNT(FAR.VPRODEXENIGV_FAR) [ROW],
YEAR(FAC2.VFECEMI_FAC) [AÑO],
MONTH(FAC2.VFECEMI_FAC) [MES],
FAR.VCODPROD_FAR_003,
CASE WHEN FAR.VPRODEXENIGV_FAR = 'A' THEN 1 ELSE 0 END AfectoIGV
FROM
TEDEF.Factura FAC2
INNER JOIN
TEDEF.Farmacia FAR ON FAC2.VTDOCPAGO_FAC = FAR.VTDOCPAGO_FAC
AND FAC2.VNDOCPAGO_FAC = FAR.VNDOCPAGO_FAC
WHERE
G.CODIGO = FAR.VCODPROD_FAR_003
GROUP BY
YEAR(FAC2.VFECEMI_FAC),
MONTH(FAC2.VFECEMI_FAC),
FAR.VCODPROD_FAR_003,
FAR.VPRODEXENIGV_FAR
ORDER BY
1 DESC --- <----- THE PROBLEM IS HERE
) GG
Ouch! You have a hugely expensive dependent subquery. It's expensive because SELECT TOP(n) ... ORDER BY col DESC does a whole lot of work to create a result set only to discard all but one row. And, it's a dependent subquery, so it's run for every row of Catalogo.GTIN.
It looks like you want to count the resultset rows in the most recent month and year for each Catalogo.GTIN row. So, let's try to refactor your query to do that.
We'll start with a subquery to grab the month-start date of the latest Factura row for each catalog entry.
SELECT CODIGO,
DATEFROMPARTS(YEAR(maxd), MONTH(maxd),1) maxmes
FROM (
SELECT MAX(FAC2.VFECEMI_FAC) maxd,
G.CODIGO
FROM Catalogo.GTIN G
JOIN TEDEF.Farmacia FAR
ON G.CODIGO = FAR.VCODPROD_FAR_003
JOIN TEDEF.Factura FAC2
ON FAC2.VTDOCPAGO_FAC = FAR.VTDOCPAGO_FAC
AND FAC2.VNDOCPAGO_FAC = FAR.VNDOCPAGO_FAC
GROUP BY G.CODIGO
) maxd
It's wise to test this and make sure it works correctly and performs tolerably well. If you test it in SSMS, you can use "Show Actual Execution Plan" and see if it recommends an extra index. This subquery need only be run once, rather than once per G.CODIGO row.
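If you prefer to ask the engine rather than eyeball the plan, the missing-index DMVs record the same kind of suggestions. Here is a rough sketch against the standard DMVs; treat the output as hints, not orders:
-- Missing-index suggestions SQL Server has recorded since the last restart.
-- These ignore maintenance cost, so they are hints rather than recommendations to follow blindly.
SELECT d.[statement] AS table_name,
       d.equality_columns,
       d.inequality_columns,
       d.included_columns,
       s.user_seeks,
       s.avg_user_impact
FROM sys.dm_db_missing_index_details AS d
JOIN sys.dm_db_missing_index_groups AS g
    ON g.index_handle = d.index_handle
JOIN sys.dm_db_missing_index_group_stats AS s
    ON s.group_handle = g.index_group_handle
ORDER BY s.user_seeks * s.avg_user_impact DESC;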
Then we'll use it in your larger query.
SELECT G.CODIGO,
COUNT(FAR.VPRODEXENIGV_FAR) [ROW],
YEAR(maxmes.maxmes) [AÑO],
MONTH(maxmes.maxmes) [MES],
FAR.VCODPROD_FAR_003,
CASE WHEN FAR.VPRODEXENIGV_FAR = 'A' THEN 1 ELSE 0 END AfectoIGV
FROM Catalogo.GTIN G
JOIN (
SELECT CODIGO,
DATEFROMPARTS(YEAR(maxd), MONTH(maxd),1) maxmes
FROM (
SELECT MAX(FAC2.VFECEMI_FAC) maxd,
G.CODIGO
FROM Catalogo.GTIN G
JOIN TEDEF.Farmacia FAR
ON G.CODIGO = FAR.VCODPROD_FAR_003
JOIN TEDEF.Factura FAC2
ON FAC2.VTDOCPAGO_FAC = FAR.VTDOCPAGO_FAC
AND FAC2.VNDOCPAGO_FAC = FAR.VNDOCPAGO_FAC
GROUP BY G.CODIGO
) maxd
) maxmes ON G.CODIGO = maxmes.CODIGO
JOIN TEDEF.Farmacia FAR
ON G.CODIGO = FAR.VCODPROD_FAR_003
JOIN TEDEF.Factura FAC2
ON FAC2.VTDOCPAGO_FAC = FAR.VTDOCPAGO_FAC
AND FAC2.VNDOCPAGO_FAC = FAR.VNDOCPAGO_FAC
AND FAC2.VFECEMI_FAC >= maxmes.maxmes
GROUP BY maxmes.maxmes,
G.CODIGO,
FAR.VCODPROD_FAR_003,
FAR.VPRODEXENIGV_FAR
Here is the tricky bit:
DATEFROMPARTS(YEAR(maxd), MONTH(maxd),1) maxmes turns any date maxd into the first day of that month.
And, FAC2.VFECEMI_FAC >= maxmes.maxmes filters out rows before the first day of that month (for that CODIGO). It does so in a sargable way: a way that can exploit an index on FAC2.VFECEMI_FAC.
That is an alternative way to do TOP(1) ORDER BY d DESC. And faster.
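For that seek to be possible, an index along these lines would have to exist. The name and INCLUDE list below are just a sketch, and the best key order depends on the plan you actually get:
-- Hypothetical supporting index: lets the >= filter seek on VFECEMI_FAC and
-- carries the join keys so the seek does not need key lookups.
CREATE NONCLUSTERED INDEX IX_Factura_VFECEMI_FAC
ON TEDEF.Factura (VFECEMI_FAC)
INCLUDE (VTDOCPAGO_FAC, VNDOCPAGO_FAC);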
It's all about sets of rows. Especially when using GROUP BY, it's performance-helpful to limit the number of rows in each set.
Obviously I cannot debug this.
It's me again. I finally resolved the optimization problem: the query now takes about 20 seconds (with the sort and the count, on a table of over 30 million rows). I hope this helps others, or that the community can optimize it further.
I resolved the problem by keeping the sort but doing it with ROW_NUMBER; that way the server uses my index for the sort and works its magic:
WITH x
AS
(
SELECT *, ROW_NUMBER() OVER (PARTITION BY GG.COD, GG.[AÑO], GG.[MES] ORDER BY GG.[ROW] DESC) [ID]
FROM Catalogo.GTIN G
CROSS APPLY
(
SELECT COUNT(FAR.VPRODEXENIGV_FAR) [ROW]
, YEAR(FAC2.VFECEMI_FAC) [AÑO]
, MONTH(FAC2.VFECEMI_FAC) [MES]
, FAR.VCODPROD_FAR_003 [COD]
, CASE WHEN FAR.VPRODEXENIGV_FAR = 'A' THEN 1 ELSE 0 END AfectoIGV
FROM TEDEF.Factura FAC2
INNER JOIN TEDEF.Farmacia FAR
ON FAC2.VTDOCPAGO_FAC = FAR.VTDOCPAGO_FAC
AND FAC2.VNDOCPAGO_FAC = FAR.VNDOCPAGO_FAC
WHERE G.CODIGO = FAR.VCODPROD_FAR_003
GROUP BY YEAR(FAC2.VFECEMI_FAC)
, MONTH(FAC2.VFECEMI_FAC)
, FAR.VCODPROD_FAR_003
, FAR.VPRODEXENIGV_FAR
-- ORDER BY 1 DESC --- <---- this is the bad guy, please, don't do that xD
) GG
)
SELECT *
FROM x
WHERE ID = 1
That way I can sort on the count and calculate the mode of the FAR.VPRODEXENIGV_FAR column.

Nested Loops performance issue on a very simple query

I have a very simple table, a very simple INNER JOIN query, and a huge number of rows.
IF OBJECT_ID('tempdb..#blackIPAndMACs') IS NOT NULL
DROP TABLE #blackIPAndMACs
CREATE TABLE #blackIPAndMACs
(
ResourceID dsidentifier,
MACAddress VARCHAR(500),
IPAddress VARCHAR(50)
)
CREATE INDEX #blackIPAndMACs_idx1 ON #blackIPAndMACs(MACAddress)
CREATE INDEX #blackIPAndMACs_idx2 ON #blackIPAndMACs(IPAddress)
CREATE INDEX #blackIPAndMACs_idx3 ON #blackIPAndMACs(MACAddress, IPAddress)
CREATE INDEX #blackIPAndMACs_idx4 ON #blackIPAndMACs(ResourceID)
After this table has been filled with 2,514,000 rows, I am trying to find all ResourceIDs that accessed from the same IP or MAC:
SELECT b1.*,
b2.*
FROM #blackIPAndMACs b1 with(NOLOCK, INDEX=#blackIPAndMACs_idx3)
INNER JOIN #blackIPAndMACs b2 with(NOLOCK, INDEX=#blackIPAndMACs_idx3)
ON (
b1.MACAddress = b2.MACAddress
OR b1.IPAddress = b2.IPAddress
)
WHERE 1 = 1
As a result, this query runs (possibly) forever. Our server is really powerful; I don't think I can disclose the details, but I can say that it has many hundreds of GB of RAM.
What kind of optimization should I use to speed up the query execution?
Update 1:
OK, I removed the OR and changed the SELECT to count(b1.ResourceID), but it didn't solve the issue. Even this simple query takes too long:
SELECT count(b1.ResourceID)
FROM #blackIPAndMACs b1 with(NOLOCK)
INNER JOIN #blackIPAndMACs b2 with(NOLOCK)
ON b1.MACAddress = b2.MACAddress
WHERE 1 = 1
AND b1.ResourceID != b2.ResourceID
As a force of habit I would refrain from using SELECT *, even if you need a whole bunch of fields in the result. Having said that, my approach would be something like this:
SELECT
b1.ResourceID
, b2.MACAddress
, b2.IPAddress
, b3.MACAddress
, b3.IPAddress
FROM #blackIPAndMACs AS b1
LEFT JOIN #blackIPAndMACs AS b2 ON b1.MACAddress = b2.MACAddress
LEFT JOIN #blackIPAndMACs AS b3 ON b1.IPAddress = b3.IPAddress;
Which produces a much more efficient query plan.
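If you need a single result set rather than separate MAC and IP columns, another sketch is to split the OR into two equality joins and combine them with UNION, so each branch can use a hash join:
-- Each branch is a plain equality join (hash-join friendly); UNION removes
-- duplicate pairs that match on both MAC and IP.
SELECT b1.ResourceID, b2.ResourceID AS MatchedResourceID
FROM #blackIPAndMACs AS b1
JOIN #blackIPAndMACs AS b2
    ON b1.MACAddress = b2.MACAddress
WHERE b1.ResourceID != b2.ResourceID
UNION
SELECT b1.ResourceID, b2.ResourceID
FROM #blackIPAndMACs AS b1
JOIN #blackIPAndMACs AS b2
    ON b1.IPAddress = b2.IPAddress
WHERE b1.ResourceID != b2.ResourceID;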

RIGHT\LEFT Join does not provide null values without condition

I have two tables: one is the lookup table and the other is the data table. The lookup table has columns named cycleid and cycle. The data table has SID, cycleid, and cycle. Below is the structure of the tables.
If you check the data table, a SID may have all the cycles or may not have all the cycles. I want to output each SID's completed as well as missed cycles.
I right joined the lookup table and retrieved the missing as well as the completed cycles. Below is the query I used.
SELECT TOP 1000 [SID]
,s4.[CYCLE]
,s4.[CYCLEID]
FROM [dbo].[data] s3 RIGHT JOIN
[dbo].[lookup_data] s4 ON s3.CYCLEID = s4.CYCLEID
The query is not displaying the missed values when I query for all the SIDs. When I specifically query for a SID with the query below, I get the correct result, including the missed ones.
SELECT TOP 1000 [SID]
,s4.[CYCLE]
,s4.[CYCLEID]
FROM [dbo].[data] s3 RIGHT JOIN [dbo].[lookup_data] s4
ON s3.CYCLEID = s4.CYCLEID
AND s3.SID = 101002
ORDER BY [SID], s4.[CYCLEID]
As I am supplying this query to Tableau, I cannot provide the SID value in the query. I want to return all the SIDs, and I will do the rest from Tableau.
The expected output that I need is shown below.
I wrote a cross join query, shown below, to achieve my expected output:
SELECT DISTINCT
tab.CYCLEID
,tab.SID
,d.CYCLE
FROM ( SELECT d.SID
,d.[CYCLE]
,e.CYCLEID
FROM ( SELECT e.sid
,e.CYCLE
FROM [db_temp].[dbo].[Sheet3$] e
) d
CROSS JOIN [db_temp].[dbo].[Sheet4$] e
) tab
LEFT OUTER JOIN [db_temp].[dbo].[Sheet3$] d
ON d.CYCLEID = tab.CYCLEID
AND d.SID = tab.SID
ORDER BY tab.SID
,tab.CYCLEID;
However, I am not able to use this query for more scenarios, as my data set has nearly 20 to 40 columns and I run into issues when I use the approach above.
Is there any way to do this more simply, with only a left or right join? I want the query to return all the missing values and the completed values for all the SIDs, instead of supplying a single SID in the query.
You can create a master table first (combining all SID and CYCLEID values), then outer join the data table to it:
;with ctxMaster as (
select distinct d.SID, l.CYCLE, l.CYCLEID
from lookup_data l
cross join data d
)
select d.SID, m.CYCLE, m.CYCLEID
from ctxMaster m
left join data d on m.SID = d.SID and m.CYCLEID = d.CYCLEID
order by m.SID, m.CYCLEID
Fiddle
Or, if you don't want to use a common table expression, here is the subquery version:
select d.SID, m.CYCLE, m.CYCLEID
from (select distinct d.SID, l.CYCLE, l.CYCLEID
from lookup_data l
cross join data d) m
left join data d on m.SID = d.SID and m.CYCLEID = d.CYCLEID
order by m.SID, m.CYCLEID

SQL query distinct count using inner join

Need help ensuring the below query doesn't return inaccurate results.
select @billed = count(a.[counter]) from [dbo].cxitems a with (nolock)
inner join [dbo].cxitemhist b with (nolock) on a.[counter] = b.cxlink
where b.[eventtype] in ('BILLED','REBILLED')
and b.[datetime] between @begdate and @enddate
The query is "mostly" accurate as is; however, there is a slight possibility that the cxitemhist table could contain more than one "billed" record for a given date range. I only need to count an item as "Billed" once during the given date range.
You can join on a subquery that limits you to one row for each combination of fields used for the join:
select @billed = count(a.[counter])
from [dbo].cxitems a
inner join (
select distinct cxlink
from [dbo].cxitemhist
where [eventtype] in ('BILLED','REBILLED')
and [datetime] between @begdate and @enddate
) b on a.[counter] = b.cxlink
You can also use the APPLY operator instead of a join here, but you'll have to check against your data to see which gives better performance.
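For reference, the APPLY version might look roughly like this (a sketch only, using the same names and variables as the question):
-- Sketch: probe cxitemhist once per cxitems row; TOP (1) means each item
-- is counted at most once even if several billing events fall in the range.
select @billed = count(a.[counter])
from [dbo].cxitems a
cross apply (
    select top (1) h.cxlink
    from [dbo].cxitemhist h
    where h.cxlink = a.[counter]
    and h.[eventtype] in ('BILLED','REBILLED')
    and h.[datetime] between @begdate and @enddate
) b;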
If you only need to count records from the cxitems table that have any corresponding records in the cxitemhist table, you can use the EXISTS clause with a subquery.
select @billed = count(a.[counter]) from [dbo].cxitems a
where exists(select * from [dbo].cxitemhist b
where a.[counter] = b.cxlink
and b.[eventtype] in ('BILLED','REBILLED')
and b.[datetime] between @begdate and @enddate)
I cannot really say how this will affect performance without specific data, but it should be comparably fast to your code.

SQL Server 2008 Stored Procedure Performance issue

Hi, I have a stored procedure:
ALTER PROCEDURE [dbo].[usp_EP_GetTherapeuticalALternates]
(
@NDCNumber CHAR(11) ,
@patientid INT ,
@pbmid INT
)
AS
BEGIN
TRUNCATE TABLE TempTherapeuticAlt
INSERT INTO TempTherapeuticAlt
SELECT --PR.ProductID AS MedicationID ,
NULL AS MedicationID ,
PR.ePrescribingName AS MedicationName ,
U.Strength AS MedicationStrength ,
FRM.FormName AS MedicationForm ,
PR.DEAClassificationID AS DEASchedule ,
NULL AS NDCNumber
--INTO #myTemp
FROM DatabaseTwo.dbo.Product PR
JOIN ( SELECT MP.MarketedProductID
FROM DatabaseTwo.dbo.Therapeutic_Concept_Tree_Specific_Product TCTSP
JOIN DatabaseTwo.dbo.Marketed_Product MP ON MP.SpecificProductID = TCTSP.SpecificProductID
JOIN ( SELECT TCTSP.TherapeuticConceptTreeID
FROM DatabaseTwo.dbo.Marketed_Product MP
JOIN DatabaseTwo.dbo.Therapeutic_Concept_Tree_Specific_Product TCTSP ON MP.SpecificProductID = TCTSP.SpecificProductID
JOIN ( SELECT
PR.MarketedProductID
FROM
DatabaseTwo.dbo.Package PA
JOIN DatabaseTwo.dbo.Product PR ON PA.ProductID = PR.ProductID
WHERE
PA.NDC11 = @NDCNumber
) PAPA ON MP.MarketedProductID = PAPA.MarketedProductID
) xxx ON TCTSP.TherapeuticConceptTreeID = xxx.TherapeuticConceptTreeID
) MPI ON PR.MarketedProductID = MPI.MarketedProductID
JOIN ( SELECT P.ProductID ,
O.Strength ,
O.Unit
FROM DatabaseTwo.dbo.Product AS P
INNER JOIN DatabaseTwo.dbo.Marketed_Product
AS M ON P.MarketedProductID = M.MarketedProductID
INNER JOIN DatabaseTwo.dbo.Specific_Product
AS S ON M.SpecificProductID = S.SpecificProductID
LEFT OUTER JOIN DatabaseTwo.dbo.OrderableName_Combined
AS O ON S.SpecificProductID = O.SpecificProductID
GROUP BY P.ProductID ,
O.Strength ,
O.Unit
) U ON PR.ProductID = U.ProductID
JOIN ( SELECT PA.ProductID ,
S.ScriptFormID ,
F.Code AS NCPDPScriptFormCode ,
S.FormName
FROM DatabaseTwo.dbo.Package AS PA
INNER JOIN DatabaseTwo.dbo.Script_Form
AS S ON PA.NCPDPScriptFormCode = S.NCPDPScriptFormCode
INNER JOIN DatabaseTwo.dbo.FormCode AS F ON S.FormName = F.FormName
GROUP BY PA.ProductID ,
S.ScriptFormID ,
F.Code ,
S.FormName
) FRM ON PR.ProductID = FRM.ProductID
WHERE
( PR.OffMarketDate IS NULL )
OR ( PR.OffMarketDate = '' )
OR (PR.OffMarketDate = '1899-12-30 00:00:00.000')
OR ( PR.OffMarketDate <> '1899-12-30 00:00:00.000'
AND DATEDIFF(dd, GETDATE(),PR.OffMarketDate) > 0
)
GROUP BY PR.ePrescribingName ,
U.Strength ,
FRM.FormName ,
PR.DEAClassificationID
-- ORDER BY pr.ePrescribingName
SELECT LL.ProductID AS MedicationID ,
temp.MedicationName ,
temp.MedicationStrength ,
temp.MedicationForm ,
temp.DEASchedule ,
temp.NDCNumber ,
fs.[ReturnFormulary] AS FormularyStatus ,
copay.CopaTier ,
copay.FirstCopayTerm ,
copay.FlatCopayAmount ,
copay.PercentageCopay ,
copay.PharmacyType,
dbo.udf_EP_GetBrandGeneric(LL.ProductID) AS BrandGeneric
FROM TempTherapeuticAlt temp
OUTER APPLY ( SELECT TOP 1
ProductID
FROM DatabaseTwo.dbo.Product
WHERE ePrescribingName = temp.MedicationName
) AS LL
OUTER APPLY [dbo].[udf_EP_tbfGetFormularyStatus](@patientid,
LL.ProductID,
@pbmid) AS fs
OUTER APPLY ( SELECT TOP 1
*
FROM udf_EP_CopayDetails(LL.ProductID,
@PBMID,
fs.ReturnFormulary)
) copay
--ORDER BY LL.ProductID
TRUNCATE TABLE TempTherapeuticAlt
END
On my dev server I have about 63k rows in each table, so this procedure takes about 30 seconds to return results.
On my production server it is timing out, or taking more than a minute.
My production tables hold around 1,400 million rows; can this be the reason?
If so, what can be done? I have all the required indexes on the tables.
Any help would be greatly appreciated.
Thanks
Execution Plan
http://www.sendspace.com/file/hk8fao
The major leak is here:
OUTER APPLY [dbo].[udf_EP_tbfGetFormularyStatus](@patientid,
LL.ProductID,
@pbmid) AS fs
Some strategies that may help:
Remove the first ORDER BY statement; sorts like that are killers on complex queries and shouldn't be necessary.
Use CTEs to break the query into smaller pieces that can be individually addressed.
Reduce the nesting in the first set of JOINs.
Extract the second and third sets of joins (the GROUPED ones) and insert them into a temporary, indexed table before joining and grouping everything (see the sketch below).
You did not include the definitions for the custom functions (udf_EP_tbfGetFormularyStatus, udf_EP_CopayDetails, udf_EP_GetBrandGeneric) -- custom functions are often a place where performance issues can hide.
Without seeing the execution plan, it's difficult to see where the particular problems may be.
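As a rough illustration of the temp-table point above, the grouped "U" derived table could be materialised and indexed first. The temp table and index names here are made up; the column list is copied from the procedure:
-- Materialise the grouped "U" derived table once, then index it for the join.
SELECT P.ProductID ,
       O.Strength ,
       O.Unit
INTO #ProductStrengthUnit
FROM DatabaseTwo.dbo.Product AS P
INNER JOIN DatabaseTwo.dbo.Marketed_Product AS M
    ON P.MarketedProductID = M.MarketedProductID
INNER JOIN DatabaseTwo.dbo.Specific_Product AS S
    ON M.SpecificProductID = S.SpecificProductID
LEFT OUTER JOIN DatabaseTwo.dbo.OrderableName_Combined AS O
    ON S.SpecificProductID = O.SpecificProductID
GROUP BY P.ProductID ,
         O.Strength ,
         O.Unit;

CREATE CLUSTERED INDEX IX_ProductStrengthUnit ON #ProductStrengthUnit (ProductID);

-- The main query would then read: JOIN #ProductStrengthUnit U ON PR.ProductID = U.ProductID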
You have a query that selects data from 4 or 5 tables, some of them multiple times. It's really hard to say how to improve it without a deep analysis of what you are trying to achieve and what the table structure actually is.
Data size is definitely an issue; it's quite obvious that the more data has to be processed, the longer the query will take. Some general advice: run the query directly and check the execution plan; it may reveal bottlenecks. Then check whether statistics are up to date. Also, review your tables; partitioning may help a lot in some cases. In addition, you can try altering the tables and creating the clustered index not on the PK (as is done by default unless otherwise specified) but on other column(s), so your query benefits from a certain physical order of records. Note: do this only if you are absolutely sure of what you are doing.
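To make the statistics check concrete (standard catalog views; dbo.Product is just one example table from the procedure):
USE DatabaseTwo;

-- When were the statistics on dbo.Product last updated?
SELECT s.name AS stats_name ,
       STATS_DATE(s.[object_id], s.stats_id) AS last_updated
FROM sys.stats AS s
WHERE s.[object_id] = OBJECT_ID('dbo.Product');

-- Refresh them if they look stale (FULLSCAN can be expensive on very large tables).
UPDATE STATISTICS dbo.Product WITH FULLSCAN;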
Finally, try refactoring your query. I have a feeling there is a better way to get the desired results (sorry, without an understanding of the table structure and expected results I cannot give an exact solution, but multiple joins of the same tables and a bunch of derived tables don't look good to me).
