I have this statement:
insert into Admin.VersionHistory --do not know what to put here
select COUNT(*) as cnt
from membership.members as mm
left join aspnet_membership as asp
on mm.aspnetuserid=asp.userid
left join trade.tradesmen as tr
on tr.memberid=mm.memberid
where asp.isapproved = 0 and tr.ImportDPN IS NOT NULL and tr.importDPN <> ''
and it gives me a total of 179956. I want to write this total to another table called Admin.VersionHistory, which has id (auto-increment), version (varchar) and date columns.
How can I do this please?
Thanks
Your query returns a result set with one column and one row. It could be inserted into a table that has a single integer column, but your target table is not directly suitable for that because it has three columns.
You should be able to do this:
INSERT INTO Admin.VersionHistory(ColumnName)
SELECT COUNT(*)
-- the result of your query here
INSERT INTO Admin.VersionHistory (ColCount, version, Sysdate)
VALUES(
(SELECT COUNT(*) AS cnt FROM membership.members AS mm
LEFT JOIN aspnet_membership AS asp ON mm.aspnetuserid = asp.userid
LEFT JOIN trade.tradesmen AS tr ON tr.memberid=mm.memberid
WHERE
asp.isapproved = 0 AND
tr.ImportDPN IS NOT NULL AND
tr.importDPN <> ''),
VERSIONVALUE, GETDATE())
Though I have only tried something similar in Oracle, I guess it should not be very different in SQL Server.
Give it a shot.
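For reference, here is a minimal sketch that matches the table described in the question: id fills itself in, the count is stored in the version column as text, and the date column takes GETDATE(). Adjust the column names if yours differ.

INSERT INTO Admin.VersionHistory (version, [date])
SELECT CAST(COUNT(*) AS varchar(20)), GETDATE()
FROM membership.members AS mm
LEFT JOIN aspnet_membership AS asp ON mm.aspnetuserid = asp.userid
LEFT JOIN trade.tradesmen AS tr ON tr.memberid = mm.memberid
WHERE asp.isapproved = 0
  AND tr.ImportDPN IS NOT NULL
  AND tr.importDPN <> '';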
I have 2 tables, Table 1 (temp table in SP) has around 400 records. Table 2 has around 30,550,284 records.
I need to run a loop on table 1 for each record and get the top 1 from table 2 based on a few conditions (where clause) and then order by modified date in decreasing order.
There is an index on the modified date.
declare @iPos int;
declare @iCount int;
declare @val1 int;                  -- not declared in the original; type assumed
declare @timestampLocal datetime;   -- not declared in the original
declare @timestampLocal2 datetime;
declare @Table2 table(......)

select @iCount = count(*) from #Table1;
set @iPos = 1;

while (@iPos <= @iCount)
BEGIN
    select @val1 = Col1, @timestampLocal = [TimeStamp]
    from #Table1 where ID = @iPos

    set @timestampLocal2 = DATEADD(HH, -96, @timestampLocal)

    INSERT INTO #Temp3 ( .... )
    select top 1 r.LastModified, r.[Col2], r.Col3, @iPos
    from Table2 r with (NOLOCK)
    where r.Col1 = @val1
      and r.LastModified <= @timestampLocal
      and r.LastModified >= @timestampLocal2
      and (r.Col2 is not null and r.Col3 is not null)
    order by r.LastModified desc

    SELECT @iPos = @iPos + 1;
END
This query is very slow.
I have also thought about archiving table 2, but I want to keep that as a second option for now.
Do I really need to add an index on the columns involved in the WHERE clause?
So my question is: in terms of performance, is there a better way to do this?
I believe a CROSS APPLY or OUTER APPLY may do the trick. These can be thought of as being similar to INNER JOIN or LEFT JOIN, except that they allow you to reference a subquery having more complex conditions such as TOP 1 and ORDER BY. Ideal for cases like this.
-- INSERT INTO #Temp3 ( .... )
select r.LastModified, r.[Col2], r.Col3, t1.ID
from #Table1 t1
cross apply (
SELECT TOP 1 r.*
from Table2 r -- Don't use (NOLOCK)
where r.Col1 = t1.Col1
and r.LastModified <= t1.[TimeStamp]
and r.LastModified >= DATEADD(HH,-96,t1.[TimeStamp])
and (r.Col2 is not null and r.Col3 is not null)
order by r.LastModified desc
) r
For efficiency, I recommend an index on Table2(Col1,LastModified) or as an absolute minimum, an index on Table2(Col1).
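For example (the index name is made up):

-- Supports the CROSS APPLY: seek on Col1, then read LastModified newest-first.
CREATE INDEX IX_Table2_Col1_LastModified
    ON Table2 (Col1, LastModified DESC);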
I would strongly discourage the use of (NOLOCK) / READ UNCOMMITTED in queries that update the database (like the INSERT INTO #Temp3 above). While the query may appear to work most of the time, seemingly random occurrences of missing or duplicate rows may result.
Do you need to handle cases where no matching Table2 record is found? The above will quietly ignore such cases. Changing the CROSS APPLY to an OUTER APPLY together with logic to handle null r.xxx values could be what you need.
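A rough sketch of that variation, reusing the placeholder names from above; rows from #Table1 with no match come back with NULLs in the r.* columns, which you can then test for:

select t1.ID,
       r.LastModified,   -- NULL when no Table2 row qualified
       r.[Col2],
       r.Col3
from #Table1 t1
outer apply (
    select top 1 r.*
    from Table2 r
    where r.Col1 = t1.Col1
      and r.LastModified <= t1.[TimeStamp]
      and r.LastModified >= DATEADD(HH, -96, t1.[TimeStamp])
      and r.Col2 is not null and r.Col3 is not null
    order by r.LastModified desc
) r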
I am working on an ETL optimization problem that requires creating a temp table that can be merged with the final table. Currently I have a couple of views that are used to load the final table, and that is taking a lot of time. I took the SQL logic from the view and created a temp table, and noticed that the values in the temp table do not match the values in the final table. To look deeper, I ran count(*) on the view a couple of times and noticed that the total row count is different on every run, by about 10-15 rows give or take. The view has 16 columns from 9 tables, which load only once a day. So at the time I run the count(*) the underlying data does not change, yet the result of the count from the view does change.
This is on a SQL Server 2016 server. I have looked into the view logic and nothing stands out as odd. I have tried a count(*) on the tables that load this view and those counts do not change. I have also tried to create a 2-column table from the view logic to simplify the problem and ran an EXCEPT, which still yields about 20 rows of inconsistent values between two tables created from the same exact view logic.
Here is a reproduction of the VIEW definition that has the row count inconsistency
USE [PROD]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE VIEW Base_View
AS
select
concat(x, y, z)feild1
,*
,ROW_NUMBER() OVER(PARTITION BY a,b ORDER BY some_Date) AS rec_num
,count(a) OVER(PARTITION BY a) AS rec_total
from (
SELECT
case when RESULT='stored value' and e.code is not null then 'x' else '' end x
,case when RESULT='stored value 2' and r.l_id is not null then 'y' else '' end y
,case when RESULT in ('stored value 3','stored value 4') and t.amount is not null then 'z' else '' end z
,case when
CASE WHEN
(m.status = 'stored value 4' OR m.status = 'stored value 5')
AND m.bal < 0
THEN
CASE WHEN DATEDIFF(day,m.due,m.SNAP_DATE) < 0
THEN 0
ELSE DATEDIFF(day,m.due,m.SNAP_DATE)
END
ELSE 0
END=0 AND w.W_ID is null AND m.status<>'stored value 5'
then case
when RESULT in ('stored value 5','stored value 4')
then case when isnull(AMOUNT,0)<>0
then 'abc'
else 'def' end
else 'abc' end
else 'def'
end imp_feild
,result
,es.emp_id
,concat(es.fname,' ',es.lname)task_emp
,concat(e.fname,' ',e.lname)ext_emp
,case when RESULT ='stored value' then t.P_STATUS else null end p_status
,t.CREATE_DATE
,t.l_key
,t.l_id
,m.status
,cast(w.wodate as date)wo_date
,rm.balance refi_balance,rnl.LOAN_key refi_loan,r.effective refi_effective
,case trancode when 'ext' then m.payment else null end ext_amount,e.entered ext_entered,e.effective ext_effective
FROM
(
select t0.*,ROW_NUMBER() OVER(PARTITION BY t0.some_KEY,cast(t0.CREATE_DATE as date),t0.output
ORDER BY t0.some_KEY,cast(t0.CREATE_DATE as date),t0.output ) AS SEQ_NUM
from base_table_1 t0
left join base_table_2 e0
on t0.c_e_key=e0.e_key
where t0.active_rec_ind='Y'
and t0.output in (d,e,f,g)
and (t0.output2 in (j,k)
or ISNULL(e0.some_KEY,'h') in ('u','w'))
) t
join
base_table_3 l
on t.loan_sf_id=l.loan_sf_id
and t.active_rec_ind='Y'
join base_table_4 m
on
t.SOME_DATE=m.SNAP_DATE
and t.L_ID=m.L_ID
left
join base_table_5 es
on t.c_emp_key=es.emp_key
left
join base_table_6 r
on l.l_id=r.l_old_id
and r.entered between dateadd(day,0,cast(t.CREATE_DATE as date)) and dateadd(day,0,t.SOME_DATE)
left
join base_table_7 w
on l.l_id=w.l_id
and w.wodate between cast(t.CREATE_DATE_ETZ as date) and dateadd(day,0,t.SOME_DATE)
left
join base_table_8 wl
on w.l_id=wl.l_id
left
join base_table_8 rnl
on r.l_new_id=rnl.l_id
left
join base_table_8 rol
on r.l_old_id=rol.l_id
left
join base_table_4 rm
on
dateadd(day,-1,r.effective)=rm.SNAP_DATE
and rol.L_ID=rm.L_ID
left
join
(select e0.*,ew.value_1,ew.new_key,ROW_NUMBER() OVER(PARTITION BY e0.L_ID,e0.ENT ORDER BY e0.L_ID,e0.ENT) AS SEQ_NUM
from base_table_9 e0
join base_table_5 ew
on e0.EMP_ID=ew.EMP_ID
where e0.code='a'
) e
on l.sid=e.sid
and e.code='a' and RESULT='stored value 5'
and e.entered between cast(t.CREATE_DATE as date) and dateadd(day,0,t.HOLD_DATE)
AND e.SEQ_NUM=t.SEQ_NUM
and ((isnumeric(e.roll_key)=1 and isnumeric(es.roll_key)=1 and e.roll_key=es.roll_key)
or ((isnumeric(e.roll_key)=0 or isnumeric(es.roll_key)=0) and e.FNAME+e.LNAME=es.FNAME+es.LNAME))
where t.RESULT in ('abc','def')
and cast(t.CREATE_DATE as date) between cast(dateadd(month,-12,getdate()) as date) and cast(getdate() as date)
and (AGENT in ('lmn', 'pqr')
or ISNULL(es.VKEY,'stored value 8') in ('xx','yy','zz'))
)x
where imp_feild='abc'
and concat(x, y, z)<>''
or imp_feild='def'
GO
The expected result is a consistent row count, which should hopefully also solve the inconsistent values problem in the temp table.
Your query has between cast(dateadd(month,-12,getdate()) as date) and cast(getdate() as date) near the bottom. Of course the result of getdate() will be different with each execution and each call to getdate(). That will affect the result.
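A tiny standalone demonstration of that point (not taken from the original query):

-- Each call to GETDATE() is evaluated at execution time, so two runs of the
-- same query can compare against slightly different values.
SELECT GETDATE() AS first_call;
WAITFOR DELAY '00:00:01';
SELECT GETDATE() AS second_call;  -- one second later than the first call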
BTW, having * in your SELECT list is not a good idea. You should only return the columns needed. It makes the view results vulnerable to changes in the underlying tables.
There are a few other things that wouldn't pass code review where I work but that's kinda OT, I think.
This is too long for a comment. Using * in a view is a very bad idea. Not only does the view NOT update when you change the base table (unless you execute sp_refreshview), you can actually get some very interesting things happening.
Check this out as an example of just how bad this can be.
create table ViewExample (Col1 int, Col2 int)
go
create view ViewExampleView as select * from ViewExample
go
insert ViewExample select 1, 2
go
select * from ViewExampleView --so far so good, we get the single row we inserted with Col1 and Col2
alter table ViewExample add Col3 int --add a new column to the table, surely the view will pick this up?
go
insert ViewExample select 3, 4, 5 --insert a new row with data in all three columns
go
select * from ViewExampleView --what??? The view says select * but we only get Col1 and Col2?
alter table ViewExample drop column Col2 --Oops we decide to drop this column because we don't need it anymore
select * from ViewExampleView --What in the world? Col2 doesn't exist in the table, why is it in the view? And what the heck is going on here. The data from Col3 is now moved to Col2
drop view ViewExampleView
drop table ViewExample
Notice how, in the last select from the view, the data from Col3 is displayed in Col2. If this doesn't convince you to stop using * in views (and pretty much everywhere else) I don't know what will.
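If you are stuck with a SELECT * view, the sp_refreshview call mentioned above is what realigns the view's metadata after a table change; in the example it would need to run before the drops:

-- Re-run this after each ALTER TABLE and the view picks up the current columns.
EXEC sp_refreshview 'ViewExampleView';
SELECT * FROM ViewExampleView;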
Scenario:
I have a simplified version of a result set obtained from a series of complex joins. I have placed the result set in a temporary table. The result set consists of records of activity/activities in a day.
I need to merge the rows that share the same date (combine a day's activities into a single row) so that the resulting result set has one row per date.
I am trying to make this work
Merge #temp as target
using #temp as source
on (target.Date = source.Date) and target.Writing is NULL
when matched then
update set target.Writing = source.Writing;
but I'm running into this error:
The MERGE statement attempted to UPDATE or DELETE the same row more
than once. This happens when a target row matches more than one source
row. A MERGE statement cannot UPDATE/DELETE the same row of the target
table multiple times. Refine the ON clause to ensure a target row
matches at most one source row, or use the GROUP BY clause to group
the source rows.
What code modifications can you suggest?
This should do it:
SELECT dfl.mydate, dfl.firststart, dfl.lastend, fa.ActivityA, sa.ActivityB
FROM
    (SELECT s.mydate, firststart, lastend
     FROM (SELECT mydate, MIN(starttime) AS firststart FROM target GROUP BY mydate) s
     INNER JOIN
          (SELECT mydate, MAX(EndTime) AS lastend FROM target GROUP BY mydate) e
     ON s.mydate = e.mydate) AS dfl
INNER JOIN
    target fa ON dfl.mydate = fa.mydate AND dfl.firststart = fa.starttime
INNER JOIN
    target sa ON dfl.mydate = sa.mydate AND dfl.lastend = sa.EndTime
Please note for my test I have called my table target and the columns: mydate, starttime, endtime, activitya and activityb.
No need to merge, a (relatively) simple select yields the results you want.
HTH
PS: It helps, when using time data, to use a 24-hour clock. I have assumed that by 5:00 you really meant 17:00.
You don't need a MERGE statement.
DECLARE @Test TABLE ([Id] int, [Date] nvarchar(10), [TimeIn] nvarchar(10), [TimeOut] nvarchar(10), [Reading] nvarchar(10), [Writing] nvarchar(10))
INSERT INTO @Test
VALUES
(1,'08-01','8:00','5:00','Y',NULL),
(2,'08-02','8:00','5:00',NULL,'Y'),
(3,'08-02','5:00','12:00',NULL,'Y'),
(4,'08-03','8:00','5:00',NULL,'Y'),
(5,'08-04','1:00','5:00','Y',NULL),
(6,'08-04','5:00','7:00',NULL,'Y'),
(7,'08-04','7:00','10:00',NULL,'Y'),
(8,'08-04','10:00','13:00',NULL,'Y'),
(9,'08-05','8:00','5:00','Y',NULL)
;WITH CTE AS
(
SELECT
t1.[Date],
t1.TimeIn,
ISNULL(t2.TimeOut, t1.TimeOut) AS TimeOut,
ROW_NUMBER() OVER (PARTITION BY t1.[Date] ORDER BY t1.Id) AS RowNumber
FROM @Test AS t1
LEFT OUTER JOIN @Test AS t2 ON t1.TimeOut = t2.TimeIn AND t1.[Date] = t2.[Date]
)
SELECT
c.[Date],
(SELECT c2.TimeIn FROM CTE AS c2 WHERE c2.[Date] = c.[Date] AND c2.RowNumber = MIN(c.RowNumber)) AS TimeIn,
(SELECT c2.TimeOut FROM CTE AS c2 WHERE c2.[Date] = c.[Date] AND c2.RowNumber = MAX(c.RowNumber)) AS TimeOut
FROM CTE AS c
GROUP BY c.[Date]
You can use MERGE when the ON clause uniquely identifies each target row: pick one or more columns that match every target row to at most one source row.
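For what it's worth, here is a minimal sketch of the GROUP BY route the error message suggests, using the column names from the question (Date, Writing) and assuming you want one Writing value per date:

MERGE #temp AS target
USING (
    SELECT [Date], MAX(Writing) AS Writing   -- collapse the source to one row per date
    FROM #temp
    WHERE Writing IS NOT NULL
    GROUP BY [Date]
) AS source
ON target.[Date] = source.[Date] AND target.Writing IS NULL
WHEN MATCHED THEN
    UPDATE SET target.Writing = source.Writing;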
I have a query in my production environment which is taking a long time to execute. I did not write this query, but I must find a way to make it quicker since it is causing a big performance issue at the moment. I need to replace NOT IN with a LEFT JOIN but am not sure how to rewrite it. It looks like this at the moment:
SELECT TOP 1 IT.ITEMID
FROM (SELECT CAST(ITEMID AS NUMERIC) + 1 ITEMID
FROM Items
WHERE ISNUMERIC(ITEMID) = 1
AND CAST(ITEMID AS NUMERIC) >= 50000) IT
WHERE IT.ITEMID NOT IN (SELECT CAST(ITEMID AS NUMERIC) ITEMID
FROM Items
WHERE ISNUMERIC(ITEMID) = 1)
ORDER BY IT.ITEMID
Kindly suggest how am I supposed to rewrite it using Left Join for better performance. Any help/guidance is greatly appreciated.
Try this one -
;WITH cte AS
(
    SELECT DISTINCT ITEMID =
        CASE WHEN ISNUMERIC(ITEMID) = 1
            THEN CAST(ITEMID AS NUMERIC)
        END
    FROM Items
)
SELECT TOP 1 ITEMID = t.ITEMID + 1
FROM cte t
WHERE t.ITEMID >= 50000
    AND NOT EXISTS(
        SELECT 1
        FROM cte t2
        WHERE t.ITEMID + 1 = t2.ITEMID
    )
ORDER BY t.ITEMID
As mentioned in the comments, the NOT EXISTS version of the query is usually faster in SQL Server than the LEFT JOIN - for completeness, here are both versions:
Left join variant of existing query:
with cte as
(SELECT CAST(ITEMID AS NUMERIC) ITEMID
FROM Items
WHERE ISNUMERIC(ITEMID) = 1)
select top 1 i.ITEMID + 1 ITEMID
FROM cte i
LEFT JOIN cte ni ON i.ITEMID + 1 = ni.ITEMID
WHERE i.ITEMID >= 50000 AND ni.ITEMID IS NULL
Not exists variant of existing query:
with cte as
(SELECT CAST(ITEMID AS NUMERIC) ITEMID
FROM Items
WHERE ISNUMERIC(ITEMID) = 1)
select top 1 i.ITEMID + 1 ITEMID
FROM cte i
WHERE i.ITEMID >= 50000 AND NOT EXISTS
(SELECT NULL
FROM cte ni
WHERE i.ITEMID + 1 = ni.ITEMID)
As @gbn pointed out in the comments, the CAST and the functions in the predicates prevent index use anyway, so there is little point in converting this from NOT IN to LEFT JOIN / IS NULL or to NOT EXISTS. And NOT EXISTS usually performs better than LEFT JOIN / IS NULL in SQL Server.
NOT IN is not advised because of the problems (wrong, unexpected results) when there are nulls in the compared columns or produced by the expressions, and because of the inefficient plans caused by the nullability of those columns/expressions.
And ISNUMERIC() does not always do what you think it does (as @Damien_The_Unbeliever noted in another comment): there are cases where the ISNUMERIC result is 1 but the cast still fails.
So the sane thing to do would be - in my opinion - to add another column to the table, convert the values that can be converted to numeric, and store them in that column. Then you could write the query without casting, and an index on that column could be used.
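A rough sketch of that idea (column and index names are invented; TRY_CONVERT requires SQL Server 2012 or later):

-- Store a numeric copy of ITEMID once; TRY_CONVERT returns NULL where the
-- conversion would fail, sidestepping the ISNUMERIC edge cases noted above.
ALTER TABLE Items ADD ItemIdNum numeric(18,0) NULL;

UPDATE Items SET ItemIdNum = TRY_CONVERT(numeric(18,0), ITEMID);

CREATE INDEX IX_Items_ItemIdNum ON Items (ItemIdNum);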
If you cannot alter the tables in any way (by adding a new column or a materialized view), then you can try and test the various rewritings the other answers offer.
I agree with @ypercube that the sane thing to do is to fix your schema.
If for some reason this is not an option maybe materialising the whole thing into an indexed temporary table at runtime would make the best of a bad job.
CREATE TABLE #T
(
ITEMID NUMERIC(18,0) PRIMARY KEY
WITH ( IGNORE_DUP_KEY = ON)
)
INSERT INTO #T
SELECT CASE WHEN ISNUMERIC(ITEMID) = 1 THEN ITEMID END
FROM Items
WHERE CASE WHEN ISNUMERIC(ITEMID) = 1 THEN ITEMID END >= 50000
SELECT TOP 1 ITEMID+1
FROM #T T1
WHERE NOT EXISTS (SELECT * FROM #T T2 WHERE T2.ITEMID = T1.ITEMID +1)
ORDER BY ITEMID
When I execute my "select union select", I get the correct number of rows (156).
Executed independently, select #1 returns 65 rows and select #2 returns 138 rows.
When I use this "select union select" with an Insert into, I get 203 rows (65+138) with duplicates.
I would like to know if it is my code structure that is causing this issue?
INSERT INTO dpapm_MediaObjectValidation (mediaobject_id, username, checked_date, expiration_date, notified)
(SELECT FKMediaObjectId, CreatedBy,@checkdate,dateadd(ww,2,@checkdate),0
FROM dbo.gs_MediaObjectMetadata
LEFT JOIN gs_MediaObject mo
ON gs_MediaObjectMetadata.FKMediaObjectId = mo.MediaObjectId
WHERE UPPER([Description]) IN ('CAPTION','TITLE','AUTHOR','DATE PHOTO TAKEN','KEYWORDS')
AND FKMediaObjectId >=
(SELECT TOP 1 MediaObjectId
FROM dbo.gs_MediaObject
WHERE DateAdded > @lastcheck
ORDER BY MediaObjectId)
GROUP BY FKMediaObjectId, CreatedBy
HAVING count(*) < 5
UNION
SELECT FKMediaObjectId, CreatedBy,getdate(),dateadd(ww,2,getdate()),0
FROM gs_MediaObjectMetadata yt
LEFT JOIN gs_MediaObject mo
ON yt.FKMediaObjectId = mo.MediaObjectId
WHERE UPPER([Description]) = 'KEYWORDS'
AND FKMediaObjectId >=
(SELECT TOP 1 MediaObjectId
FROM dbo.gs_MediaObject
WHERE DateAdded > @lastcheck
ORDER BY MediaObjectId)
AND NOT EXISTS
(
SELECT *
FROM dbo.fnSplit(Replace(yt.Value, '''', ''''''), ',') split
WHERE split.item in (SELECT KeywordEn FROM gs_Keywords) or split.item in (SELECT KeywordFr FROM gs_Keywords)
)
)
I would appreciate any clues into resolving this problem ...
Thank you !
UNION should only return distinct records across the two queries, but rows only count as duplicates when every column matches. The date values are most likely throwing that off: the first branch uses @checkdate while the second uses getdate(), so otherwise identical rows differ in their date columns. Depending on the collation, whitespace might be handled differently as well. You could do a SELECT DISTINCT clean-up on the dpapm_MediaObjectValidation table after your insert, being sure to trim whitespace on both sides of the comparison. Another approach is to do your first insert, then on your second insert forgo the UNION altogether and do a manual EXISTS check to see whether the items to be inserted already exist, as sketched below.
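A rough sketch of that second insert; the inner SELECT is abbreviated (the remaining conditions from the original second SELECT go where indicated), and @checkdate is reused so both inserts stamp the same date:

INSERT INTO dpapm_MediaObjectValidation
    (mediaobject_id, username, checked_date, expiration_date, notified)
SELECT s.FKMediaObjectId, s.CreatedBy, @checkdate, DATEADD(ww, 2, @checkdate), 0
FROM (
    SELECT DISTINCT FKMediaObjectId, CreatedBy
    FROM gs_MediaObjectMetadata
    WHERE UPPER([Description]) = 'KEYWORDS'
    -- ... remaining conditions from the original second SELECT ...
) s
WHERE NOT EXISTS (
    SELECT 1
    FROM dpapm_MediaObjectValidation v
    WHERE v.mediaobject_id = s.FKMediaObjectId
      AND v.username = s.CreatedBy
);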