I need to update a table where one of the conditions is an EXISTS that needs to reference the table being updated.
I couldn't find a neat way to do this, so I resorted to this verbose SQL that gets the job done:
update TABLE_R set
_deleted = 1
where id in (
select id from TABLE_R r
where (r.connectId = @theID)
and (isnull(r._deleted, 0) = 0)
and (r.type = 'Person')
and exists(
select id from TABLE_P p
where (isnull(p.deleted, 0) = 1)
and (isnull(p.social_security, '') <> '')
and (p.social_security = r.social_security)
and (p.trans_id = r.trans_id)
)
)
Is there any way to simplify this? It seems like there should be a way to remove the outer nested select - but I couldn't find a way to reference TABLE_R in the EXISTS clause unless I wrap it in SELECT id FROM TABLE_R r.
Any ideas? Or is this as good as it gets?
I don't want to use JOIN, and I want to keep it as readable and simple as possible.
Would this work? Since this is an UPDATE, PLEASE make sure you run this on test data, and also run it as a select first to verify the results.
UPDATE r SET _deleted = 1
--SELECT *
FROM TABLE_R r
WHERE r.connectId = @theID
AND ISNULL(r._deleted, 0) = 0
AND r.[type] = 'Person'
AND EXISTS (
SELECT *
FROM TABLE_P p
WHERE p.deleted = 1
AND p.social_security <> ''
AND p.social_security = r.social_security
AND p.trans_id = r.trans_id
)
I removed a couple things.
(isnull(p.deleted, 0) = 1)
The ISNULL() is not needed here since you want p.deleted = 1; that will filter out NULLs anyway.
and (isnull(p.social_security, '') <> '')
I also removed this ISNULL(), since p.social_security = r.social_security will eliminate the NULLs as well, but you still want to avoid joining on blank social_security values.
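Following the advice above about running it as a SELECT first, here is a minimal verification sketch (reusing the table and column names from the question); if this returns exactly the rows you expect to flag, the UPDATE above will touch the same set:
SELECT r.*
FROM TABLE_R r
WHERE r.connectId = @theID
  AND ISNULL(r._deleted, 0) = 0
  AND r.[type] = 'Person'
  AND EXISTS (
      SELECT *
      FROM TABLE_P p
      WHERE p.deleted = 1
        AND p.social_security <> ''
        AND p.social_security = r.social_security
        AND p.trans_id = r.trans_id
  );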
In SQL Server, you can take your logic out of the IN clause. Beyond that, you can make it a bit more readable by avoiding parentheses where they aren't necessary. The EXISTS clause may seem verbose, but it's actually going to be the most performant way you can do this. For readability I like to SELECT 0 to make it clear we don't care about the results.
update r
set _deleted = 1
from TABLE_R r
where r.connectId = @theID
and isnull(r._deleted, 0) = 0
and r.type = 'Person'
and exists(
select 0
from TABLE_P p
where isnull(p.deleted, 0) = 1
and isnull(p.social_security, '') <> ''
and p.social_security = r.social_security
and p.trans_id = r.trans_id
)
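And echoing the earlier caution about test data: one extra sketch (not from either answer, just a standard precaution) is to run the update inside an explicit transaction, check the affected row count, and roll back:
begin transaction;

update r
set _deleted = 1
from TABLE_R r
where r.connectId = @theID
  and isnull(r._deleted, 0) = 0
  and r.type = 'Person'
  and exists(
      select 0
      from TABLE_P p
      where isnull(p.deleted, 0) = 1
        and isnull(p.social_security, '') <> ''
        and p.social_security = r.social_security
        and p.trans_id = r.trans_id
  );

-- @@ROWCOUNT here still reflects the UPDATE above.
select @@ROWCOUNT as rows_that_would_be_updated;

rollback transaction;
Once the count (and a spot check of the rows) looks right, rerun it with COMMIT instead of ROLLBACK, or without the transaction wrapper.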
I have a simple stored procedure to update a table as follows:
The SP updates the table properly, but when I execute a SELECT query on the po_tran table, it hangs.
Is there any mistake in the stored procedure?
alter procedure po_tran_upd @locid char(3)
as
SET NOCOUNT ON;
begin
update t
set t.lastndaysale = (select isnull(sum( qty)*-1, 0)
from exp_tran
where exp_tran.loc_id =h.loc_id and
item_code = t.item_code and
exp_tran.doc_date > dateadd(dd,-30,getdate() )
and exp_tran.doc_type in ('PI', 'IN', 'SR')),
t.stk_qty = (select isnull(sum( qty), 0)
from exp_tran
where exp_tran.loc_id =h.loc_id and
item_code = t.item_code )
from po_tran t, po_hd h
where t.entry_no=h.entry_no and
h.loc_id = @locid and
h.entry_date> getdate()-35
end
;
Try the following possible ways to optimize your procedure.
Read this article, where I have explained the same example using a CURSOR; there I also update a field of the table using the CURSOR.
Important: remove the subquery. I can see you have used a subquery to update the field.
You can use a JOIN, or save the result of your query in a temp variable and use that variable in the UPDATE (see the sketch after the sample below).
e.g.
DECLARE @lastndaysale AS FLOAT
DECLARE @stk_qty AS INT

select @lastndaysale = isnull(sum(qty) * -1, 0)
from exp_tran
where exp_tran.loc_id = h.loc_id
  and item_code = t.item_code
  and exp_tran.doc_date > dateadd(dd, -30, getdate())
  and exp_tran.doc_type in ('PI', 'IN', 'SR')

select @stk_qty = isnull(sum(qty), 0)
from exp_tran
where exp_tran.loc_id = h.loc_id
  and item_code = t.item_code

update t
set t.lastndaysale = @lastndaysale,
    t.stk_qty = @stk_qty
from po_tran t, po_hd h
where t.entry_no = h.entry_no
  and h.loc_id = @locid
  and h.entry_date > getdate() - 35
This is just a sample; you will need to adapt it to your full requirements.
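If you take the JOIN route instead, a rough sketch (assuming exp_tran really has the loc_id, item_code, qty, doc_date and doc_type columns used in the original procedure) is to pre-aggregate the totals once per location/item and join them in, which avoids running a subquery per row:
-- Sketch only: pre-aggregate exp_tran, then join the totals to po_tran.
update t
set t.lastndaysale = isnull(a.lastndaysale, 0),
    t.stk_qty      = isnull(a.stk_qty, 0)
from po_tran t
join po_hd h
    on h.entry_no = t.entry_no
left join (
    select e.loc_id,
           e.item_code,
           lastndaysale = sum(case when e.doc_date > dateadd(dd, -30, getdate())
                                    and e.doc_type in ('PI', 'IN', 'SR')
                                   then e.qty * -1 else 0 end),
           stk_qty      = sum(e.qty)
    from exp_tran e
    group by e.loc_id, e.item_code
) a
    on a.loc_id = h.loc_id
   and a.item_code = t.item_code
where h.loc_id = @locid
  and h.entry_date > getdate() - 35;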
I added a possibly more performant update; however, I do not fully understand your question. If "any" query runs slowly against po_tran, then I suggest you examine the indexing on that table and ensure it has a proper clustered index. If "this" query runs slowly, then I suggest you look into "covering indexes". The two fields entry_no and item_code seem like good candidates to include in a covering index (see the sketch after the query below).
update t
set t.lastndaysale =
CASE WHEN e.doc_date > dateadd(dd,-30,getdate()) AND e.doc_type in ('PI', 'IN', 'SR') THEN
isnull(sum(qty) OVER (PARTITION BY e.loc_id, t.item_code) *-1, 0)
ELSE 0
END,
t.stk_qty = isnull(SUM(qty) OVER (PARTITION BY e.loc_id, t.item_code),0)
from
po_tran t
INNER JOIN po_hd h ON h.entry_no=t.entry_no AND h.entry_date> getdate()-35
INNER JOIN exp_tran e ON e.loc_id = h.loc_id AND e.item_code = t.item_code
where
h.loc_id = @locid
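As a sketch of what such covering indexes could look like (index names and the exact key/INCLUDE columns are guesses for illustration, not something from the original question):
-- Hypothetical covering indexes; tune key and INCLUDE columns to the real query plan.
CREATE NONCLUSTERED INDEX IX_po_hd_loc_entrydate
    ON po_hd (loc_id, entry_date)
    INCLUDE (entry_no);

CREATE NONCLUSTERED INDEX IX_exp_tran_loc_item
    ON exp_tran (loc_id, item_code)
    INCLUDE (qty, doc_date, doc_type);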
I have the below dynamic WHERE condition in an XML mapping, which is working fine:
WHERE
IncomingFlightId=#{flightId}
<if test="screenFunction == 'MAIL'.toString()">
and ContentCode = 'M'
</if>
<if test="screenFunction == 'CARGO'.toString()">
and ContentCode Not IN('M')
</if>
order by ContentCode ASC
I'm trying to run the below query in an IDE but unfortunately it's not working.
Can anybody please explain what I'm doing wrong?
WHERE
IncomingFlightId = 2568648
AND (IF 'MAIL' = 'MAIL'
BEGIN
SELECT 'and ContentCode = "M"'
END ELSE BEGIN
SELECT 'and ContentCode Not IN("M")'
END)
order by ContentCode ASC
You can't use IF in a straight-up SQL statement; use CASE WHEN test THEN valueiftrue ELSE valueiffalse END instead (if you have to use conditional logic).
That said, it's probably avoidable if you do something like this:
WHERE
(somecolumn = 'MAIL' AND ContentCode = 'M') OR
(somecolumn <> 'MAIL' and ContentCode <> 'M')
Example of conditional logic in a straight SQL:
SELECT * FROM table
WHERE
CASE WHEN col > 0 THEN 1 ELSE 0 END = 1
CASE WHEN runs a test and returns a value. You always have to compare the return value to something else; you can't do something that doesn't return a value.
It's kinda dumb here though, because anything you can express in the truth of a CASE WHEN can be more simply and readably expressed in the truth of a WHERE clause directly:
SELECT * FROM table
WHERE
CASE WHEN type = 'x'
THEN (SELECT count(*) FROM x)
ELSE (SELECT count(*) FROM y)
END = 1
Versus
SELECT * FROM table
WHERE
(type = 'x' AND (SELECT count(*) FROM x) = 1) OR
(type <> 'x' AND (SELECT count(*) FROM y) = 1)
It's useful for things like this though:
SELECT
bustourname,
SUM(CASE WHEN age > 60 THEN 1 ELSE 0 END) as count_of_old_people
FROM table
GROUP BY bustourname
If you're looking to write a stored procedure that conditionally builds an SQL, then sure, you can do that...
DECLARE @sql VARCHAR(max) = 'SELECT * FROM TABLE WHERE';
IF blah SET @sql = CONCAT(@sql, 'somecolumn = 1')
IF otherblah SET @sql = CONCAT(@sql, 'othercolumn = 1')
EXEC (@sql)...
But this is only possible in a stored procedure or procedure-like SQL script, where it builds a string that looks like SQL and then executes it dynamically. You cannot use IF in a plain SELECT statement.
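If you do go the dynamic route, a slightly fuller sketch using sp_executesql (the table name SomeTable is made up; flightId, screenFunction and ContentCode are taken from the question, and they are declared as local variables here only so the snippet stands alone):
-- Sketch: build the WHERE clause conditionally, keep the value parameterized.
DECLARE @screenFunction VARCHAR(10) = 'MAIL';  -- assumed input parameter
DECLARE @flightId INT = 2568648;               -- assumed input parameter
DECLARE @sql NVARCHAR(MAX) =
    N'SELECT * FROM SomeTable WHERE IncomingFlightId = @flightId';

IF @screenFunction = 'MAIL'
    SET @sql = @sql + N' AND ContentCode = ''M''';
ELSE IF @screenFunction = 'CARGO'
    SET @sql = @sql + N' AND ContentCode NOT IN (''M'')';

SET @sql = @sql + N' ORDER BY ContentCode ASC';

EXEC sp_executesql @sql, N'@flightId INT', @flightId = @flightId;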
You are running a query which (besides being syntactically incorrect SQL) has nothing to do with the query generated and used by mybatis.
You need to understand how if in a mybatis mapper works.
The if element is evaluated before the query is executed, at the stage when the SQL query text is generated. If the value of test is true, the content of the if element is included in the resulting query. In your case, depending on the screenFunction parameter passed to the mybatis mapper method, one of three queries is generated.
If value of screenFunction is MAIL then:
WHERE
IncomingFlightId=#{flightId}
and ContentCode = 'M'
order by ContentCode ASC
If value of screenFunction is CARGO then:
WHERE
IncomingFlightId=#{flightId}
and ContentCode Not IN('M')
order by ContentCode ASC
Otherwise (if value of screenFunction is not MAIL and is not CARGO):
WHERE
IncomingFlightId=#{flightId}
order by ContentCode ASC
Only after the query text is generated is it executed via JDBC against the database.
So if you want to run the query manually, you need to try one of these queries.
To make this easier, you may want to enable logging of the SQL queries and the parameters passed to them, so you can more easily rerun them.
I am trying to add a CASE statement, but I need an OR within it and I am having a little trouble.
I'm not sure how I can do this, but I was hoping someone could help. Here is what I am trying to do:
SELECT *
FROM Table1
WHERE IsUpdate = CASE WHEN @Type = 'Yes' THEN 1 ELSE (0 OR 1) END
So basically, I want to select only the rows that have IsUpdate set to 1 when @Type = 'Yes', otherwise I want to select the rows where IsUpdate = 0 OR IsUpdate = 1. Any ideas?
You don't need a CASE; I assume that the value can only be 0 or 1:
SELECT * FROM Table1
WHERE @Type <> 'Yes' OR IsUpdate = 1
If this is sitting in a stored procedure, it's probably better to use an IF-ELSE instead of the parameter check, since the above query is non-sargable and might be inefficient on a large table.
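For example, a sketch of that IF-ELSE variant (declaring @Type inline only so the snippet stands alone; in the real procedure it would be the parameter, and the ELSE branch assumes IsUpdate is only ever 0 or 1):
DECLARE @Type VARCHAR(5) = 'Yes';  -- assumed parameter

IF @Type = 'Yes'
    SELECT * FROM Table1 WHERE IsUpdate = 1;
ELSE
    SELECT * FROM Table1;  -- IsUpdate can only be 0 or 1, so no filter needed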
The full where clause that matches your logic is:
where (@Type = 'Yes' and IsUpdate = 1) or
      (@Type <> 'Yes' and IsUpdate in (0, 1))
You can simplify this, if you know something about the values in the columns. For instance, if IsUpdate only takes on the values 0 and 1 (and not NULL):
where @Type <> 'Yes' or IsUpdate = 1
In SQL Server, performance-wise, it is better to use IF EXISTS (select * ...) than IF (select count(1)...) > 0...
However, it looks like Oracle does not allow EXISTS inside an IF statement. What would be an alternative, given that using IF select count(1) into... is very inefficient performance-wise?
Example of code:
IF (select count(1) from _TABLE where FIELD IS NULL) > 0 THEN
UPDATE _TABLE
SET FIELD = VAR
WHERE FIELD IS NULL;
END IF;
The best way to write your code snippet is:
UPDATE _TABLE
SET FIELD = VAR
WHERE FIELD IS NULL;
i.e. just do the update; it will either process rows or not. If you need to check whether it processed any rows, then add this afterwards:
if (sql%rowcount > 0)
then
...
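Put together, a minimal PL/SQL sketch of that pattern (keeping the question's placeholder names _TABLE, FIELD and VAR, so adjust to your real objects):
DECLARE
  var  _TABLE.FIELD%TYPE;  -- stands in for whatever value you assign
BEGIN
  UPDATE _TABLE
     SET FIELD = var
   WHERE FIELD IS NULL;

  IF SQL%ROWCOUNT > 0 THEN
    NULL;  -- rows were updated: react here if needed
  ELSE
    NULL;  -- nothing matched
  END IF;
END;
/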
Generally, in cases where you have logic like:
declare
v_cnt number;
begin
select count(*)
into v_cnt
from TABLE
where ...;
if (v_cnt > 0) then..
it's best to use ROWNUM = 1 because you DON'T CARE if there are 40 million rows; just have Oracle stop after finding 1 row.
declare
v_cnt number;
begin
select count(*)
into v_cnt
from TABLE
where rownum = 1
and ...;
if (v_cnt > 0) then..
or
select count(*)
into v_cnt
from dual
where exists (select null
from TABLE
where ...);
whichever syntax you prefer.
As Per:
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:3069487275935
You could try:
for x in ( select count(*) cnt
from dual
where exists ( select NULL from foo where bar ) )
loop
if ( x.cnt = 1 )
then
null; -- found: do something
else
null; -- not found
end if;
end loop;
is one way (very fast, only runs the subquery as long as it "needs" to, where exists
stops the subquery after hitting the first row)
That loop always executes at least once and at most once, since a count(*) on a table without a group by clause ALWAYS returns at LEAST one row and at MOST one row (even if the table itself is empty!).
MyTableA has several million records. On regular occasions every row in MyTableA needs to be updated with values from TheirTableA.
Unfortunately I have no control over TheirTableA and there is no field to indicate if anything in TheirTableA has changed so I either just update everything or I update based on comparing every field which could be different (not really feasible as this is a long and wide table).
Unfortunately the transaction log balloons when doing a straight update, so I wanted to chunk it by using UPDATE TOP. However, as I understand it, I need some field to determine whether the records in MyTableA have been updated yet or not, otherwise I'll end up in an infinite loop:
declare @again as bit;
set @again = 1;

while @again = 1
begin
    update top (10000) my
    set my.A1 = their.A1, my.A2 = their.A2, my.A3 = their.A3
    from MyTableA my
    join TheirTableA their on my.Id = their.Id

    if @@ROWCOUNT > 0
        set @again = 1
    else
        set @again = 0
end
Is the only way this will work if I add in a
where my.A1 <> their.A1 and my.A2 <> their.A2 and my.A3 <> their.A3
clause? This seems like it will be horribly inefficient with many columns to compare.
I'm sure I'm missing an obvious alternative?
Assuming both tables have the same structure, you can get a result set of the rows that are different using
SELECT * INTO #different_rows FROM MyTableA EXCEPT SELECT * FROM TheirTableA
and then update from that using whatever key fields are available.
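A sketch of that approach (the Id key and the A1..A3 column names are assumptions borrowed from the chunked UPDATE above, purely for illustration):
-- Capture the differing rows once, then update only those rows by key.
SELECT *
INTO #different_rows
FROM MyTableA
EXCEPT
SELECT *
FROM TheirTableA;

UPDATE my
SET my.A1 = their.A1,
    my.A2 = their.A2,
    my.A3 = their.A3
FROM MyTableA my
JOIN #different_rows d ON d.Id = my.Id
JOIN TheirTableA their ON their.Id = my.Id;

DROP TABLE #different_rows;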
Well, the first and simplest solution would obviously be if you could change the schema to include a timestamp for the last update, and then only update the rows with a timestamp newer than your last change.
But if that is not possible, another way to go could be to use the HashBytes function, perhaps by concatenating the fields into an XML value that you then compare. The caveat here is an 8kb limit (https://connect.microsoft.com/SQLServer/feedback/details/273429/hashbytes-function-should-support-large-data-types). EDIT: Once again, I have stolen code, this time from:
http://sqlblogcasts.com/blogs/tonyrogerson/archive/2009/10/21/detecting-changed-rows-in-a-trigger-using-hashbytes-and-without-eventdata-and-or-s.aspx
His example is:
select batch_id
from (
select distinct batch_id, hash_combined = hashbytes( 'sha1', combined )
from ( select batch_id,
combined =( select batch_id, batch_name, some_parm, some_parm2
from deleted c -- need old values
where c.batch_id = d.batch_id
for xml path( '' ) )
from deleted d
union all
select batch_id,
combined =( select batch_id, batch_name, some_parm, some_parm2
from some_base_table c -- need current values (could use inserted here)
where c.batch_id = d.batch_id
for xml path( '' ) )
from deleted d
) as r
) as c
group by batch_id
having count(*) > 1
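Applied directly to the two tables in the question (rather than inside a trigger), a rough sketch of the same idea might look like this; the A1..A3 column list and the Id key are assumptions for illustration, and the 8kb HASHBYTES input limit mentioned above still applies on older versions:
-- Compare a hash of each row's concatenated columns instead of column-by-column.
UPDATE my
SET my.A1 = their.A1,
    my.A2 = their.A2,
    my.A3 = their.A3
FROM MyTableA my
JOIN TheirTableA their ON their.Id = my.Id
WHERE HASHBYTES('SHA1', (SELECT my.A1,    my.A2,    my.A3    FOR XML PATH('')))
   <> HASHBYTES('SHA1', (SELECT their.A1, their.A2, their.A3 FOR XML PATH('')));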
A last resort (and my original suggestion) is to try BINARY_CHECKSUM. As noted in the comments, this does open up the risk of a rather high collision rate.
http://msdn.microsoft.com/en-us/library/ms173784.aspx
I have stolen the following example from lessthandot.com - link to the full SQL (and other cool functions) is below.
--Data Mismatch
SELECT 'Data Mismatch', t1.au_id
FROM( SELECT BINARY_CHECKSUM(*) AS CheckSum1 ,au_id FROM pubs..authors) t1
JOIN(SELECT BINARY_CHECKSUM(*) AS CheckSum2,au_id FROM tempdb..authors2) t2 ON t1.au_id =t2.au_id
WHERE CheckSum1 <> CheckSum2
Example taken from http://wiki.lessthandot.com/index.php/Ten_SQL_Server_Functions_That_You_Have_Ignored_Until_Now
I don't know if this is better than adding where my.A1 <> their.A1 and my.A2 <> their.A2 and my.A3 <> their.A3, but I would definitely give it a try (assuming SQL Server 2005+):
declare @again as bit;
set @again = 1;

declare @idlist table (Id int);

while @again = 1
begin
    update top (10000) my
    set my.A1 = their.A1, my.A2 = their.A2, my.A3 = their.A3
    output inserted.Id into @idlist (Id)
    from MyTableA my
    join TheirTableA their on my.Id = their.Id
    left join @idlist i on my.Id = i.Id
    where i.Id is null
    /* alternatively (instead of left join + where):
       where not exists (select * from @idlist where Id = my.Id) */

    if @@ROWCOUNT > 0
        set @again = 1
    else
        set @again = 0
end
That is, declare a table variable for collecting the IDs of the rows being updated and use that table for looking up (and omitting) IDs that have already been updated.
A slight variation on the method would be to use a local temporary table instead of a table variable. That way you would be able to create an index on the ID lookup table, which might result in better performance.
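A sketch of that temp-table variation (same assumed A1..A3/Id names as above):
-- A local temp table with a primary key gives the ID lookup an index.
CREATE TABLE #idlist (Id INT PRIMARY KEY);

DECLARE @again BIT;
SET @again = 1;

WHILE @again = 1
BEGIN
    UPDATE TOP (10000) my
    SET my.A1 = their.A1, my.A2 = their.A2, my.A3 = their.A3
    OUTPUT inserted.Id INTO #idlist (Id)
    FROM MyTableA my
    JOIN TheirTableA their ON my.Id = their.Id
    WHERE NOT EXISTS (SELECT * FROM #idlist i WHERE i.Id = my.Id);

    IF @@ROWCOUNT > 0
        SET @again = 1;
    ELSE
        SET @again = 0;
END

DROP TABLE #idlist;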
If a schema change is not possible, how about using a trigger to save off the Ids that have changed, and only importing/exporting those rows?
Or use a trigger to export them immediately.