Updating table based on Select query in stored procedure / ColdFusion - sql-server

I am using ColdFusion for a project and I have written a query which I think could be faster as a stored procedure, but I am not a T-SQL person, so I am not sure how to write one to compare.
I am running an initial query which selects a number of fields from a table based on a dynamically built cfquery. I think I know how to convert this query into the SQL Server stored procedure.
However, directly after that, I then take all of the primary key IDs from that query and run another query against a separate table that "locks" records with those IDs. The lock is a bit field (a flag) in the second table that tells the system that this record is "checked out". I have wrapped both queries in a cftransaction so that they execute as a unit.
Code Overview:
<cftransaction>
    <cfquery name="selectQuery">
        SELECT id, field2, field3
        FROM table1
        WHERE (bunch of conditions here)
    </cfquery>
    <cfquery name="updateQuery">
        UPDATE table2
        SET lockField = 1
        WHERE table2.id IN (#ValueList(selectQuery.id)#)
    </cfquery>
</cftransaction>
I then return the selectQuery resultset to my app which uses it for outputting some data. How would I accomplish the same thing in a single SQL Server 2008 stored procedure that I could call using cfstoredproc?
Again, I am thinking that the native CF way (with cfquery) is not as efficient as a stored procedure since I have to retrieve the resultset back to CF, then call another query back to the DB. A single stored procedure does everything in the DB and then returns the original query resultset for use.
Any ideas?

You could add an OUTPUT clause to the UPDATE statement to capture the IDs of the updated records and insert them into a table variable or temp table, then JOIN back to table1 to return the result set.
DECLARE @UpdatedRecords TABLE ( ID INT )

UPDATE t2
SET t2.lockField = 1
OUTPUT Inserted.ID INTO @UpdatedRecords ( ID )
FROM table2 t2
INNER JOIN table1 t1 ON t2.id = t1.id
WHERE (bunch of conditions for table1 here)

SELECT t1.id, t1.field2, t1.field3
FROM table1 t1
INNER JOIN @UpdatedRecords u ON t1.id = u.id
Keep in mind that if table1 is in constant flux, the other values ("field2" and "field3") are not guaranteed to be what they were when the UPDATE occurred. But I think your current method is susceptible to that issue as well.
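If you want this as a single call from ColdFusion, here is a minimal sketch of wrapping the above in a procedure. The procedure name and parameter are hypothetical placeholders; the filter conditions stay whatever yours are.
CREATE PROCEDURE dbo.LockAndReturnRecords   -- hypothetical name
    @SomeFilter INT                         -- placeholder for whatever drives your WHERE clause
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @UpdatedRecords TABLE ( ID INT );

    UPDATE t2
    SET t2.lockField = 1
    OUTPUT Inserted.ID INTO @UpdatedRecords ( ID )
    FROM table2 t2
    INNER JOIN table1 t1 ON t2.id = t1.id
    WHERE (bunch of conditions for table1 here);

    -- the procedure's single result set, consumed in CF via cfprocresult
    SELECT t1.id, t1.field2, t1.field3
    FROM table1 t1
    INNER JOIN @UpdatedRecords u ON t1.id = u.id;
END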

Your problem is "bunch of conditions here". Are those conditions always static? So is it ALWAYS: (FOO = #x AND BAR = #y)? Or is it conditional where sometimes FOO does not exist at all as a condition?
If FOO is not always present then you have a problem with the stored proc. T-SQL does not handle dynamic query building well; in fact, relying on it would somewhat negate the point of the proc, which is to compile and pre-optimize the SQL. You CAN do it, of course, but you end up having to build a SQL string inside the proc body and then execute it at the end. You're much better off using cfquery with cfqueryparam. Actually, have you considered doing this instead?
<cfquery name="updateQuery">
UPDATE table2
SET lockField = 1
WHERE table2.id IN (SELECT id
FROM table1
WHERE (bunch of conditions here))
</cfquery>
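For reference, the string-building approach mentioned above would look roughly like this inside a proc, using sp_executesql so the parameter values are still passed safely. This is a sketch only; the procedure name and the @foo/@bar parameters are illustrative, matching the FOO/BAR conditions above.
CREATE PROCEDURE dbo.SelectAndLockDynamic    -- hypothetical name
    @foo INT = NULL,                         -- optional filters; NULL means "not supplied"
    @bar VARCHAR(50) = NULL
AS
BEGIN
    DECLARE @sql NVARCHAR(MAX) = N'SELECT id, field2, field3 FROM table1 WHERE 1 = 1';

    -- append only the conditions that were actually supplied
    IF @foo IS NOT NULL SET @sql = @sql + N' AND FOO = @foo';
    IF @bar IS NOT NULL SET @sql = @sql + N' AND BAR = @bar';

    EXEC sp_executesql @sql,
         N'@foo INT, @bar VARCHAR(50)',
         @foo = @foo, @bar = @bar;
END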

You could do your update in one query by making your first query a subquery and then using a separate statement to return your results. The whole thing could be a single stored procedure:
CREATE PROCEDURE myUpdate
@Variable [datatype], etc...
AS
BEGIN
UPDATE table2
SET lockField = 1
WHERE table2.id IN (
SELECT id
FROM table1
WHERE (bunch of conditions here)
)
SELECT id, field2, field3
FROM table1
WHERE (bunch of conditions here)
END
You'll probably have to pass some parameters in, but that's the basic structure of a stored procedure. Then you can call it from ColdFusion like so:
<cfstoredproc procedure="myUpdate">
<cfprocparam type="[CF SQL Type]" value="[CF Variable]">
etc...
<cfprocresult name="selectQuery" resultSet="1">
</cfstoredproc>
You could use those query results just like you were using them before.
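One detail worth noting: the original cfquery pair was wrapped in cftransaction, so if you want the UPDATE and SELECT to behave as a unit inside the procedure as well, a hedged sketch of the same body with explicit transaction handling (SQL Server 2008 syntax) might look like this:
BEGIN TRY
    BEGIN TRANSACTION;

    UPDATE table2
    SET lockField = 1
    WHERE table2.id IN (SELECT id
                        FROM table1
                        WHERE (bunch of conditions here));

    SELECT id, field2, field3
    FROM table1
    WHERE (bunch of conditions here);

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
    -- re-raise the error with RAISERROR (THROW is not available before SQL Server 2012)
    DECLARE @errMsg NVARCHAR(2048) = ERROR_MESSAGE();
    RAISERROR(@errMsg, 16, 1);
END CATCH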

No need for a SPROC.
UPDATE table2
SET table2.lockField = 1
FROM table1
WHERE table1.id = table2.id
AND table1.field2 = <cfqueryparam ....>
AND table1.field3 = <cfqueryparam ....>

Related

What is the "lifespan" of a postgres CTE expression? e.g. WITH... AS

I have a CTE I am using to pull some data from two tables and stick it in an intermediate table called cte_list, something like
with cte_list as (
select pl.col_val from prune_list pl join employees.employee emp on pl.col_val::uuid = emp.id
where pl.col_nm = 'employee_ref_id' limit 100
)
Then I am doing an insert to move records from the cte_list to another archive table called employee_arch_test (if they don't already exist there):
insert into employees.employee_arch_test (
select * from employees.employee where id in (select col_val::uuid from cte_list)
and not exists (select 1 from employees.employee_arch_test where employees.employee_arch_test.id=employees.employee.id)
);
This seems to work fine. The problem is when I add another statement after, to do some deletions from the main employee table using this aforementioned cte_list - the cte_list apparently no longer exists?
SQL Error [42P01]: ERROR: relation "cte_list" does not exist
the actual delete query:
delete from employees.employee where id in (select col_val::uuid from cte_list);
Can the cte_list CTE table only be used once or something? I'm running these statements in a LOOP and I need to run the exact same calls for about 2 or 3 other tables but hit a sticking point here.
A CTE only exists for the duration of the statement of which it's a part. I gather you have an INSERT statement with the CTE preceding it:
with cte_list
as (select pl.col_val
from prune_list pl
join employees.employee emp
on pl.col_val::uuid = emp.id
where pl.col_nm = 'employee_ref_id'
limit 100
)
insert into employees.employee_arch_test
(select *
from employees.employee
where id in (select col_val::uuid from cte_list)
and not exists (select 1
from employees.employee_arch_test
where employees.employee_arch_test.id = employees.employee.id)
);
The CTE is part of the INSERT statement - it is not a separate statement by itself. It only exists for the duration of the INSERT statement.
If you need something which lasts longer your options are:
Add the same CTE to each of your following statements. Note that because data may be changing in your database each invocation of the CTE may return different data.
Create a view which performs the same operations as the CTE, then use the view in place of the CTE. Note that because data may be changing in your database each invocation of the view may return different data.
Create a temporary table to hold the data from your CTE query, then use the temporary table in place of the CTE. This has the advantage of providing a consistent set of data to all operations.
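Here is a minimal sketch of the temporary-table option, reusing the tables from the question (the temp table name tmp_prune_ids is made up):
-- capture the id list once; it persists for the rest of the session
CREATE TEMP TABLE tmp_prune_ids AS
    SELECT pl.col_val::uuid AS id
    FROM prune_list pl
    JOIN employees.employee emp ON pl.col_val::uuid = emp.id
    WHERE pl.col_nm = 'employee_ref_id'
    LIMIT 100;

-- archive the rows that are not already archived
INSERT INTO employees.employee_arch_test
    SELECT e.*
    FROM employees.employee e
    WHERE e.id IN (SELECT id FROM tmp_prune_ids)
      AND NOT EXISTS (SELECT 1
                      FROM employees.employee_arch_test a
                      WHERE a.id = e.id);

-- the same id list is still available for the delete
DELETE FROM employees.employee
WHERE id IN (SELECT id FROM tmp_prune_ids);

DROP TABLE tmp_prune_ids;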

Stored procedure "forgetting" field between update and insert statement

We have a stored procedure that is used to UPDATE a table with a value calculated from an existing column.
I am trying to amend the stored procedure to also INSERT a row into a different table, using that same column's value but the column is being rejected by the parser as an invalid column name.
Here is a condensed version of the code. As originally supplied, sequence_no is known to the stored procedure and ends up in reference_no; i.e. the UPDATE works but the INSERT fails.
ALTER PROCEDURE [dbo].[update_references]
AS
-- Original contents:
UPDATE table1
SET reference_no = sequence_no
FROM table1 t1 WITH (NOLOCK)
LEFT OUTER JOIN proptable p1 WITH (NOLOCK) ON t1.checkval = p1.checkval
WHERE p1.fruit = 'apple'
-- I have added the INSERT
INSERT INTO table2 (next_seq_no)
VALUES (sequence_no)
The sequence_no is underlined in red in SSMS.
The INSERT statement in your code knows nothing about the previous UPDATE, so you can't reference arbitrary columns from it and expect them to still be in scope. The easiest way of doing this is with the OUTPUT clause.
UPDATE table1
SET reference_no = sequence_no
OUTPUT INSERTED.reference_no INTO table2 (next_seq_no)
FROM table1 t1
LEFT OUTER JOIN proptable p1 ON t1.checkval = p1.checkval
WHERE p1.fruit = 'apple'

SQL Server loop programming

For SQL Server 2014, what syntax do I need, if this is even possible?
(in pseudo-code)
DECLARE @searchstring nvarchar(20)
LOOP @searchstring = (SELECT keyword FROM table1)
    SELECT column FROM table2 WHERE column LIKE '%@searchstring%'
END LOOP
I want it to return all columns in a single table.
Unless I'm missing something, you want to select all the values in table2.column that contain the text in table1.keyword. This can be done easily with a simple inner join:
SELECT t2.column
FROM table2 t2
INNER JOIN table1 t1 ON (t2.column LIKE '%' + t1.keyword + '%')
SQL works best with set-based operations; looping is rarely the desired approach.

How does t-sql update work without a join

I think my head is muddy or something. I'm trying to figure out how a T-SQL update works without a join when updating one table from another. I've always used joins in the past, but came across a stored proc where someone else created one without a join. This update is being used in SQL Server 2008 R2 and it works.
Update table1
SET col1 = (SELECT TOP 1 colX FROM table2 WHERE colZ = colY),
col2 = (SELECT TOP 1 colE FROM table2 WHERE colZ = colY)
Obviously, colY is a field in table1. To get the same results in a select statement (not update), a join is required. I guess I don't understand how an update works behind the scenes but it must be doing some kind of join?
SQL Server translates those subqueries into joins. You can look at this by getting the query plan. You can write an equivalent query with UPDATE ... FROM ... JOIN syntax and observe the query plan to be essentially the same.
The sample code shown is unusual, hard to understand, redundant and inflexible. I recommend against using this style.
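For illustration, a join-based rewrite would look something like the sketch below. It is not exactly equivalent: the INNER JOIN skips table1 rows with no match in table2 (the subquery form would set those columns to NULL for such rows), and if table2 has several matching rows the row that "wins" is arbitrary, just as TOP 1 without an ORDER BY is arbitrary.
-- join-based equivalent (approximate; see caveats above)
UPDATE t1
SET t1.col1 = t2.colX,
    t1.col2 = t2.colE
FROM table1 t1
INNER JOIN table2 t2 ON t2.colZ = t1.colY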
No, it's doing a subquery, well, two in this case. It would be damn painful if you had another 98 columns.
You can do something similar for a SELECT:
select *,
(SELECT TOP 1 colX FROM table2 WHERE colZ = colY) as col1
From table1
A LEFT JOIN would simply be more efficient.
Unless the DBMS optimises it, your example runs the subquery(ies) for each row in the table.
Got to say, whoever wrote it is less than competent.
These subqueries are what are known as correlated subqueries. If you were to write the same query as a SELECT rather than an UPDATE, it would look like this:
SELECT col1 = (SELECT TOP 1 table2.colX FROM table2 WHERE table2.colZ = table1.colY),
col2 = (SELECT TOP 1 table2.colE FROM table2 WHERE table2.colZ = table1.colY)
FROM table1
The "join" is implicit in the fact that you are referencing a column from the outer table inside the subquery. table1 is referenced in the UPDATE command. You can include a FROM clause, but it isn't required for a setup like this.
You can use the same syntax in a SELECT with no join, but you need to alias the table if colY also exists in table2
SELECT (SELECT TOP 1 colX FROM table2 WHERE colZ = T.colY)
, (SELECT TOP 1 colE FROM table2 WHERE colZ = T.colY)
FROM table1 AS T
I only ever use this sort of thing when building up an ad hoc query just for my own information. If it's going to be put into any sort of permanent code, I'll convert it to a join as it's easier to read and more maintainable.

Is there a way to optimize the query given below

I have the following query and I need it to fetch data from SomeTable based on the filter criteria present in SomeOtherTable. If there is nothing present in SomeOtherTable, the query should return all the data present in SomeTable.
SQL SERVER 2005
SomeOtherTable does not have any indexes or constraints; all fields are char(50).
The following query works fine for my requirements, but it causes performance problems when I have lots of parameters.
Due to a client requirement, we have to keep all of the WHERE clause data in SomeOtherTable. Depending on subid, the data will be joined with one of the columns in SomeTable.
For example, the query can be:
SELECT *
FROM SomeTable
WHERE 1 = 1
AND (
    SomeTable.ID IN (SELECT DISTINCT ID FROM SomeOtherTable WHERE Name = 'ABC' AND subid = 'EF')
    OR 0 = (SELECT COUNT(1) FROM SomeOtherTable WHERE spName = 'ABC' AND subid = 'EF')
)
AND (
    SomeTable.date = (SELECT date FROM SomeOtherTable WHERE Name = 'ABC' AND subid = 'Date')
    OR 0 = (SELECT COUNT(1) FROM SomeOtherTable WHERE spName = 'ABC' AND subid = 'Date')
)
EDIT----------------------------------------------
I think I need to explain my problem in more detail:
We have developed an ASP.NET application that is used to invoke parameterized Crystal Reports; the parameters are not passed to the reports using the default Crystal Reports method.
In the ASP.NET application we have created wizards which are used to pass the parameters to the reports. These parameters are not directly consumed by the Crystal Report itself but by the query embedded inside the report or the stored procedure used by the report.
This is achieved using a table (SomeOtherTable) which holds the parameter data for as long as the report is running, after which the data is deleted. As such, we can assume that SomeOtherTable has at most 2 to 3 rows at any given point in time.
So if we look at the above query, the initial part can be taken as the report query, and the WHERE clause is used to get the user input from SomeOtherTable.
So I don't think it will be useful to create indexes, etc. (maybe I am wrong).
SomeOtherTable does not have any indexes or any constraint; all fields are char(50)
Well, there's your problem. There's nothing you can do to a query like this that will improve its performance while the table is built like this.
You need a proper primary or other candidate key designated on all of your tables. That is to say, you need at least ONE unique index on the table. You can do this by designating one or more fields as the PK, or you can add a UNIQUE constraint or index.
You need to define your fields properly. Does the field store integers? Well then, an INT field may just be a better bet than a CHAR(50).
You can't "optimize" a query that is based on an unsound schema.
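For example, a hedged sketch of what tightening the schema might look like; the column choices here are guesses based on the query in the question, and the ALTER COLUMN statements only succeed if the existing data has no NULLs in those columns.
-- assumption: (Name, subid) is how parameter rows are looked up
ALTER TABLE SomeOtherTable ALTER COLUMN Name  VARCHAR(50) NOT NULL;
ALTER TABLE SomeOtherTable ALTER COLUMN subid VARCHAR(50) NOT NULL;

-- a covering index on the lookup columns used by the report filters
CREATE INDEX IX_SomeOtherTable_Name_subid
    ON SomeOtherTable (Name, subid)
    INCLUDE (ID);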
Try:
SELECT *
FROM SomeTable
LEFT JOIN SomeOtherTable ON SomeTable.ID = SomeOtherTable.ID AND Name = 'ABC'
WHERE 1 = 1
AND (
    SomeOtherTable.ID IS NOT NULL
    OR 0 = (SELECT COUNT(1) FROM SomeOtherTable WHERE spName = 'ABC')
)
Also, put WITH (NOLOCK) after each table name to improve performance.
The following might speed you up
SELECT *
FROM SomeTable
WHERE SomeTable.ID IN (SELECT DISTINCT ID FROM SomeOtherTable WHERE Name = 'ABC')
UNION
SELECT *
FROM SomeTable
WHERE NOT EXISTS (SELECT spName FROM SomeOtherTable WHERE spName = 'ABC')
The UNION will effectively split this into two simpler queries which can be optimised separately (it depends very much on the DBMS, table size, etc. whether this will actually improve performance, but it's always worth a try).
The EXISTS keyword is more efficient than the SELECT COUNT(1) as it will return true as soon as the first row is encountered.
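Applied to the original query, the same idea would look roughly like this (a sketch, keeping the original placeholder values):
SELECT *
FROM SomeTable
WHERE
(
    SomeTable.ID IN (SELECT ID FROM SomeOtherTable WHERE Name = 'ABC' AND subid = 'EF')
    OR NOT EXISTS (SELECT 1 FROM SomeOtherTable WHERE spName = 'ABC' AND subid = 'EF')
)
AND
(
    SomeTable.date = (SELECT date FROM SomeOtherTable WHERE Name = 'ABC' AND subid = 'Date')
    OR NOT EXISTS (SELECT 1 FROM SomeOtherTable WHERE spName = 'ABC' AND subid = 'Date')
)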
Or check whether the value exists in the database first.
You can also remove the DISTINCT keyword in your query; it is useless here.
if EXISTS (Select spName From SomeOtherTable Where spName = 'ABC')
begin
SELECT *
FROM SomeTable
WHERE
SomeTable.ID in
(SELECT ID FROM SomeOtherTable Where Name = 'ABC')
end
else
begin
SELECT *
FROM SomeTable
end
Aloha
Try
select t.*
from SomeTable t
left outer join SomeOtherTable o on t.id = o.id
where (not exists (select id from SomeOtherTable where spName = 'ABC')
       or o.spName = 'ABC')
-Edoode
Change all your SELECT statements in the WHERE part to inner joins.
The OR conditions should be UNION ALL-ed.
Also make sure your indexing is OK.
Sometimes it pays to have an intermediate table for temp results which you can join to.
It seems to me that there is no need for the "1=1 AND" in your query. 1=1 will always evaluate to true, leaving the software to evaluate the next part... why not just skip the 1=1 and evaluate the juicy part?
I am going to stick with my original query.
