I have a very strange issue with a SQL query.
IF NOT EXISTS ([special query here])
BEGIN
SELECT 1;
END
ELSE
BEGIN
SELECT 2;
END
The query above outputs: 2.
However, when I replace the SELECT 1; part with a large query containing CREATE TABLE statements and the like, multiple errors are thrown. How is it possible that SQL Server executes code inside a branch of an IF statement when that branch is not taken?
If you are changing the schema, the parser will check that all the referenced entities exist before running the batch. For example:
ALTER TABLE myTable ADD aNewColumn INT
UPDATE myTable SET aNewColumn = 0
This will produce an error ('Invalid column name'), because aNewColumn does not exist when the batch is compiled.
You can use dynamic SQL, as long as you aren't taking in parameters from a client (to avoid SQL injection):
EXEC sp_executesql N'UPDATE myTable SET aNewColumn = 0'
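Putting the two together, a minimal sketch of the pattern (same hypothetical myTable / aNewColumn names as above):
-- The ALTER is compiled and executed first; the UPDATE inside the string is
-- only compiled when sp_executesql runs it, by which time the column exists.
ALTER TABLE myTable ADD aNewColumn INT;
EXEC sp_executesql N'UPDATE myTable SET aNewColumn = 0';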
Syntax errors are syntax errors, whether you're in a conditional branch that'll run or a conditional branch that won't run. Parsing occurs before execution, and must be successful.
Consider this analogous example written in C++:
int main()
{
if (false) {
acbukasygdfukasygdaskuygdfas##4r9837y214r
}
}
You can't write that nonsense line even though it's inside of a block that'll never run, because the program's intended meaning cannot be determined by the compiler.
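The same thing happens in T-SQL. A minimal sketch (the misspelled keyword is deliberate):
IF 1 = 0
BEGIN
    SELEKT 1; -- syntax error: the whole batch is rejected before anything runs
END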
We are migrating a lot of code from SQL Server to PostgreSQL. We ran into the following problem, a significant difference between SQL Server and PostgreSQL.
Of course, below, by the expression 1 = 0 I mean cases where the query conditions do not return a single record.
A query in SQL Server:
select @variable = t.field
from table t
where 1 = 0
saves the previous value of the variable.
A query in PostgreSQL:
select t.field
into variable
from table t
where 1 = 0
replaces the previous value of the variable with null.
We have already rewritten a lot of code without taking this feature into account.
Is there an easy way in PostgreSQL, without rewriting the code, to keep the previous value of the variable in such cases? For example, maybe there is some kind of server, database, or session setting? We did not find any relevant information in the documentation. We do not understand this behavior in PostgreSQL, which requires introducing additional variables and lines of code to check the result of every query.
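A minimal reproduction of the PostgreSQL behavior (a sketch; the literal 2 stands in for the real query):
DO $$
DECLARE
    variable int = 1;
BEGIN
    -- No row matches, so plain SELECT ... INTO overwrites the target with NULL.
    SELECT 2
    INTO variable
    WHERE false;
    RAISE NOTICE 'variable is now: %', variable;  -- prints <NULL>
END
$$;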
As far as I know there is no way to change PostgreSQL's behavior here.
I don't have access to the SQL/PSM specifications, so I couldn't tell you which one matches the standard (if either does, or if SELECT INTO <variable> is even in it).
You don't need to use additional variables though; you can use INTO STRICT and catch the exception when no rows were returned:
DO $$
DECLARE
    variable int = 1;
BEGIN
    BEGIN
        SELECT 1
        INTO STRICT variable
        WHERE FALSE;
    EXCEPTION
        WHEN NO_DATA_FOUND THEN
            NULL;  -- do nothing: keep the previous value
    END;
    RAISE NOTICE 'kept the previous value: %', variable;
END
$$;
shows "kept the previous value: 1".
Though it is obviously more verbose than the SQL Server version.
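A possible alternative (just a sketch, reusing the question's placeholder table and column names): fall back to the current value with a scalar subquery. Note this also keeps the old value when a row is found but field itself is NULL, which differs slightly from the SQL Server behavior:
-- The scalar subquery yields NULL when no row matches, so COALESCE
-- re-assigns the variable's current value.
SELECT COALESCE((SELECT t.field FROM table t WHERE 1 = 0), variable)
INTO variable;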
I'm facing quite an annoying restriction enforced by SQL Server and would like to check whether there is an elegant solution for it.
I have a sequence of procedure invocations (meaning, A calls B which calls C). The procedures each return different result sets, where (for instance) "A" generates its result using the set of records returned by "B".
Now, SQL Server does not allow nested INSERT INTO ... EXEC <stored procedure>, so to cope with this limitation I converted the lowest-level procedure into a function that returns a table, and hence use INSERT INTO ... SELECT * FROM <function call>.
Now, there are situations in which the FUNCTION cannot return a result due to conditions of the data, and I would like the function to return a sort of code indicating the result of the execution (e.g. 0 would mean success, 1 would mean "missing input data").
Since SQL Server does not allow functions with OUTPUT parameters, I can't think of any elegant way of conveying these two outputs.
Can anyone suggest an elegant alternative?
there are situations in which the FUNCTION cannot return a result due to conditions of the data, and I would like the function to return a sort of code indicating the result of the execution
You really should use THROW to indicate the result of execution, and since THROW isn't allowed inside a function, that also precludes using a table-valued function.
So you need to use a stored procedure. To avoid the restriction on nested INSERT .. EXEC you can use temporary tables to pass data back to the calling procedure. E.g.:
create or alter procedure foo
as
begin
    if object_id('tempdb..#foo_results') is null
    begin
        print 'create table #foo_results(id int primary key, a int);';
        THROW 51000, 'The results table #foo_results does not exist. Before calling this procedure create it. ', 1;
    end

    insert into #foo_results(id,a)
    values (1,1);
end;
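A caller would then look something like this (a sketch; the bar procedure name and the final SELECT are made up):
create or alter procedure bar
as
begin
    -- Create the results table the callee expects, let it fill the table,
    -- then use the rows; temp tables created here are visible inside foo.
    create table #foo_results(id int primary key, a int);

    exec foo;

    select id, a
    from #foo_results;

    drop table #foo_results;
end;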
Can anyone suggest an ELEGANT alternative?
I'm not sure any of the alternatives is elegant.
For example:
SET @var1 = SYSUTCDATETIME();
...
SET @var2 = SYSUTCDATETIME();
IF @var1 = @var2
    RETURN 0;
ELSE
    RETURN 1;
Is it certain that I will always get zero, no matter what code is in between @var1 and @var2?
In my view, given a specific release of SQL Server, the answer should be a simple yes or no; I'm not concerned about the details of the behavior.
No, the code between these 2 calls will take some time, so the values will be different.
EDIT: Assuming there is some code between them. In extreme cases, when these 2 assignments are adjacent and the server has nothing else to do, the variables might end up having the same value.
However, separate calls of sysutcdatetime() and other similar functions within the same statement / query do produce identical values.
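A quick way to see both effects (a sketch; the WAITFOR just makes the gap between the two assignments obvious):
DECLARE @var1 DATETIME2 = SYSUTCDATETIME();
WAITFOR DELAY '00:00:00.010';   -- stand-in for whatever code runs in between
DECLARE @var2 DATETIME2 = SYSUTCDATETIME();

SELECT CASE WHEN @var1 = @var2 THEN 0 ELSE 1 END AS separate_statements;  -- almost always 1

SELECT CASE WHEN SYSUTCDATETIME() = SYSUTCDATETIME() THEN 0 ELSE 1 END AS same_statement;  -- 0, per the claim above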
Hi everyone, I'm getting this error message when trying to create a trigger and it's got me a little stumped.
Here is my trigger code.
CREATE OR REPLACE TRIGGER CUSTOMER_AD
AFTER DELETE ON CUSTOMER
REFERENCING OLD AS OLD
FOR EACH ROW
DECLARE
nPlaced_order_count NUMBER;
BEGIN
SELECT COUNT(*)
INTO nPlaced_order_count
FROM PLACED_ORDERS p
WHERE p.FK1_CUSTOMER_ID = OLD.CUSTOMER_ID;
IF nPlaced_order_count > 0 THEN
INSERT into previous_customer
(customer_id,
first_name,
last_name,
address,
AUDIT_USER,
AUDIT_DATE)
VALUES
(:old.customer_id,
:old.first_name,
:old.last_name,
:old.address,
UPPER(v('APP_USER')),
SYSDATE);
END IF;
END CUSTOMER_AD;
And the error I'm getting is: 'Error at line 4: PL/SQL: SQL Statement ignored 0.10 seconds'.
Anyone have any guesses why?
Thanks for the help.
The error shown is only the highest level. Depending on where you're running it you should be able to see the stack trace; the client determines exactly how to do that. SQL*Plus or SQL Developer would show you more than this anyway, but I don't really know about other clients. If you can't see the details in your client then you can query for them with:
select * from user_errors where name = 'CUSTOMER_AD' and type = 'TRIGGER'
Assuming the tables all exist, it's probably this line:
WHERE p.FK1_CUSTOMER_ID = OLD.CUSTOMER_ID;
which should be:
WHERE p.FK1_CUSTOMER_ID = :OLD.CUSTOMER_ID;
When referencing the old (or new) value from the table, the name specified in the referencing clause has to be preceded by a colon, so :OLD in this case, as you're already doing in the insert ... values() clause.
(From the comments, my assumption turned out to be wrong: as well as the missing colon problem, the table name is really placed_order, without an s.)
Seems like you copied code from both answers to your previous question without really understanding what they were doing. You might want to look at the trigger design guidelines (particularly the one about not duplicating database functionality) and the syntax for create trigger, which introduces the referencing clause.
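Putting both fixes together, the trigger would look something like this (a sketch, assuming the table really is placed_order as noted above and that the other names match your schema):
CREATE OR REPLACE TRIGGER CUSTOMER_AD
AFTER DELETE ON CUSTOMER
REFERENCING OLD AS OLD
FOR EACH ROW
DECLARE
    nPlaced_order_count NUMBER;
BEGIN
    SELECT COUNT(*)
    INTO nPlaced_order_count
    FROM placed_order p
    WHERE p.FK1_CUSTOMER_ID = :OLD.CUSTOMER_ID;  -- colon added

    IF nPlaced_order_count > 0 THEN
        INSERT INTO previous_customer
            (customer_id, first_name, last_name, address, AUDIT_USER, AUDIT_DATE)
        VALUES
            (:OLD.customer_id, :OLD.first_name, :OLD.last_name, :OLD.address,
             UPPER(v('APP_USER')), SYSDATE);
    END IF;
END CUSTOMER_AD;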
I remember reading a while back that SQL Server can randomly slow down and/or take a stupidly long time to execute a stored procedure when it is written like this:
CREATE PROCEDURE spMyExampleProc
(
    @myParameter INT
)
AS
BEGIN
    SELECT something FROM myTable WHERE myColumn = @myParameter
END
The way to fix this error is to do this:
CREATE PROCEDURE spMyExampleProc
(
    @myParameter INT
)
AS
BEGIN
    DECLARE @newParameter INT
    SET @newParameter = @myParameter

    SELECT something FROM myTable WHERE myColumn = @newParameter
END
Now, my first question is: is it bad practice to follow the second example for all my stored procedures? This seems like a problem that could easily be prevented with little work, but would there be any drawbacks to doing this, and if so, why?
When I read about this, the problem was that the same proc would take varying times to execute depending on the value of the parameter. If anyone can tell me what this problem is called and why it occurs I would be really grateful; I can't seem to find the link to the post anywhere, and it seems like a problem that could affect our company.
The problem is "parameter sniffing" (SO Search)
The pattern with @newParameter is called "parameter masking" (also SO Search).
You could always use this masking pattern, but it isn't always needed. For example, a simple select by unique key, with no child tables or other filters, should behave as expected every time.
Since SQL Server 2008, you can also use the OPTIMIZE FOR UNKNOWN hint (SO). Also see Alternative to using local variables in a where clause and Experience with when to use OPTIMIZE FOR UNKNOWN.
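For example, a sketch of the hint applied to the procedure above (same hypothetical table, column, and parameter names):
CREATE PROCEDURE spMyExampleProc
(
    @myParameter INT
)
AS
BEGIN
    -- The optimizer uses average density statistics instead of sniffing the
    -- runtime value of @myParameter, much like the masking pattern above.
    SELECT something
    FROM myTable
    WHERE myColumn = @myParameter
    OPTION (OPTIMIZE FOR UNKNOWN)
END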