PL/SQL: SQL Statement ignored?

Hi everyone, I'm getting this error message when trying to create a trigger, and it's got me a little stumped.
Here is my trigger code.
CREATE OR REPLACE TRIGGER CUSTOMER_AD
AFTER DELETE ON CUSTOMER
REFERENCING OLD AS OLD
FOR EACH ROW
DECLARE
  nPlaced_order_count NUMBER;
BEGIN
  SELECT COUNT(*)
    INTO nPlaced_order_count
    FROM PLACED_ORDERS p
   WHERE p.FK1_CUSTOMER_ID = OLD.CUSTOMER_ID;
  IF nPlaced_order_count > 0 THEN
    INSERT INTO previous_customer
      (customer_id,
       first_name,
       last_name,
       address,
       AUDIT_USER,
       AUDIT_DATE)
    VALUES
      (:old.customer_id,
       :old.first_name,
       :old.last_name,
       :old.address,
       UPPER(v('APP_USER')),
       SYSDATE);
  END IF;
END CUSTOMER_AD;
And the error I'm getting is 'Error at line 4: PL/SQL: SQL Statement ignored' (0.10 seconds).
Anyone have any guesses why?
Thanks for the help.

The error shown is only the highest level. Depending on where you're running it, you should be able to see the underlying error stack. The client determines exactly how to do that; SQL*Plus or SQL Developer would show you more than this, but I don't really know about other clients. If you can't see the details in your client, then you can query for them with:
select * from user_errors where name = 'CUSTOMER_AD' and type = 'TRIGGER'
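If you're in SQL*Plus (SQL Developer's worksheet supports this too), you can also ask for the compilation errors directly:
SHOW ERRORS TRIGGER CUSTOMER_AD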
Assuming the tables all exist, it's probably this line:
WHERE p.FK1_CUSTOMER_ID = OLD.CUSTOMER_ID;
which should be:
WHERE p.FK1_CUSTOMER_ID = :OLD.CUSTOMER_ID;
When referencing the old (or new) value from the table, the name specified in the referencing clause has to be preceded by a colon, so :OLD in this case, as you're already doing in the insert ... values() clause.
(From comments, my assumption turned out to be wrong - as well as the missing colon problem, the table name is really placed_order, without an s).
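Putting both fixes together (the colon, plus the singular table name from the comments), the query inside the trigger would be:
SELECT COUNT(*)
  INTO nPlaced_order_count
  FROM placed_order p
 WHERE p.FK1_CUSTOMER_ID = :OLD.CUSTOMER_ID;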
Seems like you copied code from both answers to your previous question without really understanding what they were doing. You might want to look at the trigger design guidelines (particularly the one about not duplicating database functionality) and the syntax for create trigger, which introduces the referencing clause.

Related

select ... into variable from table where 1=0 leads to the replacement of the variable with null

We are migrating a lot of code from SQL Server to PostgreSQL, and we ran into the following problem: a serious difference between SQL Server and PostgreSQL.
Of course, by the expression 1=0 below, I mean cases where the query conditions do not return a single record.
A query in SQL Server:
select @variable = t.field
from table t
where 1 = 0
saves the previous value of the variable.
A query in Postgresql:
select t.field
into variable
from table t
where 1 = 0
replaces the previous value of the variable with null.
We have already rewritten a lot of code without taking this difference into account.
Is there an easy way in PostgreSQL, without rewriting the code, to keep the previous value of a variable in such cases? For example, maybe there is some kind of server, database, or session setting? We did not find any relevant information in the documentation. We do not understand this PostgreSQL behavior, which requires introducing additional variables and lines of code to check the result of every query.
As far as I know there is no way to change PostgreSQL's behavior here.
I don't have access to the SQL/PSM specifications, so I couldn't tell you which behavior matches the standard (if either, or if SELECT INTO <variable> is even in it).
You don't need additional variables though; you can use INTO STRICT and catch the exception when no rows are returned:
DO $$
DECLARE
    variable int := 1;
BEGIN
    BEGIN
        SELECT 1
          INTO STRICT variable
         WHERE FALSE;
    EXCEPTION
        WHEN NO_DATA_FOUND THEN
            NULL;  -- no row returned: do nothing, so the previous value is kept
    END;
    RAISE NOTICE 'kept the previous value: %', variable;
END
$$;
shows "kept the previous value: 1".
Though it is obviously more verbose than the SQL Server version.
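If one extra variable is acceptable after all, a sketch using PL/pgSQL's FOUND flag (same anonymous-block setup as above; tmp is a name I've made up) avoids the nested block and the exception handler:
DO $$
DECLARE
    variable int := 1;
    tmp int;
BEGIN
    SELECT 1 INTO tmp WHERE FALSE;
    IF FOUND THEN        -- only overwrite when a row actually came back
        variable := tmp;
    END IF;
    RAISE NOTICE 'kept the previous value: %', variable;
END
$$;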

SQL Conversion failed when converting the varchar value '*' to data type int

Aside: please note that this is not a duplicate of the countless other string-to-int issues in SQL. This is a specific, cryptic error message (with the asterisk) that hides another problem.
Today I came across an issue in my SQL that took me all day to solve. I was passing a data table parameter into my stored procedure and then inserting part of it into another table. The code used was similar to the following:
INSERT INTO tblUsers
(UserId,ProjectId)
VALUES ((SELECT CAST(CL.UserId AS int) FROM @UserList AS UL), @ProjectId)
I didn't seem to get the error message when using test data, only when making the call from the dev system.
What is the issue and how should the code look?
I'd bet that (SELECT CAST(CL.UserId AS int) FROM @UserList AS UL) returns more than 1 row and your test scenario had only 1 row. But that may be just me.
Anyway, the way the code should look is:
INSERT INTO tblUsers (UserId, ProjectId)
SELECT CAST(UL.UserId AS int),
       @ProjectId
FROM @UserList AS UL
After some time trawling through Google and such places, I have determined that this SQL code is wrong. I believe the correct code is something more like this:
INSERT INTO tblUsers
  (UserId, ProjectId)
SELECT CAST(UL.UserId AS int), @ProjectId
FROM @UserList AS UL
The issue is that the other way of doing it attempts to insert one record containing the data of all of the rows in the table parameter. I needed to remove the VALUES clause. Also, in order to add other data, I can simply put it as part of the SELECT, as you would otherwise.
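As an aside, one plausible way (of several) for a literal '*' to end up in the data is an upstream STR() call whose result didn't fit its length, since STR() returns asterisks on overflow. A hypothetical repro of why the dev data failed where the test data didn't:
DECLARE @UserList TABLE (UserId varchar(10));
INSERT INTO @UserList VALUES ('1'), ('*');   -- assume the dev data contained a '*' row; the test data didn't
SELECT CAST(UserId AS int) FROM @UserList;   -- Conversion failed when converting the varchar value '*' to data type int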

CDC fn_cdc_get_all_changes_dbo_ error 313 "An insufficient number of arguments..."

There's no shortage of topics covering this function and the error "An insufficient number of arguments were supplied for the procedure or function cdc.fn_cdc_get_all_changes". I've checked most of them, but I can't figure out what's wrong.
The problem here, specifically, is that I can't even reproduce it. It just appears randomly a few times every day or two and goes away in mere seconds, so usually I can only see it in the job history as a failed run.
All the sources I've browsed say it typically comes from using the function to fetch data from a time range for which the capture instance doesn't have data. But as the code below shows, the values are checked for just these types of exceptions before the function is run.
DECLARE @END_LSN BINARY(10), @MIN_TABLE_LSN BINARY(10);
SELECT @END_LSN = sys.fn_cdc_get_max_lsn();
SELECT @MIN_TABLE_LSN = MAX(__$start_lsn) FROM MY_AUDIT_TABLE
IF @MIN_TABLE_LSN IS NOT NULL
    SELECT @MIN_TABLE_LSN = sys.fn_cdc_increment_lsn(@MIN_TABLE_LSN)
ELSE
    SELECT @MIN_TABLE_LSN = sys.fn_cdc_get_min_lsn('dbo_MY_AUDIT_TABLE')
IF @MIN_TABLE_LSN IS NOT NULL
BEGIN
    INSERT INTO MY_AUDIT_TABLE (...columns...)
    SELECT ... columns...
    FROM cdc.fn_cdc_get_all_changes_dbo_MY_SOURCE_TABLE(@MIN_TABLE_LSN, @END_LSN, 'all update old') C
    JOIN cdc.lsn_time_mapping T WITH (NOLOCK) ON T.start_lsn = C.__$start_lsn
    ORDER BY __$start_lsn ASC, __$seqval ASC
END
Now the only remaining alternative, which some people have suggested, is that this code may sometimes pick the latest change from the audit table and then increment it to an LSN that doesn't exist yet. But I've tested this manually tons of times and it gives no errors. Also, using an @END_LSN value derived from another CDC table, where this particular MY_AUDIT_TABLE doesn't have records that far yet, works perfectly as well.
The only way I can produce this error manually is by giving the function a newer @END_LSN value than what exists in the lsn_time_mapping table. But such a scenario is only possible if SQL Server can actually create CDC-table records with start_lsn values that don't yet exist in lsn_time_mapping, and I hardly think that's possible. Or is it? That would mean you can't reliably map an LSN to a datetime at the moment the row becomes available.
Thanks for help and explanations again, as usual. :)
This
SELECT @MIN_TABLE_LSN = sys.fn_cdc_increment_lsn(@MIN_TABLE_LSN)
just adds 1 to @MIN_TABLE_LSN. There is no guarantee that the incremented @MIN_TABLE_LSN exists yet. So if you don't have any changes, @END_LSN may be lower than @MIN_TABLE_LSN, and the cdc.fn_cdc_get_all_changes_dbo_MY_SOURCE_TABLE() function may fail.
Note that the CDC scanner job adds a dummy LSN (with transaction id 0x0) to the cdc.lsn_time_mapping table every 5 minutes; that's why, when there are no changes, the script may succeed or fail depending on when you run it.
As mentioned above, you should add a condition checking that the min LSN is lower than or equal to the end LSN.
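A minimal sketch of that guard, reusing the variables from the question (the <= comparison works because LSNs are binary(10) values that compare bytewise):
IF @MIN_TABLE_LSN IS NOT NULL AND @MIN_TABLE_LSN <= @END_LSN
BEGIN
    -- safe to call cdc.fn_cdc_get_all_changes_dbo_MY_SOURCE_TABLE(@MIN_TABLE_LSN, @END_LSN, 'all update old') here
END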
Simple: if the begin/from LSN you are supplying to the get_all_changes function is not available in the change table, this error is thrown. By default, set it to the min LSN (fn_cdc_get_min_lsn) and pass that to the function. With the retention period in effect, the change table data gets cleaned every x days, so you should make sure you are setting the from-LSN to a value that is still available in the change table.
I was able to fix this by using the "full name".
E.g. if the table is dbo.lookup_test, you need to pass dbo_lookup_test to sys.fn_cdc_get_min_lsn.
DECLARE @from_lsn binary(10), @to_lsn binary(10);
SET @from_lsn = sys.fn_cdc_get_min_lsn('dbo_lookup_test');
SET @to_lsn = sys.fn_cdc_get_max_lsn();
SELECT * FROM cdc.fn_cdc_get_all_changes_dbo_lookup_test
    (@from_lsn, @to_lsn, N'all');
GO
Obviously the code above already takes my answer into consideration, but the table naming requirement "{schema}_{tablename}" was not apparent, and SQL Server was not helping by giving a misleading error message.
Please also note that the LSN should be declared as binary(10). It can't simply be declared as binary; that will throw the same error.

Issue with parameters in SQL Server stored procedures

I remember reading a while back that SQL Server can randomly slow down and/or take a stupidly long time to execute a stored procedure when it is written like:
CREATE PROCEDURE spMyExampleProc
(
    @myParameter INT
)
AS
BEGIN
    SELECT something FROM myTable WHERE myColumn = @myParameter
END
The way to fix this is to do the following:
CREATE PROCEDURE spMyExampleProc
(
    @myParameter INT
)
AS
BEGIN
    DECLARE @newParameter INT
    SET @newParameter = @myParameter
    SELECT something FROM myTable WHERE myColumn = @newParameter
END
Now my question: firstly, is it bad practice to follow the second example for all my stored procedures? This seems like a problem that could be easily prevented with little work, but would there be any drawbacks to doing this, and if so, why?
When I read about this, the problem was that the same proc would take varying times to execute depending on the value in the parameter. If anyone can tell me what this problem is called and why it occurs, I would be really grateful. I can't seem to find the link to the post anywhere, and it seems like a problem that could affect our company.
The problem is "parameter sniffing" (SO Search)
The pattern with @newParameter is called "parameter masking" (also SO Search)
You could always use this masking pattern, but it isn't always needed. For example, a simple select by unique key, with no child tables or other filters, should behave as expected every time.
Since SQL Server 2008 you can also use OPTION (OPTIMIZE FOR UNKNOWN) (SO). Also see Alternative to using local variables in a where clause and Experience with when to use OPTIMIZE FOR UNKNOWN.
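A sketch of that hint applied to the hypothetical proc from the question; the optimizer then uses average density statistics instead of the sniffed parameter value:
CREATE PROCEDURE spMyExampleProc
(
    @myParameter INT
)
AS
BEGIN
    SELECT something FROM myTable WHERE myColumn = @myParameter
    OPTION (OPTIMIZE FOR UNKNOWN)   -- plan is built for an "unknown" value, not the first one passed in
END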

SQL Server 2000: search through out database

Somehow some records in my table are getting updated with a value of xyz in a certain column. Out of hundreds of stored procedures, functions, and triggers, how can I determine which code is performing this action? Is there a way to search through each and every script of code in the database?
Please help.
One approach is to check syscomments, which:
Contains entries for each view, rule, default, trigger, CHECK constraint, DEFAULT constraint, and stored procedure within the database. The text column contains the original SQL definition statements.
e.g. select text from syscomments
If you are having trouble finding that literal string, the values could be coming from a table, or they could be concatenated within a routine.
Try this
Select text from syscomments
where CharIndex('x', text) > 0
and CharIndex('y', text) > 0
and CharIndex('z', text) > 0
That might help you either find the right routine, or further indicate that the values are coming from a table.
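On SQL Server 2000 you can also search INFORMATION_SCHEMA.ROUTINES, though note it only covers stored procedures and functions (not triggers), and ROUTINE_DEFINITION holds just the first 4000 characters of each routine, so the syscomments approach above is more complete:
SELECT ROUTINE_TYPE, ROUTINE_NAME
FROM INFORMATION_SCHEMA.ROUTINES
WHERE ROUTINE_DEFINITION LIKE '%xyz%'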
This is going to be nearly impossible to do in SQL Server 2000, because the update might very well come from a variable that holds that value, or from a join to another table that contains that value, rather than anything hard-coded into a stored proc or trigger. The update could also be coming from a DTS package, a job, a piece of dynamic code run by the app, or even from Query Analyzer, so the code itself may not be recorded in the database anywhere.
Perhaps a better approach might be to create an audit table for the table in question and have it record the user and the code from the spid that generated the change as well as the old and new values. You'll have to wait until it happens again, but then you would know exactly what changed the value and what value to put it back to if need be.
Alternatively you could run profiler on the system until it happens but profiler tends to hurt performance and is not usually a good idea to run on a production system. If it is happening very often, it might be an acceptable alternative.
Here's a hint as to how you might get some of the info you want for the eventual trigger code you write:
create table #temp (eventtype nvarchar(1000), parameters int, eventinfo nvarchar(4000), myspid int)
declare @myspid int
select @myspid = @@spid
insert #temp (eventtype, parameters, eventinfo)
exec ('dbcc inputbuffer (@@spid)')
update #temp
set myspid = @myspid
select hostname, program_name, eventinfo
from #temp t
join sysprocesses s on t.myspid = s.spid
where spid = @myspid
You might also use SQL Profiler to trace updates of a given table/column.
