How to apply cached updates from a FireDAC FDQuery when the database has a UNIQUE constraint

I have a problem resolving cached updates when the delta includes fields that have a UNIQUE constraint on the database. I have a database with the following DDL schema (an in-memory SQLite database can be used to reproduce it):
create table FOO
(
ID integer primary key,
DESC char(2) UNIQUE
);
The initial database table contains one record with ID = 1 and DESC = 'R1'.
Accessing this table with a TFDQuery (select * from FOO), if the following steps are performed, the generated delta will be applied correctly by ApplyUpdates:
Update record ID = 1 to DESC = R2
Append a new record ID = 2 with DESC = R1
Delta includes the following:
ID = 1, DESC = 'R2' (update)
ID = 2, DESC = 'R1' (insert)
No error will be generated on ApplyUpdates, because the first operation in the delta will be an update and the second an insert. As record 1 now holds R2, the insertion succeeds because there is no violation of the unique constraint within this transaction.
Now, performing the following steps will generate exactly the same delta (look at the FDQuery.Delta property), but a UNIQUE constraint violation will be raised:
Append a new temporary record ID = 2 with DESC = TT
Update the first record ID = 1 to DESC = R2
Update the temporary record 2 - TT to DESC = R1
Delta includes the following:
ID = 1, DESC = 'R2' (update)
ID = 2, DESC = 'R1' (insert)
Note that FireDAC generates the same delta in both scenarios; this can be verified through the FDQuery's Delta property.
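To make the difference concrete, here is a sketch of the statement order that would explain both outcomes. This is an assumption based on the observed behavior (FireDAC seems to apply changes in the order the records were first modified); the actual SQL it emits may differ:

-- Scenario 1: the update runs first, freeing up the value 'R1'
UPDATE FOO SET "DESC" = 'R2' WHERE ID = 1;
INSERT INTO FOO (ID, "DESC") VALUES (2, 'R1'); -- OK

-- Scenario 2: the appended record was modified first, so its insert runs first,
-- while row 1 still holds 'R1'
INSERT INTO FOO (ID, "DESC") VALUES (2, 'R1'); -- UNIQUE constraint violation
UPDATE FOO SET "DESC" = 'R2' WHERE ID = 1;     -- never reached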
These steps can be used to reproduce the error:
File > New VCL Forms Application; drop an FDConnection and an FDQuery on the form; set the FDConnection to use the SQLite driver (using an in-memory database); drop two buttons on the form, one to reproduce the correct behavior and another to reproduce the error, as follows:
Button OK:
procedure TFrmMain.btnOkClick(Sender: TObject);
begin
// create the default database with a FOO table
con.Open();
con.ExecSQL('create table FOO' + '(ID integer primary key, DESC char(2) UNIQUE)');
// insert a default record
con.ExecSQL('insert into FOO values (1,''R1'')');
qry.CachedUpdates := true;
qry.Open('select * from FOO');
// update the first record to R2
qry.First();
qry.Edit();
qry.Fields[1].AsString := 'R2';
qry.Post();
// append a second record with DESC = R1
qry.Append();
qry.Fields[0].AsInteger := 2;
qry.Fields[1].AsString := 'R1';
qry.Post();
// apply will not generate a unique constraint violation
qry.ApplyUpdates();
end;
Button Error:
procedure TFrmMain.btnErrorClick(Sender: TObject); // handler name assumed, mirroring btnOkClick
begin
// create the default database with a FOO table
con.Open();
con.ExecSQL('create table FOO' + '(ID integer primary key, DESC char(2) UNIQUE)');
// insert a default record
con.ExecSQL('insert into FOO values (1,''R1'')');
qry.CachedUpdates := true;
qry.Open('select * from FOO');
// append a temporary record (TT)
qry.Append();
qry.Fields[0].AsInteger := 2;
qry.Fields[1].AsString := 'TT';
qry.Post();
// update R1 to R2
qry.First();
qry.Edit();
qry.Fields[1].AsString := 'R2';
qry.Post();
qry.Next();
// update TT to R1
qry.Edit();
qry.Fields[1].AsString := 'R1';
qry.Post();
// apply will generate a unique constraint violation
qry.ApplyUpdates();
end;

Update: Since writing the original version of this answer, I've done some more investigation and am beginning to think that either there is a problem with ApplyUpdates etc. in FireDAC's support for SQLite (in Seattle, at least), or we are not using the FD components correctly. It would need FireDAC's author (who is a contributor here) to say which it is.
Leaving aside the ApplyUpdates business for a moment, there are a number of other problems with your code, namely that your dataset navigation makes assumptions about the ordering of the rows in qry and the numbering of its Fields.
The test case I have used is to start (before execution of the application) with the Foo table containing the single row
(1, 'R1')
Then, I execute the following Delphi code while monitoring the contents of Foo using an external application (the SQLite Manager plug-in for Firefox). The code executes without an error being reported in the application, but notice that it does not call ApplyUpdates.
Con.Open();
Con.StartTransaction;
qry.Open('select * from FOO');
qry.InsertRecord([2, 'TT']);
assert(qry.Locate('ID', 1, []));
qry.Edit;
qry.FieldByName('DESC').AsString := 'R2';
qry.Post;
assert(qry.Locate('ID', 2, []));
qry.Edit;
qry.FieldByName('DESC').AsString := 'R1';
qry.Post;
Con.Commit;
qry.Close;
Con.Close;
The added row (ID = 2) is not visible to the external application until after Con.Close has executed, which I find puzzling. Once Con.Close has been called, the external application shows Foo as containing
(1, 'R2')
(2, 'R1')
However, I have been unable to avoid the constraint violation error if I call ApplyUpdates, regardless of any other changes I make to the code, including adding a call to ApplyUpdates after the first Post.
So, it seems to me that either the operation of ApplyUpdates is flawed or it is not being used correctly.
I mentioned FireDAC's author. His name is Dmitry Arefiev and he has answered a lot of FireDAC questions on SO, though I haven't noticed him here in the past couple of months or so. You might try catching his attention by posting in EMBA's FireDAC NG forum, https://forums.embarcadero.com/forum.jspa?forumID=502.

Related

SP2-0552: Bind variable "NEW" not declared and END Error report - Unknown Command

I have to write a trigger for the tables I made, and on insert or update it has to write a row to a separate log table recording what was updated or inserted.
The columns in the log table will be:
Done_process (will hold 'Update' or 'Insert')
Person (student number of the person concerned)
Before (previous value for an update, blank for an insert)
After (new value for an update or an insert)
This is my student_info table:
CREATE TABLE student_info (
school_id NUMBER,
id_no NUMBER NOT NULL UNIQUE,
name VARCHAR2(50) NOT NULL,
surname VARCHAR2(50) NOT NULL,
city VARCHAR2(50) NOT NULL,
birth_date DATE NOT NULL,
CONSTRAINT student_info_pk PRIMARY KEY(school_id )
);
CREATE TABLE og_log(
done_process VARCHAR2(30),
person VARCHAR2(30),
before VARCHAR2(30),
after VARCHAR2(30)
);
CREATE OR REPLACE TRIGGER og_trigger
BEFORE INSERT OR UPDATE OR DELETE ON student_info
REFERENCING OLD AS OLD NEW AS NEW
FOR EACH ROW
ENABLE
DECLARE
BEGIN
IF INSERTING THEN
INSERT INTO og_log(done_process, person, before, after)
VALUES ('Insert',:new.school_id,:old.name,:new.name);
ELSIF UPDATING THEN
INSERT INTO og_log(done_process, person, before, after)
VALUES ('Update',:new.school_id,:old.name,:new.name);
END IF;
END;
/
When I try to run the code, it gives the following errors:
> Trigger OG_TRIGGER created.
>
>
> Error starting at line : 280 in command - ELSIF UPDATING THEN Error
> report - Unknown Command
>
> SP2-0552: Bind variable "NEW" not declared.
>
> 0 rows inserted.
>
>
> Error starting at line : 283 in command - END IF Error report -
> Unknown Command
>
> SP2-0044: For a list of known commands enter HELP and to leave enter
> EXIT.
>
> Error starting at line : 284 in command - END Error report - Unknown
> Command
I believe you are creating this trigger for learning purposes rather than for a real use case, because what you do in the trigger doesn't really make sense.
The trigger you have mentioned does not compile due to syntax problems like WHERE v_id := 20201033.
A WHERE clause is used to compare values, so you should use = instead of :=, which is an assignment operator.
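For example, in a minimal sketch (the variable l_name and the literal ID are only for illustration):

DECLARE
l_name student_info.name%TYPE;
BEGIN
SELECT name
INTO l_name
FROM student_info
WHERE school_id = 20201033; -- '=' compares; ':=' here would not compile
END;
/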
Besides this problem, a few points still need to be taken care of:
Use an explicit naming convention for local variables. For example, you created a local variable v_id while a column of the same name exists in the student_info table. Though it is not a problem in this case, it's good practice to make local variable names distinct, say l_v_id.
You have used a SELECT statement inside the trigger, which could lead to a NO_DATA_FOUND error; you should handle that either in the exception section or by using an aggregate function like MAX() (assuming v_id is a primary key). I doubt you need this SELECT statement at all (you could choose between the old and new values with something like coalesce(:old.school_id, :new.school_id), if I understood you correctly), but I would leave it to you to decide and act accordingly.
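For illustration, that coalesce idea would look something like this inside the UPDATING branch (a sketch only, not tested):

INSERT INTO og_log(done_process, person, before, after)
VALUES ('Update', coalesce(:old.school_id, :new.school_id), :old.city, :new.city);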
Considering the above points, the final code will be:
CREATE OR REPLACE TRIGGER og_trigger
BEFORE INSERT OR UPDATE OR DELETE ON student_info
REFERENCING OLD AS OLD NEW AS NEW
FOR EACH ROW
ENABLE
DECLARE
BEGIN
IF INSERTING THEN
INSERT INTO og_log(done_process, person, before, after)
VALUES ('Insert',:new.school_id,:old.city,:new.city);
ELSIF UPDATING THEN
INSERT INTO og_log(done_process, person, before, after)
VALUES ('Update',:new.school_id,:old.city,:new.city);
END IF;
END;
/
Find a demo at db<>fiddle.
EDITED: solving a probable tool issue
I suspect the issue is with how the SQL Developer tool is being used; however, there is one last try I would like to make:
Step1:
Drop both the tables used by issuing drop command
drop table STUDENT_INFO;
drop table og_log;
Step2:
Open another SQL worksheet using Alt+F10 and do as I have shown in the following image. Please try it and let me know.

Table insertions using stored procedures?

(Submitting for a Snowflake User, hoping to receive additional assistance)
Is there a faster way to perform table insertion using a stored procedure?
I started building a USP with the purpose of inserting a million or so rows of test data into a table for load testing.
I got to the stage shown below and set the iteration value to 10,000.
This took over 10 minutes to iterate 10,000 times, inserting a single integer into the table on each iteration.
Yes, I am using an XS warehouse, but even if this is increased to the maximum size, this is way too slow to be of any use.
--build a test table
CREATE OR REPLACE TABLE myTable
(
myInt NUMERIC(18,0)
);
--testing a js usp using a while statement with the intention to insert multiple rows into a table (Millions) for load testing
CREATE OR REPLACE PROCEDURE usp_LoadTable_test()
RETURNS float
LANGUAGE javascript
EXECUTE AS OWNER
AS
$$
//set the number of iterations
var maxLoops = 10;
//set the row Pointer
var rowPointer = 1;
//set the Insert sql statement
var sql_insert = 'INSERT INTO myTable VALUES(:1);';
//Insert the first value
sf_startInt = rowPointer + 1000;
resultSet = snowflake.execute( {sqlText: sql_insert, binds: [sf_startInt] });
//Loop through to insert all other values
while (rowPointer < maxLoops)
{
rowPointer += 1;
sf_startInt = rowPointer + 1000;
resultSet = snowflake.execute( {sqlText: sql_insert, binds: [sf_startInt] });
}
return rowPointer;
$$;
CALL usp_LoadTable_test();
So far, I've received the following recommendations:
Recommendation #1
One thing you can do is to use a "feeder table" containing 1000 or more rows instead of INSERT ... VALUES, eg:
INSERT INTO myTable SELECT <some transformation of columns> FROM "feeder table"
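As a sketch of that idea, assuming a hypothetical feeder_table pre-loaded with n = 1 to 1000, a cross join yields the million rows in one statement:

INSERT INTO myTable
SELECT (a.n - 1) * 1000 + b.n + 1000 -- one million distinct integers
FROM feeder_table a
CROSS JOIN feeder_table b;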
Recommendation #2
When you perform a million single row inserts, you consume one million micropartitions - each 16MB.
That 16 TB chunk of storage might be visible on your Snowflake bill ... Normal tables are retained for 7 days minimum after drop.
To optimize storage, you could define a clustering key and load the table in ascending order with each chunk filling up as much of a micropartition as possible.
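If you go that route, defining the clustering key is a one-liner (whether it pays off depends on your table size and query patterns):

ALTER TABLE myTable CLUSTER BY (myInt);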
Recommendation #3
Use data generation functions that work very fast if you need sequential integers: https://docs.snowflake.net/manuals/sql-reference/functions/seq1.html
Any other ideas?
This question was also asked at the Snowflake Lodge some weeks ago.
Given the answers you received there, do you still feel unanswered? If so, maybe hint at why.
If you just want a table with a single column of sequence numbers, use GENERATOR() as in #3 above. Otherwise, if you want more advice, share your specific requirements.
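For the single-integer-column table in the question, a GENERATOR() version might look like this (a sketch; per the docs, the SEQ functions are not guaranteed gap-free, so layer ROW_NUMBER() over the generator if that matters):

INSERT INTO myTable
SELECT SEQ8() + 1001
FROM TABLE(GENERATOR(ROWCOUNT => 1000000));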

Error in update trigger after a new row has been inserted into the same table

I want to update OrigOrderNbr and OrigOrderType on the QT record, because when I first create it both columns are NULL. After the S2 order is created (the QT is converted to S2), the S2 record's OrigOrderType and OrigOrderNbr are taken from the QT reference. In addition, I want to write those values back to the QT record as well.
http://i.stack.imgur.com/6ipFa.png
http://i.stack.imgur.com/E6qzT.png
CREATE TRIGGER tgg_SOOrder
ON dbo.SOOrder
FOR INSERT
AS
DECLARE @tOrigOrderType char(2),
@tOrigOrderNbr nvarchar(15)
SELECT @tOrigOrderType = i.OrderType,
@tOrigOrderNbr = i.OrderNbr
FROM inserted i
UPDATE dbo.SOOrder
SET OrigOrderType = @tOrigOrderType,
OrigOrderNbr = @tOrigOrderNbr
FROM inserted i
WHERE dbo.SOOrder.CompanyID='2'
and dbo.SOOrder.OrderType=i.OrigOrderType
and dbo.SOOrder.OrderNbr=i.OrigOrderNbr
GO
After I ran that trigger, it showed the message 'Error #91: Another process has updated 'SOOrder' record. Your changes will be lost.'.
Per a long string of comments, including some excellent suggestions regarding proper trigger-writing techniques by @marc_s and @Damien_The_Unbeliever, as well as my better understanding of your issue at this point, here's the reworked trigger:
CREATE TRIGGER tgg_SOOrder
ON dbo.SOOrder
FOR INSERT
AS
--Update QT record with S2 record's order info
UPDATE dest
SET OrigOrderType = 'S2'
, OrigOrderNbr = i.OrderNbr
FROM SOOrder dest
JOIN inserted i
ON dest.OrderNbr = i.OrigOrderNbr
WHERE dest.OrderType = 'QT'
AND i.OrderType = 'S2'
AND dest.CompanyID = 2 --Business logic constraint
AND dest.OrigOrderNbr IS NULL
AND dest.OrigOrderType IS NULL
Basically, the idea is to update any record of type "QT" once a matching record of type "S2" is created. Matching here means that the OrigOrderNbr of the S2 record is the same as the OrderNbr of the QT record. I kept your business logic constraint regarding CompanyID being set to 2. Additionally, we only care to modify QT records that have OrigOrderNbr and OrigOrderType set to NULL.
This trigger does not rely on a single-row insert; it will work regardless of the number of rows inserted - which is far less likely to break down the line.
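As a quick smoke test, with a hypothetical column list reduced to the fields that matter here (the real SOOrder table will have more required columns):

-- assumes a QT row with OrderNbr = 'QT0001' already exists for CompanyID 2
INSERT INTO dbo.SOOrder (CompanyID, OrderType, OrderNbr, OrigOrderType, OrigOrderNbr)
VALUES (2, 'S2', 'SO0001', 'QT', 'QT0001');
-- the trigger should have back-filled the matching QT row:
SELECT OrderNbr, OrigOrderType, OrigOrderNbr
FROM dbo.SOOrder
WHERE OrderType = 'QT' AND OrderNbr = 'QT0001';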

Avoid Adding Duplicate Records

I'm trying to write an IF statement to give an error message if the user tries to add an existing ID number. When I enter an existing ID, I get the error; up to here it's OK. But when I then type another ID number and fill in the fields (name, address, etc.), the record doesn't go to the database.
METHOD add_employee.
DATA: IT_EMP TYPE TABLE OF ZEMPLOYEE_20.
DATA:WA_EMP TYPE ZEMPLOYEE_20.
Data: l_count type i value '2'.
SELECT * FROM ZEMPLOYEE_20 INTO TABLE IT_EMP.
LOOP AT IT_EMP INTO WA_EMP.
IF wa_emp-EMPLOYEE_ID eq pa_id.
l_count = l_count * '0'.
else.
l_count = l_count * '1'.
endif.
endloop.
If l_count eq '2'.
WA_EMP-EMPLOYEE_ID = C_ID.
WA_EMP-EMPLOYEE_NAME = C_NAME.
WA_EMP-EMPLOYEE_ADDRESS = C_ADD.
WA_EMP-EMPLOYEE_SALARY = C_SAL.
WA_EMP-EMPLOYEE_TYPE = C_TYPE.
APPEND wa_emp TO it_emp.
INSERT ZEMPLOYEE_20 FROM TABLE it_emp.
CALL FUNCTION 'POPUP_TO_DISPLAY_TEXT'
EXPORTING
TITEL = 'INFO'
TEXTLINE1 = 'Record Added Successfully.'.
elseif l_count eq '0'.
CALL FUNCTION 'POPUP_TO_DISPLAY_TEXT'
EXPORTING
TITEL = 'INFO'
TEXTLINE1 = 'Selected ID already in database.Please type another ID no.'.
ENDIF.
ENDMETHOD.
I'm not sure I'm getting your explanation. Why are you trying to re-insert all the existing entries back into the table? You're just trying to insert C_ID etc. if it doesn't exist yet? Why do you need all the existing entries for that?
If so, throw out that SELECT and the loop completely; you don't need them. You have a few options...
Just read the table with your single entry
SELECT SINGLE * FROM ztable INTO wa WHERE id = c_id.
IF SY-SUBRC = 0.
"this entry exists. popup!
ENDIF.
Use a modify statement
This will overwrite duplicate entries with new data (so non-key fields may change this way); it won't fail. No need for a popup.
MODIFY ztable FROM wa.
Catch the SQL exceptions instead of making it dump
If the update fails because of an exception, you can always catch it and deal with exceptional situations.
TRY .
INSERT ztable FROM wa.
CATCH sapsql_array_insert_duprec.
"do your popup, the update failed because of duplicate records
ENDTRY.
I think there's a bug when appending to the internal table IT_EMP and inserting into the ZEMPLOYEE_20 table.
Suppose you append the first time and then insert. When you append a second time, you will have two records in IT_EMP that are going to be inserted into ZEMPLOYEE_20, because you never refresh or clear the internal table, and there you will get a runtime error.
According to SAP documentation on 'Inserting Lines into Tables ':
Inserting Several Lines
To insert several lines into a database table, use the following:
INSERT dbtab FROM TABLE itab [ACCEPTING DUPLICATE KEYS]. This
writes all lines of the internal table to the database table in
one single operation. The same rules apply to the line type of itab
as to the work area described above. If the system is able to
insert all of the lines from the internal table, SY-SUBRC is set to 0.
If one or more lines cannot be inserted because the database already
contains a line with the same primary key, a runtime error occurs.
Maybe the right direction here is to insert the work area directly, but first you must check whether the record already exists, using the primary key.
Check the SAP documentation on this issue via the link above.
On the other hand, once l_count is zero because of l_count = l_count * '0', that value will never change to any other number, meaning you will never append or insert again.
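A minimal sketch of the refresh fix, if the original structure is kept (only the new row should be in the internal table when the INSERT runs):

" keep only the new record in the internal table before inserting
REFRESH it_emp.
CLEAR wa_emp.
wa_emp-employee_id = c_id.
" ...fill the remaining fields as before...
APPEND wa_emp TO it_emp.
INSERT zemployee_20 FROM TABLE it_emp.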
Why are you retrieving all entries from ZEMPLOYEE_20?
You can directly check whether the ID already exists by using SELECT SINGLE. If it exists, show the message; if not, add the record.
It is recommended to retrieve only the one field that is needed, not the entire row with an asterisk (*).
SELECT SINGLE employee_id FROM zemployee_20 INTO v_id WHERE employee_id = pa_id. (or a field in a structure)
if sy-subrc = 0. "exists
"show message
else. "not existing id
"populate structure and then add record to Z table
endif.
Furthermore, l_count is not only unnecessary but also badly implemented.
You can use the INSERT statement directly; if sy-subrc is non-zero, raise the error message.
WA_EMP-EMPLOYEE_ID = C_ID.
WA_EMP-EMPLOYEE_NAME = C_NAME.
WA_EMP-EMPLOYEE_ADDRESS = C_ADD.
WA_EMP-EMPLOYEE_SALARY = C_SAL.
WA_EMP-EMPLOYEE_TYPE = C_TYPE.
INSERT ZEMPLOYEE_20 FROM WA_EMP.
IF sy-subrc <> 0.
" raise the exception / show the popup
ENDIF.

How can I copy records between tables only if they are valid according to check constraints in Oracle?

I don't know if that is possible, but I want to copy a bunch of records from a temp table to a normal table. The problem is that some records may violate check constraints so I want to insert everything that is possible and generate error logs somewhere else for the invalid records.
If I execute:
INSERT INTO normal_table
SELECT ... FROM temp_table
nothing would be inserted if any record violates any constraint. I could make a loop and manually insert one by one, but I think the performance would be lower.
PS: if possible, I'd like a solution that works with Oracle 9i.
From Oracle 10gR2, you can use the log errors clause:
EXECUTE DBMS_ERRLOG.CREATE_ERROR_LOG('NORMAL_TABLE');
INSERT INTO normal_table
SELECT ... FROM temp_table
LOG ERRORS REJECT LIMIT UNLIMITED;
In its simplest form. You can then see what errors you got:
SELECT ora_err_mesg$
FROM err$_normal_table;
More on the CREATE_ERROR_LOG step here.
I thought the following approach would work from 9i, but I didn't have an instance available to test on, so it was originally run on 11gR2.
Update: tested and tweaked (to avoid PLS-00436) in 9i:
declare
type t_temp_table is table of temp_table%rowtype;
l_temp_table t_temp_table;
l_err_code err_table.err_code%type;
l_err_msg err_table.err_msg%type;
l_id err_table.id%type;
cursor c is select * from temp_table;
error_array exception;
pragma exception_init(error_array, -24381);
begin
open c;
loop
fetch c bulk collect into l_temp_table limit 100;
exit when l_temp_table.count = 0;
begin
forall i in 1..l_temp_table.count save exceptions
insert into normal_table
values l_temp_table(i);
exception
when error_array then
for j in 1..sql%bulk_exceptions.count loop
l_id := l_temp_table(sql%bulk_exceptions(j).error_index).id;
l_err_code := sql%bulk_exceptions(j).error_code;
l_err_msg := sqlerrm(-1 * sql%bulk_exceptions(j).error_code);
insert into err_table(id, err_code, err_msg)
values (l_id, l_err_code, l_err_msg);
end loop;
end;
end loop;
end;
/
With all your real columns instead of just id, which I've done just for demo purposes:
create table normal_table(id number primary key);
create table temp_table(id number);
create table err_table(id number, err_code number, err_msg varchar2(2000));
insert into temp_table values(42);
insert into temp_table values(42);
Then run the anonymous block above...
select * from normal_table;
ID
----------
42
column err_msg format a50
select * from err_table;
ID ERR_CODE ERR_MSG
---------- ---------- --------------------------------------------------
42 1 ORA-00001: unique constraint (.) violated
This is less satisfactory on a few levels - more coding, slower if you have a lot of exceptions (because of the individual inserts for those), doesn't show which constraint was violated (or any other error details), and won't retain the errors if you rollback - though you could call an autonomous transaction to log it if that was an issue, which I doubt here.
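If retaining the errors across a rollback did matter, the logging insert could be moved into an autonomous transaction, along these lines (a sketch using the err_table defined above):

create or replace procedure log_err(p_id number, p_code number, p_msg varchar2) as
pragma autonomous_transaction;
begin
insert into err_table(id, err_code, err_msg)
values (p_id, p_code, p_msg);
commit; -- committed independently of the caller's transaction
end;
/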
If you have a small enough volume of data to not want to worry about the limit clause you can simplify it a bit:
declare
type t_temp_table is table of temp_table%rowtype;
l_temp_table t_temp_table;
l_err_code err_table.err_code%type;
l_err_msg err_table.err_msg%type;
l_id err_table.id%type;
error_array exception;
pragma exception_init(error_array, -24381);
begin
select * bulk collect into l_temp_table from temp_table;
forall i in 1..l_temp_table.count save exceptions
insert into normal_table
values l_temp_table(i);
exception
when error_array then
for j in 1..sql%bulk_exceptions.count loop
l_id := l_temp_table(sql%bulk_exceptions(j).error_index).id;
l_err_code := sql%bulk_exceptions(j).error_code;
l_err_msg := sqlerrm(-1 * sql%bulk_exceptions(j).error_code);
insert into err_table(id, err_code, err_msg)
values (l_id, l_err_code, l_err_msg);
end loop;
end;
/
The 9i documentation doesn't seem to be online any more, but this is in a new-features document, and lots of people have written about it - it's been asked about here before too.
If you're specifically interested only in check constraints, then one method to think about is to read the definitions of the target check constraints from the data dictionary and apply them as predicates to the query that extracts data from the source table, using dynamic SQL.
Given:
create table t1 (
col1 number check (col1 between 3 and 10))
You can:
select constraint_name,
search_condition
from user_constraints
where constraint_type = 'C' and
table_name = 'T1'
The result being:
"SYS_C00226681", "col1 between 3 and 10"
From there it's "a simple matter of coding", as they say, and the method will work on just about any version of Oracle. The most efficient method would probably be to use a multitable insert to direct rows to either the intended target table or to an error logging table, based on conditions that apply the check constraint predicates.
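A sketch of that multitable insert, with the predicate hard-coded here for readability (in practice it would be read from user_constraints and injected via dynamic SQL; t1_err is a hypothetical error table):

INSERT FIRST
WHEN col1 BETWEEN 3 AND 10 THEN
INTO t1 (col1) VALUES (col1)
ELSE
INTO t1_err (col1, err_reason) VALUES (col1, 'violates SYS_C00226681')
SELECT col1 FROM temp_table;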
