I have this trigger and it says Warning: Trigger created with compilation errors.
Can you give me quick advice?
/
CREATE OR REPLACE TRIGGER type_check
BEFORE INSERT ON carrs
FOR EACH ROW
BEGIN
IF :new.weight > 3500
THEN
:new.type := 'nakladne';
ELSE
:new.type := 'osobne';
END IF;
END;
/
Edit: I still got the same warning.
Edit: here is the table definition:
create table carrs (
id_Car Integer not null,
id_board_unit Integer not null,
id_evc_numbers Integer not null,
id_owner Integer,
weight Integer not null );
Here are the errors:
LINE/COL ERROR
-------- -----------------------------------------------------------------
4/5 PLS-00049: bad bind variable 'NEW.TYPE'
6/5 PLS-00049: bad bind variable 'NEW.TYPE'
If you're using SQL*Plus, type show errors after getting that warning to display the list of syntax errors that were reported. It's far easier to diagnose problems when you know what they are rather than guessing.
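For example, using the trigger name from your post:
show errors trigger type_check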
Your references to the :new pseudorecord need to include the colon prefix (:new rather than new), and the assignment operator in PL/SQL is :=, not =. So, at a minimum, you'd want something like
CREATE OR REPLACE TRIGGER type_check
BEFORE INSERT ON carrs
FOR EACH ROW
BEGIN
IF :new.weight > 3500
THEN
:new.type := 'nakladne';
ELSE
:new.type := 'osobne';
END IF;
END;
There may be additional errors as well. If there are, type show errors and edit your question to include them.
If you wish to add a new column named type to your table, you would do that before creating your trigger. That's not a particularly good name for a column, though, since type is also a keyword in Oracle; I'd choose something more meaningful like, say, carr_type. When creating the column, you'd have to specify how large the strings it needs to hold will be. I'll guess that you want space for 10 characters:
ALTER TABLE carrs
ADD( carr_type VARCHAR2(10 CHAR) );
You then probably want to UPDATE your table to populate the new column for your existing data
UPDATE carrs
SET carr_type = (case when weight > 3500
then 'nakladne'
else 'osobne'
end)
before creating your trigger
CREATE OR REPLACE TRIGGER type_check
BEFORE INSERT ON carrs
FOR EACH ROW
BEGIN
IF :new.weight > 3500
THEN
:new.carr_type := 'nakladne';
ELSE
:new.carr_type := 'osobne';
END IF;
END;
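Once the new column exists and the trigger compiles cleanly, a quick test insert (the values below are purely illustrative) should show carr_type being populated automatically:
INSERT INTO carrs (id_Car, id_board_unit, id_evc_numbers, weight)
VALUES (1, 1, 1, 4200);
SELECT id_Car, weight, carr_type FROM carrs;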
I have to write a trigger for the tables I made: on insert and update, it has to write a record to a separate log table for the rows that are inserted or updated.
The columns in the log table will be:
Done_process (will contain 'update' or 'insert')
Person (student number of the person affected)
Before (previous value for update, blank for insert)
After (new value for update, new value for insert)
This is my student_info table,
CREATE TABLE student_info (
school_id NUMBER,
id_no NUMBER NOT NULL UNIQUE,
name VARCHAR2(50) NOT NULL,
surname VARCHAR2(50) NOT NULL,
city VARCHAR2(50) NOT NULL,
birth_date DATE NOT NULL,
CONSTRAINT student_info_pk PRIMARY KEY(school_id )
);
CREATE TABLE og_log(
done_process VARCHAR2(30),
person VARCHAR2(30),
before VARCHAR2(30),
after VARCHAR2(30)
);
CREATE OR REPLACE TRIGGER og_trigger
BEFORE INSERT OR UPDATE OR DELETE ON student_info
REFERENCING OLD AS OLD NEW AS NEW
FOR EACH ROW
ENABLE
DECLARE
BEGIN
IF INSERTING THEN
INSERT INTO og_log(done_process, person, before, after)
VALUES ('Insert',:new.school_id,:old.name,:new.name);
ELSIF UPDATING THEN
INSERT INTO og_log(done_process, person, before, after)
VALUES ('Update',:new.school_id,:old.name,:new.name);
END IF;
END;
/
When I try to run the code it gives an error as follows:
Trigger OG_TRIGGER created.

Error starting at line : 280 in command - ELSIF UPDATING THEN
Error report - Unknown Command

SP2-0552: Bind variable "NEW" not declared.

0 rows inserted.

Error starting at line : 283 in command - END IF
Error report - Unknown Command

SP2-0044: For a list of known commands enter HELP and to leave enter EXIT.

Error starting at line : 284 in command - END
Error report - Unknown Command
I believe you are creating this trigger for learning purposes and not for a real use case, because what you do in the trigger doesn't really make much sense.
The trigger you have mentioned is not compiling due to syntactical problems such as where v_id := 20201033.
The WHERE clause is used to compare values, so you should use = instead of :=, which is an assignment operator.
Besides this problem, a few points still need to be taken care of:
Adopt an explicit convention for naming local variables. For example, you have created a local variable v_id while the same column is also available in the student_info table. Though it is not a problem in this case, it's good practice to make the local variable distinctive, say l_v_id.
You have used a SELECT statement inside the trigger which could lead to a NO_DATA_FOUND error; you should handle it either in the exception section or by using an aggregate function like max(), assuming v_id is a primary key. I'm not sure why you need this SELECT statement at all (you could choose between old and new using something like coalesce(:old.school_id, :new.school_id), if I understood you correctly), but I would leave it open to you to decide and act accordingly.
Considering the above points, the final code will be:
CREATE OR REPLACE TRIGGER og_trigger
BEFORE INSERT OR UPDATE OR DELETE ON student_info
REFERENCING OLD AS OLD NEW AS NEW
FOR EACH ROW
ENABLE
DECLARE
BEGIN
IF INSERTING THEN
INSERT INTO og_log(done_process, person, before, after)
VALUES ('Insert',:new.school_id,:old.city,:new.city);
ELSIF UPDATING THEN
INSERT INTO og_log(done_process, person, before, after)
VALUES ('Update',:new.school_id,:old.city,:new.city);
END IF;
END;
/
Find a demo at db<>fiddle.
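If you want to exercise the trigger yourself (a minimal sketch with made-up values), an insert followed by an update should leave one row per statement in og_log:
INSERT INTO student_info (school_id, id_no, name, surname, city, birth_date)
VALUES (20201033, 1, 'Ali', 'Demir', 'Ankara', DATE '2000-01-01');
UPDATE student_info SET city = 'Izmir' WHERE school_id = 20201033;
SELECT * FROM og_log;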
EDITED: solving what is probably a tool issue
I suspect the issue is with how the SQL Developer tool is being used; however, here is one last thing I would like to try.
Step 1:
Drop both of the tables used by issuing drop commands:
drop table STUDENT_INFO;
drop table og_log;
Step 2:
Open another SQL worksheet using Alt+F10 and do as I have shown in the following image. Please try it and let me know.
I'm currently trying to figure out a question on my assignment. I've never worked with arrays before, but I've done collections, triggers, functions, procedures, and cursors. I'm not asking for the answer but rather for some help, as I'm confused about how to approach it.
Declare a record where you keep record of LOCATIONS table (LOCATION_ID,
STREET_ADDRESS, POSTAL_CODE, CITY, STATE_PROVINCE and
COUNTRY_ID) and an operation.
Declare an array where you keep location records as elements.
Initialize the array by populating values for each location that you would like
to process
The operation type can be ‘U’ for Update, ‘I’ for Insert and ‘D’ for Delete
Iterate all array elements from beginning to end and execute the following
logic per each location defined in the array:
If operation type is ‘U’, then update LOCATIONS table with the values
coming from the array. If location id is not found on the table, insert it
as a new record.
If operation type is ‘I’, insert the record to the table. Values should
come from the array
If operation type is ‘D’, delete that location from the table.
For each operation, display a message after the process is completed
If operator type is different than U, I, D, then display a proper
message indicating 'Invalid operation'.
Commit all transactions
The part about the operations also confuses me because I have done some work where you would use the operators in triggers but it didn't involve an array:
BEGIN
IF INSERTING THEN
INSERT INTO carlog
VALUES ('INSERT',user, SYSDATE, UPPER(:NEW.serial), UPPER(:NEW.make),
UPPER(:NEW.model), UPPER(:NEW.color));
END IF;
IF UPDATING THEN
INSERT INTO carlog
VALUES ('UPDATE',user, SYSDATE, :old.serial, :old.make,
:old.model, :old.color);
END IF;
IF DELETING THEN
INSERT INTO carlog
VALUES ('DELETE',user, SYSDATE, :old.serial, :old.make,
:old.model, :old.color);
END IF;
END;
An array is just a type of collection. If you look at the documentation here: https://docs.oracle.com/cd/B28359_01/appdev.111/b28370/collections.htm#LNPLS00501
you can see that an "Associative Array" is simply a PL/SQL collection type.
As far as the "Operation" is concerned, my understanding based on the spec is that it is purely an attribute of the location record itself. I would make something similar to the below:
DECLARE
--declare your location record based on the spec
TYPE location IS RECORD (
location_id integer,
operation VARCHAR2(1)
--additional values from the spec would go here
);
--declare an array as a table of the location type
TYPE location_array IS TABLE OF location INDEX BY BINARY_INTEGER;
--two variables to hold the data
example_location location;
locations location_array;
BEGIN
--start building your location record for the location you want to modify
example_location.location_id := 1;
example_location.operation := 'T';
--..... additional values here
--add the location to the array
locations(locations.count + 1) := example_location;
--repeat the above for any other locations, or populate these some other way
--loop through the locations
FOR i IN locations.first..locations.last
LOOP
--decide the logic based on the operation, could just as easily use your if logic here
CASE locations(i).operation
WHEN 'U' THEN
dbms_output.put_line('your update logic here');
WHEN 'I' THEN
dbms_output.put_line('your insert logic here');
WHEN 'D' THEN
dbms_output.put_line('your delete logic here');
ELSE
dbms_output.put_line('other operations');
END CASE;
END LOOP;
END;
You would need to adapt the above to suit your needs, adding in the relevant logic, messages, commits, error handling etc.
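For instance, the 'U' branch of the spec ("update, and insert if the id is not found") could look something like this minimal sketch; it assumes an HR-style LOCATIONS table, handles only two of the columns, uses made-up values, and relies on SQL%ROWCOUNT to detect a missing id:
DECLARE
  TYPE location IS RECORD (
    location_id locations.location_id%TYPE,
    city        locations.city%TYPE,
    operation   VARCHAR2(1)
  );
  TYPE location_array IS TABLE OF location INDEX BY BINARY_INTEGER;
  l_locs location_array;
BEGIN
  -- populate one element; in the assignment you would add one record per location
  l_locs(1).location_id := 9999;
  l_locs(1).city        := 'Toronto';
  l_locs(1).operation   := 'U';
  FOR i IN l_locs.first..l_locs.last LOOP
    CASE l_locs(i).operation
      WHEN 'U' THEN
        UPDATE locations
           SET city = l_locs(i).city
         WHERE location_id = l_locs(i).location_id;
        IF SQL%ROWCOUNT = 0 THEN
          -- the id was not found, so insert it as a new record instead
          INSERT INTO locations (location_id, city)
          VALUES (l_locs(i).location_id, l_locs(i).city);
        END IF;
        dbms_output.put_line('Processed U for location ' || l_locs(i).location_id);
      ELSE
        dbms_output.put_line('The I and D branches (and the invalid-operation message) would go here');
    END CASE;
  END LOOP;
  COMMIT;
END;
/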
I'm stumped on something which should be very straight-forward. I have a SQL Server database, and I'm trying to update a non-nullable varchar or nvarchar field with an empty string. I know it's possible, because an empty string '' is not the same thing as NULL. However, using the TADOQuery, it is not allowing me to do this.
I'm trying to update an existing record like so:
ADOQuery1.Edit;
ADOQuery1['NonNullFieldName']:= '';
//or
ADOQuery1.FieldByName('NonNullFieldName').AsString:= '';
ADOQuery1.Post; //<-- Exception raised while posting
If there is anything in the string, even just a single space, it saves just fine, as expected. But, if it is an empty string, it fails:
Non-nullable column cannot be updated to Null.
But it's not null. It's an empty string, which should work just fine. I swear I've passed empty strings many, many times in the past.
Why am I getting this error, and what should I do to resolve it?
Additional details:
Database: Microsoft SQL Server 2014 Express
Language: Delphi 10 Seattle Update 1
Database drivers: SQLOLEDB.1
Field being updated: nvarchar(MAX) NOT NULL
I can reproduce your reported problem using the code below with SS2014, the OLEDB driver and
Seattle, along with the difference in behaviour between a table created with MAX as the column size and one created with a specific number (4096 in my case). I thought I would post this as an alternative
answer because it not only shows how to investigate this difference systematically
but also identifies why this difference arises (and hence how to avoid it in future).
Please refer to and execute the code below, as written, i.e. with the UseMAX define
active.
Turning on "Use Debug DCUs" in the project options before executing the code immediately
reveals that the described exception occurs in Data.Win.ADODB at line 4920
Recordset.Fields[TField(FModifiedFields[I]).FieldNo-1].Value := Data
of TCustomADODataSet.InternalPost and the Debug evaluation window reveals that
Data at this point is Null.
Next, notice that
update jdtest set NonNullFieldName = ''
executes in an SSMS2014 Query window without complaint (Command(s) completed successfully.), so it seems that the
fact that Data is Null at line 4920 is what is causing the problem and the next question is "Why?"
Well, the first thing to notice is that the form's caption is displaying ftMemo.
Next, comment out the UseMAX define, recompile and execute. Result: no exception,
and notice that the form's caption is now displaying ftString.
And that's the reason: Using a specific number for the column size means that
the table metadata retrieved by the RTL causes the client-side Field to be created
as a TStringField, whose value you can set by a string assignment statement.
OTOH, when you specify MAX, the resulting client-side Field is of type ftMemo,
which is one of Delphi's BLOB types and when you assign
string values to an ftMemo field, you are at the mercy of code in Data.DB.pas, which does all the reading (and writing) to the record buffer using a TBlobStream. The problem with that is that, as far as I can see after a lot of experiments and tracing through the code, the way a TMemoField uses a BlobStream fails to properly distinguish between updating the field contents to '' and setting the field's value to Null (as in System.Variants).
In short, whenever you try to set a TMemoField's value to an empty string, what actually happens is that the field's state is set to Null, and this is what causes the exception in the q. AFAICS, this is unavoidable, so no work-around is obvious, to me at any rate.
I have not investigated whether the choice between ftMemo and ftString is made by the Delphi RTL code or the MDAC(Ado) layer it sits upon: I would expect it is actually determined by the RecordSet TAdoQuery uses.
QED. Notice that this systematic approach to debugging has revealed the
problem & cause with very little effort and zero trial and error, which was
what I was trying to suggest in my comments on the q.
Another point is that this problem could be tracked down entirely without
resorting to server-side tools, including the SSMS profiler. There wasn't any need to use the profiler to inspect what the client was sending to the server
because there was no reason to suppose that the error returned by the server
was incorrect. That confirms what I said about starting investigation at the client side.
Also, using a table created on the fly using IfDefed Sql enabled the problem effectively to be isolated in a single step by simple observation of two runs of the app.
Code
uses [...] TypInfo;
[...]
implementation[...]
const
// The following consts are to create the table and insert a single row
//
// The difference between them is that scSqlSetUp1 specifies
// the size of the NonNullFieldName to 'MAX' whereas scSqlSetUp2 specifies a size of 4096
scSqlSetUp1 =
'CREATE TABLE [dbo].[JDTest]('#13#10
+ ' [ID] [int] NOT NULL primary key,'#13#10
+ ' [NonNullFieldName] VarChar(MAX) NOT NULL'#13#10
+ ') ON [PRIMARY]'#13#10
+ ';'#13#10
+ 'Insert JDTest (ID, [NonNullFieldName]) values (1, ''a'')'#13#10
+ ';'#13#10
+ 'SET ANSI_PADDING OFF'#13#10
+ ';';
scSqlSetUp2 =
'CREATE TABLE [dbo].[JDTest]('#13#10
+ ' [ID] [int] NOT NULL primary key,'#13#10
+ ' [NonNullFieldName] VarChar(4096) NOT NULL'#13#10
+ ') ON [PRIMARY]'#13#10
+ ';'#13#10
+ 'Insert JDTest (ID, [NonNullFieldName]) values (1, ''a'')'#13#10
+ ';'#13#10
+ 'SET ANSI_PADDING OFF'#13#10
+ ';';
scSqlDropTable = 'drop table [dbo].[jdtest]';
procedure TForm1.Test1;
var
AField : TField;
S : String;
begin
// Following creates the table. The define determines the size of the NonNullFieldName
{$define UseMAX}
{$ifdef UseMAX}
S := scSqlSetUp1;
{$else}
S := scSqlSetUp2;
{$endif}
ADOConnection1.Execute(S);
try
ADOQuery1.Open;
try
ADOQuery1.Edit;
// Get explicit reference to the NonNullFieldName
// field to make working with it and investigating it easier
AField := ADOQuery1.FieldByName('NonNullFieldName');
// The following, which requires the `TypInfo` unit in the `USES` list is to find out which exact type
// AField is. Answer: ftMemo, or ftString, depending on UseMAX.
// Of course, we could get this info by inspection in the IDE
// by creating persistent fields
S := GetEnumName(TypeInfo(TFieldType), Ord(AField.DataType));
Caption := S; // Displays `ftMemo` or `ftString`, of course
AField.AsString:= '';
ADOQuery1.Post; //<-- Exception raised while posting
finally
ADOQuery1.Close;
end;
finally
// Tidy up
ADOConnection1.Execute(scSqlDropTable);
end;
end;
procedure TForm1.Button1Click(Sender: TObject);
begin
Test1;
end;
The problem occurs when using MAX in the data type. Both varchar(MAX) and nvarchar(MAX) exhibit this behavior. When MAX is removed and replaced with a large specific size, such as 5000, empty strings are allowed.
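If changing the column definition is an option, a workaround along those lines (a sketch only; the 4000-character size is arbitrary and JDTest is the demo table from the answer above) is to give the column a specific size so that the client-side field maps to ftString instead of ftMemo:
ALTER TABLE JDTest ALTER COLUMN NonNullFieldName nvarchar(4000) NOT NULL;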
I'm trying to write an IF statement to give an error message if the user tries to add an existing ID number. When I try to enter an existing ID I get the error. Up to here it's OK, but when I type another ID number and fill in the fields (name, address, etc.) it doesn't go to the database.
METHOD add_employee.
DATA: IT_EMP TYPE TABLE OF ZEMPLOYEE_20.
DATA:WA_EMP TYPE ZEMPLOYEE_20.
Data: l_count type i value '2'.
SELECT * FROM ZEMPLOYEE_20 INTO TABLE IT_EMP.
LOOP AT IT_EMP INTO WA_EMP.
IF wa_emp-EMPLOYEE_ID eq pa_id.
l_count = l_count * '0'.
else.
l_count = l_count * '1'.
endif.
endloop.
If l_count eq '2'.
WA_EMP-EMPLOYEE_ID = C_ID.
WA_EMP-EMPLOYEE_NAME = C_NAME.
WA_EMP-EMPLOYEE_ADDRESS = C_ADD.
WA_EMP-EMPLOYEE_SALARY = C_SAL.
WA_EMP-EMPLOYEE_TYPE = C_TYPE.
APPEND wa_emp TO it_emp.
INSERT ZEMPLOYEE_20 FROM TABLE it_emp.
CALL FUNCTION 'POPUP_TO_DISPLAY_TEXT'
EXPORTING
TITEL = 'INFO'
TEXTLINE1 = 'Record Added Successfully.'.
elseif l_count eq '0'.
CALL FUNCTION 'POPUP_TO_DISPLAY_TEXT'
EXPORTING
TITEL = 'INFO'
TEXTLINE1 = 'Selected ID already in database.Please type another ID no.'.
ENDIF.
ENDMETHOD.
I'm not sure I'm getting your explanation. Why are you trying to re-insert all the existing entries back into the table? You're just trying to insert C_ID etc if it doesn't exist yet? Why do you need all the existing entries for that?
If so, throw out that select and the loop completely, you don't need it. You have a few options...
Just read the table with your single entry
SELECT SINGLE * FROM ztable INTO wa WITH KEY ID = C_ID etc.
IF SY-SUBRC = 0.
"this entry exists. popup!
ENDIF.
Use a modify statement
This will overwrite duplicate entries with new data (so non-key fields may change this way); it won't fail. No need for a popup.
MODIFY ztable FROM wa.
Catch the SQL exceptions instead of making it dump
If the update fails because of an exception, you can always catch it and deal with exceptional situations.
TRY .
INSERT ztable FROM wa.
CATCH sapsql_array_insert_duprec.
"do your popup, the update failed because of duplicate records
ENDTRY.
I think there's a bug when appending to the internal table 'IT_EMP' and inserting into the 'ZEMPLOYEE_20' table.
Suppose you append the first time and then you insert. But when you append the second time you will have 2 records in 'IT_EMP' that are going to be inserted into 'ZEMPLOYEE_20'. That is because you don't refresh or clear the internal table, and there you will have a runtime error.
According to SAP documentation on 'Inserting Lines into Tables ':
Inserting Several Lines
To insert several lines into a database table, use the following:
INSERT <dbtab> FROM TABLE <itab> [ACCEPTING DUPLICATE KEYS]. This
writes all lines of the internal table <itab> to the database table <dbtab> in
one single operation. The same rules apply to the line type of <itab>
as to the work area described above. If the system is able to
insert all of the lines from the internal table, SY-SUBRC is set to 0.
If one or more lines cannot be inserted because the database already
contains a line with the same primary key, a runtime error occurs.
Maybe the right direction here is trying to insert the work area directly, but before that you must check whether the record already exists using the primary key.
Check the SAP documentation on this issue by clicking the link above.
On the other hand, once l_count is zero because of l_count = l_count * '0', that value will never change to any other number, which means you won't append or insert again.
Why are you retrieving all entries from zemployee_20?
You can directly check whether the 'id' already exists by using SELECT SINGLE. If it exists, show the message; if not, add the record.
It is recommended to retrieve only the one field that is needed and not the entire row with the asterisk *.
SELECT single employee_id FROM ZEMPLOYEE_20 where employee_id = p_id INTO v_id. ( or field in structure )
if sy-subrc = 0. "exists
"show message
else. "not existing id
"populate structure and then add record to Z table
endif.
Furthermore, l_count is not only unnecessary but also badly implemented.
You can directly use the INSERT query; if sy-subrc is unsuccessful, raise the error message.
WA_EMP-EMPLOYEE_ID = C_ID.
WA_EMP-EMPLOYEE_NAME = C_NAME.
WA_EMP-EMPLOYEE_ADDRESS = C_ADD.
WA_EMP-EMPLOYEE_SALARY = C_SAL.
WA_EMP-EMPLOYEE_TYPE = C_TYPE.
INSERT ZEMPLOYEE_20 FROM WA_EMP.
If sy-subrc <> 0.
Raise the Exception.
Endif.
I don't know if that is possible, but I want to copy a bunch of records from a temp table to a normal table. The problem is that some records may violate check constraints so I want to insert everything that is possible and generate error logs somewhere else for the invalid records.
If I execute:
INSERT INTO normal_table
SELECT ... FROM temp_table
nothing would be inserted if any record violates any constraint. I could make a loop and manually insert one by one, but I think the performance would be lower.
PS: if possible, I'd like a solution that works with Oracle 9.
From Oracle 10gR2, you can use the log errors clause:
EXECUTE DBMS_ERRLOG.CREATE_ERROR_LOG('NORMAL_TABLE');
INSERT INTO normal_table
SELECT ... FROM temp_table
LOG ERRORS REJECT LIMIT UNLIMITED;
In its simplest form. You can then see what errors you got:
SELECT ora_err_mesg$
FROM err$_normal_table;
More on the CREATE_ERROR_LOG step here.
I think this approach works from 9i, but I don't have an instance available to test on, so this is actually run on 11gR2.
Update: tested and tweaked (to avoid PLS-00436) in 9i:
declare
type t_temp_table is table of temp_table%rowtype;
l_temp_table t_temp_table;
l_err_code err_table.err_code%type;
l_err_msg err_table.err_msg%type;
l_id err_table.id%type;
cursor c is select * from temp_table;
error_array exception;
pragma exception_init(error_array, -24381);
begin
open c;
loop
fetch c bulk collect into l_temp_table limit 100;
exit when l_temp_table.count = 0;
begin
forall i in 1..l_temp_table.count save exceptions
insert into normal_table
values l_temp_table(i);
exception
when error_array then
for j in 1..sql%bulk_exceptions.count loop
l_id := l_temp_table(sql%bulk_exceptions(j).error_index).id;
l_err_code := sql%bulk_exceptions(j).error_code;
l_err_msg := sqlerrm(-1 * sql%bulk_exceptions(j).error_code);
insert into err_table(id, err_code, err_msg)
values (l_id, l_err_code, l_err_msg);
end loop;
end;
end loop;
end;
/
With all your real columns instead of just id, which I've done just for demo purposes:
create table normal_table(id number primary key);
create table temp_table(id number);
create table err_table(id number, err_code number, err_msg varchar2(2000));
insert into temp_table values(42);
insert into temp_table values(42);
Then run the anonymous block above...
select * from normal_table;
ID
----------
42
column err_msg format a50
select * from err_table;
ID ERR_CODE ERR_MSG
---------- ---------- --------------------------------------------------
42 1 ORA-00001: unique constraint (.) violated
This is less satisfactory on a few levels - more coding, slower if you have a lot of exceptions (because of the individual inserts for those), doesn't show which constraint was violated (or any other error details), and won't retain the errors if you rollback - though you could call an autonomous transaction to log it if that was an issue, which I doubt here.
If you have a small enough volume of data to not want to worry about the limit clause you can simplify it a bit:
declare
type t_temp_table is table of temp_table%rowtype;
l_temp_table t_temp_table;
l_err_code err_table.err_code%type;
l_err_msg err_table.err_msg%type;
l_id err_table.id%type;
error_array exception;
pragma exception_init(error_array, -24381);
begin
select * bulk collect into l_temp_table from temp_table;
forall i in 1..l_temp_table.count save exceptions
insert into normal_table
values l_temp_table(i);
exception
when error_array then
for j in 1..sql%bulk_exceptions.count loop
l_id := l_temp_table(sql%bulk_exceptions(j).error_index).id;
l_err_code := sql%bulk_exceptions(j).error_code;
l_err_msg := sqlerrm(-1 * sql%bulk_exceptions(j).error_code);
insert into err_table(id, err_code, err_msg)
values (l_id, l_err_code, l_err_msg);
end loop;
end;
/
The 9i documentation doesn't seem to be online any more, but this is in a new-features document, and lots of people have written about it - it's been asked about here before too.
If you're specifically interested only in check constraints then one method to think about is to read the definitions of the target check constraints from the data dictionary and apply them as predicates to the query that extracts data from the source table using dynamic sql.
Given:
create table t1 (
col1 number check (col1 between 3 and 10))
You can:
select constraint_name,
search_condition
from user_constraints
where constraint_type = 'C' and
table_name = 'T1'
The result being:
"SYS_C00226681", "col1 between 3 and 10"
From there it's "a simple matter of coding", as they say, and the method will work on just about any version of Oracle. The most efficient method would probably be to use a multitable insert to direct rows to either the intended target table or to an error logging table based on the result of a CASE statement that applies the check constraint predicates.
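For example, with the T1 constraint above, the generated statement might look something like this (a sketch only; err_t1 and src_t1 are hypothetical tables, and in a real implementation the predicate would be spliced in from SEARCH_CONDITION via dynamic SQL rather than hard-coded):
INSERT FIRST
  WHEN ok = 'Y' THEN
    INTO t1 (col1) VALUES (col1)
  ELSE
    INTO err_t1 (col1, err_msg) VALUES (col1, 'check constraint SYS_C00226681 violated')
SELECT col1,
       CASE WHEN col1 BETWEEN 3 AND 10 THEN 'Y' ELSE 'N' END AS ok
FROM src_t1;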