I have this table:
db.define_table('block',
Field('ore_id', 'reference ore'),
Field('location', type='integer', required=True, notnull = True),
Field('x', type='integer'),
Field('y', type='integer'),
Field('block_mass_tn', type='double')
)
and this:
db.define_table('block_processing_line_1',
Field('block_location', 'reference block'),
Field('processing_time_d', type='double'),
Field('concentrate_tn', type='double'),
Field('concentrate_quality', type='double')
)
In the block table I have 100 entries, with id and location running from 1 to 100. When I add a new record to the latter table with block_location = 1, it is accepted, but when I try to add 2 or 3 and so on, it won't accept it and says "FOREIGN KEY constraint failed". I have other tables with that identical field, Field('block_location', 'reference block'), and they have no problem with values above 1. What is wrong here?
One other thing that comes to mind: I had the same problem before with that table and realized I had made a typo, Field('block_location', 'reference ore'), so I referenced the wrong table, and the ore table only had one entry (thus accepting 1 and nothing else). But even though I fixed it, the problem persists. Could there be traces of that former line somewhere in the DAL? I truncated both tables (block and block_processing_line_1) after fixing it.
#Anthony:
Have you compiled the app or turned off migrations? If neither, maybe try dropping the block_processing_line_1 table.
This solved it! I dropped the table and saved, "undropped" the table and saved, and after that it worked as it should. So this line did the trick:
db.block_processing_line_1.drop()
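For anyone hitting the same thing: the stale 'reference ore' constraint lives on inside the SQLite file itself (truncating removes rows but not the table definition), which is why dropping and redefining the table fixes it. Below is a minimal sketch of how one might confirm and clear it; the database file path and the web2py shell invocation are assumptions on my part, not from the thread.
import sqlite3

# 1. Inspect what SQLite actually has on disk for the table; if the old
#    definition is still there, the foreign key will point at 'ore'.
conn = sqlite3.connect('databases/storage.sqlite')  # path is an assumption
row = conn.execute(
    "SELECT sql FROM sqlite_master WHERE type='table' "
    "AND name='block_processing_line_1'"
).fetchone()
print(row[0] if row else 'table not found')
conn.close()

# 2. In the web2py shell (python web2py.py -S yourapp -M), drop the table so
#    the DAL recreates it from the corrected model on the next request:
# db.block_processing_line_1.drop()
# db.commit()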
With Entity Framework Core (in case it matters, I'm using version 2.2), I update a number of entities in the DB context and call SaveChanges. When saving fails because of a foreign key issue, I can't figure out which exact entities have the foreign key problem. Is it possible to figure this out?
When a foreign key conflict happens, SQL Server returns an error with code 547 and the text The UPDATE statement conflicted with the FOREIGN KEY constraint "<Constraint>". The conflict occurred in database "<Database name>", table "<Table name>", column '<Column name>', without any reference to the exact invalid rows.
I have looked at DbUpdateException properties, but no luck:
var dbUpdateEx = ex as DbUpdateException;
var erroredEntries = dbUpdateEx.Entries;
// The documentation says that Entries property "Gets the entries that were involved in the error",
// but in my case it's always empty.
Also, I have tried validating entities manually before save, but such validation does not find FK conflicts. Code sample:
var valProvider = new ValidationDbContextServiceProvider(dbContext);
var valContext = new ValidationContext(entity, valProvider, null);
var entityErrors = new List<ValidationResult>();
var result = Validator.TryValidateObject(entity, valContext, entityErrors, true);
When I save more than 100 records and the save fails because of an FK conflict in a single entity, it's quite frustrating to look for that single invalid record by hand.
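Since DbUpdateException.Entries comes back empty here, one practical workaround is to log the foreign key values of every added or modified entry right before SaveChanges, so the value that triggers error 547 can be matched against the log. The sketch below is a suggestion for EF Core 2.2, not a documented way of pinpointing the failing row; the helper name FkDump.DumpForeignKeys is made up.
using System;
using System.Linq;
using Microsoft.EntityFrameworkCore;

static class FkDump
{
    // Print the FK values of every added/modified entry so the offending
    // value can be spotted when SaveChanges later fails with error 547.
    public static void DumpForeignKeys(DbContext context)
    {
        var entries = context.ChangeTracker.Entries()
            .Where(e => e.State == EntityState.Added || e.State == EntityState.Modified);

        foreach (var entry in entries)
        {
            foreach (var fk in entry.Metadata.GetForeignKeys())
            {
                var values = fk.Properties
                    .Select(p => $"{p.Name}={entry.Property(p.Name).CurrentValue}");
                Console.WriteLine(
                    $"{entry.Metadata.Name} -> {fk.PrincipalEntityType.Name}: {string.Join(", ", values)}");
            }
        }
    }
}

// Usage: FkDump.DumpForeignKeys(dbContext); then call dbContext.SaveChanges();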
I have to write a trigger for the tables I created: on insert and update it should write a row to a separate log table recording what was inserted or updated.
The columns in the log table will be:
Done_process (will hold 'update' or 'insert')
Person (the student number of the affected person)
Before (the previous value for an update, blank for an insert)
After (the new value for an update, the new value for an insert)
This is my student_info table,
CREATE TABLE student_info (
school_id NUMBER,
id_no NUMBER NOT NULL UNIQUE,
name VARCHAR2(50) NOT NULL,
surname VARCHAR2(50) NOT NULL,
city VARCHAR2(50) NOT NULL,
birth_date DATE NOT NULL,
CONSTRAINT student_info_pk PRIMARY KEY(school_id )
);
CREATE TABLE og_log(
done_process VARCHAR2(30),
person VARCHAR2(30),
before VARCHAR2(30),
after VARCHAR2(30)
);
CREATE OR REPLACE TRIGGER og_trigger
BEFORE INSERT OR UPDATE OR DELETE ON student_info
REFERENCING OLD AS OLD NEW AS NEW
FOR EACH ROW
ENABLE
DECLARE
BEGIN
IF INSERTING THEN
INSERT INTO og_log(done_process, person, before, after)
VALUES ('Insert',:new.school_id,:old.name,:new.name);
ELSIF UPDATING THEN
INSERT INTO og_log(done_process, person, before, after)
VALUES ('Update',:new.school_id,:old.name,:new.name);
END IF;
END;
/
When I try to run the code, it gives the following errors:
> Trigger OG_TRIGGER created.
>
>
> Error starting at line : 280 in command - ELSIF UPDATING THEN Error
> report - Unknown Command
>
> SP2-0552: Bind variable "NEW" not declared.
>
> 0 rows inserted.
>
>
> Error starting at line : 283 in command - END IF Error report -
> Unknown Command
>
> SP2-0044: For a list of known commands enter HELP and to leave enter
> EXIT.
>
> Error starting at line : 284 in command - END Error report - Unknown
> Command
I believe you are creating this trigger for learning purposes rather than for a real use case, because what you do in the trigger doesn't really make much sense.
The trigger you have mentioned does not compile due to syntactical problems like WHERE v_id := 20201033.
A WHERE clause is used to compare values, so you should use =, not :=, which is an assignment operator.
Besides this problem, a few points still need to be taken care of:
Use an explicit convention for naming local variables. For example, you created a local variable v_id while the same column is also present in the student_info table. Though it is not a problem in this case, it is good practice to make the local variable distinctive, e.g. l_v_id.
You have used a SELECT statement inside the trigger, which could lead to a NO_DATA_FOUND error; you should handle that either in the exception section or by using an aggregate function like MAX(), assuming v_id is the primary key. I doubt you need this SELECT statement at all (you could work with the old and new values directly, e.g. COALESCE(:old.school_id, :new.school_id), if I understood you correctly), but I will leave it to you to decide and act accordingly.
Considering the above points, the final code will be:
CREATE OR REPLACE TRIGGER og_trigger
BEFORE INSERT OR UPDATE OR DELETE ON student_info
REFERENCING OLD AS OLD NEW AS NEW
FOR EACH ROW
ENABLE
DECLARE
BEGIN
IF INSERTING THEN
INSERT INTO og_log(done_process, person, before, after)
VALUES ('Insert',:new.school_id,:old.city,:new.city);
ELSIF UPDATING THEN
INSERT INTO og_log(done_process, person, before, after)
VALUES ('Update',:new.school_id,:old.city,:new.city);
END IF;
END;
/
Find a demo on db<>fiddle.
EDITED: possibly a tool issue
I suspect the issue is with how the SQL Developer tool is being used; however, there is one last thing I would like to try.
Step 1:
Drop both tables by issuing drop commands:
drop table STUDENT_INFO;
drop table og_log;
Step 2:
Open another SQL worksheet using Alt+F10 and proceed as shown in the following image. Please try it and let me know.
I'm trying to write an IF statement that gives an error message if the user tries to add an existing ID number. When I try to enter an existing ID, I get the error; up to that point it's OK, but when I then type another ID number and fill in the fields (name, address, etc.), it doesn't go to the database.
METHOD add_employee.
DATA: IT_EMP TYPE TABLE OF ZEMPLOYEE_20.
DATA:WA_EMP TYPE ZEMPLOYEE_20.
Data: l_count type i value '2'.
SELECT * FROM ZEMPLOYEE_20 INTO TABLE IT_EMP.
LOOP AT IT_EMP INTO WA_EMP.
IF wa_emp-EMPLOYEE_ID eq pa_id.
l_count = l_count * '0'.
else.
l_count = l_count * '1'.
endif.
endloop.
If l_count eq '2'.
WA_EMP-EMPLOYEE_ID = C_ID.
WA_EMP-EMPLOYEE_NAME = C_NAME.
WA_EMP-EMPLOYEE_ADDRESS = C_ADD.
WA_EMP-EMPLOYEE_SALARY = C_SAL.
WA_EMP-EMPLOYEE_TYPE = C_TYPE.
APPEND wa_emp TO it_emp.
INSERT ZEMPLOYEE_20 FROM TABLE it_emp.
CALL FUNCTION 'POPUP_TO_DISPLAY_TEXT'
EXPORTING
TITEL = 'INFO'
TEXTLINE1 = 'Record Added Successfully.'.
elseif l_count eq '0'.
CALL FUNCTION 'POPUP_TO_DISPLAY_TEXT'
EXPORTING
TITEL = 'INFO'
TEXTLINE1 = 'Selected ID already in database.Please type another ID no.'.
ENDIF.
ENDMETHOD.
I'm not sure I'm getting your explanation. Why are you trying to re-insert all the existing entries back into the table? You're just trying to insert C_ID etc. if it doesn't exist yet? Why do you need all the existing entries for that?
If so, throw out that SELECT and the loop completely; you don't need them. You have a few options...
Just read the table with your single entry
SELECT SINGLE * FROM ztable INTO wa WHERE id = c_id. "etc. for the remaining key fields
IF SY-SUBRC = 0.
"this entry exists. popup!
ENDIF.
Use a modify statement
This will overwrite duplicate entries with new data (so non-key fields may change this way); it won't fail. No need for a popup.
MODIFY ztable FROM wa.
Catch the SQL exceptions instead of making it dump
If the update fails because of an exception, you can always catch it and deal with exceptional situations.
TRY .
INSERT ztable FROM wa.
CATCH cx_sy_open_sql_db.
"do your popup, the update failed because of duplicate records
ENDTRY.
I think there's a bug in the way you append to the internal table IT_EMP and then insert into the ZEMPLOYEE_20 table.
Suppose you append the first time and then insert. When you append a second time, you will have 2 records in IT_EMP that are going to be inserted into ZEMPLOYEE_20. Because you never refresh or clear the internal table, the first record is inserted again and you will get a runtime error.
According to the SAP documentation on 'Inserting Lines into Tables':
Inserting Several Lines
To insert several lines into a database table, use the following:
INSERT <dbtab> FROM TABLE <itab> [ACCEPTING DUPLICATE KEYS]. This writes all lines of the internal table to the database table in one single operation. The same rules apply to the line type of <itab> as to the work area described above. If the system is able to insert all of the lines from the internal table, SY-SUBRC is set to 0. If one or more lines cannot be inserted because the database already contains a line with the same primary key, a runtime error occurs.
Maybe the right direction here is to insert the work area directly, but before that you must check whether the record already exists, using the primary key.
Check the SAP documentation on this issue by following the link above.
On the other hand, once l_count becomes zero because of l_count = l_count * '0'., that value will never change to any other number, which means you will never append or insert again.
Why are you retrieving all entries from zemployee_20?
You can directly check whether the ID already exists by using SELECT SINGLE. If it exists, show the message; if not, add the record.
It is recommended to retrieve only the one field that is needed rather than the entire row with the asterisk *.
SELECT SINGLE employee_id FROM zemployee_20 INTO v_id WHERE employee_id = pa_id. "or into a field of a structure
if sy-subrc = 0. "exists
"show message
else. "not existing id
"populate structure and then add record to Z table
endif.
Furthermore, l_count is not only unnecessary but also badly implemented.
You can directly use the INSERT statement; if sy-subrc is not 0, raise an error message.
WA_EMP-EMPLOYEE_ID = C_ID.
WA_EMP-EMPLOYEE_NAME = C_NAME.
WA_EMP-EMPLOYEE_ADDRESS = C_ADD.
WA_EMP-EMPLOYEE_SALARY = C_SAL.
WA_EMP-EMPLOYEE_TYPE = C_TYPE.
INSERT ZEMPLOYEE_20 FROM WA_EMP.
If sy-subrc <> 0.
Raise the Exception.
Endif.
I'm using Cayenne 3.2M1 and Postgres 9.0.1 to create a database. Right now I'm having problems with Cayenne's primary key generation, since I have tables whose primary key spans more than one column, and as far as I've read Cayenne can't generate more than one primary key column per table. So I want Postgres to do that work.
I have this table:
CREATE TABLE telefonocliente
(
cod_cliente integer NOT NULL DEFAULT currval('cliente_serial'::regclass),
cod_telefono integer NOT NULL DEFAULT nextval('telefonocliente_serial'::regclass),
fijo integer,
CONSTRAINT telefonocliente_pkey PRIMARY KEY (cod_cliente, cod_telefono)
)
WITH (
OIDS=FALSE
);
TelefonoCliente telefono = context.newObject(TelefonoCliente.class);
telefono.setFijo(4999000);
context.commitChanges();
and this is the error I get:
INFO: --- transaction started.
19/11/2013 22:46:17 org.apache.cayenne.access.dbsync.CreateIfNoSchemaStrategy processSchemaUpdate
INFO: Full or partial schema detected, skipping tables creation
19/11/2013 22:46:17 org.apache.cayenne.log.CommonsJdbcEventLogger logQuery
INFO: SELECT nextval('pk_telefonocliente')
Exception in thread "main" org.apache.cayenne.CayenneRuntimeException: [v.3.2M1 Jul 07 2013 16:23:58] Commit Exception
at org.apache.cayenne.access.DataContext.flushToParent(DataContext.java:759)
at org.apache.cayenne.access.DataContext.commitChanges(DataContext.java:676)
at org.example.cayenne.Main.main(Main.java:45)
Caused by: org.postgresql.util.PSQLException: ERROR: relation «pk_telefonocliente» does not exist
Position: 16
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2102)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1835)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:257)
at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:500)
at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:374)
at org.postgresql.jdbc2.AbstractJdbc2Statement.executeQuery(AbstractJdbc2Statement.java:254)
at org.apache.cayenne.dba.postgres.PostgresPkGenerator.longPkFromDatabase(PostgresPkGenerator.java:79)
at org.apache.cayenne.dba.JdbcPkGenerator.generatePk(JdbcPkGenerator.java:272)
at org.apache.cayenne.access.DataDomainInsertBucket.createPermIds(DataDomainInsertBucket.java:171)
at org.apache.cayenne.access.DataDomainInsertBucket.appendQueriesInternal(DataDomainInsertBucket.java:76)
at org.apache.cayenne.access.DataDomainSyncBucket.appendQueries(DataDomainSyncBucket.java:78)
at org.apache.cayenne.access.DataDomainFlushAction.preprocess(DataDomainFlushAction.java:188)
at org.apache.cayenne.access.DataDomainFlushAction.flush(DataDomainFlushAction.java:144)
at org.apache.cayenne.access.DataDomain.onSyncFlush(DataDomain.java:685)
at org.apache.cayenne.access.DataDomain$2.transform(DataDomain.java:651)
at org.apache.cayenne.access.DataDomain.runInTransaction(DataDomain.java:712)
at org.apache.cayenne.access.DataDomain.onSyncNoFilters(DataDomain.java:648)
at org.apache.cayenne.access.DataDomain$DataDomainSyncFilterChain.onSync(DataDomain.java:852)
at org.apache.cayenne.access.DataDomain.onSync(DataDomain.java:629)
at org.apache.cayenne.access.DataContext.flushToParent(DataContext.java:727)
... 2 more
I've been trying the suggestions in the Cayenne tutorial ("generated columns", "primary key support"), but I seem to always get some error.
INFO: SELECT nextval('pk_telefonocliente')
Exception in thread "main" org.apache.cayenne.CayenneRuntimeException: [v.3.2M1 Jul 07 2013 16:23:58] Primary Key autogeneration only works for a single attribute.
I want to know how to solve this.
Thanks in advance
From your description in the comments, out of the 2 columns comprising the PK of 'telefonocliente', only one is truly independent: 'cod_telefono'. This is what Cayenne will generate. In the case of PostgreSQL, you will need the following sequence in the DB for this to happen:
CREATE SEQUENCE pk_telefonocliente INCREMENT 20 START 200;
Now, where does the second PK column 'cod_cliente' come from? Since it is also an FK to another table, it is a "dependent" PK and must come from a relationship. So first you need to map a many-to-one relationship between 'telefonocliente' and 'cliente'. Check the "To Dep PK" checkbox on the 'telefonocliente' side. Generate a matching ObjRelationship for your Java objects. Now you can use it in your code:
Cliente c = .. // get a hold of this object somehow
TelefonoCliente telefono = context.newObject(TelefonoCliente.class);
telefono.setFijo(4999000);
telefono.setCliente(c); // this line is what will populate 'cod_cliente' PK/FK
That should be it.
A table is allowed to have just one primary key. In your case you create a primary key on two columns; that is fine, it is defined in the SQL standard and Postgres supports it well.
However, there is a note in the Cayenne documentation:
Cayenne only supports automatic PK generation for a single column per table.
see http://cayenne.apache.org/docs/3.0/primary-key-generation.html at the bottom of the page.
They may fix it in a newer version, or you can file a request with the Cayenne community.
Again, we probably have a very simple problem;
our database looks like:
CREATE TABLE Question (
idQuestion SERIAL,
questionContent VARCHAR,
CONSTRAINT Question_idQuestion_PK PRIMARY KEY (idQuestion)
);
CREATE TABLE Answer (
idAnswer SERIAL,
answerContent VARCHAR,
idQuestion INTEGER,
CONSTRAINT Answer_idAnswer_PK PRIMARY KEY (idAnswer),
CONSTRAINT Answer_idQuestion_FK FOREIGN KEY (idQuestion) REFERENCES Question(idQuestion)
);
So a Question has many Answers.
In the entity generated by NetBeans 7.1.2 we have the following field:
@OneToMany(mappedBy = "idquestion", orphanRemoval=true, cascade= CascadeType.ALL, fetch= FetchType.EAGER)
private Collection<Answer> answerCollection;
As you can see, I've already added all possible orphan-removal and cascade instructions for cascading removal of the collection. And it's working fine except for one case:
You can delete a Question and its connected Answers only if they were created in a previous 'instance' of our application. If I first create a new Question with even one Answer and then go straight to deleting it, we get an error like:
root cause
Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.2.0.v20110202-r8913): org.eclipse.persistence.exceptions.DatabaseException
Internal Exception: org.postgresql.util.PSQLException: ERROR: update or delete on table "question" violates foreign key constraint "answer_idquestion_fk" on table "answer"
Detail: Key (idquestion)=(30) is still referenced from table "answer".
Error Code: 0
Call: DELETE FROM question WHERE ((idquestion = ?) AND (version = ?))
bind => [2 parameters bound]
Query: DeleteObjectQuery(com.accenture.androidwebapp.entities.Question[ idquestion=30 ])
root cause
org.postgresql.util.PSQLException: ERROR: update or delete on table "question" violates foreign key constraint "answer_idquestion_fk" on table "answer"
Detail: Key (idquestion)=(30) is still referenced from table "answer".
If I restart (rebuild, redeploy) the application it works, though... why? Thanks!
Try adding the EclipseLink @PrivateOwned extension annotation to your collection mapping.
As for the issue with deletion not working until you restart the app, two things come to mind:
You might be using very long EntityManager sessions where everything stays attached to the EntityManager. If that's the case, it's the change of EntityManager session forced by a restart that's helping. Consider using shorter EntityManager sessions by calling EntityManager.close() when you're done with a session; work with detached entities and EntityManager.merge() the state back in when you want to modify it (see the sketch after these points).
The second-level cache, if any, will be cleared by a redeploy. Try disabling the second-level cache and see if that helps.
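To make the first point concrete, here is a minimal sketch (my own, not from the thread) of the short-session pattern with JPA: one EntityManager per unit of work, merging the detached Question back in before removing it. The EntityManagerFactory wiring and the QuestionService class name are assumptions.
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;

public class QuestionService {

    private final EntityManagerFactory emf;

    public QuestionService(EntityManagerFactory emf) {
        this.emf = emf;
    }

    public void deleteQuestion(Question detachedQuestion) {
        // One short-lived EntityManager per unit of work instead of an
        // application-wide session that keeps everything attached.
        EntityManager em = emf.createEntityManager();
        try {
            em.getTransaction().begin();
            // Re-attach the detached instance, then delete it; with CascadeType.ALL
            // and orphanRemoval on the collection, the Answers are removed as well.
            Question managed = em.merge(detachedQuestion);
            em.remove(managed);
            em.getTransaction().commit();
        } finally {
            em.close();
        }
    }
}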