I've been working with Entity Framework in C#, trying to figure out how to join two tables together. I found a reference here http://msdn.microsoft.com/en-us/data/jj715646.aspx on how to do this. The problem is that the two tables have PKs that are not in sync, which seems to be a requirement. I've never had to worry about syncing PKs between two tables in a database before. I know I can turn off the identity on one table so I can insert PKs manually, but I see comments from numerous people that this is a very bad idea. If I'm not supposed to do that, then how do I accomplish syncing the PKs in the two tables?
I have two tables in a database:
User
pkID (int)
FirstName (varchar)
LastName (varchar)
Email (varchar)
...
LockedFlags (flags that lock fields in User from being edited)
pkID
fkUserID
bFirstName (bool)
bLastName (bool)
bEmail (bool)
I'm curious why people think that removing the identity on a table is a bad idea... if I'm relying on MSSQL to assign the PK, then I can see a case where, when inserting a record with multiple concurrent writes, the write to the second table could pick up a different value...
It sounds like you have orphaned rows in the LockedFlags table, i.e. a row with a user ID that points to a user that has been deleted. Depending on how the relationship is set up, the reverse can also be true.
If you have an entity where the two tables are combined into a single class, loading the entity set will query both tables and require matching pairs of rows.
Your LockedFlags probably has a User property which it is trying to load and cannot find in the User table.
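As a quick sanity check you can look for those orphans with a LEFT JOIN. Here is a minimal sketch using Python's sqlite3 standing in for MSSQL (the table and column names are taken from the question; the sample data is made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE User (pkID INTEGER PRIMARY KEY, FirstName TEXT)")
con.execute("CREATE TABLE LockedFlags (pkID INTEGER PRIMARY KEY, fkUserID INTEGER)")

con.execute("INSERT INTO User (pkID, FirstName) VALUES (1, 'Ann')")
con.execute("INSERT INTO LockedFlags (pkID, fkUserID) VALUES (10, 1)")   # valid pair
con.execute("INSERT INTO LockedFlags (pkID, fkUserID) VALUES (11, 99)")  # orphan: no User 99

# LEFT JOIN keeps every LockedFlags row; a NULL User.pkID marks an orphan.
orphans = con.execute("""
    SELECT lf.pkID, lf.fkUserID
    FROM LockedFlags lf
    LEFT JOIN User u ON u.pkID = lf.fkUserID
    WHERE u.pkID IS NULL
""").fetchall()
```

Any rows returned are flag records pointing at users that no longer exist.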
Table options:
Note: I'm using the MSSQL equivalent, as I don't know MySQL.
Comments regarding your data model:
I don't know how MySQL handles record locking, but if it is anything like MSSQL, you do not have to worry about handling it manually.
I would strongly suggest taking another look at your data model if you're going to use it as is. If you really want to manually lock individual fields on a row, just using a single table would be best.
Edit:
ALTER TABLE LockedFlags ADD CONSTRAINT
FK_LockedFlags_User FOREIGN KEY
(
    fkUserID
) REFERENCES User
(
    pkID
) ON UPDATE NO ACTION
  ON DELETE NO ACTION
GO
Should I break my database up into two under this scenario?
Scenario
While customers are creating, editing, saving orders in the Order table, the website owner calls a stored procedure to alter table properties in the Email table.
Update
The only relation the Order table has with the Email table is the UserId (an FK on the Email table). So, what are the ramifications if, WHILE a customer is placing an order, I am simultaneously adding, say, a nullable "CcAddressId" column to the Email table? Would this cause problems for the order being successful?
Questions:
Will either operation potentially error if these events occur simultaneously?
Would it be better to break up the database into groups?
The short answer is not to break the database in two, because you will lose the referential integrity enforced by your foreign keys, not to mention the other issues that others brought up above.
What you need to do is find out whether SQL Server will issue any kind of lock on your Email table as you are inserting data into the Orders table.
You can experiment and find out for yourself using this as a starting point:
http://aboutsqlserver.com/2012/04/05/locking-in-microsoft-sql-server-part-13-schema-locks/
Altering a table will try to acquire a schema modification lock (SCH-M), which is incompatible with all other kinds of locks. Therefore, if there is any activity going on against the table being altered (and I assume there will be, because the foreign key constraints are being validated), your schema modification statement can be blocked for a long time.
This is why it is better to run schema altering statements when your database is NOT under heavy load.
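The blocking effect can be seen in miniature with Python's sqlite3. SQLite's locking model is much cruder than SQL Server's SCH-M locks, but the principle carries over: a schema change cannot proceed while another connection holds a write lock on the database. The table and column names below are the ones from the question:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
writer = sqlite3.connect(path, isolation_level=None)  # autocommit; explicit BEGIN below
writer.execute("CREATE TABLE Email (pkID INTEGER PRIMARY KEY, address TEXT)")

writer.execute("BEGIN IMMEDIATE")  # simulate an in-flight order transaction
writer.execute("INSERT INTO Email (address) VALUES ('a@example.com')")

altering = sqlite3.connect(path, timeout=0.2)  # give up after ~200 ms of retrying
blocked = False
try:
    altering.execute("ALTER TABLE Email ADD COLUMN CcAddressId INTEGER")
except sqlite3.OperationalError:  # "database is locked"
    blocked = True

writer.execute("COMMIT")  # the "order" finishes and releases the lock
altering.execute("ALTER TABLE Email ADD COLUMN CcAddressId INTEGER")  # now succeeds
cols = [row[1] for row in altering.execute("PRAGMA table_info(Email)")]
```

The ALTER fails while the write transaction is open and succeeds immediately after it commits, which is exactly why schema changes belong in a low-traffic window.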
I would like to use iBatis to poll 3 legacy databases for new rows and insert them into a new database. But our customers won't allow me to add a "status" column to the three legacy databases, which would help me avoid consuming rows twice or more. So what can I do? Thanks in advance!
Create a new table with the status column and add a foreign key pointing to the primary key of the legacy table. Create a view with both tables joined together and you will have your status column associated with the legacy table without altering it.
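A minimal sketch of this side-table-plus-view approach, using Python's sqlite3 and made-up names (legacy_orders, order_status, orders_with_status):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# The legacy table stays completely untouched.
con.execute("CREATE TABLE legacy_orders (id INTEGER PRIMARY KEY, payload TEXT)")
# The new table holds only the status, keyed by the legacy PK.
con.execute("""CREATE TABLE order_status (
    legacy_id INTEGER PRIMARY KEY REFERENCES legacy_orders(id),
    status TEXT NOT NULL)""")
# The view joins them, so consumers see a status column with no schema change.
con.execute("""CREATE VIEW orders_with_status AS
    SELECT o.id, o.payload, s.status
    FROM legacy_orders o
    LEFT JOIN order_status s ON s.legacy_id = o.id""")

con.execute("INSERT INTO legacy_orders VALUES (1, 'first')")
con.execute("INSERT INTO order_status (legacy_id, status) VALUES (1, 'DONE')")
con.execute("INSERT INTO legacy_orders VALUES (2, 'second')")  # not yet consumed

rows = con.execute("SELECT id, status FROM orders_with_status ORDER BY id").fetchall()
```

Rows with a NULL status are the ones the poller still has to pick up.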
You can use the idempotent consumer EIP to filter out duplicates
http://camel.apache.org/idempotent-consumer.html
But as Joachim said, you need a new table to store the status.
You can maybe also create a SQL VIEW on the original table + status table, and let iBatis query that view.
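Outside of Camel, the idempotent-consumer filtering itself is only a few lines. This sketch assumes a hypothetical processed table keyed on the row/message ID; the primary key does the duplicate detection:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE processed (message_id TEXT PRIMARY KEY)")

def consume_once(con, message_id, handler):
    # Record the id first; a duplicate violates the PK and is skipped.
    try:
        with con:  # commits on success, rolls back on error
            con.execute("INSERT INTO processed (message_id) VALUES (?)",
                        (message_id,))
    except sqlite3.IntegrityError:
        return False  # already seen: filter it out
    handler(message_id)
    return True

seen = []
for mid in ["row-1", "row-2", "row-1"]:  # "row-1" polled twice
    consume_once(con, mid, seen.append)
```

Because the processed table lives in the new database, the legacy databases never need to change.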
In a one-to-many relationship, what's the best way to handle the data so that it's flexible enough for the user to save the slave data before saving the master table data?
reserving the row ID of the master so that I can save the slave data with the reserved master ID
saving the slave data in a temporary table, so that when we save the master data we can "import" the data from the temporary table
other??
Example: in a ticket / multiple-file-upload form where the user has the possibility to upload the files before sending the ticket information:
Master table
PK
ticket description
Slave table
PK
Master_FK
File
Are your IDs auto-generated?
You have several choices, all with possible problems.
First, don't define an FK relationship. But then how do you account for records in a partial state, and for those that never get married up to a real record? And how do you intend to marry up the records when the main record is inserted?
Second, insert a record into the master table first where everything is blank except the ID. This leaves enforcing all the required fields up to the user application, which I'm not wild about from a data integrity standpoint.
Third, and most complex but probably safest - use 3 tables. Create the master record in a table that contains only the master record ID, and return that to your application on opening the form to create a new record. Create a PK/FK relationship from it to both the original master table and the foreign key table. Remove the auto-generation of the ID from the original master table and insert the ID from the new master table instead when you insert the record. Insert the new master table ID when you insert records into the original FK table as well. At least this way you can continue to have all the required fields marked as required in the database, but the relationship is between the new table and the other table, not the original table and the other table. This won't affect querying (as long as you have proper indexing), but it will make things more complicated if you delete records, as you could leave some hanging out if you aren't careful. Also, you would have to consider whether there are other processes (such as data imports from another source) which might insert records into the main table; these would have to be adjusted, as the ID would no longer be auto-generated.
In Oracle (maybe others?) you can defer a constraint's validation until COMMIT time.
So you could insert the child rows first. (You'd need the parent key, obviously.)
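SQLite also supports deferred constraints, so the child-rows-first approach can be sketched with Python's sqlite3 (the Master/Slave table names come from the question; the data is made up):

```python
import sqlite3

con = sqlite3.connect(":memory:", isolation_level=None)  # autocommit; explicit BEGIN below
con.execute("PRAGMA foreign_keys = ON")
con.execute("CREATE TABLE Master (pkID INTEGER PRIMARY KEY, descr TEXT)")
con.execute("""CREATE TABLE Slave (
    pkID INTEGER PRIMARY KEY,
    Master_FK INTEGER REFERENCES Master(pkID) DEFERRABLE INITIALLY DEFERRED,
    File TEXT)""")

con.execute("BEGIN")
# Child first: the FK is not checked until COMMIT, so this does not fail
# even though Master row 42 does not exist yet.
con.execute("INSERT INTO Slave VALUES (1, 42, 'a.txt')")
con.execute("INSERT INTO Master VALUES (42, 'ticket')")
con.execute("COMMIT")  # the constraint is validated (and satisfied) here

joined = con.execute("""SELECT m.descr, s.File
                        FROM Slave s JOIN Master m ON m.pkID = s.Master_FK""").fetchone()
```

As noted, you still need to know the parent key up front; deferral only moves the check to commit time.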
Why can't you create the master row and flag it as incomplete?
In the case of an upload you will have to create temporary storage for the not-yet-committed upload anyway. Once an upload starts, you save all new files in a separate table; once the user is ready to submit the ticket, you save the ticket and append the files from the temp table.
You can also create a fake record, if possible, with some fixed ID in the master table. You then have to make sure that the fake record does not appear in queries anywhere else.
Third, you can create a stored procedure which generates an ID for the primary table and increments the identity counter. If the user aborts the operation, the reserved ID will not affect anything - it is just as if you had created a master record and then deleted it. You can create temporary records in the master table as well.
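A rough sketch of that reserve-an-ID idea, using Python's sqlite3 with a hypothetical IdSequence table standing in for the stored procedure and identity counter:

```python
import sqlite3

con = sqlite3.connect(":memory:", isolation_level=None)  # autocommit; explicit BEGIN below
con.execute("CREATE TABLE IdSequence (next_id INTEGER NOT NULL)")
con.execute("INSERT INTO IdSequence (next_id) VALUES (1)")
con.execute("CREATE TABLE Master (pkID INTEGER PRIMARY KEY, descr TEXT)")
con.execute("CREATE TABLE Slave (pkID INTEGER PRIMARY KEY, Master_FK INTEGER, File TEXT)")

def reserve_master_id(con):
    # Hand out the next master id atomically; if the user aborts,
    # the reserved id is simply never used (a harmless gap).
    con.execute("BEGIN IMMEDIATE")
    (reserved,) = con.execute("SELECT next_id FROM IdSequence").fetchone()
    con.execute("UPDATE IdSequence SET next_id = next_id + 1")
    con.execute("COMMIT")
    return reserved

mid = reserve_master_id(con)  # reserved before the master row exists
con.execute("INSERT INTO Slave (Master_FK, File) VALUES (?, ?)", (mid, "a.txt"))
con.execute("INSERT INTO Master (pkID, descr) VALUES (?, ?)", (mid, "ticket"))
```

The slave rows can be written as soon as the ID is reserved; the master row arrives whenever the user finally submits.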
I've got a table structure I'm not really certain of how to create the best way.
Basically I have two tables, tblSystemItems and tblClientItems. I have a third table that has a column that references an 'Item'. The problem is, this column needs to reference either a system item or a client item - it does not matter which. System items have keys in the 1..2^31 range while client items have keys in the range -1..-2^31, thus there will never be any collisions.
Whenever I query the items, I'm doing it through a view that does a UNION ALL between the contents of the two tables.
Thus, optimally, I'd like to make a foreign key reference the result of the view, since the view will always be the union of the two tables - while still keeping IDs unique. But I can't do this as I can't reference a view.
Now, I can just drop the foreign key, and all is well. However, I'd really like to have some referential checking and cascading delete/set null functionality. Is there any way to do this, besides triggers?
Sorry for the late answer; I've been struck with a serious case of weekenditis.
As for utilizing a third table to include PKs from both the client and system tables - I don't like that, as it just overcomplicates synchronization and still requires my app to know about the third table.
Another issue that has arisen is that I have a third table that needs to reference an item - either system or client, it doesn't matter. Having the tables separated basically means I need two columns, a ClientItemID and a SystemItemID, each with a foreign key constraint against its own table and both nullable - rather ugly.
I ended up choosing a different solution. The whole issue was with easily synchronizing new system items into the tables without messing with client items, avoiding collisions and so forth.
I ended up creating just a single table, Items. Items has a bit column named "SystemItem" that defines, well, the obvious. In my development/system database, the PK is an int identity(1,1). After the table has been created in the client database, the identity is reseeded to (-1,-1). That means client items go into the negative range while system items go into the positive range.
For synchronization I basically ignore anything with SystemItem = 0 (the client items) while synchronizing the rest using IDENTITY_INSERT ON. Thus I'm able to synchronize while completely ignoring client items and avoiding collisions. I'm also able to reference just one "Items" table, which covers both client and system items. The only thing to keep in mind is to change the standard clustered key to descending, to avoid all kinds of page restructuring when the client inserts new items (client updates vs. system updates is like 99%/1%).
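A cut-down illustration of this single-table scheme, in Python's sqlite3. SQLite can't reseed an identity to (-1,-1), so the client IDs are inserted as explicit negative values here, and the sync simply replaces the positive-keyed (system) rows; the sample data is made up:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE Items (
    pkID INTEGER PRIMARY KEY,   -- positive: system item, negative: client item
    SystemItem INTEGER NOT NULL CHECK (SystemItem IN (0, 1)),
    name TEXT)""")

# Client-created rows live in the negative key range.
con.execute("INSERT INTO Items VALUES (-1, 0, 'client gadget')")
con.execute("INSERT INTO Items VALUES (-2, 0, 'client widget')")

def sync_system_items(con, rows):
    # Replace all system rows with the master list; client rows are untouched,
    # and the disjoint key ranges guarantee no collisions.
    with con:
        con.execute("DELETE FROM Items WHERE SystemItem = 1")
        con.executemany(
            "INSERT INTO Items (pkID, SystemItem, name) VALUES (?, 1, ?)", rows)

sync_system_items(con, [(1, "disk"), (2, "cpu")])
total = con.execute("SELECT COUNT(*) FROM Items").fetchone()[0]
```

After the sync, both client rows survive alongside the freshly imported system rows.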
You can create a unique ID (DB-generated - sequence, autoinc, etc.) for the table that references items, and create two additional columns (tblSystemItemsFK and tblClientItemsFK) where you reference the system items and client items respectively - some databases allow you to have a foreign key that is nullable.
If you're using an ORM, you can even easily distinguish client items from system items (this way you don't need negative identifiers to prevent ID overlap) based on column information alone.
With a little more background/context it is probably easier to determine an optimal solution.
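A sketch of the two-nullable-FK idea in Python's sqlite3, with an added CHECK so that exactly one of the two columns is ever set (the referencing table name tblItemRefs is made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")
con.execute("CREATE TABLE tblSystemItems (id INTEGER PRIMARY KEY)")
con.execute("CREATE TABLE tblClientItems (id INTEGER PRIMARY KEY)")
# Both FK columns are nullable; the CHECK enforces exactly one is non-NULL.
con.execute("""CREATE TABLE tblItemRefs (
    id INTEGER PRIMARY KEY,
    tblSystemItemsFK INTEGER REFERENCES tblSystemItems(id),
    tblClientItemsFK INTEGER REFERENCES tblClientItems(id),
    CHECK ((tblSystemItemsFK IS NULL) <> (tblClientItemsFK IS NULL)))""")

con.execute("INSERT INTO tblSystemItems (id) VALUES (1)")
con.execute("INSERT INTO tblItemRefs (tblSystemItemsFK) VALUES (1)")  # accepted

rejected = False
try:
    con.execute("INSERT INTO tblItemRefs (id) VALUES (99)")  # both FKs NULL
except sqlite3.IntegrityError:
    rejected = True
```

With this layout you keep real referential integrity and cascading options on both FK columns, at the cost of the "rather ugly" pair of columns.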
You probably need a table, say tblItems, that simply stores all the primary keys of the two tables. Inserting items would require two steps, to ensure that when an item is entered into the tblSystemItems table the PK is also entered into the tblItems table.
The third table then has an FK to tblItems. In a way, tblItems is a parent of the other two item tables. To query for an item it would be necessary to create a JOIN between tblItems, tblSystemItems and tblClientItems.
[EDIT - for comment below] If tblSystemItems and tblClientItems control their own PKs then you can still let them. You would probably insert into tblSystemItems first, then insert into tblItems. When you implement an inheritance structure using a tool like Hibernate, you end up with something like this.
Add a table called Items with a PK ItemId and a single extra column called ItemType = "System" or "Client". Then have the ClientItems table's PK (named ClientItemId) and the SystemItems PK (named SystemItemId) both also be FKs to Items.ItemId. (These relationships are zero-to-one (0..1) relationships.)
Then, in your third table that references an item, just have its FK constraint reference the ItemId in this extra (Items) table...
If you are using stored procedures to implement inserts, just have the stored proc that inserts items insert a new record into the Items table first, and then, as part of the same stored proc call, insert the actual data record into either SystemItems or ClientItems (depending on which it is), using the identity value that the system generated for the Items.ItemId column.
This is called "SubClassing"
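Here is a compact sketch of that subclassing pattern in Python's sqlite3, with an insert_item function standing in for the stored procedure (table and column names follow the answer; the item names are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")
con.execute("""CREATE TABLE Items (
    ItemId INTEGER PRIMARY KEY,
    ItemType TEXT NOT NULL CHECK (ItemType IN ('System', 'Client')))""")
# Each subtype PK is also an FK to Items.ItemId (a zero-or-one relationship).
con.execute("""CREATE TABLE SystemItems (
    SystemItemId INTEGER PRIMARY KEY REFERENCES Items(ItemId),
    name TEXT)""")
con.execute("""CREATE TABLE ClientItems (
    ClientItemId INTEGER PRIMARY KEY REFERENCES Items(ItemId),
    name TEXT)""")

def insert_item(con, item_type, name):
    # Mimics the stored-proc pattern: parent row first, then the subtype row
    # keyed by the identity value the parent insert generated.
    cur = con.execute("INSERT INTO Items (ItemType) VALUES (?)", (item_type,))
    item_id = cur.lastrowid
    table = "SystemItems" if item_type == "System" else "ClientItems"
    col = "SystemItemId" if item_type == "System" else "ClientItemId"
    con.execute(f"INSERT INTO {table} ({col}, name) VALUES (?, ?)", (item_id, name))
    return item_id

a = insert_item(con, "System", "disk")
b = insert_item(con, "Client", "widget")
```

Any other table can now carry a single FK to Items.ItemId and reference either kind of item with full referential integrity.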
I've been puzzling over your table design. I'm not certain that it is right. I realise that the third table may just be providing detail information, but I can't help thinking that the primary key is actually the one in your ITEM table and the FOREIGN keys are the ones in your system and client item tables. You'd then just need to do right outer joins from Item to the system and client item tables, and all constraints would work fine.
I have a similar situation in a database I'm using. I have a "candidate key" on each table that I call EntityID. Then, if there's a table that needs to refer to items in more than one of the other tables, I use EntityID to refer to that row. I do have an Entity table to cross reference everything (so that EntityID is the primary key of the Entity table, and all other EntityID's are FKs), but I don't find myself using the Entity table very often.