I have many interrelated tables in my database. I have a table (table one) which has had data inserted, and its id auto-increments. Once that row has an ID, I want to insert it into a table (table three) along with another set of IDs which come from a form (this data will also be going into a table, so it could come from that table), the same form the data that went into the first table came from.
The two ID's together make the primary key of the third table.
How can I do this? It's to show that more than one ID is joined to a single ID for something else.
Thanks.
You can't do that through a trigger, as the trigger only has available to it the data that you already inserted, not data that is currently only residing in your user interface.
Normally how you handle this situation is that you write a stored proc that inserts the meeting, returns the id value (using scope_identity() in SQL Server, but I'm sure other databases have a method to return the auto-generated id as well). Then you would use that value to insert to the other table with the other values you need for that table. You would of course want to wrap the whole thing in a transaction.
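Something like this minimal T-SQL sketch (the Meeting/MeetingAttendee tables and the parameters are made-up stand-ins for your table one and table three):

CREATE PROCEDURE InsertMeetingWithAttendee
    @Title    varchar(100),
    @PersonID int
AS
BEGIN
    BEGIN TRANSACTION;

    -- Insert into "table one"; its id column auto-increments.
    INSERT INTO Meeting (Title) VALUES (@Title);

    -- Capture the id generated by the insert above.
    DECLARE @MeetingID int = SCOPE_IDENTITY();

    -- Insert into "table three"; (MeetingID, PersonID) is the composite PK.
    INSERT INTO MeetingAttendee (MeetingID, PersonID)
    VALUES (@MeetingID, @PersonID);

    COMMIT TRANSACTION;

    -- Hand the new id back to the caller.
    SELECT @MeetingID AS NewMeetingID;
END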
I think you can probably do what you're describing (just write the INSERTs to table 3 in the table 1 trigger), but you'll have to put the additional info for the table 3 rows into your table 1 row, which isn't very smart.
I can't see why you would do that instead of writing the INSERTs in your code, where someone reading it can see what's happening.
The trouble with triggers is that they make it easy to hide business logic in the database. I think (and I believe I'm in the majority here) that it's easier to understand, manage, maintain and generally all-round deal with an application where all the business rules exist in the same general area.
There are reasons to use triggers (for propagating denormalised values, for example), just as there are reasons for using stored procedures. I'm going to assert that they are largely related to performance-critical areas. Or should be.
Hi, I am creating a project where I have 3 related tables which are connected to one table, like below:
table1: id, name
table2: id, tb1_id, random_thing
table3: id, tb1_id, random_thing
Basically I can't go with an option where I create a row in table1 first and then table2 and table3; the client wants everything to be done with a single button. So I am creating a new blank row whenever the page is called, getting the new tb1_id, and then linking everything so it works from a single button. The problem is that I can only delete the unused rows 2-3 days later, and that's ridiculous. Is there any other best practice to get over situations such as this?
Edit
An explanation with an example would be really helpful. I am good to go with any database or any language; the example just has to be good so I can understand how it's done. Sorry, but I am one of those guys who hates theory and loves practicals :D
The best practice is to use explicit foreign key relationships and transactions.
So, the basic idea is:
1. Begin a transaction.
2. Insert a row in table 1 with the name.
3. Get the id of the newly created row, ideally using a RETURNING or OUTPUT clause (depending on the database).
4. Insert a row in table 2.
5. Insert a row in table 3.
6. Commit the transaction.
When using transactions, just be careful to roll back the transaction if it does not complete for any reason.
As for deleting rows, you can have "cascading delete" options on the foreign key definitions, so if the parent row is deleted then the related rows are also deleted.
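Here is what those steps might look like in T-SQL, including the cascading foreign keys (a sketch; the constraint names and sample values are invented):

-- Foreign keys with cascading delete: removing a table1 row
-- automatically removes its table2/table3 children.
ALTER TABLE table2 ADD CONSTRAINT fk_tb2_tb1
    FOREIGN KEY (tb1_id) REFERENCES table1 (id) ON DELETE CASCADE;
ALTER TABLE table3 ADD CONSTRAINT fk_tb3_tb1
    FOREIGN KEY (tb1_id) REFERENCES table1 (id) ON DELETE CASCADE;

-- Single-button insert: all three rows in one transaction.
BEGIN TRY
    BEGIN TRANSACTION;

    -- Insert the parent and capture its id via the OUTPUT clause.
    DECLARE @NewId TABLE (id int);
    INSERT INTO table1 (name)
    OUTPUT inserted.id INTO @NewId
    VALUES ('some name');

    DECLARE @tb1_id int = (SELECT id FROM @NewId);

    -- Insert the children using the captured id.
    INSERT INTO table2 (tb1_id, random_thing) VALUES (@tb1_id, 'foo');
    INSERT INTO table3 (tb1_id, random_thing) VALUES (@tb1_id, 'bar');

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    -- Roll back if anything fails, per the note above.
    IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
    THROW;
END CATCH;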
Some databases (notably Postgres) offer some functionality where you can put all this into a single statement using CTEs that modify the data. The idea is still the same, just easier to code.
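In Postgres, that single statement could look like this (a sketch with placeholder values):

WITH new_parent AS (
    INSERT INTO table1 (name)
    VALUES ('some name')
    RETURNING id
),
child2 AS (
    INSERT INTO table2 (tb1_id, random_thing)
    SELECT id, 'foo' FROM new_parent
)
INSERT INTO table3 (tb1_id, random_thing)
SELECT id, 'bar' FROM new_parent;

A single statement is atomic, so you get the transactional behaviour without writing BEGIN/COMMIT yourself.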
I should note that there are perfectly reasonable alternatives. For instance, you could create a view on the "data" columns of the three tables and create insert/update/delete triggers on the view. Personally, I find that hiding this functionality in triggers makes it more difficult to understand and maintain, but that is a personal opinion, and this is also a reasonable approach.
I have seen something like this asked a number of times, but not quite in this configuration. I have a table that has a one-to-many relation.
Let's say I have a computer table and a parts table. The user enters generic info in the computer table, then selects parts that are stored in the parts table with a relationship to the computer table via computerId. So the original write is a simple insert. Now let's say the user selects the computer again and changes the parts on the PC: adds some new ones, removes some, and updates a few. Then the user hits save to save the changes. I run a simple update on the computer table, but now comes the issue with the parts table.
Would it be better to delete all the records from the parts table for the computer ID and then do a clean insert of all the parts selected?
Or run some method that would look at the existing parts in the table and, where a part has been updated, update the record; where a part no longer exists, do a delete; and then insert the remaining parts?
Clearly the simple solution is to delete all and then insert all.
The downside of this is SQL traffic, locks, and table fragmentation.
If it is a small table with only a few concurrent users, then that's fine.
In a high-volume environment I do the following:
- delete items gone
- ignore any items not changed
- insert new items
There is no update - that is just an ignore.
And you can do that in one pass with two or three statements.
Or you could define a stored procedure.
Do the delete before the insert to clear space first.
You can get real fancy and use an update instead of a delete/insert, but that just gets more complex than it is worth, in my mind. You would still need an insert or a delete if the item count is not the same.
DELETE comp_part
WHERE compID = #compID AND partID NOT IN (....);
Insert is a little more tricky:
You can do it with a series of inserts, and if you have a PK, just let the duplicate inserts fail.
The other way is to create a #table and use it for both the delete and the insert, as sketched below.
This is only worth the hassle if you have a REALLY busy table.
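As a sketch of that #table variant in T-SQL (the #wanted table, @compID variable, and sample values are illustrative):

-- The computer being edited, and the part list the user selected.
DECLARE @compID int = 42;
CREATE TABLE #wanted (partID int PRIMARY KEY);
INSERT INTO #wanted (partID) VALUES (1), (2), (3);

-- Delete items gone.
DELETE cp
FROM comp_part cp
WHERE cp.compID = @compID
  AND cp.partID NOT IN (SELECT partID FROM #wanted);

-- Insert new items; existing rows are simply left alone (the "ignore").
INSERT INTO comp_part (compID, partID)
SELECT @compID, w.partID
FROM #wanted w
WHERE NOT EXISTS (SELECT 1
                  FROM comp_part cp
                  WHERE cp.compID = @compID
                    AND cp.partID = w.partID);

DROP TABLE #wanted;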
It all depends upon the business model: if you want to track the transactions, then deleting them is not a good option. If you keep all your old transactions with your customers, it would be beneficial for tracking purposes. Your CustomerID would be the primary key, and you can have another unique key, PartOrderID, which will be a unique value for each insert.
Hope this helps
Really you should have three tables: Product, Part, and ProductPart; the ProductPart table would store the association "this product has these parts". As far as updating, the simplest thing would be to delete all ProductParts for a given Product and re-insert the records you want.
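A minimal T-SQL sketch of that delete-and-reinsert approach (the table names and sample values are illustrative):

DECLARE @ProductID int = 42;                      -- the product being edited
DECLARE @SelectedParts TABLE (PartID int PRIMARY KEY);
INSERT INTO @SelectedParts VALUES (1), (2), (3);  -- the user's current selection

BEGIN TRANSACTION;

-- Clear the old associations for this product.
DELETE FROM ProductPart WHERE ProductID = @ProductID;

-- Re-insert the full selection.
INSERT INTO ProductPart (ProductID, PartID)
SELECT @ProductID, PartID FROM @SelectedParts;

COMMIT TRANSACTION;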
I recently realized that I add some form of row-creation timestamp and possibly an "updated on" field to most of my tables. Suddenly I started thinking that perhaps every table in the database should have created and modified fields that are set in the model behind the scenes.
Does this sound correct? Are there any types of high-load tables (like sessions) or massive sized tables that this wouldn't be a good idea for?
I wouldn't put those fields (which I generally call audit fields) on every database table. If it's a low-traffic, high-value table (like Users, for instance), it goes on, no question. I'd also add creator and modifier. If it's a table that gets hit a lot (an operation history table, say), then maybe the benefit isn't worth the cost of increased insert time and storage space.
It's a call you'll need to make separately for each table.
Obviously, there isn't a single rule.
Most of my tables have date-related things, DateCreated, DateModified, and occasionally a Revision to track changes and so on. Do whatever makes sense. Clearly, you can invent cases where it's appropriate and cases where it is not. If you're asking whether you should add them "by default" to most tables, I'd say "probably".
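If you do put them in the database rather than the model, one common pattern looks like this (a T-SQL sketch; the Widget table is an example, and note that direct trigger recursion is off by default, so the trigger's own UPDATE does not re-fire it):

CREATE TABLE Widget (
    WidgetID     int IDENTITY PRIMARY KEY,
    Name         varchar(100) NOT NULL,
    DateCreated  datetime NOT NULL DEFAULT GETDATE(),  -- set once on insert
    DateModified datetime NULL                         -- maintained by trigger
);
GO

CREATE TRIGGER trg_Widget_Modified ON Widget
AFTER UPDATE
AS
BEGIN
    -- Stamp the modification time on every updated row.
    UPDATE w
    SET DateModified = GETDATE()
    FROM Widget w
    JOIN inserted i ON i.WidgetID = w.WidgetID;
END;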
I have SQL Server as the backend and use MS Access as the frontend.
I have two tables (persons and managers); managers is derived from persons (a 1:1 relation), so I created a view managersFull, which is basically:
SELECT *
FROM managers m
INNER JOIN persons p
    ON m.id = p.id
id in persons is auto-incrementing and the primary key; id in managers is the primary key and a foreign key referencing persons.id.
Now I want to be able to insert a new dataset with a form in MS Access, but I can't get it to work. No error message, no status line, nothing. The new rows aren't inserted, and I have to press Escape to cancel my changes to get back to design view in MS Access.
I'm talking about a managers form, and I want to be able to enter manager AND person information at the same time in a single form.
My question is now: is what I want to do here possible? If not, is there a "simple" workaround using AFTER INSERT triggers or some lines of VBA code?
Thanks in advance.
The problem is that your view joins several tables. If a view accesses multiple tables, you can update or insert into only one of them at a time.
Please also check MSDN for more detailed information on the restrictions and on proper strategies for view updates.
Assuming ODBC, some things to consider:
- make sure you have a timestamp field in the person table, and that it is returned in your managers view. You also probably need the real PK of the person table in the manager view (I'm assuming your view takes the FK used for the self-join and aliases it as the ID field -- I wouldn't do that myself, as it is confusing. Instead, I'd use the real foreign key name in the managers view, and let the PK stand on its own with its real name).
- try the Jet/ACE-specific DISTINCTROW predicate in your recordsource. With Jet/ACE back ends, this often makes it possible to insert into both tables when it's otherwise impossible. I don't know for certain if Jet will be smart enough to tell SQL Server to do the right thing, though.
- if neither of those things works, change your form to use a recordsource based on your person table, and use a combo box based on the managers view as the control with which you edit the record to relate the person to a manager.
Ilya Kochetov pointed out that you can only update one table, but the work-around would be to apply the updates to the fields on one table and then the other. This solution assumes that the only access you have to these two tables is through this view and that you are not allowed to create a stored procedure to take care of this.
To model and maintain two related tables in Access, you don't use a query or view that is a join of both tables. What you do is use a main form and drop in a sub-form that is based on the child table. If the Link Master Fields and Link Child Fields settings on the sub-form are set correctly, then you don't need to write any code and Access will insert the person's id into the link field.
So, don't use a joined table here. Simply use a form + sub-form setup and you will be able to edit and maintain the data and the data in the related child table.
This means you base the form on the table, not on a view, and you base the sub-form on the child table. So, don't use a view here.
We have an entity split across 5 different tables. Records in 3 of those tables are mandatory. Records in the other two tables are optional (based on sub-type of entity).
One of the tables is designated the entity master. Records in the other four tables are keyed by the unique id from master.
An AFTER UPDATE/DELETE trigger is present on each table, and a change to a record saves off history (from the deleted table inside the trigger) into a related history table. Each history table contains the related entity fields + a timestamp.
So, live records are always in the live tables and history/changes are in history tables. Historical records can be ordered based on the timestamp column. Obviously, timestamp columns are not related across history tables.
Now, for the more difficult part.
1. Records are initially inserted in a single transaction. Either 3 or 5 records will be written in a single transaction.
2. Individual updates can happen to any or all of the 5 tables.
3. All records are updated as part of a single transaction. Again, either 3 or 5 records will be updated in a single transaction.
4. Number 2 can be repeated multiple times.
5. Number 3 can be repeated multiple times.
The application is supposed to display a list of point-in-time history entries based on records written as single transactions only (points 1, 3, and 5 only).
I'm currently having problems with an algorithm that will retrieve historical records based on timestamp data alone.
Adding a HISTORYMASTER table to hold the extra information about transactions seems to partially address the problem. A new record is added into HISTORYMASTER before every transaction. New HISTORYMASTER.ID is saved into each entity table during a transaction.
Point-in-time history can be retrieved by selecting the first record for a particular HISTORYMASTER.ID (ordered by timestamp).
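Sketched out in T-SQL (the history table and column names here are illustrative, not your actual schema):

CREATE TABLE HISTORYMASTER (
    ID        int IDENTITY PRIMARY KEY,
    CreatedAt datetime NOT NULL DEFAULT GETDATE()
);

-- Each history table carries the HISTORYMASTER.ID stamped during the
-- transaction, e.g. HISTORY_TABLE1 (..., HistoryMasterID, ChangedAt).

-- Point-in-time entries: the earliest history row per HISTORYMASTER.ID.
SELECT h.HistoryMasterID, MIN(h.ChangedAt) AS AsOf
FROM HISTORY_TABLE1 h
GROUP BY h.HistoryMasterID
ORDER BY AsOf;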
Is there any more optimal way to manage audit tables based on AFTER (UPDATE, DELETE) TRIGGERs for entities spanning multiple tables?
Your HistoryMaster seems similar to how we have addressed the history of multiple related items in one of our systems. By having a single point to hang all the related changes from in the history table, it is easy to then create a view that uses the history master as the hub and attaches the related information. It also allows you to not create records in the history where an audit is not desired.
In our case the primary tables were called EntityAudit (where Entity was the "primary" item being retained) and all data was stored in EntityHistory tables related back to the Audit. In our case we were using a data layer for business rules, so it was easy to insert the audit rules into the data layer itself. I feel that the data layer is an optimal point for such tracking if and only if all modifications use that data layer. If you have multiple applications using distinct data layers (or none at all), then I suspect that a trigger that creates the master record is pretty much the only way to go.
If you don't have additional information to track in the Audit (we track the user who made the change, for example, something not on the main tables), then I would contemplate putting the extra Audit ID on the "primary" record itself. Your description does not seem to indicate you are interested in the minor changes to individual tables, but only in changes that update the entire entity set (although I may be misreading that). I would only do so if you don't care about the minor edits, though. In our case, we needed to track all changes, even to the related records.
Note that the use of an Audit/Master table has an advantage in that you are making minimal changes to the History tables as compared to the source tables: a single AuditID (in our case a GUID, although autonumbers would be fine in non-distributed databases).
Can you add a TimeStamp / RowVersion datatype column to the entity master table, and associate all the audit records with that?
But an update to any of the "child" tables will need to update the master entity table to force the TimeStamp / RowVersion to change :(
Or stick a GUID in there that you freshen whenever one of the associated records changes.
Thinking that through, out loud, it may be better to have a table joined 1:1 to the master entity that only contains the master entity ID and the "version number" of the record - either TimeStamp / RowVersion, GUID, incremented number, or something else.
I think it's a symptom of trying to capture "abstract" audit events at the lowest level of your application stack - the database.
If it's possible consider trapping the audit events in your business layer. This would allow you to capture the history per logical transaction rather than on a row-by-row basis. The date/time is unreliable for resolving things like this as it can be different for different rows, and the same for concurrent (or closely spaced) transactions.
I understand that you've asked how to do this in DB triggers, though. I don't know about SQL Server, but in Oracle you can overcome this by using the DBMS_TRANSACTION.LOCAL_TRANSACTION_ID function to return the ID of the current transaction. If you can retrieve an equivalent SQL Server value, then you can use it to tie the record updates for the current transaction together into a logical package.
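One SQL Server candidate (an assumption on my part, not something I have used for exactly this) is the transaction id exposed by the sys.dm_tran_current_transaction DMV, which requires VIEW SERVER STATE permission:

-- Inside the trigger: fetch the current transaction's id...
DECLARE @txid bigint = (SELECT transaction_id
                        FROM sys.dm_tran_current_transaction);
-- ...and stamp it onto every history row written by this transaction,
-- so the per-table changes can be grouped back into one logical change.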