I am using a trigger in PostgreSQL 8.2 to audit changes to a table:
CREATE OR REPLACE FUNCTION update_issue_history() RETURNS trigger AS $trig$
BEGIN
    INSERT INTO issue_history (username, issueid)
    VALUES ('fixed-username', OLD.issueid);
    RETURN NULL;
END;
$trig$ LANGUAGE plpgsql;
CREATE TRIGGER update_issue_history_trigger
AFTER UPDATE ON issue
FOR EACH ROW EXECUTE PROCEDURE update_issue_history();
What I want to do is have some way to provide the value of fixed-username at the time that I execute the update. Is this possible? If so, how do I accomplish it?
Try something like this:
CREATE OR REPLACE FUNCTION update_issue_history()
RETURNS trigger AS $trig$
DECLARE
    arg_username varchar;
BEGIN
    arg_username := TG_ARGV[0];
    INSERT INTO issue_history (username, issueid)
    VALUES (arg_username, OLD.issueid);
    RETURN NULL;
END;
$trig$ LANGUAGE plpgsql;
CREATE TRIGGER update_issue_history_trigger
AFTER UPDATE ON issue
FOR EACH ROW EXECUTE PROCEDURE update_issue_history('my username value');
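Note that TG_ARGV values are string literals fixed at CREATE TRIGGER time, so this passes a per-trigger constant rather than a value you can vary with each UPDATE; for a genuinely runtime value you need something like the approaches below.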
I don't see a way to do this other than to create temp tables and use EXECUTE in your trigger, though that will have performance consequences. A better option might be to tie into another table somewhere, log who logs in and out by session id and back-end PID, and reference that.
Note that you don't have any other way of getting the information into the UPDATE statement itself. Keep in mind that a trigger can only see what is available in the API or in the database. If you want a trigger to work transparently, you can't expect to pass it information at runtime that it would not otherwise have access to.
The basic question you have to ask is "How does the db know what to put there?" Once you decide on a method, the answer should be straightforward, but there are no free lunches.
Update
In the past, when I have had to do something like this and the login to the db was with an application role, the way I did it was to create a temporary table and then access that table from the stored procedures. Current versions can handle temporary tables in stored procedures directly, but in the past we had to use EXECUTE.
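A minimal sketch of that pattern (the table and column names here are hypothetical; the application populates the temp table right after connecting):

-- run by the application once per session
CREATE TEMPORARY TABLE session_info (username varchar NOT NULL);
INSERT INTO session_info VALUES ('alice');

-- inside the trigger function, read the value back:
SELECT username INTO arg_username FROM session_info;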
There are two huge limitations with this approach. The first is that it creates a lot of tables, which eventually raises the possibility of oid wraparound. The second is that it rules out connection pooling, as discussed below.
These days I much prefer the logins to the db to be actual user logins. This makes things far easier to manage, and you can just read the value of SESSION_USER (a newbie mistake is to use CURRENT_USER, which shows the current security context rather than the login of the user).
Neither of these approaches works well with connection pooling. In the first case you can't pool connections because your temporary tables will get misinterpreted or clobbered; in the second, you can't because the login roles differ.
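For the SESSION_USER route, a minimal sketch reusing the issue_history table from the question:

CREATE OR REPLACE FUNCTION update_issue_history() RETURNS trigger AS $trig$
BEGIN
    -- SESSION_USER is the role that logged in, even after SET ROLE changes the security context
    INSERT INTO issue_history (username, issueid)
    VALUES (SESSION_USER, OLD.issueid);
    RETURN NULL;
END;
$trig$ LANGUAGE plpgsql;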
Related
I've got the database ApressFinancial, which I created from the book (Robin Dewson, Beginning SQL Server for Developers, The Expert's Voice in SQL Server, 2014).
I was asked a question: "How do you check the correctness of records being added to the transaction log?" (There was a hint that I could use an INSTEAD OF trigger.)
I could not figure it out.
Thank you.
I think you need an INSTEAD OF INSERT trigger in order to catch all your inserted data.
Basically, you create a trigger, which is a special type of stored procedure that lets you hook functionality into the transaction that would perform the INSERT (the INSTEAD OF part means the original INSERT itself is not carried out, so the trigger must perform it if the data should still be stored). Inside the trigger, a special pseudo-table called inserted exposes the rows that were supposed to be inserted.
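A minimal sketch of the idea (the table and column names are made up for illustration):

CREATE TRIGGER trg_account_insert ON dbo.Account
INSTEAD OF INSERT
AS
BEGIN
    -- log what the caller tried to insert
    INSERT INTO dbo.TransactionLog (AccountName, LoggedAt)
    SELECT AccountName, GETDATE() FROM inserted;
    -- carry out the original insert ourselves, since INSTEAD OF suppressed it
    INSERT INTO dbo.Account (AccountName)
    SELECT AccountName FROM inserted;
END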
A more relevant example can be found here.
NOTE: also take a look at the AFTER INSERT trigger, as that type lets the values be inserted normally and still provides a mechanism to use them for other operations.
I have two SQL Server databases.
One is the back-end for a Ruby-on-Rails system that we are transitioning away from, but which is still in use while we rewrite its Ruby apps in ASP.NET MVC.
The two databases have similar, but not identical, tables for Users, Roles and Roles-Users.
I want to create some type of trigger so that when the users or roles-users table is modified in one database, the corresponding table in the other database is updated as well.
I can't just use the users table from the original database because Ruby uses a different hash function for the passwords, but I want to ensure that changes on one system are reflected on the other immediately.
I also want to avoid the obvious problem where an update on one database triggers an update on the other, which triggers an update on the first, and the process repeats itself until a deadlock occurs, the server crashes, or something similarly undesirable happens.
I do not want to use database replication.
Is there a somewhat simple way to do this on a transaction per transaction basis?
EDIT
The trigger would be conceptually something like this:
USE Original;
GO
CREATE TRIGGER dbo.user_update
ON dbo.user WITH EXECUTE AS [cross table user identity]
AFTER UPDATE
AS
BEGIN
    UPDATE u
    SET u.column1 = i.column1 -- etc.
    FROM Another.dbo.users u
    JOIN inserted i ON i.ID = u.ID;
END
The problem I am trying to avoid is a recursive call.
Another.dbo.users will have a similar trigger in place, because the two databases serve different applications (Ruby on Rails on one, ASP.NET MVC on the other) that may be working on data that should stay the same in both.
I would add a field to both tables if possible. When an application inserts or updates a row, the 'check' field would be set to 0. The trigger looks at this field: if it is 0, meaning the change came from an application event, the trigger fires the insert/update into the second table but writes a 1 into the check field instead of a 0.
So when the trigger on the second table fires, it will skip the insert back into table one.
This solves the recursion problem.
If for some reason you cannot add the check field, you can use a separate table holding the primary key of the row plus the check field. That needs more coding but would also work.
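A hedged sketch of that trigger (the column name synced is an assumption for the check field: 0 = written by the application, 1 = written by the other database's trigger):

CREATE TRIGGER dbo.user_update ON dbo.users
AFTER UPDATE
AS
BEGIN
    -- only propagate rows the application changed, and mark them as replicated
    UPDATE a
    SET a.column1 = i.column1, a.synced = 1
    FROM Another.dbo.users a
    JOIN inserted i ON i.ID = a.ID
    WHERE i.synced = 0;
END

The matching trigger on Another.dbo.users does the same in reverse; because it writes synced = 1, the rows it produces never re-fire the propagation.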
So I currently have a primary Postgres database that handles multiple users from different apps. One of the issues regarding concurrency arises when, say, AppOne and AppTwo want to add users at the same time.
Currently, AppOne generates a random number (which must be 10 digits long) and then checks whether the value exists in the database; if it doesn't, it inserts the user with that value in a column called user_url (used for their URL).
Now, as you can imagine, if AppTwo makes a request to add a user between the generation, the check, and the insertion, we can end up with repeated "unique" values (it's happened). I want to solve that issue, potentially using Postgres triggers.
I know that I can use transactions, but I don't want to hold up the database. I'd rather the database created the unique number through a function and a trigger, so that as I scale I don't have to worry about race conditions. Once the trigger does its thing, I can then fetch the newly added user with all of its data, including the unique id.
So, ideally:
CREATE OR REPLACE FUNCTION set_unique_number(...) RETURNS trigger AS $$
DECLARE
BEGIN
....something here
RETURN new;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER insert_unique_url_id BEFORE INSERT ... PROCEDURE
set_unique_number(...);
It would be a function that generates the number and sets it on the NEW row, run by a BEFORE INSERT trigger. I may be wrong.
Any help/suggestions would be appreciated.
EDIT: I want it so that there is no sequence to the numbers; that way people cannot guess the next user's URL.
Thanks
9,000,000,000 is a small enough number that the birthday problem guarantees you'll start to see collisions quite soon.
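To put a number on it: the chance of at least one collision among n random picks from N = 9,000,000,000 values is roughly 1 - exp(-n^2 / 2N), so at only 100,000 rows it is already about 1 - exp(-0.56) ≈ 43%.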
I think you can work around this problem while still allowing concurrent inserts by using advisory locking. Your procedure might look like this (in pseudocode):
while (true) {
    start transaction;
    bigint new_id = floor(random() * 9000000000) + 1000000000;
    if pg_try_advisory_xact_lock(new_id) {
        if not exists (select 1 from url where id = new_id) {
            insert into url (id, ...) values (new_id, ...);
            commit;
            break;
        }
    }
    commit;
}
This procedure would never terminate once the table held all 9,000,000,000 possible values. You'd have to implement the loop externally, as Postgres functions do not allow multiple transactions within a single call. It might be possible to work around that by using exceptions, but it would be rather complicated.
Why don't you use the uuid-ossp extension? It will allow you to generate UUIDs from within Postgres itself.
Here's a good tutorial on how to use them even as primary keys.
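A minimal sketch (the table is hypothetical, and it assumes a Postgres version new enough for CREATE EXTENSION):

CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
-- the default fills user_url with a random, unguessable value; no trigger or retry loop needed
CREATE TABLE users (
    id serial PRIMARY KEY,
    user_url uuid NOT NULL DEFAULT uuid_generate_v4()
);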
I'm new to triggers, and I need to fire a trigger when selecting values from a database table in SQL Server. I have tried firing triggers on insert, update and delete. Is there any way to fire a trigger when selecting values?
There are only two ways I know of to do this, and neither is a trigger.
You can use a stored procedure to run the query, and log the query (plus whatever other information you'd like to know) to a table.
You can use the audit feature of SQL Server.
I've never used the latter, so I can't speak of the ease of use.
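A hedged sketch of the first approach (all object names here are made up):

CREATE PROCEDURE dbo.GetOrders @CustomerId int
AS
BEGIN
    -- record who ran the query, with what argument, and when
    INSERT INTO dbo.SelectLog (ProcName, Params, RunBy, RunAt)
    VALUES ('GetOrders', CAST(@CustomerId AS varchar(20)), SUSER_SNAME(), GETDATE());
    SELECT * FROM dbo.Orders WHERE CustomerId = @CustomerId;
END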
No, there is no provision for a trigger on a SELECT operation. As suggested in the earlier answer, write a stored procedure that takes the same parameters as the SELECT query, performs the logging, and runs the desired SELECT.
SpectralGhost's answer assumes you are trying to do something like a security audit of who or what has looked at which data.
But it strikes me that if you are new enough to SQL not to know that a SELECT trigger is conceptually daft, you may be trying to do something else, in which case you're really talking about locking rather than auditing: once one process has read a particular record, you want to prevent other processes from accessing it (or possibly some other related records in a different table) until the transaction is either committed or rolled back. In that case triggers are definitely not your solution (they rarely are); see BOL on transaction control and locking.
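If locking is what you're after, a sketch of the pessimistic pattern in T-SQL (the table name is made up):

BEGIN TRANSACTION;
-- UPDLOCK makes competing readers that also take UPDLOCK wait until this transaction ends
SELECT * FROM dbo.Accounts WITH (UPDLOCK, ROWLOCK) WHERE AccountId = 42;
-- ... work with the row ...
COMMIT;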
So, I'm facing the challenge of having to log the data being changed for each field in a table. I can obviously do that with triggers (which I've never used before, but I imagine they're not that difficult), but I also need to be able to link the log entry to whoever performed the change, which is where the problem lies. The trigger isn't aware of who is performing the change, and I can't pass in a user id either.
So, how can I do what I need to do? If it helps say I have these tables:
Employees {
EmployeeId
}
Jobs {
JobId
}
Cookies {
CookieId
EmployeeId -> Employees.EmployeeId
}
So, as you can see, I have a Cookies table which the application uses to verify sessions, and I can infer the user from it, but again, I can't make the trigger aware of it when changes are made to the Jobs table.
Help would be highly appreciated!
We use CONTEXT_INFO to set the user making the calls to the DB. Then our application-level security can be enforced all the way into DB code. It might seem like overhead, but really there is no performance issue for us.
make_db_call() {
    set CONTEXT_INFO  -- some data representing the user
    do sql incantation
}

In the db:

SELECT @user = dbo.ParseContextInfo()
-- ... audit/log/security etc. can then determine who ...
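A hedged, concrete version of the pattern (packing an int user id is an assumption; dbo.ParseContextInfo above would wrap the unpacking):

-- client side, once per connection/request:
DECLARE @ctx varbinary(128) = CAST(CAST(42 AS int) AS varbinary(128));
SET CONTEXT_INFO @ctx;

-- inside a trigger or procedure:
DECLARE @user int = CAST(SUBSTRING(CONTEXT_INFO(), 1, 4) AS int);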
To get the previous values inside the trigger you select from the 'deleted' pseudo-table, and to get the values being put in you select from the 'inserted' pseudo-table.
Before you issue the linq2sql query, issue a command like this:
context.ExecuteCommand("exec some_sp_to_set_context {0}", userId);
Or, more preferably, I'd suggest an overloaded DataContext where the above is executed before each query. See here for an example.
We don't use multiple SQL logins, as we rely on connection pooling and also on locking down the db caller to a limited user.