Execute default action in SQL Server's INSTEAD OF triggers? - sql-server

Is there a way in an INSTEAD OF trigger to just cause the default action?
For instance write a query like this:
BEGIN
    IF <some rule>
    BEGIN
        ROLLBACK;
        RETURN;
    END
    /* Long, boring, DDL-dependent query ...
    INSERT INTO
    ...
    ... ...
    ... ... ... -_-'
    */
    -- A simple statement that does the job!
    <default insertion>
END
My goal is to check some business rule and, only if the check passes, perform the insert (or update/delete) without having to rewrite the whole statement, which would break if the table's structure were to change.

Generally, no. In some vanilla cases you can get away with using * instead of listing the fields, but you probably shouldn't do that anyway.
SQL is not an OO language or environment, and it does not offer the same level of code reusability that OO environments do. Consequently, there are many common cases where the right thing to do is repetitive and even redundant. In short, DRY does not always apply in SQL.
In particular, it almost never applies with respect to the impact of DDL changes. So for DDL changes, it is best to accept that the Edit Sensitivity of correct code is always going to be greater than one. You are just going to have to change things in more than one place.
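To illustrate the vanilla case, here is a minimal sketch in T-SQL (the table dbo.Orders, its columns, and the business rule are all hypothetical): an INSTEAD OF INSERT trigger that checks a rule and then replays the pending rows with SELECT *. It only lines up while the table has no IDENTITY or computed columns and never changes shape, which is exactly the fragility described above.
-- Hypothetical example: re-issue the "default" insert from inside
-- an INSTEAD OF trigger by replaying the buffered rows.
CREATE TRIGGER trg_Orders_Insert ON dbo.Orders
INSTEAD OF INSERT
AS
BEGIN
    -- Business-rule check: reject the whole statement if any row fails.
    IF EXISTS (SELECT 1 FROM inserted WHERE Quantity <= 0)
    BEGIN
        ROLLBACK;
        RETURN;
    END
    -- The "default" action: breaks as soon as dbo.Orders changes shape.
    INSERT INTO dbo.Orders
    SELECT * FROM inserted;
END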

Related

A simple query written in a stored procedure vs. the same written as an inline query: which will execute faster?

A simple query written in a stored procedure vs. the same written as an inline query: which will execute faster in SQL Server?
Someone on an interview panel asked me this question. I said the stored procedure, my reasoning being that a procedure is compiled, but he said I was wrong.
Please explain.
I assume that a "simple query" means some read-only code.
A VIEW is fully inlineable and precompiled. Its biggest advantage is that you can bind it into a bigger statement with joins and filters, and it will, in most cases, still be able to use indexes and statistics.
A table-valued function (TVF) in the "new" syntax (without BEGIN and END) is very similar to a VIEW, but the parameters and their handling are precompiled too. The advantages of the VIEW apply here as well.
A UDF returning a table ("old" syntax) is in most cases something one should not do. Its biggest disadvantage is that the optimizer cannot pre-estimate the result and will treat it as a one-row table, which is, in most cases, really bad...
A stored procedure that does nothing more than a VIEW or a TVF could do as well is a pain in the neck - at least in my eyes. I know that there are other opinions... Its biggest drawback: whenever you want to continue working with the returned result set, you have to insert it into a table (or a declared table variable). Further joins or filters against this new table will miss indexes and statistics. A simple "Hey SP, give me your result!" might be fast, but everything after that call suffers.
So my conclusion: use SPs when there is something to do, and use a VIEW or TVF when there is something to read.
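To make the distinction concrete, here is a hedged sketch (T-SQL; the table and column names are invented): the same read wrapped as an inline TVF, which the optimizer can expand into the calling query, versus the "old" multi-statement form, which it treats as an opaque, tiny table when estimating.
-- Inline ("new" syntax) TVF: no BEGIN/END, fully inlineable.
CREATE FUNCTION dbo.fn_OrdersByCustomer_Inline (@CustomerId int)
RETURNS TABLE
AS
RETURN
(
    SELECT OrderId, OrderDate, Total
    FROM dbo.Orders
    WHERE CustomerId = @CustomerId
);
GO
-- Multi-statement ("old" syntax) TVF: opaque to the optimizer,
-- estimated as a tiny table regardless of the real row count.
CREATE FUNCTION dbo.fn_OrdersByCustomer_Multi (@CustomerId int)
RETURNS @result TABLE (OrderId int, OrderDate date, Total money)
AS
BEGIN
    INSERT INTO @result
    SELECT OrderId, OrderDate, Total
    FROM dbo.Orders
    WHERE CustomerId = @CustomerId;
    RETURN;
END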

Can we replace a DML trigger with a stored procedure

Not sure if this has been asked before, because none of the suggested duplicates that appeared while typing the title matched.
A colleague of mine asked whether a DML trigger's functionality can be replaced entirely with a stored procedure (SP). It sounds a bit weird at first, but it is possible, because a trigger is also a special type of SP - just not explicitly callable.
For example, say an AFTER INSERT trigger named trg_insert1 is defined on tbl1 and updates some data in tbl2, like below (this is a SQL Server example, but the question is not specific to any DB):
create trigger trg_insert1
on tbl1
after insert
as
begin
    update t2
    set somedata = i.tbl1somedata
    from tbl2 t2
    join inserted i on t2.id = i.tbl1id;
end
Now this trigger can be replaced with an SP like the one below (using a transaction block):
create procedure usp_insertupdate (@name varchar(10), @data varchar(200))
as
begin
    begin try
        begin tran
            insert into tbl1(name, data) values(@name, @data);
            update tbl2 set somedata = @data
            where id = scope_identity();
        commit tran
    end try
    begin catch
        if @@TRANCOUNT > 0
            rollback tran
    end catch
end
This will work in almost all DML-trigger cases (AFTER/BEFORE INSERT/DELETE/UPDATE). BUT I really couldn't answer or explain:
What is the difference, then?
Is it good practice to do so?
Is it not possible in all cases?
Am I overcomplicating this?
Please let me know what you think.
[NOTE: this is not a question about any specific RDBMS.]
I'll try to answer in a very general sense (you specified this is not targeted to a specific implementation).
First of all, a trigger is written in the same data manipulation language that you would use for a stored procedure. So in terms of capabilities, triggers and stored procedures are the same.
But...
a trigger is guaranteed to be invoked every time you alter the data, no matter if you do that through a stored procedure, another trigger, or by manually executing a SQL statement.
In fact you can expect a trigger to always execute (for its triggering statement) unless you explicitly disable it.
A stored procedure, on the other hand, is guaranteed never to run by itself; it runs only when you explicitly invoke it.
This has an important consequence: triggers are better at ensuring consistency. If someone in a hurry removes a record in your live instance by typing:
DELETE FROM tablex WHERE uid = 'QWTY10311'
any bookkeeping action implemented as a trigger will still be executed, while if the user forgets (or maliciously avoids) following this with
EXECUTE SP_TABLEX_LOG('DELETE', 'QWTY10311')
your DB will just lose the data silently.
Triggers have two other important characteristics that can be duplicated with stored procedures only through extra (sometimes significantly more expensive) effort.
First of all, they are executed record by record. So if you are deleting 1 million records, the logging will be performed for each operation. Good luck calling the appropriate stored procedure with a 1-million-row cursor as a parameter, ESPECIALLY if you want to do that after a manual operation as in my example above.
Second advantage: Triggers have a special scope where they can reference pre- and post- change values for each field.
So if you are incrementing a table of prices by 10% and want to log what the previous value was, and which user performed the action at what time, you will have "old-value", "new-value", "user-id" and "timestamp" in scope for any kind of operation you may want to do.
Again, doing this by invoking a stored procedure means you have to save the values to pass them to the stored procedure when it runs.
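As a hedged illustration of that pre-/post-change scope (PostgreSQL syntax, since it exposes OLD and NEW directly; the tables and columns are invented):
-- Hypothetical audit table and row-level trigger.
CREATE TABLE price_audit (
    item_id    int,
    old_price  numeric,
    new_price  numeric,
    changed_by text        DEFAULT current_user,
    changed_at timestamptz DEFAULT now()
);

CREATE FUNCTION log_price_change() RETURNS trigger AS $$
BEGIN
    -- OLD and NEW are in scope automatically for each affected row.
    INSERT INTO price_audit (item_id, old_price, new_price)
    VALUES (OLD.id, OLD.price, NEW.price);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_price_audit
AFTER UPDATE OF price ON items
FOR EACH ROW
EXECUTE FUNCTION log_price_change();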
So why bother with SPs at all? (This will, hopefully, answer your question about the best use case.)
Stored procedures are better when you need to implement complex business logic that will be invoked by an application layer. So if you want to know, for example, how many hotel rooms are available between two given dates, with the extra requirement that pets are allowed, a trigger would not be a good idea.
Especially because a trigger will not return any result to an invoking process...
So anytime you need to get some result to the caller, be it a query, a calculation, or anything else that has OUTPUT parameters, a trigger is useless.
Triggers should be used to enforce consistency. If a header record should not be deleted unless it has no children in other tables, enforce this with a trigger, maybe. If you need to log whoever changes a value in a field, no matter how, use a trigger.
In all other cases, use a stored procedure (keep also in mind that triggers will impact the responsiveness of any data update, just like indexes).
Yes, stored procedures can be used to replace DML triggers in this way, and whether it is good practice or not depends on your needs.
The main difference is that a trigger will run its code every time it is fired. In your example, if a user does an ad-hoc INSERT to tbl1, a trigger will fire and tbl2 will get updated.
A stored procedure can only be used to enforce this rule if ad-hoc INSERTs are not allowed.
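A hedged sketch of how that restriction is typically enforced (T-SQL; the role name app_role is invented, the objects come from the question):
-- Force all writes through the procedure: no direct DML on the table.
DENY INSERT, UPDATE, DELETE ON dbo.tbl1 TO app_role;
GRANT EXECUTE ON dbo.usp_insertupdate TO app_role;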

Use transactions for select statements?

I don't use stored procedures very often and was wondering if it made sense to wrap my select queries in a transaction.
My procedure has three simple select queries, two of which use the returned value of the first.
In a highly concurrent application it could (theoretically) happen that data you've read in the first select is modified before the other selects are executed.
If that is a situation that could occur in your application you should use a transaction to wrap your selects. Make sure you pick the correct isolation level though, not all transaction types guarantee consistent reads.
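A minimal sketch of what that can look like (T-SQL; the table names are invented, and REPEATABLE READ is just one possible isolation level):
-- Keep the row read first from changing before the dependent reads run.
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
BEGIN TRANSACTION;

DECLARE @AccountId int;
SELECT @AccountId = AccountId
FROM dbo.Accounts
WHERE AccountName = N'Alice';

-- These two selects depend on the value read above.
SELECT * FROM dbo.Orders   WHERE AccountId = @AccountId;
SELECT * FROM dbo.Invoices WHERE AccountId = @AccountId;

COMMIT TRANSACTION;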
Update:
You may also find this article on concurrent update/insert solutions (aka upsert) interesting. It puts several common methods of upsert to the test to see what method actually guarantees data is not modified between a select and the next statement. The results are, well, shocking I'd say.
Transactions are usually used when you have INSERT, UPDATE, or DELETE statements and you want atomic behavior: either everything is committed or nothing is.
However, you could also use a transaction for read-only SELECT statements to:
Make sure nobody else can update the tables of interest while your batch of select queries is executing.
Have a look at this msdn post.
Most databases run every single query in a transaction; even if none is specified, the statement is implicitly wrapped in one. This includes select statements.
PostgreSQL actually treats every SQL statement as being executed within a transaction. If you do not issue a BEGIN command, then each individual statement has an implicit BEGIN and (if successful) COMMIT wrapped around it. A group of statements surrounded by BEGIN and COMMIT is sometimes called a transaction block.
https://www.postgresql.org/docs/current/tutorial-transactions.html

What are PostgreSQL RULEs good for?

Question
I often see it stated that rules should be avoided and triggers used instead. I can see the danger in the rule system, but certainly there are valid uses for rules, right? What are they?
I'm asking this out of general interest; I'm not very seasoned with databases.
Example of what might be a valid use
For instance, in the past I've needed to lock down certain data, so I've done something like this:
CREATE OR REPLACE RULE protect_data AS
ON UPDATE TO exampletable -- another similar rule for DELETE
WHERE OLD.type = 'protected'
DO INSTEAD NOTHING;
Then if I want to edit the protected data:
START TRANSACTION;
ALTER TABLE exampletable DISABLE RULE protect_data;
-- edit data as I like
ALTER TABLE exampletable ENABLE RULE protect_data;
COMMIT;
I agree this is hacky, but I couldn't change the application(s) accessing the database in this case (or even throw errors at it). So bonus points for finding a reason why this is a dangerous/invalid use of the rule system, but not for why this is bad design.
One of the use cases for RULEs is updatable views (although that changes in 9.1, as that version introduces INSTEAD OF triggers for views).
Another good explanation can be found in the manual:
For the things that can be implemented by both, which is best depends on the usage of the database. A trigger is fired for any affected row once. A rule manipulates the query or generates an additional query. So if many rows are affected in one statement, a rule issuing one extra command is likely to be faster than a trigger that is called for every single row and must execute its operations many times. However, the trigger approach is conceptually far simpler than the rule approach, and is easier for novices to get right.
(Taken from: http://www.postgresql.org/docs/current/static/rules-triggers.html)
Some problems with rules are shown here: http://www.depesz.com/index.php/2010/06/15/to-rule-or-not-to-rule-that-is-the-question/ (for instance, if a random() is included in a query, it might get executed twice and return different values).
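A hedged sketch of that volatile-function hazard (the tables are invented; the behavior is the one described in the linked post - a rule is macro expansion, so the inserted expression is evaluated once per rewritten query):
-- A DO ALSO rule makes the INSERT's expressions evaluate twice.
CREATE TABLE t1 (val float8);
CREATE TABLE t1_copy (val float8);

CREATE RULE copy_ins AS
ON INSERT TO t1
DO ALSO INSERT INTO t1_copy VALUES (NEW.val);

INSERT INTO t1 VALUES (random());
-- t1 and t1_copy can now hold two different random values.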
The biggest drawback of rules is that people don't understand them.
For example, one might think that having the rule:
CREATE OR REPLACE RULE protect_data AS
ON UPDATE TO exampletable -- another similar rule for DELETE
WHERE OLD.type = 'protected'
DO INSTEAD NOTHING;
will mean that if I issue:
update exampletable set whatever = whatever + 1 where type = 'protected'
no query will be run. That is not true. The query will still be run, just in a modified version - with an added condition.
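One way to see this for yourself, as a quick sketch (exampletable is the table from the question):
-- EXPLAIN shows the statement is still planned and executed,
-- just with the rule's negated condition folded into the plan.
EXPLAIN UPDATE exampletable
SET whatever = whatever + 1
WHERE type = 'protected';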
What's more, rules break a very useful feature, the RETURNING clause:
$ update exampletable set whatever = whatever + 1 where type = 'normal' returning *;
ERROR: cannot perform UPDATE RETURNING on relation "exampletable"
HINT: You need an unconditional ON UPDATE DO INSTEAD rule with a RETURNING clause.
To wrap it - if you really, really, positively have to use writable views, and you're using pre 9.1 PostgreSQL - you might have a valid reason to use rules.
In all other cases you're most likely shooting yourself in the foot, even if you don't see it immediately.
I've had a few bitter experiences with rules when dealing with volatile functions (if memory serves, depesz' blog post highlights some of them).
I've also broken referential integrity when using them because of the timing in which fkey triggers get fired:
CREATE OR REPLACE RULE protected_example AS
ON DELETE TO example
WHERE OLD.protected
DO INSTEAD NOTHING;
... then add another table, and make example reference that table with an ON DELETE CASCADE foreign key. Then DELETE FROM that table... and recoil in horror.
I reported the above issue as a bug, which got dismissed as a feature/necessary edge case. It's only months later that I made sense of why that might be, i.e. the fkey trigger does its job, and the rule then kicks in and does its own, but the fkey trigger won't check that its job was done properly for performance reasons.
The practical use case where I still use rules is when a BEFORE trigger that pre-manipulates data (the SQL standard says this is not allowed, but Postgres will happily oblige) ends up pre-manipulating the affected rows and thus changing their ctid (i.e. a row gets updated twice, or doesn't get deleted because an update invalidated the delete).
This results in Postgres reporting an incorrect number of affected rows, which is no big deal unless you monitor that number before issuing subsequent statements.
In this case, I've found that using a strategically placed rule or two can allow to pre-emptively execute the offending statement(s), resulting in Postgres returning the correct number of affected rows.
How about this: you have a table that needs to be changed into a view. To support legacy applications that insert into said table, a rule is created that maps inserts against the new view onto the underlying table(s).
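A hedged sketch of that legacy-compatibility pattern (pre-9.1 style; all names are invented):
-- The original table was split in two; legacy apps still INSERT INTO customers.
CREATE VIEW customers AS
SELECT p.id, p.name, a.city
FROM persons p
JOIN addresses a ON a.person_id = p.id;

-- Map legacy inserts onto the underlying tables.
CREATE RULE customers_ins AS
ON INSERT TO customers
DO INSTEAD (
    INSERT INTO persons (id, name) VALUES (NEW.id, NEW.name);
    INSERT INTO addresses (person_id, city) VALUES (NEW.id, NEW.city);
);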

Postgresql: keep 2 sequences synchronized

Is there a way to keep 2 sequences synchronized in Postgres?
I mean if I have:
table_A_id_seq = 1
table_B_id_seq = 1
if I execute SELECT nextval('table_A_id_seq'::regclass)
I want table_B_id_seq to take the same value as table_A_id_seq,
and obviously it must work the same way in the other direction.
I need 2 different sequences because I have to hack some constraints I have in Django (and that I cannot solve there).
The two tables must be related in some way? I would encapsulate that relationship in a lookup table containing the sequence and then replace the two tables you expect to be handling with views that use the lookup table.
Just use one sequence for both tables. You can't keep them in sync unless you always sync them again and over again. Sequences are not transaction safe, they always roll forwards, never backwards, not even by ROLLBACK.
Edit: one sequence alone is also not going to work; it doesn't give you the same number for both tables. Use a subquery to get the correct number and use just a single sequence for a single table. The other table has to use the subquery.
My first thought on seeing this is: why do you really want to do this? It smells a little spoiled, kinda like milk a few days past its expiry.
What is the scenario that requires these two sequences to stay at the same value?
Ignoring the "this seems a bit odd" feelings I'm getting in my stomach, you could try this:
Put a trigger on table_a that does this on insert.
--set b seq to the value of a.
select setval('table_b_seq',currval('table_a_seq'));
The problem with this approach is that it assumes only an insert into table_a will change the table_a_seq value and nothing else will increment table_a_seq. If you can live with that, this may work in a really hackish fashion that I wouldn't release to production if it were my call.
If you really need this, make it more robust by creating a single interface for incrementing table_a_seq, such as a function, and only allow manipulation of table_a_seq via this function. That way there is one interface to increment table_a_seq, and you should also put
select setval('table_b_seq',currval('table_a_seq')); into that function. Then, no matter what, table_b_seq will always be set equal to table_a_seq. That means removing the users' grants on table_a_seq and granting them only EXECUTE on the new function.
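A hedged sketch of that single-interface function (PostgreSQL; the sequence names come from the answer above, the function name is invented):
-- The only allowed way to advance table_a_seq; keeps table_b_seq in lockstep.
CREATE FUNCTION next_shared_id() RETURNS bigint AS $$
DECLARE
    v bigint;
BEGIN
    v := nextval('table_a_seq');
    PERFORM setval('table_b_seq', v);
    RETURN v;
END;
$$ LANGUAGE plpgsql;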
You could put an INSERT trigger on Table_A that executes some code that increases Table_B's sequence. Now, every time you insert a new row into Table_A, it will fire off that trigger.
