I'm trying to explore alternatives to using insert triggers, such as API-based approaches, and the pros and cons of the different approaches.
In an API approach you would create a procedure to perform both operations - something like:
package body emp_api is
  procedure insert_emp (...) is
  begin
    insert into emp (...) values (...);
    -- insert that was previously performed by the trigger
    insert into other_table (...) values (...);
  end insert_emp;
end emp_api;
Then you force applications to use the API by granting them EXECUTE on the API package, but no INSERT/UPDATE/DELETE access to the underlying tables.
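For example (a sketch; app_user is a hypothetical application account that was never granted DML on the tables directly):
grant execute on emp_api to app_user;
-- and do NOT grant insert/update/delete on emp or other_table to app_user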
If you want to guarantee that you'll have a record inserted into tableB whenever something inserts into tableA, then keep the trigger. You can disable it while bulk loading into tableA, provided you can guarantee yours is the only process loading into that table during that time.
As soon as you remove the trigger, you have NO guarantees about inserts into tableB. Your only hope is that any and all programs that may insert into tableA (do you really know all of these?) adhere to the secondary insert into tableB. This is "data integrity via company policy", not data integrity enforced via Oracle.
This approach depends on how much you care about the state of the data in tableB I suppose.
I would NOT go the route of table apis (TAPIs), which now force any/all operations through some pl/sql api that handles the logic. These almost always tend to be slow and buggy in my experience.
In DDL, you can disable a trigger with ALTER TRIGGER or ALTER TABLE:
ALTER TRIGGER triggername DISABLE; -- disable a single trigger
ALTER TABLE tablename DISABLE ALL TRIGGERS; -- disable all triggers on a table
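Re-enabling works the same way:
ALTER TRIGGER triggername ENABLE; -- re-enable a single trigger
ALTER TABLE tablename ENABLE ALL TRIGGERS; -- re-enable all triggers on a table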
To do this at runtime, you would have to use dynamic SQL and the schema in which the procedure is running must own the table or otherwise have the necessary privileges.
EXECUTE IMMEDIATE 'ALTER TABLE tablename DISABLE ALL TRIGGERS';
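For example, around a bulk load (a sketch; tableA is the hypothetical table being loaded):
begin
  execute immediate 'alter table tableA disable all triggers';
  -- ... perform the bulk load here ...
  execute immediate 'alter table tableA enable all triggers';
end;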
For more info on enabling/disabling triggers, see http://download.oracle.com/docs/cd/B28359_01/server.111/b28310/general004.htm
Related
I have a table A with 3000 records and 25 columns. I want to have a history table, called table A history, holding all the changes, updates, and deletes so I can look anything up on any day. I usually use cursors. Now I'm thinking of using triggers, which I was not asked to use. Do you have any other suggestions? Many thanks!
If you're using T-SQL/SQL Server and you can't use triggers (which are the only sure way to catch every change), maybe use a stored procedure scheduled in a job to run every X amount of time; the stored procedure would use a MERGE statement over the two tables to pick up new records or changes. I would not suggest this if you need every single change without question.
CREATE TABLE dbo.TableA (id INT, Column1 nvarchar(30))
CREATE TABLE dbo.TableA_History (id INT, Column1 nvarchar(30), TimeStamp DateTime)
(this code isn't production, just the general idea)
Put the following code inside a stored procedure and use a Sql Server Job with a schedule on it.
MERGE INTO dbo.TableA_History
USING dbo.TableA
ON TableA_History.id = TableA.id AND TableA_History.Column1 = TableA.Column1
WHEN NOT MATCHED BY TARGET THEN
INSERT (id, Column1, TimeStamp) VALUES (TableA.id, TableA.Column1, GETDATE());
So basically, if the record either doesn't exist or doesn't match (meaning a column changed), insert the record into the history table.
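Wrapped in a procedure for the job to call, it might look something like this (a sketch; the procedure name usp_SyncTableAHistory is hypothetical):
CREATE PROCEDURE dbo.usp_SyncTableAHistory
AS
BEGIN
    SET NOCOUNT ON;
    -- copy new or changed rows into the history table
    MERGE INTO dbo.TableA_History
    USING dbo.TableA
    ON TableA_History.id = TableA.id AND TableA_History.Column1 = TableA.Column1
    WHEN NOT MATCHED BY TARGET THEN
    INSERT (id, Column1, TimeStamp) VALUES (TableA.id, TableA.Column1, GETDATE());
END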
It is possible to create history without triggers in some cases, even if you are not using SQL Server 2016 and system-versioned tables are not available.
In some cases, when you can identify for sure which routines modify your table, you can create history using the OUTPUT INTO clause.
For example,
INSERT INTO [dbo].[MainTable]
OUTPUT inserted.[Column1]
,...
,'I'
,GETUTCDATE()
,@CurrentUserID
INTO [dbo].[HistoryTable]
SELECT *
FROM ... ;
In routines where you are using MERGE, I like that we can use $action:
Is available only for the MERGE statement. Specifies a column of type nvarchar(10) in the OUTPUT clause in a MERGE statement that returns one of three values for each row: 'INSERT', 'UPDATE', or 'DELETE', according to the action that was performed on that row.
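For example, a MERGE that logs every action to the history table might look like this (a sketch; the staging table, the history column names, and @CurrentUserID are assumptions):
DECLARE @CurrentUserID int = 42; -- hypothetical; normally passed in by the caller

MERGE INTO dbo.MainTable AS t
USING dbo.StagingTable AS s
    ON t.id = s.id
WHEN MATCHED THEN
    UPDATE SET t.Column1 = s.Column1
WHEN NOT MATCHED BY TARGET THEN
    INSERT (id, Column1) VALUES (s.id, s.Column1)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE
OUTPUT COALESCE(inserted.id, deleted.id),         -- deleted.* covers the DELETE case
       COALESCE(inserted.Column1, deleted.Column1),
       $action,
       GETUTCDATE(),
       @CurrentUserID
INTO dbo.HistoryTable (id, Column1, Action, ModifiedAtUtc, ModifiedByUserID);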
It's very handy that we can add the user who is modifying the table. With triggers you need to use session context or a session variable to pass the user. With system-versioned tables you need to add an additional column to the main table in order to log the user, as versioning only logs the current table's columns (at least for now).
So, basically it depends on your data and application. If you have many sources of CRUD over the table, the trigger is the most secure way. If your table is very big and heavily used, using MERGE is not good, as it may cause blocking and harm performance.
In our databases we are using all of the methods depending on the situation:
triggers for legacy
system-versioning for new development
direct OUTPUT into the history table, when we are sure the data is modified only by a given set of routines
I want to create a stored procedure that inserts data into 3 tables using transactions.
I get the last primary key value of the main table using MAX, and it takes a little time to get it.
My problem is that when several requests come in at the same time, all of them get the same result for the last record. How can I lock the transaction, or is there another solution?
I know I can make an identity field and use it with SCOPE_IDENTITY, but I don't want to do this unless I'm forced to.
The best way would be to use an IDENTITY column or a SEQUENCE for your primary key. You can add another column for your user-generated unique key, which is based on your logic for the other tables.
Identity Approach
In essence what you would do is:
Insert the row into the main table.
Calculate your unique key using MAX(unique key) WHERE id < SCOPE_IDENTITY() plus your additional logic, and update the main table (see the sketch after this list).
Insert into the other tables.
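A minimal sketch of those steps (assuming a hypothetical dbo.MainTable with an IDENTITY column id and a UniqueKey column, and two hypothetical child tables):
BEGIN TRAN;

INSERT INTO dbo.MainTable (Column1) VALUES (N'example');
DECLARE @NewID int = SCOPE_IDENTITY();

-- derive the user-generated key from earlier rows plus whatever extra logic you need
UPDATE dbo.MainTable
SET UniqueKey = (SELECT ISNULL(MAX(UniqueKey), 0) + 1
                 FROM dbo.MainTable WHERE id < @NewID)
WHERE id = @NewID;

INSERT INTO dbo.ChildTable1 (MainID) VALUES (@NewID);
INSERT INTO dbo.ChildTable2 (MainID) VALUES (@NewID);

COMMIT;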
Locking Approach
If you want to lock transactions (not recommended), you can use a serializable transaction with WITH (UPDLOCK) and do something like:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRAN
SELECT MAX(ID) FROM MainTable WITH (UPDLOCK)
-- do stuff, generate the new ID, and insert into the main table
COMMIT
Note: Until you COMMIT or ROLLBACK, this transaction will block both readers and writers.
The best way to resolve this is using SCOPE_IDENTITY, but you mentioned you don't want to. I think you need to get the max id from the table and combine it with something else to generate a unique value to use in your table.
The best solution for this is a transaction at the SERIALIZABLE isolation level, like below:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRAN
-- get your last ID and use it; other sessions will wait until you COMMIT or ROLLBACK
COMMIT TRANSACTION -- or ROLLBACK
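Putting the whole pattern together, a sketch (the names dbo.MainTable, dbo.ChildTable1, and dbo.ChildTable2 are hypothetical):
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRAN;

DECLARE @NewID int;
-- UPDLOCK plus the serializable range lock make concurrent sessions wait here
SELECT @NewID = ISNULL(MAX(ID), 0) + 1
FROM dbo.MainTable WITH (UPDLOCK);

INSERT INTO dbo.MainTable (ID, Column1) VALUES (@NewID, N'example');
INSERT INTO dbo.ChildTable1 (MainID) VALUES (@NewID);
INSERT INTO dbo.ChildTable2 (MainID) VALUES (@NewID);

COMMIT;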
Is there any way to disable auto-creation of statistics on a specific table in a database, without disabling auto-creation of statistics for the entire database?
I have a procedure which is written as follows:
create proc someProc
as
create table #someTempTable (...) -- many columns, more than 100
insert into #someTempTable ... -- always one or two rows
exec proc1
exec proc2
-- etc.
proc1, proc2, etc. contain many selects and updates like this:
select ..
from #someTempTable t
join someOrdinaryTable t2 on ...
update #someTempTable set col1 = somevalue
Profiler shows that before each select, the server starts collecting stats on #someTempTable, and this takes more than a quarter of the entire execution of the proc. The proc is used in OLTP processing and should work very fast. I want to change this temporary table to a table variable (because the server doesn't collect stats for table variables), but I can't, because that would force me to rewrite all of these procedures to pass variables between them, and all of this legacy code would have to be retested. I'm searching for an alternative way to make the server treat the temporary table like a table variable as far as collecting stats is concerned.
P.S. I know that stats are a useful thing, but in this case they're useless because the table always contains a small number of records.
I assume you know what you are doing; disabling statistics is generally a bad idea. Anyhow:
EXEC sp_autostats 'table_name', 'OFF'
More documentation here: https://msdn.microsoft.com/en-us/library/ms188775.aspx.
Edit: OP clarified that he wants to disable statistics for a temp table. Try this:
CREATE TABLE #someTempTable
(
ID int PRIMARY KEY WITH (STATISTICS_NORECOMPUTE = ON),
...other columns...
)
If you don't have a primary key already, use an identity column for a PK.
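If you want to verify the setting took effect, the no_recompute flag is exposed in tempdb's catalog views; for example (run in the same session that created the temp table):
SELECT s.name, s.no_recompute
FROM tempdb.sys.stats AS s
WHERE s.object_id = OBJECT_ID('tempdb..#someTempTable');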
I'm migrating our system from Oracle to SQL Server. In Oracle we have insert triggers that are responsible for setting the primary key if it is not set. Below you will find the PL/SQL code.
create or replace trigger trigg1
before insert on table1
for each row
when (new.ID_T1 is null) -- if primary key is null
begin
select OUR_SEQ.nextval into :new.ID_T1 from dual;
end trigg1;
Now I have to do something similar in T-SQL. I found a solution, but unfortunately I have to list all the columns of the table the trigger is created on. This is something I want to avoid (the model for the system is still very dynamic).
Is it possible to implement such a trigger without listing all the columns in the trigger?
Marcin
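For what it's worth, in SQL Server 2012 and later a trigger-free way to get this behavior is to bind a SEQUENCE to the column via a DEFAULT constraint, which requires no column list at all. A sketch reusing the names from the Oracle example (note that a DEFAULT only fires when the column is omitted from the INSERT, not when an explicit NULL is supplied):
CREATE SEQUENCE dbo.OUR_SEQ START WITH 1 INCREMENT BY 1;

CREATE TABLE dbo.table1
(
    ID_T1 int NOT NULL
        CONSTRAINT DF_table1_ID DEFAULT (NEXT VALUE FOR dbo.OUR_SEQ)
        CONSTRAINT PK_table1 PRIMARY KEY,
    SomeColumn nvarchar(30) NULL -- hypothetical; the remaining columns are unaffected
);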
I am trying to write a trigger on a table to prevent insertion of two names that are not flagged as IsDeleted. But the first part of the selection contains the inserted row, so the condition is always true. I thought that using the FOR keyword would cause the trigger to run before the INSERT, but in this case the inserted row is already in the table. Am I wrong, or is this how all FOR triggers work?
ALTER TRIGGER TriggerName
ON MyTable
FOR INSERT, UPDATE
AS
BEGIN
    IF EXISTS (SELECT [Name] FROM MyTable
               WHERE IsDeleted = 0 AND [Name] IN (SELECT [Name] FROM INSERTED))
    BEGIN
        RAISERROR ('ERROR Description', 16, 1);
        ROLLBACK;
    END
END
FOR runs after the data is changed; INSTEAD OF is what I think you are after.
EDIT: As stated by others, INSTEAD OF runs in place of the statement you issued, therefore you need to insert the data yourself if it is valid, rather than stopping the insert if it is invalid; see the sketch below.
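A minimal sketch of that pattern for the INSERT case (assuming MyTable has just the [Name] and IsDeleted columns from the question):
CREATE TRIGGER TriggerName
ON MyTable
INSTEAD OF INSERT
AS
BEGIN
    -- the new rows are not in MyTable yet, so this check does not see them
    IF EXISTS (SELECT 1
               FROM MyTable m
               JOIN INSERTED i ON i.[Name] = m.[Name]
               WHERE m.IsDeleted = 0)
    BEGIN
        RAISERROR ('ERROR Description', 16, 1);
        ROLLBACK;
        RETURN;
    END

    -- the rows are valid, so perform the insert ourselves
    INSERT INTO MyTable ([Name], IsDeleted)
    SELECT [Name], IsDeleted FROM INSERTED;
END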
Read this question for a much more detailed explanation of the types of Triggers.
SQL Server "AFTER INSERT" trigger doesn't see the just-inserted row
FOR is the same as AFTER. If you want to "simulate" a BEFORE trigger, use INSTEAD OF. Caveat: it's not exactly what you would expect from a proper BEFORE trigger, i.e. if you fail to perform the necessary INSTEAD action, your inserted/updated data can be lost/ignored.
MSSQL doesn't have BEFORE triggers.
For SQL Server, FOR runs AFTER the SQL which triggered it.
From:
http://msdn.microsoft.com/en-us/library/ms189799.aspx
FOR | AFTER
AFTER specifies that the DML trigger is fired only when all operations specified in the triggering SQL statement have executed successfully. All referential cascade actions and constraint checks also must succeed before this trigger fires.
AFTER is the default when FOR is the only keyword specified.
AFTER triggers cannot be defined on views.
I've actually run into a similar problem lately, and found a cool way to handle it. I had a table which could have several rows for one id, but only ONE of them could be marked as primary.
In SQL Server 2008, you'll be able to create a partial (filtered) unique index, something like this:
create unique index IX on MyTable(name) where isDeleted = 0;
However, you can accomplish it with a little more work in SQL Server 2005. The trick is to make a view showing only the rows which aren't deleted, and then create a unique clustered index on it:
create view MyTableNotDeleted_vw
with schemabinding /* Must be schema bound to create an indexed view */
as
select name
from dbo.MyTable /* Have to use dbo. for schema bound views */
where isDeleted = 0;
GO
create unique clustered index IX on MyTableNotDeleted_vw ( name );
This will effectively create a unique constraint only affecting rows that haven't yet been deleted, and will probably perform better than a custom trigger!