Strategy for rolling back an altered table using liquibase - sql-server

I want to migrate my database from v1.0 to v1.1, and one of the changes updates some of the values in Table1. I know that for an INSERT I can easily include a rollback command that deletes the values I just added, but what about a table alteration? Is there a way to store the current values and use that information for the rollback process (in the future)?
Thanks.

You can specify a <rollback> block (docs) in your changeset to describe how to roll back the change. Within your rollback tag you can use raw SQL or a <createTable> tag to re-describe what the table looked like before it was altered.
You can also specify the changeSetId and changeSetAuthor in the rollback tag to point to an existing changeSet that will recreate the table. This approach can be easier if there have been no other changes since the object was created, but it doesn't work as well if multiple changeSets have modified the object since it was first created.
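For the concrete case in the question (an UPDATE of existing values), the usual approach is a rollback whose SQL restores the old values. A minimal sketch using a SQL-formatted changelog (author, table, column, and values are all hypothetical; the XML <rollback> tag works the same way):
--liquibase formatted sql
--changeset alice:update-table1-status
-- (hypothetical table, column, and values)
UPDATE Table1 SET Status = 'MIGRATED' WHERE Status = 'LEGACY';
--rollback UPDATE Table1 SET Status = 'LEGACY' WHERE Status = 'MIGRATED';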

Any DDL operation (ALTER TABLE being one of them) in SQL Server is transactional.
It means that you can open a transaction, do alterations to the database objects, and rollback the transaction as if it never happened.
There are some exceptions, mainly actions involving filesystem operations (adding a file to the database and the like).
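A quick T-SQL sketch of that behaviour (table and column names are made up):
BEGIN TRANSACTION;
ALTER TABLE dbo.Table1 ADD TempCol INT NULL;   -- DDL executed inside the transaction
ROLLBACK TRANSACTION;                          -- the new column is gone again
-- SELECT TempCol FROM dbo.Table1;             -- would now fail: the column no longer exists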

Related

Postgresql Database Design Questions (Trigger vs Function)

I am building a database for a CMS system and I am at a point where I am no longer sure which way to go, noting that all of the business logic is in the database layer (we use PostgreSQL 13 and the application is planned to be a SaaS):
1- The application has folders and documents associated with them. If we move a folder (or a group of folders in bulk) from its parent folder to another, then the permissions of the folder, as well as those of the underlying documents, must follow the permissions of the new location (an update is sent to a permissions table). Is this better enforced via an AFTER statement trigger, or should we force all of the code to call a single method that moves the folder and documents and updates their permissions?
2- Wouldn't it make more sense to have an AFTER statement trigger rather than an AFTER row trigger in all cases, since they do the same thing, but with statement triggers you can process all affected rows in bulk (and thus more efficiently)? So if I were to enforce inserting a record into another table whenever an update or an insert takes place, the performance would be similar for a single row, but a lot faster for 1,000 rows with the statement-level trigger (since I can easily do INSERT INTO .. SELECT * FROM new_table).
You need a row level trigger or a statement level trigger with transition tables, so that you know which rows were affected by the statement. To avoid repetition, the latter might be a better choice.
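A minimal sketch of the statement-level variant with a transition table, using invented table and column names:
CREATE FUNCTION log_folder_moves() RETURNS trigger AS $$
BEGIN
    -- moved_rows is the transition table: every row touched by the UPDATE
    INSERT INTO folder_audit (folder_id, new_parent_id, changed_at)
    SELECT id, parent_id, now()
    FROM moved_rows;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER folders_moved
    AFTER UPDATE ON folders
    REFERENCING NEW TABLE AS moved_rows
    FOR EACH STATEMENT
    EXECUTE FUNCTION log_folder_moves();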
Rather than modifying permissions whenever you move an object, you could figure out the permissions when you query the table by recursively following the chain of containment. The question here is if you prefer to do the extra work when you modify the data or when you query the data.
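And a sketch of the query-time alternative: walk the containment chain with a recursive CTE and take the permission of the nearest ancestor that has one (the schema here is purely illustrative):
WITH RECURSIVE chain AS (
    SELECT id, parent_id, 0 AS depth
    FROM folders
    WHERE id = 42                       -- the folder being checked
    UNION ALL
    SELECT f.id, f.parent_id, c.depth + 1
    FROM folders f
    JOIN chain c ON f.id = c.parent_id
)
SELECT p.*
FROM chain c
JOIN folder_permissions p ON p.folder_id = c.id
ORDER BY c.depth                        -- the closest ancestor wins
LIMIT 1;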

What is the best way to write a trigger for two linked databases in Oracle

What is the best method to write a database trigger that updates a table in the target database when it notices a change in a source database table?
For example: I have source-database.source-table and target-database.target-table. I want to insert an entry into target-database.target-table whenever there is a change in source-database.source-table. Can I write something like this?
Method 1: Write trigger on target database:
create or replace trigger "target-database"."target-trigger"
after update on source-database.source-table#source-dblink
for each row
where (:new.some-col <> :old.some-col)
begin
insert into target-database.target-table ("col1","col2","col3")
values
("value1","value2","value3")
end;
Method 2: Write trigger on source database
create or replace trigger "source-database"."source-trigger"
after update on source-database.source-table
for each row
where (:new.some-col <> :old.some-col)
begin
insert into target-database.target-table#target-dblink ("col1","col2","col3")
values
("value1","value2","value3")
end;
If you were going to create a trigger to implement replication, the trigger would need to exist on the source database; it's not syntactically valid to create the trigger on the target database. If you did create the trigger on the source database, you'd need to use the database link to reference the target table; you wouldn't have a database link in the ON <<table name>> clause.
However, you really, really don't want to use a trigger to implement replication. Oracle provides a host of tools to implement replication-- materialized views, Streams, Golden Gate, etc. You really, really want to be using one of those solutions.
If you use a trigger to replicate data, you're significantly reducing the availability of the system. The transaction against the source table can only succeed if the remote database is up and running and if the network link between the two is up. That forces the two systems to be tightly coupled-- you can't take one site down for maintenance without affecting the other.
If you use a trigger to replicate data, you're significantly affecting the performance of the system. The transaction against the source table now has to involve a two-phase commit with the remote database. That's going to involve multiple network round-trips and will generally be rather slow (certainly slow compared to a local transaction).
A real replication solution, on the other hand, will replicate the data asynchronously with little or no effect on the performance of transactions. If both systems are up, the data will replicate after a very short lag. If the destination system is unavailable, local transactions will still succeed and the data will replicate when the destination system comes back up.
Intuitively, I would rather start in the source DB and do something like:
create or replace trigger source_trigger
after update on source_table
for each row
when (new.some_col <> old.some_col)   -- no colon prefix on new/old inside the WHEN clause
begin
  -- the database link appears only here, in the INSERT against the target
  insert into target_table@target_dblink (col1, col2, col3)
  values ('value1', 'value2', 'value3');
end;
/
The trigger definition itself contains no reference to a DB link; only the INSERT does, which is preferable IMHO.

Fire triggers on SELECT

I'm new to triggers and I need to fire a trigger when selecting values from a database table in SQL Server. I have tried firing triggers on insert, update, and delete. Is there any way to fire a trigger when selecting values?
There are only two ways I know of that you can do this, and neither is a trigger.
You can use a stored procedure to run the query and log the query, along with any other information you'd like to capture, to a table (sketched below).
You can use the audit feature of SQL Server.
I've never used the latter, so I can't speak of the ease of use.
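A rough sketch of the stored-procedure approach (all object names are invented):
CREATE TABLE dbo.SelectAudit (
    AuditId   INT IDENTITY(1,1) PRIMARY KEY,
    UserName  SYSNAME   NOT NULL,
    QueriedAt DATETIME2 NOT NULL DEFAULT SYSDATETIME()
);
GO
CREATE PROCEDURE dbo.GetCustomer @CustomerId INT
AS
BEGIN
    -- log who read the data and when
    INSERT INTO dbo.SelectAudit (UserName) VALUES (SUSER_SNAME());
    -- then run the actual query
    SELECT * FROM dbo.Customer WHERE CustomerId = @CustomerId;
END;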
No, there is no provision for a trigger on a SELECT operation. As suggested in the earlier answer, write a stored procedure which takes the parameters that are fetched from the SELECT query, and call this procedure after the desired SELECT query.
SpectralGhost's answer assumes you are trying to do something like a security audit of who or what has looked at which data.
But it strikes me that if you are new enough to SQL not to know that a SELECT trigger is conceptually daft, you may be trying to do something else, in which case you're really talking about locking rather than auditing - i.e. once one process has read a particular record you want to prevent other processes from accessing it (or possibly some other related records in a different table) until the transaction is either committed or rolled back. In that case, triggers are definitely not your solution (they rarely are). See BOL on transaction control and locking.

Specify trigger's parent schema in trigger body

In DB2 for IBM System i I create this trigger to record in MYLOGTABLE every insert operation made on MYCHECKEDTABLE:
SET SCHEMA MYSCHEMA;
CREATE TRIGGER MYTRIGGER AFTER INSERT ON MYCHECKEDTABLE
REFERENCING NEW AS ROWREF
FOR EACH ROW BEGIN ATOMIC
INSERT INTO MYLOGTABLE -- after creation becomes MYSCHEMA.MYLOGTABLE
(MMACOD, OPTYPE, OPDATE)
VALUES (ROWREF.ID, 'I', CURRENT TIMESTAMP);
END;
The DBMS stores the trigger body with MYSCHEMA.MYLOGTABLE hardcoded.
Now imagine that we copy the entire schema as a new schema, NEWSCHEMA. When I insert a record into NEWSCHEMA.MYCHECKEDTABLE, a log record will be added to MYSCHEMA.MYLOGTABLE instead of NEWSCHEMA.MYLOGTABLE, i.e. the schema where the copied trigger and its table now live. This causes big issues, especially because many users can copy the schema outside of my control...
So, is there a way to specify, in the trigger body, the schema where the trigger lives? In this way we'll write the log record in the correct MYLOGTABLE. Something like PARENT SCHEMA... Or is there a workaround?
Many thanks!
External triggers defined in an HLL (high-level language) have access to a trigger buffer that includes the library name of the table that fired the trigger. This could be used to qualify the reference to MYLOGTABLE.
See chapter 11.2 "Trigger program structure" of the IBM Redbook Stored Procedures, Triggers, and User-Defined Functions on DB2 Universal Database for iSeries for more information.
Alternatively you may be able to use the CURRENT SCHEMA special register or the GET DESCRIPTOR statement to find out where the trigger and/or table are currently located.
Unfortunately I realized that the schema where a trigger lives can't be detected from inside the trigger's body.
But there are some workarounds (thanks to @krmilligan too):
Take away the user's authority to execute CPYLIB and make them use a utility.
Create a background agent on the system that periodically runs looking for triggers that are out of sync.
For the CPYLIB command, set the default for the TRG option to *NO. That way triggers will never be copied unless the user explicitly asks for them.
I chose the last one because it's the simplest, even if there can be contexts where copying triggers is required. In such cases I'd take the first workaround.

How do I create a stored procedure whose effects cannot be rolled back?

I want to have a stored procedure that inserts a record into tableA and updates record(s) in tableB.
The stored procedure will be called from within a trigger.
I want the inserted records in tableA to exist even if the outermost transaction of the trigger is rolled back.
The records in tableA are linearly linked and I must be able to rebuild the linear connection.
Write access to tableA is only ever through the triggers.
How do I go about this?
What you're looking for are autonomous transactions, and these do not exist in SQL Server today. Please vote / comment on the following items:
http://connect.microsoft.com/SQLServer/feedback/details/296870/add-support-for-autonomous-transactions
http://connect.microsoft.com/SQLServer/feedback/details/324569/add-support-for-true-nested-transactions
What you can consider doing is using xp_cmdshell or CLR to step outside the SQL engine and come back in (those actions can't be rolled back by SQL Server)... but these methods aren't without their own issues.
Another idea is to use INSTEAD OF triggers - you can log/update other tables and then just decide not to proceed with the actual action.
EDIT
And along the lines of @VoodooChild's suggestion, you can use a @table variable to temporarily hold data that you can reference after the rollback - this data will survive a rollback, unlike an insert into a #temp table.
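A small sketch of that trick (the permanent log table is hypothetical):
DECLARE @log TABLE (msg NVARCHAR(200));
BEGIN TRANSACTION;
    -- ... work that may need to be undone ...
    INSERT INTO @log (msg) VALUES (N'step 1 reached');
ROLLBACK TRANSACTION;
-- the table variable keeps its rows; persist them after the rollback
INSERT INTO dbo.PermanentLog (msg)
SELECT msg FROM @log;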
See this post, Logging messages during a transaction, for a (somewhat convoluted) but effective way of achieving what you want: the insert into the logging table is persisted even if the transaction is rolled back. The method Simon proposes has several advantages: it requires no changes to the caller, it is fast and scalable, and it can be used safely from within a trigger. Simon's example is for logging, but the insert can be for anything.
One way is to create a linked server that points to the local server. Stored procedures executed over a linked server won't be rolled back:
EXEC LinkedServer.DbName.dbo.sp_LogInfo 'this won''t be rolled back'
You can call a remote stored procedure from a trigger.
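Setting up such a loopback linked server might look roughly like this (the server name is an assumption; the sp_serveroption calls enable remote procedure calls and keep them from being promoted into the caller's distributed transaction, which is what lets the logged data survive a rollback):
EXEC sp_addlinkedserver
     @server     = N'LoopbackServer',
     @srvproduct = N'',
     @provider   = N'SQLNCLI',
     @datasrc    = N'localhost';   -- points back at the same instance
EXEC sp_serveroption N'LoopbackServer', N'rpc out', N'true';
EXEC sp_serveroption N'LoopbackServer', N'remote proc transaction promotion', N'false';
EXEC LoopbackServer.DbName.dbo.sp_LogInfo 'this won''t be rolled back';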

Resources