Sending incremental data to another application from an Oracle database - even a small suggestion would be very helpful

I have one table, let's suppose Item. Many DML operations happen on this table daily. Whatever DML (insert, update, delete) happens on this table, I need to push that transaction data into another application using APIs.
If 2 records get inserted into the Item table, 1 updated and 1 deleted, I need to send the data to the other application in the form below. The file will be in JSON format.
I can create the file below; my question is about how to extract the daily transactional data.
{
  "insert": ["A1", "A2"],
  "delete": "B1",
  "update": "C1"
}
Something like the above: if A1 and A2 were inserted into the Item table, B1 was deleted and C1 was updated, I will send the data in this format to the target application so it can apply the changes.
To do this I created one more table, Item_trigger, and a trigger on the Item table, so whenever any DML happens the trigger inserts rows into the Item_trigger table such as
('A1','Insert'), ('A2','Insert'), ('B1','Delete'), ('C1','Update')
Then, using the Item_trigger table, I create the file and send the data to the target system.
This design has been rejected because I am using a trigger. Is there a better solution? I was thinking about a materialized view, but it does not capture deletes, so I cannot use that either.
Could you please help me with the design. Is there any way to record these transactions without using a trigger?

You can make use of statement-level auditing on the particular table. That will only tell you what type of operation was performed, not the actual data, so you can combine this information with storing the inserted, deleted and updated values in another table, or use the main table directly to transmit the data.
Below is the script:
AUDIT SELECT, INSERT, UPDATE, DELETE ON test.test_audit BY ACCESS;  -- enable statement-level auditing on the table
DELETE FROM test_audit WHERE id <= 10;                              -- example DML against the audited table
SELECT * FROM dba_audit_object WHERE obj_name = 'TEST_AUDIT';       -- the operation now appears in the audit trail
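For illustration, a minimal sketch of combining the audit trail with a separate table for the actual values; the item_changes table and its columns are hypothetical, and dba_audit_object assumes traditional (non-unified) auditing is in use:
-- Hypothetical table the application writes the changed values to (no trigger involved)
CREATE TABLE item_changes (
    item_id     VARCHAR2(30),
    change_type VARCHAR2(10),          -- 'INSERT' / 'UPDATE' / 'DELETE'
    changed_at  DATE DEFAULT SYSDATE
);

-- Which operations ran against the audited table in the last day
SELECT obj_name, action_name, timestamp
  FROM dba_audit_object
 WHERE obj_name = 'TEST_AUDIT'
   AND timestamp > SYSDATE - 1;
The daily extract can then join the audit information (what happened, and when) with the stored values (which rows) to build the JSON payload.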

Related

How to write code to time travel using a specific transaction ID

I would like to use the Time Travel feature on Snowflake to restore the original table.
I've deleted and created the table using the following commands:
DROP TABLE "SOCIAL_LIVE"
CREATE TABLE "SOCIAL_LIVE" (...)
I would like to go back to the original table from before it was dropped.
I've used the following code (the transaction ID is hidden as 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'):
Select "BW"."PUBLIC"."SOCIAL_LIVE".* From "BW"."PUBLIC"."SOCIAL_LIVE";
select * from SOCIAL_LIVE before(statement => 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx');
Received an error message:
Statement xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx cannot be used to specify time for time travel query.
How can we go back to the original table and restore it on snowflake?
The documentation states:
After dropping a table, creating a table with the same name creates a new version of the table. The dropped version of the previous table can still be restored using the following method:
Rename the current version of the table to a different name.
Use the UNDROP TABLE command to restore the previous version.
If you need further information, this page is useful:
https://docs.snowflake.net/manuals/sql-reference/sql/drop-table.html#usage-notes
You will need to undrop the table in order to access that data, though. Time-travel is not maintained by name alone. So, once you dropped and recreated the table, the new table has its own, new time travel.
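In this case that would look roughly like the following (SOCIAL_LIVE_NEW is just a placeholder name for the recreated table):
-- Move the recreated table out of the way
ALTER TABLE "BW"."PUBLIC"."SOCIAL_LIVE" RENAME TO "BW"."PUBLIC"."SOCIAL_LIVE_NEW";

-- Restore the most recently dropped table with that name
UNDROP TABLE "BW"."PUBLIC"."SOCIAL_LIVE";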
Looks like there are 3 common reasons that error is seen, with solutions:
1. The table has been dropped and recreated - see this answer.
2. The time travel period has been exceeded - no solution: target a statement within the time travel period for the table.
3. The wrong statement type is being targeted - only certain statement types can be targeted. Currently these include SELECT, BEGIN, COMMIT, and DML (INSERT, UPDATE, etc.). See the documentation here.
Statement xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx cannot be used to specify time for time travel query.
Usually we get the above error when trying to travel back to a point before the object was created. Try the time travel option with the OFFSET parameter instead.
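For example, something along these lines (the offset is given in seconds relative to the current time):
-- Query the table as it existed 10 minutes ago
SELECT * FROM SOCIAL_LIVE AT(OFFSET => -60*10);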

Track user activity using Audit tables

I am trying to implement a way to track changes to tables named gsbirst_Objects and gsbirst_Objects_Backup. It should record DML and TRUNCATE statements.
I have a stored procedure that updates the main table when it is called. How can I capture the changes at the beginning and at the end of the stored procedure call?
I have already created the backup table.
I did this a while back using triggers; it isn't the best way, but it works. You can create an audit table, then build a trigger for each action. I made triggers ON DELETE, ON UPDATE, and ON INSERT. I would then grab the record that was inserted, updated, or deleted, concatenate the row together, and load a before and after image into the audit table depending on what happened. This route gave me a more detailed view of what happened and what changed.
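As a rough T-SQL sketch of that approach (the audit table layout and the ObjectId/ObjectName columns are hypothetical; adjust them to the real columns of gsbirst_Objects, and note that DML triggers do not fire on TRUNCATE):
-- Hypothetical audit table
CREATE TABLE dbo.gsbirst_Objects_Audit (
    AuditId    INT IDENTITY(1,1) PRIMARY KEY,
    ObjectId   INT,
    Operation  VARCHAR(10),                 -- 'INSERT' / 'UPDATE' / 'DELETE'
    BeforeRow  NVARCHAR(MAX),               -- concatenated before image
    AfterRow   NVARCHAR(MAX),               -- concatenated after image
    ChangedAt  DATETIME2 DEFAULT SYSUTCDATETIME()
);
GO
CREATE TRIGGER dbo.trg_gsbirst_Objects_Audit
ON dbo.gsbirst_Objects
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.gsbirst_Objects_Audit (ObjectId, Operation, BeforeRow, AfterRow)
    SELECT COALESCE(i.ObjectId, d.ObjectId),
           CASE WHEN i.ObjectId IS NOT NULL AND d.ObjectId IS NOT NULL THEN 'UPDATE'
                WHEN i.ObjectId IS NOT NULL THEN 'INSERT'
                ELSE 'DELETE' END,
           CONCAT(d.ObjectId, '|', d.ObjectName),   -- before image (empty for inserts)
           CONCAT(i.ObjectId, '|', i.ObjectName)    -- after image (empty for deletes)
      FROM inserted i
      FULL OUTER JOIN deleted d ON d.ObjectId = i.ObjectId;
END;
Wrapping the stored procedure call between two queries of the audit table (or logging its own start/end rows) then gives you the before/after picture for each run.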

How to create a trigger to populate a table from another table in a different database

Basically what I'm trying to do is create a trigger so that if a new record is entered into a table in database 1, and it falls in the category of data that I need for database 2, it automatically populates the table in database 2 without me needing to update it manually.
Right now I go into the table in database 1, sort for the category I need, and copy the data I need into the table in database 2.
I tried to make this process easier with a select query that pulls the columns I need from database 1 into database 2, which works fine; however, it overwrites what I already have and I basically have to recreate everything each time.
So after all that rambling, here is exactly what I need to know: is there a way to create a trigger so that when a new line item is entered in database 1 with a tag matching the type of material I need, it is transferred to database 2? On top of that, I only need to transfer 2 columns from database 1 to database 2.
I would try to post some sample code; however, I have no idea where to start on this.
I suggest you look into Service Broker messaging. We use it quite a bit and it works quite well. You can send messages to the other database with the data that needs to be inserted and allow the second database to do all the work. This will alleviate the worries about the second database being offline or causing an error which rolls back into your trigger. If the second database is unavailable the messages will queue up in your database until it can send them. This isn't the easiest thing to set up but is a way to keep the two databases from being so closely tied together.
Service Broker
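Very roughly, the sending side looks something like the sketch below (all of the names are made up; Service Broker must be enabled on both databases, the receiving database needs matching message type/contract/queue/service objects, and the RECEIVE loop or activation procedure on the target side is omitted here):
-- In the sending database: message type, contract, queue, and service
CREATE MESSAGE TYPE [//Demo/ItemInserted] VALIDATION = WELL_FORMED_XML;
CREATE CONTRACT [//Demo/ItemContract] ([//Demo/ItemInserted] SENT BY INITIATOR);
CREATE QUEUE dbo.ItemInitiatorQueue;
CREATE SERVICE [//Demo/ItemInitiatorService] ON QUEUE dbo.ItemInitiatorQueue ([//Demo/ItemContract]);
GO
-- Sending a message, e.g. from a trigger or procedure that sees the new row
DECLARE @handle UNIQUEIDENTIFIER;
BEGIN DIALOG CONVERSATION @handle
    FROM SERVICE [//Demo/ItemInitiatorService]
    TO SERVICE '//Demo/ItemTargetService'
    ON CONTRACT [//Demo/ItemContract]
    WITH ENCRYPTION = OFF;
SEND ON CONVERSATION @handle
    MESSAGE TYPE [//Demo/ItemInserted] (N'<item id="123" col1="A" col2="B" />');
The target service in database 2 reads messages from its queue and does the insert itself, so a failure there never rolls back the original transaction.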
I am unclear about the logic in your selection but if you want to save a copy of what was just inserted into table1 into a table (table2) on another database, using a trigger, you can try this:
create trigger trig1 on dbo.table1
after insert as
insert into database2.dbo.table2 (col1, col2)
select col1, col2 from inserted;
You could use an AFTER INSERT Trigger like this:
-- the trigger has to be created in the database that contains the source table
USE [FirstDB];
GO
CREATE TRIGGER [dbo].[YourTrigger]
ON [dbo].[Table]
AFTER INSERT
AS
BEGIN
    INSERT INTO [OtherDB].[dbo].[Table] (...)
    SELECT ... FROM inserted
END
I recommend you consider non-trigger alternatives as well though. Cross-DB triggers could be risky (what if the other db is offline, etc.)

What is the fastest way to insert data to MS SQL database without locking it?

I have a running system where data is inserted periodically into a MS SQL database and a web application is used to display this data to users.
During the data insert, users should be able to continue to use the database; unfortunately, I can't redesign the whole system right now. Every 2 hours 40k-80k records are inserted.
Right now the process looks like this:
A temp table is created.
Data is inserted into it using plain INSERT statements (parameterized queries or stored procedures should improve the speed).
Data is pumped from the temp table to the destination table using INSERT INTO MyTable(...) SELECT ... FROM #TempTable
I think that such an approach is very inefficient. I see that the insert phase can be improved (bulk insert?), but what about transferring the data from the temp table to the destination?
This is what we did a few times. Rename your table to TableName_A. Create a view that selects from that table. Create a second table exactly like the first one (TableName_B) and populate it with the data from the first one. Now set up your import process to populate the table that is not being used by the view, then change the view to point at that table instead. Total downtime to users: a few seconds. Then repopulate the first table. It is actually easier if you can truncate and repopulate the table, because then you don't need that last step, but that may not be possible if your input data is not a complete refresh.
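A rough sketch of that pattern in T-SQL (MyTable and #TempTable are from the question; the _A/_B names are just placeholders):
-- One-time setup: two physical copies behind a view with the original name
EXEC sp_rename 'dbo.MyTable', 'MyTable_A';
SELECT * INTO dbo.MyTable_B FROM dbo.MyTable_A;   -- indexes/constraints must be created separately
GO
CREATE VIEW dbo.MyTable AS SELECT * FROM dbo.MyTable_A;
GO
-- Each load cycle: fill the copy the view is NOT pointing at, then switch
TRUNCATE TABLE dbo.MyTable_B;
INSERT INTO dbo.MyTable_B SELECT * FROM #TempTable;
GO
ALTER VIEW dbo.MyTable AS SELECT * FROM dbo.MyTable_B;
GO
-- Next cycle, repopulate MyTable_A and point the view back at it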
You cannot avoid locking when inserting into the table. Even with BULK INSERT this is not possible.
But clients that want to access this table during the concurrent INSERT operations can do so by changing the transaction isolation level to READ UNCOMMITTED or by executing the SELECT command with the WITH (NOLOCK) hint.
The INSERT command will still lock the table/rows but the SELECT command will then ignore these locks and also read uncommitted entries.
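For example (MyTable is the destination table from the question):
-- Per-query hint: read without requesting shared locks
SELECT * FROM MyTable WITH (NOLOCK);

-- Or per-session: all subsequent reads in the session are dirty reads
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT * FROM MyTable;
Keep in mind that both options can return rows that are later rolled back, so they are only appropriate where slightly stale or uncommitted data is acceptable.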

Database update

Scenario:
I have Database1 (PostgreSQL). For this database:
i) When a record is deleted, the status column for that record is changed to inactive.
ii) When a record is updated, the current record is rendered inactive and a new record is inserted.
iii) Insertion happens as usual.
There is a timestamp column for each record in all the tables in the database.
I have another database, Database2 (SQLite), which is synced with Database1 and follows the same rules as Database1.
Database1 gets changed regularly, and I will get CSV files for all the tables. The CSV files include all the data, including new insertions and updates.
Requirement:
I need to make the data in Database1 consistent with the new CSV.
i) For the records that are not in the CSV but are in Database1 (deleted records): I have to set their status to inactive.
ii) For the records that are in the CSV but not in Database1 (inserted records): I need these records to be inserted.
iii) For the records that are updated in the CSV: I need to set the status of the existing record to inactive and insert a new record.
Kindly help me with the logical implementation of these!
Thanks
Jayakrishnan
I assume you're looking to build software to achieve what you want, not looking for an off-the-shelf solution.
What environments are you able to develop in? C? PHP? Java? C#?
Lots of options in many environments that can all read/write from CSV/SQLite/PostgreSQL.
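Whatever environment you pick, on the PostgreSQL side the reconciliation itself can be expressed in three statements once the CSV has been loaded into a staging table. A minimal sketch, where items(id, payload, status) and staging_items are hypothetical names and id is the business key rather than a surrogate primary key:
-- Assume staging_items has been loaded from the CSV (e.g. via COPY)

-- i) Records missing from the CSV: mark the active row inactive
UPDATE items i
   SET status = 'INACTIVE'
 WHERE i.status = 'ACTIVE'
   AND NOT EXISTS (SELECT 1 FROM staging_items s WHERE s.id = i.id);

-- iii, first half) Records whose data changed: retire the current active row
UPDATE items i
   SET status = 'INACTIVE'
  FROM staging_items s
 WHERE s.id = i.id
   AND i.status = 'ACTIVE'
   AND s.payload IS DISTINCT FROM i.payload;

-- ii and iii, second half) Anything in the CSV without an active row: insert a new active row
INSERT INTO items (id, payload, status)
SELECT s.id, s.payload, 'ACTIVE'
  FROM staging_items s
 WHERE NOT EXISTS (SELECT 1 FROM items i WHERE i.id = s.id AND i.status = 'ACTIVE');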
You could use an ON DELETE trigger to override the existing delete behavior.
This strikes me as dangerous, however. Someone is going to rely on this, and then when the trigger isn't there, actual deletions will occur. It's better to encapsulate this behind a view or something and put a trigger on that, or go through a stored procedure or something.
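Following the view suggestion, a minimal PostgreSQL sketch might look like this (all names are hypothetical; on PostgreSQL versions before 11, use EXECUTE PROCEDURE instead of EXECUTE FUNCTION):
-- Hypothetical base table with a status column
CREATE TABLE records (
    id      serial PRIMARY KEY,
    payload text,
    status  text NOT NULL DEFAULT 'ACTIVE'
);

-- Applications work against the view, not the base table
CREATE VIEW active_records AS
    SELECT id, payload FROM records WHERE status = 'ACTIVE';

-- A DELETE against the view becomes a status change on the base table
CREATE FUNCTION soft_delete_record() RETURNS trigger AS $$
BEGIN
    UPDATE records SET status = 'INACTIVE' WHERE id = OLD.id;
    RETURN OLD;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER active_records_soft_delete
    INSTEAD OF DELETE ON active_records
    FOR EACH ROW EXECUTE FUNCTION soft_delete_record();
This keeps the soft-delete rule in one place, and anything that bypasses the view still sees the base table's real behavior.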
