How to distinguish changed data in CSV transferred over FTP using SymmetricDS - symmetricds

I am trying to transfer changed data to an FTP server using SymmetricDS, and I am able to transfer it successfully. The generated CSV file contains the changed ROW_DATA, i.e. for an 'UPDATE' event there is a row with the updated values, and for an 'INSERT' event there is a row with all the new values.
Here are a few points I am wondering about:
How do I distinguish between an 'UPDATED' row and an 'INSERTED' row in the CSV file?
Also, for a 'DELETE' event, there was no corresponding row in the CSV file. So how do I fetch the rows which were deleted?
Can anyone please help me out on this one?

If there is OLD_DATA for a row, the operation is an update; otherwise it is an insert. Do not forget that on the target side SymmetricDS can fall back to an update if a row with the same primary key already exists even though OLD_DATA is empty (i.e. the source node captured an insert), and vice versa.
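If you want to see how each change was classified before it ever reaches the CSV, you can look at the capture table directly. A minimal sketch, assuming the source table is called ITEM (substitute your own table name):

-- event_type is 'I' for insert, 'U' for update, 'D' for delete;
-- old_data carries the previous values for updates (and for deletes when old-data capture is enabled).
select data_id, event_type, row_data, old_data, pk_data
from sym_data
where table_name = 'ITEM'
order by data_id desc;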
Are the ON DELETE triggers declared at all? The easiest check is to list the triggers defined in the database and see whether the ON DELETE triggers are present. The other way is to delete a row, commit, and then run select * from sym_data order by data_id desc to verify that the delete has been captured.
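You can also check the SymmetricDS trigger configuration itself; the sync_on_delete flag in sym_trigger controls whether deletes are captured. A sketch (the table name is an assumption):

-- A value of 0 in sync_on_delete means deletes on that table are not captured
select trigger_id, source_table_name, sync_on_insert, sync_on_update, sync_on_delete
from sym_trigger
where source_table_name = 'ITEM';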

Related

Sending incremental data to another application from an Oracle database: even a small suggestion would be very helpful

I have one table, let's suppose Item. Many DML operations (insert, update, delete) happen on this table daily, and whatever DML happens I need to push that transaction data into another application using APIs.
If, in the Item table, 2 records get inserted, 1 updated, and 1 deleted, I need to send the data to the other application in the form below. The file will be in JSON format.
I can create the file below. My question is about how to extract the daily transactional data.
{
  "insert": ["A1", "A2"],
  "delete": "B1",
  "update": "C1"
}
Something like the above, meaning A1 and A2 were inserted into the Item table, B1 got deleted, and C1 got updated. I will send the data in this format to the target application so it can apply the changes.
To do this I created one more table, Item_trigger, and I also created a trigger on the Item table. If any DML happens, the trigger inserts into the Item_trigger table values like
('A1','Insert'), ('A2','Insert'), ('B1','Delete'), ('C1','Update')
and then, using the Item_trigger table, I build the file and send the data to the target system.
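A minimal sketch of the trigger design described here, assuming Item has a single key column named item_id (the column and object names are illustrative, not from the original post):

-- Shadow table that records which row changed and how
create table Item_trigger (
  item_id    varchar2(50),
  dml_type   varchar2(10),
  changed_on date default sysdate
);

-- Row-level trigger that logs every insert, update, and delete on Item
create or replace trigger trg_item_capture
after insert or update or delete on Item
for each row
begin
  if inserting then
    insert into Item_trigger (item_id, dml_type) values (:new.item_id, 'Insert');
  elsif updating then
    insert into Item_trigger (item_id, dml_type) values (:new.item_id, 'Update');
  else
    insert into Item_trigger (item_id, dml_type) values (:old.item_id, 'Delete');
  end if;
end;
/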
The above design has been rejected because I am using a trigger. Is there any good solution? I was thinking about a materialized view, but it doesn't capture deletes, so I cannot use that either.
Could you please help me with the design? Is there any way to record transactions without using a trigger?
You can make use of statement-level auditing on the particular table. That will only tell you what type of operation was performed, not the actual data. You can combine this information with storing the inserted, deleted, and updated values in another table, or use the main table directly to transmit the data.
Below is the script:
-- Enable statement-level auditing of DML on the table (requires AUDIT privileges)
audit select, insert, update, delete on test.test_audit by access;
-- Example DML that will now be recorded in the audit trail
delete from test_audit where id <= 10;
-- Review which operations were captured for the table
select * from dba_audit_object where obj_name = 'TEST_AUDIT';
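To summarize a day's activity by operation type (which is all statement-level auditing gives you; the row identifiers still have to come from somewhere else), a query along these lines should work:

-- Count today's captured operations per statement type for the audited table
select action_name, count(*) as stmt_count
from dba_audit_object
where obj_name = 'TEST_AUDIT'
  and timestamp >= trunc(sysdate)
group by action_name;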

[SymmetricDS]: Missing changes in SYM_DATA

I have new records inserted into the source database, but some records are not synced to the target database. When I look into SYM_DATA, between 2 consecutive inserts there are some update events triggered on the same table but for different rows. The log file has deadlock errors, but after a retry it becomes OK.
My question is: can SymmetricDS capture updates and inserts if both event types happen together? How do I avoid deadlocks and make sure no records are missed when syncing from source to target?
Data can fail to sync only if it was never actually inserted or updated in the database. SymmetricDS captures data in the same transaction that the application uses. Check whether the data really was inserted or updated; maybe some transactions were rolled back. If they were committed successfully, order the sym_data table by its primary key data_id descending to make sure you haven't missed a row.
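A simple way to verify the capture, assuming the table in question is called ITEM (replace with your own table name):

-- Most recent captured changes; every committed insert/update/delete should appear here
select data_id, event_type, create_time, row_data
from sym_data
where table_name = 'ITEM'
order by data_id desc;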

Data migration in Informatica

A large amount of data is coming from source to target. After a successful insertion into the target, we have to change the status of every row to "committed". But how will we know whether all the data has arrived in the target without directly querying the source?
For example, suppose 10 records have migrated from source to target.
We cannot change the status of all the records to "committed" before all the records have been successfully inserted into the target.
So before changing the status of all the records, how will we know whether an 11th record is still coming?
Is there anything that will give me the total number of records in the source?
I need a real-time answer.
We had the same scenario, and this is what we did.
First of all, to check whether the data has been loaded into the target you can join the source and target tables. The update will lock the rows, so a commit must be fired at the database level on the target table (so that the lock for the update can be taken).
After joining, update the loaded data based on the join with the target column.
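As a rough sketch of that status update (the table and column names src_item, tgt_item, id, and status are assumptions, not from the original answer):

-- Mark source rows as committed only when a matching row already exists in the target
update src_item
set status = 'committed'
where exists (select 1 from tgt_item t where t.id = src_item.id);
commit;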
A few things:
You have to stop your session (we used pmcmd to stop the session in a command task).
Update the data in your source table and restart the session.
Keep the load to batches of roughly 20k-30k rows so the update goes smoothly.

Database update

Scenario:
I have Database1 (PostgreSQL). For this: i) when a record is deleted, the status column for that record is changed to inactive; ii) when a record is updated, the current record is marked INACTIVE and a new record is inserted; iii) insertion happens as usual. There is a timestamp column for each record in all the tables in the database.
I have another database, Database2 (SQLite), which is synced with Database1 and follows the same conventions as Database1.
Database1 gets changed regularly, and I receive CSV files for all the tables. Each CSV includes all the data, including new insertions and updates.
Requirement:
I need to make the data in Database1 consistent with the new CSV.
i) For the records that are not in the CSV but are in Database1 (DELETED RECORDS): I have to set their status to inactive.
ii) For the records that are in the CSV but not in Database1 (INSERTED RECORDS): I need these records to be inserted.
iii) For the records that are updated in the CSV: I need to set the status of the existing records to inactive and insert new records.
Kindly help me with the logical implementation of these!
Thanks,
Jayakrishnan
I assume you're looking to build software to achieve what you want, not looking for an off-the-shelf solution.
What environments are you able to develop in? C? PHP? Java? C#?
Lots of options in many environments that can all read/write from CSV/SQLite/PostgreSQL.
You could use an ON DELETE trigger to override the existing delete behavior.
This strikes me as dangerous, however. Someone is going to rely on it, and then when the trigger isn't there you will have actual deletions occur. It's better to encapsulate this behind a view and put a trigger on that, or to go through a stored procedure.
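For the reconciliation logic itself, here is a minimal PostgreSQL sketch. It assumes the CSV has first been loaded into a staging table item_staging, that the business key is a column named item_id, and that rows carry status and updated_at columns; all of these names are illustrative, not from the original question:

-- Load the CSV into the staging table first, e.g. with \copy in psql
-- \copy item_staging from 'items.csv' with (format csv, header true)

-- i) Records in Database1 but absent from the CSV: mark them inactive
update item
set status = 'inactive'
where status = 'active'
  and item_id not in (select item_id from item_staging);

-- ii) Records in the CSV but not in Database1: insert them
insert into item (item_id, status, updated_at)
select s.item_id, 'active', s.updated_at
from item_staging s
where not exists (select 1 from item i where i.item_id = s.item_id);

-- iii) Records updated in the CSV: insert the new version, then deactivate the old one
insert into item (item_id, status, updated_at)
select s.item_id, 'active', s.updated_at
from item_staging s
join item i on i.item_id = s.item_id and i.status = 'active'
where s.updated_at > i.updated_at;

update item i
set status = 'inactive'
from item_staging s
where i.item_id = s.item_id
  and i.status = 'active'
  and s.updated_at > i.updated_at;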

Deleting Rows from a SQL Table marked for Replication

I erroneously deleted all the rows from an MS SQL 2000 table that is used in merge replication (the table is on the publisher). I then compounded the issue by using a DTS operation to retrieve the rows from a backup database and repopulate the table.
This has created the following issue:
The delete operation marked the rows for deletion on the clients, but the DTS operation bypasses the replication triggers, so the imported rows are not marked for insertion on the subscribers. In effect the subscribers lose the data although it is on the publisher.
So I thought, "no worries", I will just delete the rows again and then add them correctly via an insert statement, and they will then be marked for insertion on the subscribers.
This is my problem:
I cannot delete the DTSed rows because I get a "Cannot insert duplicate key row in object 'MSmerge_tombstone' with unique index 'uc1MSmerge_tombstone'." error. What I would like to do is somehow delete the rows from the table while bypassing the merge replication trigger. Is this possible? I don't want to remove and redo the replication because the subscribers are 50+ Windows Mobile devices.
Edit: I have tried the Truncate Table command. This gives the following error "Cannot truncate table xxxx because it is published for replication"
Have you tried truncating the table?
You may have to truncate the table and reset the ID field back to 0 if you need the inserted rows to have the same ID. If not, just truncate and it should be fine.
You also could look into temporarily dropping the unique index and adding it back when you're done.
Look into sp_mergedummyupdate.
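If I remember it correctly it is run per row, taking the table and the row's rowguid; a sketch of the call, with placeholder values:

-- Forces a dummy update so merge replication re-evaluates the row
exec sp_mergedummyupdate @source_object = 'dbo.MyTable',
                         @rowguid = '00000000-0000-0000-0000-000000000000';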
Would creating a second table be an option? You could create a second table, populate it with the needed data, add the constraints/indexes, then drop the first table and rename your second table. This should give you the data with the right keys, and it should all consist of SQL statements that are allowed to trickle down through replication. It probably isn't the best for performance, and it definitely imposes some risk.
I haven't tried this first-hand in a replicated environment, but it may be at least worth trying out.
Thanks for the tips...I eventually found a solution:
I deleted the merge delete trigger from the table
Deleted the DTSed rows
Recreated the merge delete trigger
Added my rows correctly using an insert statement.
I was a little worried about fiddling with the merge triggers, but everything appears to be working correctly.
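An equivalent approach, if you would rather not drop and recreate the trigger by hand, is to disable it temporarily. A sketch only: the trigger name, table name, and column list are placeholders, since merge replication generates its own trigger names per publication:

-- Disable only the merge delete trigger so the deletes are not written to MSmerge_tombstone
alter table dbo.MyTable disable trigger [del_ABC123];

-- Remove the DTS-imported rows while the delete trigger is off
delete from dbo.MyTable;

alter table dbo.MyTable enable trigger [del_ABC123];

-- Re-insert the rows normally so the merge insert trigger captures them for the subscribers
insert into dbo.MyTable (id, name)
select id, name from BackupDb.dbo.MyTable;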
