[SymmetricDS]: Missing changes in SYM_DATA

I have new records inserted into the source database, but some records are not synced to the target database. When I look into SYM_DATA, between two consecutive inserts there are update events captured for the same table but different rows. The log file has deadlock errors, but after a retry they clear up.
My questions are: can SymmetricDS capture both update and insert events if they happen together? And how do I avoid the deadlocks and make sure no records are missed when syncing from source to target?

Data can fail to sync only if it was never actually inserted or updated in the source database. SymmetricDS captures changes in the same transaction the application uses, so check whether the data was really inserted or updated; perhaps some transactions were rolled back. If they were successfully committed, order SYM_DATA by its primary key, DATA_ID, in descending order to be sure you haven't missed a row.
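For example, a quick check could look like this (a minimal sketch; 'YOUR_TABLE' is a placeholder for your own table name):
-- newest captured changes first; verify the expected inserts/updates
-- appear and that there are no unexplained gaps in DATA_ID
select data_id, table_name, event_type, create_time
from sym_data
where table_name = 'YOUR_TABLE'
order by data_id desc;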

Related

Sending incremental data to another application from an Oracle database - even a small suggestion would be very helpful

I have one table, let's say Item. Many DML operations happen on this table daily, and whatever DML (insert, update, delete) happens, I need to push that transaction data into another application using APIs.
If in the Item table 2 records get inserted, 1 updated and 1 deleted, I need to send the data to the other application in the form below. The file will be in JSON format.
I can create the file below; my question is about how to extract the daily transactional data.
{
  "insert": ["A1", "A2"],
  "delete": ["B1"],
  "update": ["C1"]
}
Something like the above: if A1 and A2 are inserted into the Item table, B1 gets deleted and C1 gets updated, I will send the data in the above format to the target application so it can apply the changes.
To do this I created one more table, Item_trigger, and a trigger on the Item table, so that whenever any DML happens the trigger inserts rows into Item_trigger such as
('A1','Insert'), ('A2','Insert'), ('B1','Delete'), ('C1','Update')
Then, using the Item_trigger table, I create the file and send the data to the target system.
This design was rejected because I am using a trigger. Is there a better solution? I was thinking about a materialized view, but it doesn't cover deletes, so I can't use that either.
Could you please help me with the design? Is there any way to record the transactions without using a trigger?
You can make use of statement-level auditing on the particular table. That will only tell you what type of operation was performed, not the actual data, but you can combine it with storing the inserted, deleted and updated values in another table, or use the main table directly to transmit the data.
Below is the script:
-- audit each statement type on the table, one audit record per statement
audit select, insert, update, delete on test.test_audit by access;
-- example DML that will now generate an audit record
delete from test_audit where id <= 10;
-- review what was captured
select * from Dba_Audit_Object where OBJ_NAME='TEST_AUDIT';

Data migration in Informatica

A large amount of data is coming from source to target. After a successful insertion into the target, we have to change the status of every row to "committed". But how do we know whether all the data has arrived in the target without directly querying the source?
For example, suppose 10 records have migrated to the target from the source.
We cannot change the status of all the records to "committed" before all of them have been successfully inserted into the target.
So before changing the status of all the records, how will we know whether an 11th record is still coming?
Is there anything that will give me the total record count in the source?
I need a real-time answer.
We had the same scenario, and this is what we did.
First of all, to check whether the data is loaded in the target you can join the source and target tables. The update will lock the rows, so a commit must be fired at the database level on the target table (so that the lock for the update can be taken). After joining, update the status of the loaded data based on the join with the target column (see the sketch below).
A few more things:
You have to stop your session (we used pmcmd to stop the session in a command task),
update the data in your source table, and restart the session.
Keep the load counter at 20k-30k rows so the update goes smoothly.
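A minimal sketch of that status update, assuming hypothetical tables SRC_ITEM and TGT_ITEM that share a key column ID (all names here are placeholders):
-- mark only the source rows that have actually landed in the target
update src_item s
set s.status = 'committed'
where exists (select 1 from tgt_item t where t.id = s.id);
commit;  -- release the row locks taken by the update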

Alternative Method to Polling/Trigger a Table in Oracle?

I have a database on Oracle 11g with a table updated by external users. I want to catch the inserts/updates/deletes on this table in order to bring these changes to a table on another database, and I'm trying different methods for research. I have tested polling (a job that checks every minute whether there was an update, insert or delete on the table) and a trigger (fired on update, insert or delete on the table); are there alternative methods?
I found AQ (Oracle Advanced Queuing), DBMS_PIPE and the Oracle SNMP Agent Integrator polling activity, but I don't know whether they are right for this case...
It depends.
Polling or triggers are often all you need depending on the volume of data involved, and the frequency of inserts/updates/deletes.
For example, the polling method might be as simple as adding a column which is set to 1 by default, and updated to NULL when the row is "consumed" by the replication code. A trigger on the table would set it back to 1 if a row is updated. An index on this column would be lightweight (the index would only include entries for rows where the column is 1) and therefore fast to query. You'd need another table to handle deletes, though.
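A minimal sketch of that flag-column approach, using a hypothetical ORDERS table (all names are placeholders):
-- 1 = pending replication, NULL = already consumed
alter table orders add (repl_pending number(1) default 1);

-- B-tree indexes skip rows where every indexed column is NULL,
-- so this index only holds the pending rows and stays small
create index orders_repl_ix on orders (repl_pending);

create or replace trigger orders_repl_trg
before update on orders
for each row
begin
  :new.repl_pending := 1;  -- re-flag the row on any update
end;
/

-- the replication job polls for flagged rows, then sets the flag to NULL
select * from orders where repl_pending = 1;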
The trigger method would merely write insert/update/delete rows into a log table of some sort, which would then get purged periodically by a job.
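And a sketch of the trigger-plus-log-table method (again, all names are hypothetical):
-- one row per change; a scheduled job purges consumed rows
create table orders_log (
  order_id    number,
  change_type varchar2(1),            -- 'I', 'U' or 'D'
  change_time date default sysdate
);

create or replace trigger orders_log_trg
after insert or update or delete on orders
for each row
begin
  if inserting then
    insert into orders_log (order_id, change_type) values (:new.order_id, 'I');
  elsif updating then
    insert into orders_log (order_id, change_type) values (:new.order_id, 'U');
  else
    insert into orders_log (order_id, change_type) values (:old.order_id, 'D');
  end if;
end;
/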
For heavier volumes, solutions include Oracle GoldenGate and Oracle Streams: http://www.oracle.com/technetwork/database/focus-areas/data-integration/index.html

row is not inserting into table

The table is present in an Oracle database, and I am inserting one record into it. The insert executes, and when I type select * from that table, it shows that record.
But the problem is when I commit the changes: the table then shows nothing. I am not seeing anything inside the table; it shows 0 records.
Can you please help me?
insert into recon values(1,'sri',-1,'20090806');
After this, if I write
select * from recon;
It shows that record, but after commit it shows nothing. There is no trigger on that table, and it's not a view.
It's a global temporary table; after committing, it is emptied.
There are two kinds of temporary tables: ON COMMIT DELETE ROWS (emptied after commit) and ON COMMIT PRESERVE ROWS (emptied when the session ends).
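The behavior is easy to reproduce (a sketch; the column definitions for RECON are guessed from the insert statement above):
-- ON COMMIT DELETE ROWS: rows vanish at commit
create global temporary table recon (
  id number, name varchar2(30), flag number, dt varchar2(8)
) on commit delete rows;

insert into recon values (1, 'sri', -1, '20090806');
select count(*) from recon;  -- 1
commit;
select count(*) from recon;  -- 0

-- with ON COMMIT PRESERVE ROWS the rows would survive the commit
-- and disappear only when the session ends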
My guess would be that the transaction was not committed properly. Initially I thought it was because of nested transactions (I work in SQL Server), but it could basically be a transaction that was never properly committed.
If you are the admin, check whether other users are logged into the database and are using that table, and release any locks they hold on that table/database.

Deleting Rows from a SQL Table marked for Replication

I erroneously deleted all the rows from an MS SQL 2000 table that is used in merge replication (the table is on the publisher). I then compounded the issue by using a DTS operation to retrieve the rows from a backup database and repopulate the table.
This has created the following issue:
The delete operation marked the rows for deletion on the clients but the DTS operation bypasses the replication triggers so the imported rows are not marked for insertion on the subscribers. In effect the subscribers lose the data although it is on the publisher.
So I thought "no worries", I will just delete the rows again, add them back correctly via an insert statement, and they will then be marked for insertion on the subscribers.
This is my problem:
I cannot delete the DTSed rows because I get a "Cannot insert duplicate key row in object 'MSmerge_tombstone' with unique index 'uc1MSmerge_tombstone'." error. What I would like to do is somehow delete the rows from the table bypassing the merge replication trigger. Is this possible? I don't want to remove and redo the replication because the subscribers are 50+ windows mobile devices.
Edit: I have tried the Truncate Table command. This gives the following error "Cannot truncate table xxxx because it is published for replication"
Have you tried truncating the table?
You may have to truncate the table and reset the identity field back to 0 if you need the inserted rows to have the same IDs. If not, just truncate and it should be fine.
You could also look into temporarily dropping the unique index and adding it back when you're done.
Look into sp_mergedummyupdate
Would creating a second table be an option? You could create a second table, populate it with the needed data, add the constraints/indexes, then drop the first table and rename the second. This should give you the data with the right keys, and it should all consist of SQL statements that are allowed to trickle down through replication. It probably isn't the best for performance, though, and it definitely carries some risk.
I haven't tried this first-hand in a replicated environment, but it may be at least worth trying out.
Thanks for the tips... I eventually found a solution:
1. Deleted the merge delete trigger from the table
2. Deleted the DTSed rows
3. Recreated the merge delete trigger
4. Added my rows correctly using an insert statement
I was a little worried about fiddling with the merge triggers, but everything appears to be working correctly.
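A rough T-SQL sketch of those steps (everything here is hypothetical: the real merge delete trigger name is generated by replication, and you would need your own way to identify the DTSed rows):
-- 1. script out the merge delete trigger, then drop it
--    (its real name looks like MSmerge_del_<GUID>)
drop trigger MSmerge_del_EXAMPLE;

-- 2. delete the DTSed rows while no delete trigger fires
delete from MyTable where loaded_by_dts = 1;  -- hypothetical marker column

-- 3. recreate the merge delete trigger from the script saved in step 1

-- 4. re-insert the rows normally so the merge insert trigger fires
--    and the changes replicate to the subscribers
insert into MyTable (id, name)
select id, name from BackupDb.dbo.MyTable;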
