SymmetricDS Trigger-Based Update Cannot Route Back - symmetricds

I have table A on the corp and store nodes, with before-insert and before-update triggers. The trigger simply updates another column in the row that was inserted/updated. I have also configured sync_on_incoming_batch=1.
The problem is that when a row is inserted from corp, the column is updated by the trigger at the store. sync_on_incoming_batch fires, but the change cannot be routed back to the corp node.
I have also set ping_back_enabled=1, and the change does sync back to the corp node, but then an update loop occurs. How do I handle this?
I think sym_conflict could handle this, but I have no idea how.

The loop has to be broken; there's no other way around it. Conflict detection wouldn't work here because there is no difference between the incoming data and the destination row.

Are you trying to allow changes from Store 1 to be sent up to Corp and then back down to Store 2? If so, you will want two sets of triggers: one set installed on corp with the "sync on incoming" flag checked, and another set, with sync on incoming unchecked, applied to the store. This lets changes sent from corp to a store stop there and not loop back. At the same time, it lets changes from store 1 reach corp and sync back out to all other store nodes except store 1.
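Here is a minimal configuration sketch of that layout, assuming node groups named 'corp' and 'store', a channel named 'default', and a table named a; the trigger and router ids are hypothetical:

-- Corp-side trigger: sync_on_incoming_batch = 1, so a change that arrived
-- from a store is captured again and routed on to the other stores.
insert into sym_trigger
  (trigger_id, source_table_name, channel_id, sync_on_incoming_batch,
   create_time, last_update_time)
values ('a_corp', 'a', 'default', 1, current_timestamp, current_timestamp);

-- Store-side trigger: sync_on_incoming_batch = 0, so a change that arrived
-- from corp (including the column set by the local before trigger) stops
-- at the store instead of looping back.
insert into sym_trigger
  (trigger_id, source_table_name, channel_id, sync_on_incoming_batch,
   create_time, last_update_time)
values ('a_store', 'a', 'default', 0, current_timestamp, current_timestamp);

insert into sym_router
  (router_id, source_node_group_id, target_node_group_id, router_type,
   create_time, last_update_time)
values ('corp_to_store', 'corp', 'store', 'default', current_timestamp, current_timestamp);

insert into sym_router
  (router_id, source_node_group_id, target_node_group_id, router_type,
   create_time, last_update_time)
values ('store_to_corp', 'store', 'corp', 'default', current_timestamp, current_timestamp);

insert into sym_trigger_router
  (trigger_id, router_id, initial_load_order, create_time, last_update_time)
values ('a_corp', 'corp_to_store', 1, current_timestamp, current_timestamp);

insert into sym_trigger_router
  (trigger_id, router_id, initial_load_order, create_time, last_update_time)
values ('a_store', 'store_to_corp', 1, current_timestamp, current_timestamp);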

Related

execute logic in a before delete event trigger

Before deleting a record (e.g. an Account record), I want to update a field on the record, send it to content management, hold it for a few seconds, and then delete it.
For this scenario, I used the before delete event, updated the fields on the record, and called content management with the updated record data. The record is updated with the new values (I verified this after restoring it from the recycle bin), but content management is not being called before the record is deleted. Is there any option to wait a few seconds until the record is updated in content management and then delete the record? Please share your suggestions. Thank you.
You can't make a callout straight from a trigger (a Salesforce database table/row can't be locked and held hostage until a 3rd-party system finishes, up to 2 minutes); it has to be asynchronous. So you probably call out from @future, but by then the main trigger has finished and the record is deleted; if you passed an Id, the query inside the @future method probably returns 0 rows.
Forget the bit about "holding it for a few seconds". You need to make some architecture decisions. Is it important that the delete succeeds no matter what, or do you want to delete only after the external system has acknowledged the message?
You could query your record in the trigger (or take the whole Trigger.old) and pass it to the future method. @future methods are supposed to take only primitives, not objects/collections, but you can always JSON.serialize the data before passing it as a string.
You could hide the standard delete button and introduce a custom one. There you'd have a controller that can make the callout, wait until a success response comes back, and then delete.
You could rethink the request-response approach. What if you make the callout (or raise a platform event?) and it's the content management system that then reaches back to Salesforce and performs the delete (via the REST API, for example)?
What if you just delete right away, hope the records stay in the recycle bin, and let the external system query the bin / make a special getDeleted call and pull the data?
See Salesforce - Pull all deleted cases in Salesforce for some more bin-related API calls.

Logic Apps - SQL Connector returning cached data?

I have a Logic App that uses the "SQL Server - When an item is modified (V2)" trigger, monitoring an Azure SQL DB for updated rows. When running this Logic App, I noticed that the modified row that came as output of this trigger did NOT contain the updated data.
I thought this might be by design (I don't really see why, but OK...), so I added a "Get Row" action directly after the trigger to fetch the most recent data for the row that triggered the Logic App. But even this step still returned the old, not-yet-updated data for that row.
However, when I resubmit the run some seconds later, the "Get Row" action does get the updated data from the database.
Is this normal behavior? Is the SQL DB row version already updated even though the data update isn't committed yet, triggering the Logic App but not yet returning the updated data?
Thanks for pointing me to add a timestamp column to my table. I added the timestamp column and could then find the table in the selection. I tested it on my side and the trigger works fine; it outputs the updated data. I provide my logic below for your reference:
My table (screenshot):
My Logic App (screenshot):
Please note that I disabled "Split On" in the trigger's "Settings".
After running the update SQL:
update Table3 set name = 'hury1' where id = 1;
update Table3 set name = 'jim1' where id = 2;
I got the result: the variable updateItems (shown in the screenshot) contains both updated items.
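For reference, the rowversion (timestamp) column mentioned above can be added as follows; this is a minimal sketch, and the column name is an assumption:

-- The "When an item is modified" trigger tracks changes via a
-- ROWVERSION (timestamp) column; the column name here is an assumption.
ALTER TABLE Table3 ADD RowVer ROWVERSION;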

SQLWatch - notifications not being sent

I’m wondering if someone with knowledge/experience of SQLWatch could help me out with something.
We have SQLWatch set up on 2 DEV servers and 1 central monitoring server. It's working fine and the data from the 2 DEV servers is coming over to the central server; I can see alerts being recorded in the table [dbo].[sqlwatch_logger_check].
However, our issue is that we are not being notified by any means (email, PowerShell script running).
What's interesting is that if we drop a row into the table [dbo].[sqlwatch_meta_action_queue], then the alert notification does happen.
So our issue seems to be that, for some reason, alerts are being raised but the record is not being inserted into the queue table. I suspect some sort of mapping issue, but as it stands it all looks OK. I use the following to check:
SELECT C.check_id, check_name, check_description, check_enabled, A.action_description, A.action_exec_type, A.action_exec
FROM [dbo].[sqlwatch_config_check] C
LEFT JOIN [dbo].[sqlwatch_config_check_action] CA ON C.check_id = CA.check_id
LEFT JOIN [dbo].[sqlwatch_config_action] A ON CA.action_id = A.action_id
WHERE C.check_id = -1
And it shows the failed job is set to run our PowerShell script, which it does when the row is manually inserted.
Any ideas on what the cause may be here?
Thanks,
Nic
I am the creator of SQLWATCH.
Firstly, just to clarify: the default notifications that come with SQLWATCH only work in a local scope, i.e. they fire on each monitored instance where @@SERVERNAME = sql_instance. If you are expecting the default notifications to fire from the central server for a remote instance, this will not happen. The default notifications on the central server will only fire for the central server itself, not for data imported from the remote instances. This is done to avoid a situation where pulls into the central repository are infrequent and notifications could therefore be significantly delayed.
However, there is nothing stopping you from creating Check Rules or Reports to fire on the back of the imported data.
Secondly, the checks are not alerts per se. Checks are just... well, checks... that run periodically and make sure everything is in order. A check can trigger an action to send an email. For this, as you have worked out, there is an association table that links checks and actions together.
As for your problem: is the actual action enabled? All actions that are not associated with a report are disabled by default, as they need to be configured first.
Add a column to your query to bring in the action_enabled column:
SELECT C.check_id, check_name, check_description, check_enabled, A.action_description, A.action_exec_type, A.action_exec, [action_enabled]
FROM [dbo].[sqlwatch_config_check] C
LEFT JOIN [dbo].[sqlwatch_config_check_action] CA ON C.check_id = CA.check_id
LEFT JOIN [dbo].[sqlwatch_config_action] A ON CA.action_id = A.action_id
WHERE C.check_id = -1
Or, there is already a view that should provide you with the complete mapping:
SELECT *
FROM [dbo].[vw_sqlwatch_report_config_check_action]
WHERE check_id = -1
The application log table [dbo].[sqlwatch_app_log] should also contain valuable information. Did you look in there for anything out of the ordinary?
Summarising
To enable alerts in a brand-new install of SQLWATCH, all that's needed is to set action_exec with your email details and set action_enabled to 1. If you have made other changes, it may be easier to reinstall back to the defaults.
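A minimal sketch of that change, using the columns discussed above; the action_id filter and the command value are hypothetical placeholders, so check [dbo].[sqlwatch_config_action] for the id of your action first:

-- Enable the notification action and point it at your command.
-- action_id = 1 and the action_exec value are placeholders.
UPDATE [dbo].[sqlwatch_config_action]
SET action_exec = 'C:\Scripts\Send-SqlWatchAlert.ps1',
    action_enabled = 1
WHERE action_id = 1;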

send an alert when the value of a column reaches a certain value

I am using Oracle 10g and I have a table where values are inserted in ascending order (but it does not use a DB sequence).
I want to get an email notification when the value reaches a certain number or above. What is the easiest way to do this? Does Oracle offer anything like this, or would it be easiest to write an external job that connects to the DB?
You can use a trigger on the table(s) where the value gets stored, and when the value of interest is inserted, use DBMS_JOB to send the email.
Since jobs created with DBMS_JOB don't run until a commit occurs, the email will only be sent when the value is successfully committed to the database.
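A minimal sketch of that pattern; the table, column, threshold, and the send_alert_mail procedure (e.g. a wrapper around UTL_MAIL.SEND) are all assumptions:

CREATE OR REPLACE TRIGGER trg_value_alert
AFTER INSERT ON my_values
FOR EACH ROW
WHEN (NEW.val >= 1000)  -- the threshold of interest
DECLARE
  l_job BINARY_INTEGER;
BEGIN
  -- DBMS_JOB is transactional: the job (and hence the email) runs only
  -- after this insert commits, and is discarded if it rolls back.
  DBMS_JOB.SUBMIT(
    job  => l_job,
    what => 'send_alert_mail(' || :NEW.val || ');');
END;
/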

How to capture table level data changes in SQL Server 2008 R2?

I have a high volume of data normalized into more than 100 tables. There are multiple applications that change the underlying data in those tables, and I want to raise events on those changes. The possible options I know of are:
Change Data Capture
Change Tracking
Using Triggers on each table (bad option but possible)
Can someone who has done this before share the best way of doing it?
What I really want in the end is this: if one transaction affected 12 tables out of 100, I should be able to bubble up one event instead of 12. Assume there are concurrent users changing these tables.
Two options I can think of:
Triggers ARE the right way to capture change events in the DB layer (a minimal sketch follows this list).
Code-wise, I make sure in my app that each table is changed through only one place in the code, regardless of what the change is (I call it a hub for that table, as it channels many different pathways into one place); it becomes very easy to catch change events that way in the code layer.
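As a minimal sketch of the trigger option (all table and column names here are assumptions), each table gets a trigger that appends one row to a shared event table, which your app can then read and collapse per transaction:

-- Shared event table (hypothetical names throughout).
CREATE TABLE dbo.ChangeEvents (
    event_id   BIGINT IDENTITY PRIMARY KEY,
    table_name SYSNAME   NOT NULL,
    operation  CHAR(1)   NOT NULL,  -- I / U / D
    changed_at DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME()
);
GO
-- One trigger like this per table to be tracked.
CREATE TRIGGER trg_Orders_Change ON dbo.Orders
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    INSERT dbo.ChangeEvents (table_name, operation)
    SELECT 'Orders',
           CASE WHEN EXISTS (SELECT * FROM inserted)
                 AND EXISTS (SELECT * FROM deleted) THEN 'U'
                WHEN EXISTS (SELECT * FROM inserted) THEN 'I'
                ELSE 'D' END;
END;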
One possibility is SQL Server Query Notifications: Using Query Notifications
As long as you want to 'batch' multiple changes, I think you should follow the route of Change Data Capture or Change Tracking (depending on whether you just want to know that something changed, or what the changes were).
They should be consumed by a 'polling' procedure, where you poll for changes every few minutes (seconds? milliseconds?) and raise events. The nice thing about this is that as long as you store the last rowversion of the previous poll, for each table, you can check whenever you like for changes since the last poll. You don't rely on a real-time trigger approach, where a halt would lose all events forever. This could easily be implemented as a procedure that checks each table, and you would need only one more table to store the last rowversion per table; a sketch follows below.
Also, the overhead of this approach is controlled by you and by how frequently the polling happens.
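A minimal sketch of one polling pass, per the approach above; the table names (dbo.Orders, dbo.PollState) and the rowversion column rv are assumptions:

DECLARE @last BINARY(8), @now BINARY(8);

-- Last rowversion seen for this table in the previous poll.
SELECT @last = last_rowversion
FROM dbo.PollState
WHERE table_name = 'Orders';

-- Highest rowversion used in the database so far.
SET @now = CAST(@@DBTS AS BINARY(8));

-- Rows inserted or updated since the previous poll; raise your event(s)
-- from this set (e.g. one event per poll rather than one per row).
SELECT *
FROM dbo.Orders
WHERE rv > @last AND rv <= @now;

UPDATE dbo.PollState
SET last_rowversion = @now
WHERE table_name = 'Orders';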
