SymmetricDS Looping Configuration

Using SymmetricDS 3.9:
I have a corp database that replicates bi-directionally with a store-1 database.
But according to the logs, configuration data is looping again and again from corp to store and store to corp.
The following is printed in the corp log:
[corp-1] - ConfigurationChangedDataRouter - About to refresh the cache of nodes because new configuration came through the data router
[corp-1] - RouterService - Routed 5 data events in 49 ms
[corp-1] - PullUriHandler - 5 data and 3 batches sent during pull request from store:2:2
[corp-1] - DataLoaderService - 5 data and 3 batches loaded during push request from store:2:2.
[corp-1] - ConfigurationChangedDataRouter - About to refresh the cache of nodes because new configuration came through the data router
[corp-1] - RouterService - Routed 5 data events in 187 ms
[corp-1] - PullUriHandler - 5 data and 3 batches sent during pull request from store:2:2
[corp-1] - DataLoaderService - 3 data and 2 batches loaded during push request from store:2:2.
[corp-1] - ConfigurationChangedDataRouter - About to refresh the cache of nodes because new configuration came through the data router
[corp-1] - RouterService - Routed 3 data events in 94 ms
[corp-1] - PullUriHandler - 3 data and 2 batches sent during pull request from store:2:2
[corp-1] - DataLoaderService - 4 data and 3 batches loaded during push request from store:2:2.
[corp-1] - ConfigurationChangedDataRouter - About to refresh the cache of nodes because new configuration came through the data router
[corp-1] - RouterService - Routed 4 data events in 111 ms
[corp-1] - PullUriHandler - 4 data and 3 batches sent during pull request from store:2:2
[corp-1] - DataLoaderService - 4 data and 3 batches loaded during push request from store:2:2.
[corp-1] - ConfigurationChangedDataRouter - About to refresh the cache of nodes because new configuration came through the data router
[corp-1] - RouterService - Routed 4 data events in 94 ms
[corp-1] - PullUriHandler - 4 data and 3 batches sent during pull request from store:2:2
[corp-1] - DataLoaderService - 4 data and 3 batches loaded during push request from store:2:2.
[corp-1] - ConfigurationChangedDataRouter - About to refresh the cache of nodes because new configuration came through the data router
[corp-1] - RouterService - Routed 4 data events in 59 ms
[corp-1] - PullUriHandler - 4 data and 3 batches sent during pull request from store:2:2
[corp-1] - DataLoaderService - 4 data and 3 batches loaded during push request from store:2:2.
The following is printed in the store log:
[store-2] - PushService - Pushed data to node corp:1:1. 4 data and 3 batches were processed. (sym_node, sym_node_host, accounttypes)
[store-2] - PullService - Pull data received from corp:1:1 on queue default. 4 rows and 3 batches were processed. (sym_node, sym_node_host, accounttypes)
[store-2] - ConfigurationChangedDataRouter - About to refresh the cache of nodes because new configuration came through the data router
[store-2] - RouterService - Routed 8 data events in 63 ms
[store-2] - PushService - Push data sent to corp:1:1
[store-2] - PushService - Pushed data to node corp:1:1. 4 data and 3 batches were processed. (sym_node, sym_node_host, accounttypes)
[store-2] - PullService - Pull data received from corp:1:1 on queue default. 4 rows and 3 batches were processed. (sym_node, sym_node_host, accounttypes)
[store-2] - ConfigurationChangedDataRouter - About to refresh the cache of nodes because new configuration came through the data router
[store-2] - RouterService - Routed 8 data events in 115 ms
[store-2] - PushService - Push data sent to corp:1:1
[store-2] - PushService - Pushed data to node corp:1:1. 4 data and 3 batches were processed. (sym_node, sym_node_host, accounttypes)
[store-2] - PullService - Pull data received from corp:1:1 on queue default. 4 rows and 3 batches were processed. (sym_node, sym_node_host, accounttypes)
[store-2] - ConfigurationChangedDataRouter - About to refresh the cache of nodes because new configuration came through the data router
[store-2] - RouterService - Routed 8 data events in 120 ms
[store-2] - PushService - Push data sent to corp:1:1
[store-2] - PushService - Pushed data to node corp:1:1. 4 data and 3 batches were processed. (sym_node, sym_node_host, accounttypes)
[store-2] - PullService - Pull data received from corp:1:1 on queue default. 4 rows and 3 batches were processed. (sym_node, sym_node_host, accounttypes)
[store-2] - ConfigurationChangedDataRouter - About to refresh the cache of nodes because new configuration came through the data router
[store-2] - RouterService - Routed 8 data events in 122 ms
[store-2] - PushService - Push data sent to corp:1:1
[store-2] - PushService - Pushed data to node corp:1:1. 4 data and 3 batches were processed. (sym_node, sym_node_host, accounttypes)
[store-2] - PullService - Pull data received from corp:1:1 on queue default. 4 rows and 3 batches were processed. (sym_node, sym_node_host, accounttypes)
[store-2] - ConfigurationChangedDataRouter - About to refresh the cache of nodes because new configuration came through the data router
[store-2] - RouterService - Routed 8 data events in 64 ms
[store-2] - PushService - Push data sent to corp:1:1
[store-2] - PushService - Pushed data to node corp:1:1. 4 data and 3 batches were processed. (sym_node, sym_node_host, accounttypes)
The message "About to refresh the cache of nodes because new configuration came through the data router" keeps printing over and over. I tried dropping the database and reconfiguring it from scratch, but the same issue occurs.
EDIT
I have used the same configuration with SymmetricDS 3.7 and it works fine, with no looping at all.
I am not sure what causes the different behaviour between versions 3.7 and 3.9.

Could you try deleting from sym_node_identity at the store, to see if it will request its configuration again and get you out of the loop?
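
For illustration, here is a minimal sketch of that reset, assuming the store schema lives in a PostgreSQL database and using psycopg2; the connection details are made up, and the store node must be restarted afterwards so it re-requests registration and configuration:

# Clear the store node's identity so it asks corp for its configuration again.
# Hypothetical credentials; adjust the driver and DSN for your actual database.
import psycopg2

conn = psycopg2.connect(host="localhost", dbname="store1", user="sym", password="sym")
with conn, conn.cursor() as cur:
    cur.execute("DELETE FROM sym_node_identity")
conn.close()
# Now restart the store's SymmetricDS instance.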

Related

Hasura GraphQL 504 error (Gateway Time-out) while running a query in the GraphQL console - API explorer tool

I have a table with 2.3 million rows in a Postgres DB. When I search for a user email id using '_ilike' in the Hasura GraphQL console (API explorer tool), it takes too long to load, and after 30 seconds it throws a 504 error (Gateway Time-out). This happens 3 out of 4 times. If I run the same query through the API in a React application, it shows the correct search output within 30 seconds.
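
For reference, a minimal sketch of the kind of '_ilike' search described, sent straight to the Hasura endpoint rather than through the console; the table and column names (users, email) and the endpoint URL are assumptions, so substitute your own schema:

import requests

query = """
query SearchUsers($pattern: String!) {
  users(where: {email: {_ilike: $pattern}}, limit: 50) {
    id
    email
  }
}
"""
resp = requests.post(
    "https://your-hasura-host/v1/graphql",
    json={"query": query, "variables": {"pattern": "%someone@example.com%"}},
    timeout=60,  # allow more than the 30 s at which the gateway times out
)
print(resp.status_code, resp.json())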

socket.io with apache and angularjs

I will be implementing a notification feature in my web app.
The app is built on AngularJS and Apache PHP (for the API).
Our initial approach was to query the database every 10 seconds for any new notification.
Is using socket.io beneficial in this scenario?
It depends on the way you implement it.
If you get one or fewer new datasets per 10 seconds, you will save capacity by sending a message via socket.io telling the client to pull the new data.
If you get more than one new dataset per 10 seconds, the "pull every 10 seconds" approach will be better.
If you don't know how many new datasets will come up, you can mix the two approaches: send a message to the client (via socket.io) telling it to do the next pull, as sketched below. If no message arrives, the client's timer does not pull, which saves network traffic.
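
As a rough sketch of that "notify, then pull" idea (the question's stack is PHP/AngularJS, so take this Python/python-socketio version as an illustration of the pattern only; the event name and port are made up):

import eventlet
import socketio

sio = socketio.Server(cors_allowed_origins="*")
app = socketio.WSGIApp(sio)

def on_new_notification(notification_id):
    # Called by whatever code inserts a notification. Instead of every client
    # polling the database every 10 seconds, broadcast a tiny "something
    # changed" message; each client then performs a single pull against the
    # existing PHP API to fetch the new data.
    sio.emit("notifications:changed", {"latest_id": notification_id})

if __name__ == "__main__":
    eventlet.wsgi.server(eventlet.listen(("", 3000)), app)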

Node.JS & Socket.IO - Prioritise / Pause & Resume

I am building a real-time application using AngularJS, NodeJS & Socket.IO. The application contains 3-4 large tables which are populated from a MongoDB database, and after the initial load they only receive updates.
Each table stores almost 15-30 MB of data and it takes about 30 seconds to populate them using Socket.IO. Now, if a user navigates to a different page before all the data is downloaded, the data required by the new page stays in the queue and is received only after the table from the first page is populated in full. Is there a way to pause or cancel the request when navigating to a different page?
Thanks in advance for your help.
--- Edit ---
To make the question clearer, here is the process a user might follow (sketched in code below):
The user opens /index and Grid_1 starts loading data from the server using Socket.IO. The data comes in chunks of 50 records at a time. Grid_1 will eventually be populated with 15,000 records, and it takes 30 seconds to download all the data.
After waiting at /index for 10 seconds, the user decides to visit /mySecondPage, where Grid_2, which similar to Grid_1 is populated from the database with 15,000 records, also takes 30 seconds to download - again using Socket.IO. Since the user switched from /index to /mySecondPage before Grid_1 finished populating, Grid_2 receives no data until Grid_1's data is fully downloaded. I hope that makes it clearer.
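
To make the blocking concrete, here is a small sketch of the chunked transfer described; the event names and the fetch function are hypothetical, and it uses python-socketio rather than the app's Node stack, so it only illustrates the flow that ends up queued behind itself:

import socketio

sio = socketio.Server()
CHUNK = 50

@sio.event
def load_grid(sid, grid_name):
    records = fetch_records(grid_name)  # hypothetical MongoDB query
    for start in range(0, len(records), CHUNK):
        # Every chunk for Grid_1 is queued on the same connection, so a later
        # request for Grid_2 waits until all ~300 of these chunks have drained.
        sio.emit("grid_chunk", records[start:start + CHUNK], to=sid)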

GAE: Queues, Quotas and backend instances

I have a queue with a lot of tasks in it. I would like to use one backend instance to process this queue. My quota info tells me I have blown my budget on hundreds of frontend instance hours and have not used any backend instance hours. As I had configured only one backend instance, I was expecting to be charged no more than 1 (backend) instance hour per hour. Here is my configuration:
backends.yaml
backends:
- name: worker
  class: B8
  instances: 1
  options: dynamic
queue.yaml
queue:
- name: import
  rate: 20/s
  bucket_size: 40
adding tasks to the queue in my script
from google.appengine.ext import deferred  # import required for deferred.defer
deferred.defer(importFunction, _target='worker', _queue="import")
bill status
Resource                   Usage
Frontend Instance Hours    198.70 Instance Hours
Backend Instance Hours     0.00 Instance Hours
Task Headers
X-AppEngine-Current-Namespace
Content-Type application/octet-stream
Referer http://worker.appname.appspot.com/_ah/queue/deferred
Content-Length 1619
Host worker.appname.appspot.com
User-Agent AppEngine-Google; (+http://code.google.com/appengine)
I needed to deploy my backend code:
appcfg.py backends update dir instance_name

How to finish a broken data upload to the production Google App Engine server?

I was uploading data to App Engine (not the dev server) through a loader class and the remote API, and I hit the quota in the middle of a CSV file. Based on the logs and the progress sqlite db, how can I select the remaining portion of the data to be uploaded?
Going through tens of records to determine which were and which were not transferred is not an appealing task, so I am looking for some way to limit the number of records I need to check.
Here's the relevant (IMO) log portion; how do I interpret the work item numbers?
[DEBUG 2010-03-30 03:22:51,757 bulkloader.py] [Thread-2] [1041-1050] Transferred 10 entities in 3.9 seconds
[DEBUG 2010-03-30 03:22:51,757 adaptive_thread_pool.py] [Thread-2] Got work item [1071-1080]
<cut>
[DEBUG 2010-03-30 03:23:09,194 bulkloader.py] [Thread-1] [1141-1150] Transferred 10 entities in 4.6 seconds
[DEBUG 2010-03-30 03:23:09,194 adaptive_thread_pool.py] [Thread-1] Got work item [1161-1170]
<cut>
[DEBUG 2010-03-30 03:23:09,226 bulkloader.py] [Thread-3] [1151-1160] Transferred 10 entities in 4.2 seconds
[DEBUG 2010-03-30 03:23:09,226 adaptive_thread_pool.py] [Thread-3] Got work item [1171-1180]
[ERROR 2010-03-30 03:23:10,174 bulkloader.py] Retrying on non-fatal HTTP error: 503 Service Unavailable
You can resume a broken upload:
If the transfer is interrupted, you can resume the transfer from where it left off using the --db_filename=... argument. The value is the name of the progress file created by the tool, which is either a name you provided with the --db_filename argument when you started the transfer, or a default name that includes a timestamp. This assumes you have sqlite3 installed, and did not disable the progress file with --db_filename=skip.
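
In practice the resume is just the original upload command re-run with the progress file pointed at explicitly; the config, CSV, and progress-file names below are made-up examples (the timestamped name is the tool's default form):

appcfg.py upload_data --config_file=bulkloader.yaml --filename=data.csv --db_filename=bulkloader-progress-20100330.032251.sql3 <app-directory>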
