SymmetricDS sync to same node

I am trying to track changes to a table via SymmetricDS (v3.7.32), capture two specific columns, and route them to another table on the same node.
I've added the trigger, router and trigger_router records, and the trigger seems to be working, as I can see the record appearing in sym_data. However, even with a target table of exactly the same format, the table is not being loaded.
I cannot see any relevant batches in sym_incoming_batch on the same server, which I assume I should, since they are on the same channel.
I have other tables replicating, so the config is otherwise correct, and I am not seeing any errors on the batch, but nothing seems to be happening on the router side. In sym_data_event I can see that the record has the correct router_id ... it just doesn't seem to do anything with it.
Can anyone shed any light on this issue for me?
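For reference, my configuration looks roughly like this (a sketch from memory; the node group 'corp', channel 'default', and the table/column names are placeholders, not my real values):

-- Trigger: capture changes on the source table, limited to two columns.
-- included_column_names exists in recent 3.x builds; older builds only
-- offer excluded_column_names.
INSERT INTO sym_trigger
    (trigger_id, source_table_name, channel_id,
     included_column_names, create_time, last_update_time)
VALUES
    ('src_table_trigger', 'src_table', 'default',
     'col_a,col_b', current_timestamp, current_timestamp);

-- Router: same source and target node group, pointed at another table.
INSERT INTO sym_router
    (router_id, source_node_group_id, target_node_group_id,
     router_type, target_table_name, create_time, last_update_time)
VALUES
    ('same_node_router', 'corp', 'corp',
     'default', 'dest_table', current_timestamp, current_timestamp);

-- Link the trigger to the router.
INSERT INTO sym_trigger_router
    (trigger_id, router_id, initial_load_order,
     create_time, last_update_time)
VALUES
    ('src_table_trigger', 'same_node_router', 1,
     current_timestamp, current_timestamp);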

I think that two engines have to be declared on the same SymmetricDS instance to sync the data between two tables within the same DB.

Related

SQL Server: Copy Live Data from View to Table

Right now I have a challenge that I'm not sure how best to solve. I searched the internet but did not find a suitable solution ...
I want to copy data from a view on a linked server (read-only; the view is built on several sub-views and tables) to a table in my own database. The view contains live data, basically showing the last 100 occurring events. However, what I need is the whole history of the data shown by the view. As I only have read permission on that specific view, and the admin of the linked server is not able (or willing) to grant further rights or change the view, I was wondering what the best way is to copy the view data and build up the whole history in my database.
I was thinking about a stored procedure run on a schedule, but as the last 100 events can change very quickly, this does not seem like an appropriate approach. Another option would be to build a trigger that takes new rows and copies them to my table. However, I'm not sure if this would even be possible on a read-only view.
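For illustration, the kind of scheduled copy I had in mind might look like this; all the names (LINKEDSRV, dbo.vw_Events, dbo.EventHistory, the EventId key) are made up, since I don't know the real structure well enough:

-- Sketch: append rows from the linked-server view that are not yet in
-- the local history table. All object names are placeholders.
CREATE PROCEDURE dbo.usp_CaptureEvents
AS
BEGIN
    SET NOCOUNT ON;

    INSERT INTO dbo.EventHistory (EventId, EventTime, EventData)
    SELECT src.EventId, src.EventTime, src.EventData
    FROM LINKEDSRV.RemoteDb.dbo.vw_Events AS src
    WHERE NOT EXISTS (SELECT 1
                      FROM dbo.EventHistory AS h
                      WHERE h.EventId = src.EventId);
END;

Even run on a tight schedule (e.g. a SQL Server Agent job every minute), this would still miss events that roll out of the view between runs, which is exactly my worry.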
I appreciate any hints, tips or ideas.

SSIS - Roll back delete commands

I am using SSIS packages to refresh the data daily. The package logic is as follows:
Delete all rows in destination table
Insert full new data into destination table.
I am trying to find a way to roll back the delete if my insert fails. I tried using an SSIS package transaction as below:
But now, after the Delete SQL task runs, my package gets stuck for a long time and does not respond.
What is the recommended way for doing this?
Any help is much appreciated.
There are quite a few techniques to consider here, including some more complex ideas, but if we're looking at simpler ones: you could insert into a table with a different name but the same structure, and only if that succeeds swap it in. One way of doing this is to access your tables through views, and then, on success, modify the view to point at the table you've just inserted into.
It might not be the most elegant way, but it is one of the simpler ones to consider.
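A minimal sketch of that swap, with made-up table and view names:

-- Load a staging table; re-point the access view only if the load works.
BEGIN TRY
    TRUNCATE TABLE dbo.MyTable_Staging;

    INSERT INTO dbo.MyTable_Staging (Col1, Col2)
    SELECT Col1, Col2 FROM dbo.SourceData;

    -- Runs only if the insert above succeeded. ALTER VIEW must be the
    -- only statement in its batch, hence the dynamic SQL.
    EXEC ('ALTER VIEW dbo.vw_MyTable AS SELECT Col1, Col2 FROM dbo.MyTable_Staging;');
END TRY
BEGIN CATCH
    -- The load failed; the view still points at the old table, so
    -- readers keep seeing the previous data.
    THROW;
END CATCH;

Readers always query dbo.vw_MyTable, so from their point of view the swap is effectively atomic.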
Change the Transaction Option property of the package from its default to "Required", and make sure each object within it has that property set to "Supported" (the default).
Additionally, you can minimize the scope of the transaction by doing the same thing on a sequence container wrapped around just your Execute SQL task and data flow.
FYI, I can't see pictures at work so I do not know what your package looks like.
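If the package-level transaction keeps hanging (it relies on the Distributed Transaction Coordinator, a common source of exactly this kind of stall), another option worth naming is to skip it entirely and run both steps in a single Execute SQL Task with an explicit transaction. A sketch with placeholder table names:

BEGIN TRY
    BEGIN TRANSACTION;

    DELETE FROM dbo.DestinationTable;

    INSERT INTO dbo.DestinationTable (Col1, Col2)
    SELECT Col1, Col2 FROM dbo.SourceTable;

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    -- Any failure rolls back the delete along with the partial insert.
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;
    THROW;
END CATCH;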

How to reload file_snapshot in SymmetricDS

I have file sync configured in one direction, and suddenly all the records in the sym_file_snapshot table have been deleted. What I'd like to know is whether there is a way to reload this table without sending all the files to the clients again.
Thanks in advance.
Without deep knowledge of SymmetricDS, I wouldn't recommend manually updating this or any other sym_* table. Touch all the files that are under file synchronization, let them sync to the targets, wait a bit, and everything should be fine.
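If you want to confirm the table is filling back up after touching the files, a query along these lines on the source node should show fresh rows (standard sym_file_snapshot columns, assuming the default sym_ prefix):

SELECT trigger_id, router_id, relative_dir, file_name,
       last_event_type, last_update_time
FROM sym_file_snapshot
ORDER BY last_update_time DESC;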

Tidy up DotNetNuke database tables

I've inherited the maintenance of a DotNetNuke (v6.2.0.1610) site, and one of the things I'd like to do is to tidy up the database tables being used.
It looks like there might have been two installations of DNN into the same database (I'm guessing; I don't know its history and cannot find out). I'm making this assumption because there are two sets of DotNetNuke tables.
For example, we have:
dbo.Portals, dbo.PortalSettings, dbo.Profile, dbo.Roles, etc.
However, then we also have the same set, prefixed with dnn_ -
dbo.dnn_Portals, dbo.dnn_PortalSettings, dbo.dnn_Profile, dbo.dnn_Roles, etc.
I spent a good while tearing my hair out when I could not get our portal to load, until I discovered it was because I was editing the dbo.PortalAlias table when I needed to be editing the dbo.dnn_PortalAlias table instead.
I wanted to avoid this future maintenance headache, so I backed up the database, and set about deleting all the tables without the dnn_ prefix (web.config specifies objectQualifier="dnn_"). I diligently ensured there was a matching dnn_ table before deleting any.
At first it seemed fine - the portal loaded and all the content was there, and I thought I was on to a winner. However, when I logged in and accessed the site admin section, I started to get lots of error messages. Figuring I had deleted too much, I restored the backup, and all was well - the portal worked again.
However, I really would like to get rid of the unnecessary tables, because no doubt at some point in the future I'll start doing some work on the database, forget about the dnn_ prefix and waste a bunch of time wondering why something isn't working.
So, as a bit of a DotNetNuke newbie, I'm after some help - how can I know what tables are in use, what aren't, and how can I set about tidying up the SQL Server tables? Thanks.
I suggest deleting only the tables which have an equivalent with the "dnn_" prefix.
The DNN database should contain at least the "aspnet_"-prefixed tables, which are used for authentication on the portals.
Then, you could have some extensions that use tables without the "dnn_" prefix; it depends on the SQL scripts those extensions ran during their installation. I hope those extensions don't run queries against the core DNN tables without the "dnn_" prefix - otherwise that could explain the errors you've encountered.
You could use the SQL Server Profiler to check it.
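For example, a catalog query along these lines would list the removal candidates (a sketch; it only matches by name, so review the list before dropping anything):

-- Tables without the dnn_ prefix that have a dnn_-prefixed counterpart.
SELECT t.name AS candidate_for_removal
FROM sys.tables AS t
WHERE t.name NOT LIKE 'dnn[_]%'
  AND EXISTS (SELECT 1 FROM sys.tables AS d
              WHERE d.name = 'dnn_' + t.name);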
It turns out there was a view, dnn_Lists, which was still referencing dbo.Lists without the dnn_ prefix.
I fixed this view and it's fine now.
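In case it helps anyone hunting similar leftovers, the system catalog can show which views or procedures still reference the unqualified tables (the search string is just an example):

-- Objects whose definitions still mention dbo.Lists (or any other
-- unprefixed table you suspect).
SELECT OBJECT_SCHEMA_NAME(m.object_id) AS schema_name,
       OBJECT_NAME(m.object_id) AS object_name,
       o.type_desc
FROM sys.sql_modules AS m
JOIN sys.objects AS o ON o.object_id = m.object_id
WHERE m.definition LIKE '%dbo.Lists%';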
(PS: It turns out it's useful to set IsSuperUser = 1 in the users table for the account you're logged in as, because then you get the full exception details and can fix things.)
Thanks
It would make sense to delete all tables WITHOUT the "dnn_" prefix, but you said that caused problems.
If you have the time and patience and are adamant about tidying things up, I would delete one table at a time and retest the admin feature that broke last time until you find the culprit. That is a long shot, but it is how I would approach it.
What might be happening here is that you have a third-party module installed that ignores the objectQualifier, and when you deleted those tables you broke that module.

Does hbm2ddl.auto=update not honor different DB users, maybe?

We are encountering strange hibernate behavior with hbm2ddl.auto set to update.
In our test setup, we have two database users: one contains the tables for our beta application, the other is mainly used for development. I.e. the same table names under different users.
When new tables are to be created, we do so by using hbm2ddl.auto=update.
Now the strange behavior is this: the update process looks for existing tables under the wrong user, and creates, under the right user, only those it did not find.
E.g. if the following tables exist
USER_A.TABLE_1
USER_B.TABLE_2
and we update with three tables configured: TABLE_1, TABLE_2, TABLE_3 using USER_B, we end up with
USER_A.TABLE_1
USER_B.TABLE_2
USER_B.TABLE_3
TABLE_1 is not created for USER_B. After renaming USER_A.TABLE_1 to USER_A.TABLE_0 and updating again, we end up with the expected result:
USER_A.TABLE_0
USER_B.TABLE_1
USER_B.TABLE_2
USER_B.TABLE_3
Does this make any sense to anyone? Is there something like an internal Hibernate cache remembering, "Hey, I have already created this table on this server (and I do not care about the user)"?
We have spent quite some time testing to make sure this is not a configuration problem: we reproduced it on different machines and with different configurations, from Ant and from the IDE, making sure USER_A's password cannot be found anywhere in the build directory, etc. So we are 100% sure the behavior is as described - but we are completely out of ideas as to what is happening.
I'd be very happy to hear your ideas about this, since this problem has been nagging us for some time now.
Thanks a lot,
Peter
Is there something like an internal Hibernate cache remembering, "Hey, I have already created this table on this server (and I do not care about the user)"?
No. What is probably happening is that USER_A can see the tables created under the USER_B account, and vice versa. It's not clear which database you are using, but I would try to configure Hibernate to use two different schemas, in addition to using different users. You may also want to try setting the property "hibernate.default_schema", but I'm not sure that alone will solve your problem.
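To illustrate the visibility theory (assuming an Oracle-style database, which the USER_A.TABLE_1 notation suggests; the question doesn't say), the data dictionary shows whether a table name resolves across owners:

-- Run while connected as USER_B. If a row owned by USER_A comes back,
-- a metadata check that ignores the owner could conclude that TABLE_1
-- already exists and skip creating it.
SELECT owner, table_name
FROM all_tables
WHERE table_name = 'TABLE_1';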
