How to configure bi-directional sync in SymmetricDS?

Has anyone achieved a bi-directional configuration with SymmetricDS?
There are many things to configure, and I got most of them:
server.properties:
# XXXXXXX is the name of the cabana (cabin)
# SSSSSSS is the server IP
engine.name=XXXXXXXXXXX
# The class name for the JDBC Driver
db.driver=com.mysql.jdbc.Driver
# The JDBC URL used to connect to the database
db.url=jdbc:mysql://localhost/HutteBullen_XXXXXXXXXXX?tinyInt1isBit=false
# The user to login as who can create and update tables
db.user=aDDD
# The password for the user to login as
db.password=CC
registration.url=http://SSSSSSS:31415/sync/XXXXXXXXXXX
sync.url=http://SSSSSSS:31415/sync/XXXXXXXXXXX
# Do not change these for running the demo
group.id=server
external.id=000
initial.load.create.first=true
auto.registration = true
auto.reload = true
create.table.without.foreign.keys=true
The client is an embedded HSQLDB.
client.properties (generated in the code):
Properties props = new Properties();
props.setProperty("engine.name", "cabana-" + args[0]);
props.setProperty("db.driver", "org.hsqldb.jdbcDriver");
props.setProperty("db.user", args[1]);
props.setProperty("db.password", args[2]);
props.setProperty("registration.url", "http://" + args[4] + ":31415/sync/" + args[5]);
props.setProperty("group.id", "cabana");
props.setProperty("external.id", args[0]);
props.setProperty("job.routing.period.time.ms", "5000");
props.setProperty("job.push.period.time.ms", "10000");
props.setProperty("job.pull.period.time.ms", "10000");
props.setProperty("job.heartbeat.period.time.ms", "15000");
props.setProperty("intial.load.create.first", "true");
props.setProperty("create.table.without.foreign.keys", "true");
props.setProperty("create.table.without.defaults", "true");
The Triggers:
insert into sym_trigger (trigger_id,source_table_name, channel_id, last_update_time,create_time, sync_on_incoming_batch)
values('TriggerAll', '*', 'transaction', current_timestamp, current_timestamp,1);
insert into sym_trigger_router (trigger_id,router_id,initial_load_order,last_update_time,create_time)
values('TriggerAll','server_2_cabana', 100, current_timestamp,current_timestamp);
insert into sym_trigger_router (trigger_id,router_id,initial_load_order,last_update_time,create_time)
values('TriggerAll','cabana_2_server', 200, current_timestamp, current_timestamp);
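The two router_ids referenced here ('server_2_cabana' and 'cabana_2_server') also need matching sym_router rows, which aren't shown in the question; a minimal sketch, assuming plain default routers between the two groups:
insert into sym_router (router_id, source_node_group_id, target_node_group_id, router_type, create_time, last_update_time)
values ('server_2_cabana', 'server', 'cabana', 'default', current_timestamp, current_timestamp);
insert into sym_router (router_id, source_node_group_id, target_node_group_id, router_type, create_time, last_update_time)
values ('cabana_2_server', 'cabana', 'server', 'default', current_timestamp, current_timestamp);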
And then the problem area, the sym_conflict rows (on the server):
insert into sym_conflict (conflict_id, target_channel_id, source_node_group_id, target_node_group_id, detect_type, detect_expression, resolve_type,ping_back, resolve_changes_only, resolve_row_only, create_time, last_update_time)
values ('Conflict-Server-Cabana', 'transaction', 'server', 'cabana', 'USE_TIMESTAMP', 'LASTMODIFIEDUTCDATETIME', 'NEWER_WINS', 'REMAINING_ROWS', 0, 1, current_timestamp, current_timestamp);
insert into sym_conflict (conflict_id, target_channel_id, source_node_group_id, target_node_group_id, detect_type, detect_expression, resolve_type,ping_back, resolve_changes_only, resolve_row_only, create_time, last_update_time)
values ('Conflict-Cabana-Server', 'transaction', 'cabana', 'server', 'USE_TIMESTAMP', 'LASTMODIFIEDUTCDATETIME', 'NEWER_WINS', 'REMAINING_ROWS', 0, 1, current_timestamp, current_timestamp);
The big problem is this:
I have many nodes that sync in a star topology. All of them sync bi-directionally, all of them have the same schema, and all should hold exactly the same data.
Inserts and updates work correctly with the configuration above. The problem is deletes. Say I am node one and I create a row; it gets synced to the central server and then to node two. Then node two deletes this row: it gets deleted on node two and then on the server, but not on the node that created the row, and I don't know why. The data doesn't stay consistent.
Did anybody achieve full bi-directional replication with SymmetricDS?

Set the value of the sync_on_incoming_batch column to 1 on your triggers, as explained in the documentation: http://www.symmetricds.org/doc/3.8/html/user-guide.html#_bi_directional_synchronization
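For a trigger that is already registered, the flag can be flipped in place; a sketch against the trigger above (bumping last_update_time so the change is picked up when triggers are next synchronized):
update sym_trigger
set sync_on_incoming_batch = 1, last_update_time = current_timestamp
where trigger_id = 'TriggerAll';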

Related

IdentityServer4: what are the breaking changes from version 2 and/or 3 to version 4?

Where can I find a list of breaking changes from version 2 and/or 3 to version 4 of IdentityServer4?
I am trying to upgrade a project from IdentityServer4 version 2 to version 4.
I migrated from version 2.2 to 4.1.2 two years ago and I remember finding some breaking changes. Others were imposed by the framework upgrade I made at the same time (.NET Core 2.1 to 3.1). These are some of the changes related only to IdentityServer4:
Database schema. If you have data in production you'll need to preserve it during the migration. Automatic migrations will delete and recreate tables mercilessly. There are table and column renames, new columns, deleted columns, changes in the indexes...
Client CORS origins validation. There is a new validation that forces every URL configured in ClientCorsOrigins to comply with an origin format. If even one of them does not comply, an exception is thrown. You should review your production values to avoid failures.
// format:
<protocol>://<domain>:<port>
// good examples:
http://localhost:5000
http://example.com
https://anotherdomain.com
http://example.com:1234
// bad examples
http://example.com/
https://example.com/mypath
example.com:1234
Some code changes:
AuthorizationRequest.ClientId -> AuthorizationRequest.Client.ClientId.
ResourceValidationResult groups ApiResources and IdentityResources properties in a common property called Resources.
ValidatedTokenRequest renames its Scopes property to RequestedScopes.
GetAllUserConsentsAsync -> GetAllUserGrantsAsync.
In your UI some view models will need to be updated to the new scheme. If you started with the QuickStart.UI you can compare it with the new version to add the new features.
If you have an admin UI you'll have to adapt it to the new schema as well.
Migrations
I created the migrations automatically and then edited them to reorder the steps and add manual scripts that preserve the data (for example, creating a table before deleting the old one and moving the data across).
These are the scripts I had to insert manually for the Up migration.
Reorder the code so that the ApiResourceScopes table is created before the ApiResourceId column is dropped from the ApiScopes table:
Insert Into [ApiResourceScopes] ([ApiResourceId], [Scope]) Select [ApiResourceId], [Name] From [ApiScopes]
ApiScopes has a new field called Enabled which defaults to 0, so you'll want to enable all existing scopes. Run this script just after the Enabled column is created:
Update [ApiScopes] set [Enabled] = 1
ApiSecrets must be moved to the new ApiResourceSecrets table, so run this script before the ApiSecrets table is dropped:
Insert Into [ApiResourceSecrets] ([Description], [Value], [Expiration], [Type], [ApiResourceId], [Created]) Select [Description], [Value], [Expiration], [Type], [ApiResourceId], GetDate() From [ApiSecrets]
The IdentityClaims table is renamed to IdentityResourceClaims, so run this script after IdentityResourceClaims is created and before IdentityClaims is dropped:
Insert Into [IdentityResourceClaims] ([Type], [IdentityResourceId]) Select [Type], [IdentityResourceId] From [IdentityClaims]
For the Down migration you need to do exactly the reverse:
Restore ApiScopes. Move the data back from ApiResourceScopes.ApiResourceId, joining ApiResourceScopes.Scope to ApiScopes.Name:
Update [ApiScopes] Set [ApiScopes].[ApiResourceId] = apir.[ApiResourceId] from [ApiScopes] apis Inner Join [ApiResourceScopes] apir On apis.[Name] = apir.[Scope]
Restore ApiSecrets. Move the data after the ApiSecrets table is created and before ApiResourceSecrets is dropped:
Insert Into [ApiSecrets] ([Description], [Value], [Expiration], [Type], [ApiResourceId]) Select [Description], [Value], [Expiration], [Type], [ApiResourceId] From [ApiResourceSecrets]
Restore IdentityClaims. Move the data after IdentityClaims is created and before IdentityResourceClaims is dropped:
Insert Into [IdentityClaims] ([Type], [IdentityResourceId]) Select [Type], [IdentityResourceId] From [IdentityResourceClaims]

Oracle (v18/19) Trigger on Materialized View does not know about old values

In our tool we use triggers on materialized views in order to create log entries (and do some other things) when a transaction is committed.
The code works fine in Oracle 12. In Oracle 19 the old values in that trigger (":old") seem to be lost.
Investigations:
This seems to happen only with the combination of materialized views and triggers. If we put the same trigger on a table, the logs are generated correctly (but then we do not get the transaction awareness that we require).
I have created an MWE and added comments to the DBMS_OUTPUT lines describing what we see in Oracle 12 versus Oracle 18/19:
/*Create Test-Table*/
CREATE TABLE MAT_VIEW_TEST (
PK number(10,0) PRIMARY KEY ,
NAME NVARCHAR2(50)
);
/*insert some values*/
insert into MAT_VIEW_TEST values (1, 'Herbert');
insert into MAT_VIEW_TEST values (2, 'Hubert');
commit;
/*Create materialized view (and its log) in order to set a trigger on it*/
CREATE MATERIALIZED VIEW LOG ON MAT_VIEW_TEST WITH PRIMARY KEY, ROWID including new values;
CREATE MATERIALIZED VIEW MV_MAT_VIEW_TEST
refresh fast on commit
AS select * from MAT_VIEW_TEST;
/*Create trigger to log old and new value*/
CREATE OR REPLACE TRIGGER MAT_VIEW_TRIGGER
BEFORE INSERT OR UPDATE
ON MV_MAT_VIEW_TEST
FOR EACH ROW
DECLARE
old_pk number(10,0);
new_pk number(10,0);
old_name NVARCHAR2(50);
new_name NVARCHAR2(50);
BEGIN
old_pk := :old.pk;
old_name := :old.name;
new_pk := :new.pk;
new_name := :new.name;
DBMS_OUTPUT.PUT_LINE('TEST BEGIN');
DBMS_OUTPUT.PUT_LINE('old p ' || old_pk); /*old is set in oracle 12, but not in oracle18/19*/
DBMS_OUTPUT.PUT_LINE('old n ' || old_name); /*old is set in oracle 12, but not in oracle18/19*/
DBMS_OUTPUT.PUT_LINE('new p ' || new_pk); /*new is set correctly*/
DBMS_OUTPUT.PUT_LINE('new n ' || new_name); /*new is set correctly*/
DBMS_OUTPUT.PUT_LINE('TEST END');
END;
/
/*test the log*/
update MAT_VIEW_TEST set name = 'Test' where pk = 1;
commit;
Any ideas what was changed in Oracle or what we could do to get the old values in our trigger?
I don't have a 12c to rerun your tests, but I did on a 21c, and with the trigger you show, the old values are never shown, neither on insert (normal) nor on update (which is what you're complaining about). When I changed the trigger to be 'on insert or update or delete' and reran an update, I could see the old values. So the refresh process is converting your UPDATE into a DELETE/INSERT pair, hence the old values appear while it is deleting the old row.
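Based on that observation, a sketch of the adjusted trigger for the MWE above (the old values then surface in the DELETING branch, since the fast refresh replays the UPDATE as a DELETE plus an INSERT):
CREATE OR REPLACE TRIGGER MAT_VIEW_TRIGGER
BEFORE INSERT OR UPDATE OR DELETE
ON MV_MAT_VIEW_TEST
FOR EACH ROW
BEGIN
  IF DELETING THEN
    /* the old row arrives here during the fast refresh */
    DBMS_OUTPUT.PUT_LINE('old p ' || :old.pk);
    DBMS_OUTPUT.PUT_LINE('old n ' || :old.name);
  ELSE
    /* :new is populated for the replayed insert */
    DBMS_OUTPUT.PUT_LINE('new p ' || :new.pk);
    DBMS_OUTPUT.PUT_LINE('new n ' || :new.name);
  END IF;
END;
/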

Changing table structure on the fly using SymmetricDS

My setup consists of a master and a slave node connected to a MySQL and an Oracle DB, respectively. The slave node already successfully pushes rows added to a table to the master node. However, when I add a column to the source table, nothing changes at the target table. So far I figured out that
INSERT INTO `symmetricds`.`sym_table_reload_request` (`target_node_id`, `source_node_id`, `trigger_id`, `router_id`, `create_time`, `create_table`, `delete_first`, `processed`, `last_update_time`)
VALUES ('master', 'client', 'ALL', 'ALL', CURRENT_TIMESTAMP(), '1', '0', '0', CURRENT_TIMESTAMP());
should cause an update of the target schema. However, this only works when I restart the SymmetricDS slave node (which sends the data). That is, adding the column, restarting, and then performing the insert works, and the server's logs confirm that the XML containing the table structure includes the new column. Yet when I skip the restart, the XML shown in the server's logs still misses the new column. Is there a way to make this work without a restart?
With the help of chenson42 I was able to make this work. Let's say you have the following trigger for table "my_table":
insert into sym_trigger
(trigger_id,source_catalog_name, source_table_name,channel_id,last_update_time,create_time)
values('my_trigger','my_catalog','my_table','default',current_timestamp,current_timestamp);
Now you add a column:
ALTER TABLE my_catalog.my_table ADD hacky_works varchar(40);
Then, in order to synchronize the changed table structure with the master node, run the following lines:
UPDATE sym_trigger SET last_update_time=CURRENT_TIMESTAMP() WHERE trigger_id='my_trigger';
INSERT INTO sym_table_reload_request (`target_node_id`, `source_node_id`, `trigger_id`, `router_id`, `create_time`, `create_table`, `delete_first`, `processed`, `last_update_time`)
VALUES ('master', 'client', 'ALL', 'ALL', CURRENT_TIMESTAMP(), '1', '0', '0', CURRENT_TIMESTAMP());
Note that in this example 'master' and 'client' are the configured names of the target and source node, respectively.

Spring Boot: What is the right way to seed the database?

I have a Spring Boot application and would like to seed the database the first time the application runs, but not on every run, and then only if the data does not already exist.
My application has a data.sql file that inserts the default users:
-- insert the administrator
INSERT INTO users(id, username, password_hash, email, first_name, last_name) VALUES
(1, 'admin', 'comixed', 'email1#domain.com', 'ComixEd', 'Administrator'),
(2, 'user', 'comixeduser', 'email2#domain.com', 'ComixEd', 'User')
;
-- insert the supported roles
INSERT INTO roles(id, name) VALUES
(1, 'Administrator'),
(2, 'User')
;
-- set the administrator roles
INSERT INTO users_roles(user_id, role_id) VALUES
(1, 1),
(1, 2),
(2, 2)
;
But Spring obviously tries to run this file every time I start the app, and when it does, an exception is raised since the users and roles are already in the database.
What's a better way to do this? And, optionally, what's a way to add new seed data if, in the future, new features require new roles, etc.?
Flyway is what you are looking for: https://flywaydb.org/
It is a database migration tool that can be used to create and modify databases in a project. It creates its own table in your schema and records each script's name and checksum there. During application boot it scans the scripts and checks whether they are already in the table and whether the checksums still match. This makes sure the files don't change and that everything has been migrated.
You have to configure it in the application.properties file.
Add this line and it should work as you wish:
spring.jpa.hibernate.ddl-auto = update
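If you go the Flyway route, the seed data above maps naturally onto a first versioned migration. A sketch, assuming Spring Boot's default migration location src/main/resources/db/migration (the file name V1__seed_users_and_roles.sql is just an example; only the V<version>__<description>.sql pattern matters):
-- src/main/resources/db/migration/V1__seed_users_and_roles.sql
-- runs exactly once; Flyway records it in its schema history table
INSERT INTO users(id, username, password_hash, email, first_name, last_name) VALUES
(1, 'admin', 'comixed', 'email1#domain.com', 'ComixEd', 'Administrator'),
(2, 'user', 'comixeduser', 'email2#domain.com', 'ComixEd', 'User');
INSERT INTO roles(id, name) VALUES
(1, 'Administrator'),
(2, 'User');
INSERT INTO users_roles(user_id, role_id) VALUES
(1, 1), (1, 2), (2, 2);
Later features then add their roles in a new file (V2__..., and so on); Flyway applies only the versions it has not recorded yet, which also covers the second part of the question.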

Fat-Free Framework with SQL Server database using mapper copyFrom method with partial insert

I am attempting to insert a record using the copyFrom('POST') and save() methods of Fat-Free Framework v3.5. The data from POST does not contain an id field, which for this table is set as an autoincrement identity. The SQL from the logs is:
SET IDENTITY_INSERT [xrefs] ON;
INSERT INTO [xrefs] ([status], [supply_id], [description], [unit], [unitcost], [cap], [rev], [buq])
VALUES ('test', 'Htest', 'test', 'test', '1', '1', 1, 1)
As you can see, Fat-Free is adding SET IDENTITY_INSERT despite the fact that there is no id column included in the insert. Is there a way to tell the mapper not to set this flag? Or is there another workaround? I could get the current max id and then insert +1, but that seems clunky.
I should add that this SQL fails because the id column is not included in the columns list.
$this->db->exec(
(preg_match('/mssql|dblib|sqlsrv/',$this->engine) &&
array_intersect(array_keys($pkeys),$ckeys)?
'SET IDENTITY_INSERT '.$this->table.' ON;':'').
'INSERT INTO '.$this->table.' ('.$fields.') '.
'VALUES ('.$values.')',$args
);
This is the code that sets IDENTITY_INSERT in mapper.php's insert() function.
$this->logger->write( 'xrefs schema:'.
json_encode( $this->tongpodb->schema( 'xrefs' ) ) );
Calling schema() on the db object gives back this array:
{"id":{"type":"int","pdo_type":1,"default":null,"nullable":false,"pkey":true},"changed_date":{"type":"datetime","pdo_type":2,"default":null,"nullable":true,"pkey":false},"status":{"type":"varchar","pdo_type":2,"default":null,"nullable":false,"pkey":false},"supply_id":{"type":"varchar","pdo_type":2,"default":null,"nullable":false,"pkey":true},"description":{"type":"varchar","pdo_type":2,"default":null,"nullable":true,"pkey":false},"unit":{"type":"varchar","pdo_type":2,"default":null,"nullable":false,"pkey":false},"hcpcs":{"type":"char","pdo_type":2,"default":null,"nullable":true,"pkey":false},"unitcost":{"type":"decimal","pdo_type":2,"default":null,"nullable":false,"pkey":false},"cap":{"type":"decimal","pdo_type":2,"default":null,"nullable":false,"pkey":false},"rev":{"type":"smallint","pdo_type":1,"default":null,"nullable":false,"pkey":false},"buq":{"type":"smallint","pdo_type":1,"default":null,"nullable":true,"pkey":false},"create_ts":{"type":"datetime","pdo_type":2,"default":null,"nullable":true,"pkey":false},"log_ts":{"type":"int","pdo_type":1,"default":null,"nullable":true,"pkey":false},"filename":{"type":"varchar","pdo_type":2,"default":null,"nullable":true,"pkey":false},"line_no":{"type":"smallint","pdo_type":1,"default":null,"nullable":true,"pkey":false},"file_ts":{"type":"datetime","pdo_type":2,"default":null,"nullable":true,"pkey":false}}
As you can see, id has a "pkey":true entry, so one could look at the fields from POST, compare them against this, and determine whether IDENTITY_INSERT needs to be set. Perhaps I will implement this. I worry this is above my pay grade.
Updating to the latest version of Fat-Free fixed this issue.
