We have the following setup:

Multiple clients with MySQL 8.x, each with:
- one schema for SymmetricDS
- multiple schemas containing the (source) tables to be synced

One master with Oracle 12.2:
- one service which contains the SymmetricDS tables AND the (target) tables to be synced
So it looks like:
Client: Schema/Service: qwertz; Table: table_a
Master: Schema/Service: dabc_svc.tst.tns; Table: table_a
Our router looks like this:
INSERT INTO SYM_ROUTER
    (router_id, target_catalog_name, source_node_group_id,
     target_node_group_id, create_time, last_update_time)
VALUES
    ('client2master', 'dabc_svc.tst.tns', 'abc_client', 'abc_central',
     current_timestamp, current_timestamp);
The trigger:
INSERT INTO SYM_TRIGGER
    (trigger_id, source_catalog_name, source_table_name,
     channel_id, last_update_time, create_time)
VALUES
    ('trigger_a', 'qwertz', '*', 'default', current_timestamp, current_timestamp);
If I now trigger an initial load with:
insert into SYM_TABLE_RELOAD_REQUEST (target_node_id, source_node_id, trigger_id, router_id, create_time, create_table, last_update_time)
values ('master', 'client_a', 'ALL', 'ALL', current_timestamp, 1, current_timestamp);
SymmetricDS tries to create the tables on Oracle with a statement like this:
CREATE TABLE "dabc_svc.tst.tns"."SYM_USER"."TABLE_A" (...);
Which fails with the error “missing or invalid options” on Oracle.
It looks to me like the CREATE TABLE statement is wrong, because manually executing CREATE TABLE "TABLE_A" (...); works just fine.
Am I overlooking something?
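One thing worth checking (this is an assumption on my part, not something the question confirms): Oracle has no concept of catalogs, so a value placed in target_catalog_name ends up rendered as a quoted catalog prefix in the generated DDL. Routing to a schema instead would look roughly like this, where SYM_USER is an assumed target schema name:

```sql
-- Hedged sketch: use target_schema_name rather than target_catalog_name,
-- since Oracle does not support catalogs. SYM_USER is an assumed schema name.
INSERT INTO SYM_ROUTER
    (router_id, target_schema_name, source_node_group_id,
     target_node_group_id, create_time, last_update_time)
VALUES
    ('client2master', 'SYM_USER', 'abc_client', 'abc_central',
     current_timestamp, current_timestamp);
```

With the catalog left NULL, the generated statement should reduce to CREATE TABLE "SYM_USER"."TABLE_A" (...), which Oracle accepts.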
I need to insert data from a table on one server directly into a table on another server, using an AFTER INSERT trigger on the server 1 table. I have used the following code to do the work. The tables on both servers have identical columns, table names, and schemas.
CREATE TRIGGER trigger_name ON [server1].db.schema.table1
FOR INSERT
AS
INSERT INTO [server_2].db2.[schema].[table1](
PLANTID
,PLANTNAME
,PLANTLOCATION
,COMPANYNAME
,DISPLAY_STATUS
,VERSION
,DEPARTMENT
,PURPOSE
,PERFORMEDBY
)
SELECT
PLANTID
,PLANTNAME
,PLANTLOCATION
,COMPANYNAME
,DISPLAY_STATUS
,VERSION
,DEPARTMENT
,PURPOSE
,PERFORMEDBY
FROM INSERTED
order by plmstrauid
However, I get the following error. Can anyone please help me with this?
Error:
The object name '[server1].db.schema.table1' contains more than the
maximum number of prefixes. The maximum is 2.
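The error comes from the ON clause: a trigger can only be created on a table in the local database, so SQL Server allows at most a two-part name (schema.table) there; four-part names are valid only for the tables the trigger reads from or writes to. A sketch of the fix, assuming the trigger is created on server1 while connected to database db:

```sql
-- Run on server1, connected to database db.
-- The trigger is created on the LOCAL table (two-part name at most);
-- the linked-server four-part name appears only as the INSERT target.
CREATE TRIGGER trigger_name ON [schema].table1
FOR INSERT
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO [server_2].db2.[schema].[table1]
        (PLANTID, PLANTNAME, PLANTLOCATION, COMPANYNAME, DISPLAY_STATUS,
         VERSION, DEPARTMENT, PURPOSE, PERFORMEDBY)
    SELECT PLANTID, PLANTNAME, PLANTLOCATION, COMPANYNAME, DISPLAY_STATUS,
           VERSION, DEPARTMENT, PURPOSE, PERFORMEDBY
    FROM inserted;  -- ORDER BY has no effect in INSERT ... SELECT, so it is dropped
END
```

This assumes [server_2] is already configured as a linked server reachable from server1.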
I am using MSSQL and evaluating Liquibase to use for migrations. So, I wanted to generate my first changelog using generateChangeLog. My database has two schemas: the default schema and another called 'blah'. I have a table in each schema with the same table name: test1. I ran:
liquibase --dataOutputDirectory=./data/ --schemas=blah,dbo --changeLogFile=changelog.mssql.sql --includeSchema=true generateChangeLog
It completed and I looked at the generated SQL:
-- liquibase formatted sql
-- changeset bmccord2:1604068236633-1
CREATE TABLE blah.test1 (id int NOT NULL, name varchar(255), CONSTRAINT PK__test1__3213E83F4F883C7C PRIMARY KEY (id));
-- changeset bmccord2:1604068236633-2
INSERT INTO blah.test1 (id, name) VALUES (1, 'Brian'),(2, 'Kim');;
-- changeset bmccord2:1604068236633-3
CREATE TABLE dbo.test1 (id int NOT NULL, name varchar(255), CONSTRAINT PK__test1__3213E83F6FD50901 PRIMARY KEY (id));
-- changeset bmccord2:1604068236633-4
INSERT INTO dbo.test1 (id, name) VALUES (1, 'Brian'),(2, 'Kim');;
At first, it looks ok, but then I noticed that the data being inserted into the blah.test1 table is not the data that is actually in that table. The data in that table is:
"id","name"
"1","Miranda"
"2","Kyle"
So, it is using the second table's data for both tables. It is also only generating one .csv file in the data folder.
Obviously, this isn't my real database. I simplified the problem down to the smallest thing that causes the problem.
Is there any way to make this work properly?
Checking their forum they state that:
The way that Liquibase is designed, it only works with a single schema at a time.
If it fits your use case you could try to define two separate migrations and apply them one by one, e.g:
liquibase --dataOutputDirectory=./data/ --schemas=blah --changeLogFile=changelog.blah.mssql.sql --includeSchema=true generateChangeLog
liquibase --dataOutputDirectory=./data/ --schemas=dbo --changeLogFile=changelog.dbo.mssql.sql --includeSchema=true generateChangeLog
Or, if you would prefer to apply the exact same changelog to both databases, you can call it twice with the same changeLogFile; in that case only --schemas needs to be adjusted (--schemas=dbo and --schemas=blah).
I am using Liquibase for managing SQL Server scripts (create, update, delete, alters etc.).
My requirement was to create a backup table (say old_table_a) before I could drop two columns (column_1, column_2) from the original table (table_a).
The new backup table does not need a primary key, so it will just have two columns as shown below
old_table_a
column_1 (from original table_a)
column_2 (from original table_a)
If I just write an INSERT query as shown below, without a preceding CREATE TABLE old_table_a, will the backup table be created automatically?
INSERT INTO old_table_a (column_1, column_2)
SELECT column_1, column_2
FROM table_a
I had read this somewhere on some blog, but cannot find it anymore. Please provide some information on whether this is possible.
Otherwise I know that the usual way to do this is to create the new backup table and then populate the new table with values from the original.
This can be done with SELECT * INTO, which creates the target table as part of the statement:

SELECT * INTO [NEWTABLE] FROM [OLDTABLE]

If the target table already exists, use INSERT ... SELECT instead:

INSERT INTO tableName1 (columnName)
SELECT columnName FROM tableName2
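Applied to the question's tables, a minimal sketch: SELECT ... INTO creates old_table_a as part of the statement, copying the column definitions (but not keys, indexes, or constraints), so no separate CREATE TABLE is needed:

```sql
-- Creates old_table_a on the fly with just the two columns
SELECT column_1, column_2
INTO old_table_a
FROM table_a;

-- The original columns can then be dropped
ALTER TABLE table_a DROP COLUMN column_1, column_2;
```

The lack of constraints on the new table is fine here, since the question states the backup table does not need a primary key.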
I'm migrating our system from Oracle to SQL Server. In Oracle we have BEFORE INSERT triggers that are responsible for setting the primary key if it is not set. Below you will find the PL/SQL code.
create or replace trigger trigg1
before insert on table1
for each row
when (new.ID_T1 is null) -- if primary key is null
begin
select OUR_SEQ.nextval into :new.ID_T1 from dual;
end trigg1;
Now I have to do something similar in T-SQL. I found a solution, but unfortunately I have to list all the columns of the table the trigger is created on. This is something I want to avoid (the model for the system is still very dynamic).
Is it possible to implement such trigger without listing all the columns in trigger?
Marcin
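One way to avoid listing columns entirely (not a trigger at all, but a common T-SQL alternative) is to attach a sequence as the column default, which fills the key whenever an INSERT omits it. The names below (OUR_SEQ, table1, ID_T1) are carried over from the Oracle example:

```sql
-- Sequence playing the role of Oracle's OUR_SEQ
CREATE SEQUENCE OUR_SEQ START WITH 1 INCREMENT BY 1;

-- The default fires for any INSERT that does not mention ID_T1,
-- regardless of which other columns the table has
ALTER TABLE table1
    ADD CONSTRAINT DF_table1_ID_T1
    DEFAULT (NEXT VALUE FOR OUR_SEQ) FOR ID_T1;
```

One behavioral difference from the Oracle trigger: the default applies only when ID_T1 is omitted from the INSERT, not when an explicit NULL is supplied.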
I have two identical databases on the same server. During a deployment process, I have data in tables in database A that needs to be copied over to the tables in database B. What is the easiest way to accomplish this task programmatically?
EDIT:
Tables do have identity columns.
There are tables with foreign key constraints, so insert order is important.
All of the rows will need to be copied. As far as I'm aware, this will always be the case.
Assuming that the tables don't have identity columns and belong to the default (dbo) schema, try the T-SQL insert query below:
Insert Into DatabaseB.dbo.DestinationTable
Select * From DatabaseA.dbo.SourceTable
If you have an identity column, then execute the statements below:
SET IDENTITY_INSERT DatabaseB.dbo.DestinationTable ON
GO
Insert Into DatabaseB.dbo.DestinationTable
Select * From DatabaseA.dbo.SourceTable
GO
SET IDENTITY_INSERT DatabaseB.dbo.DestinationTable OFF
GO
If the databases are on different servers:
exec sp_addlinkedserver ServerA
Insert Into DatabaseB.dbo.DestinationTable
Select * From ServerA.DatabaseA.dbo.SourceTable
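Since the question mentions foreign key constraints, one hedged way to sidestep insert-order problems (using sp_MSforeachtable, which is undocumented but ships with SQL Server) is to suspend constraint checking in the target database for the duration of the copy:

```sql
-- Run in DatabaseB: suspend FK checks, copy in any order, then re-enable
EXEC sp_MSforeachtable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL';

-- ... one INSERT ... SELECT per table, in any order ...

EXEC sp_MSforeachtable 'ALTER TABLE ? WITH CHECK CHECK CONSTRAINT ALL';
```

Re-enabling with WITH CHECK makes SQL Server validate the copied rows against the constraints, so any broken references surface at the end rather than going unnoticed.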