Initial load creates my tables in the sym_x tables schema - symmetricds

After setting up master-master replication on top of PostgreSQL, I tried to perform an initial load using:
./symadmin -engine octopusdb reload-node 2
My setup is:
1. I created all sym_x tables in a separate schema (replication).
2. I created all my application tables in other schemas of their own.
3. I set sym_trigger.source_schema_name to the appropriate schema name for each application table.
Still, the initial load seems to create the application tables under the 'replication' schema instead of in their own schemas.
Is there some parameter I am missing for the properties file, or the initial load command?

So apparently, for a multi-schema configuration, you need to create a separate record for each schema in sym_router (with a separate router_id and the appropriate target_schema_name), and for each table put a record in sym_trigger (with the appropriate source_schema_name) and in sym_trigger_router (with the matching trigger_id and router_id).
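For example, a minimal sketch of those rows, assuming the sym_x tables live in the replication schema, node groups 'primary' and 'secondary', an application table app_schema.my_table, and the default channel (all of these names are hypothetical; the column lists follow the SymmetricDS data model):

-- one router per target schema
insert into replication.sym_router
    (router_id, source_node_group_id, target_node_group_id,
     target_schema_name, router_type, create_time, last_update_time)
values
    ('router_app_schema', 'primary', 'secondary',
     'app_schema', 'default', current_timestamp, current_timestamp);

-- one trigger per source table, carrying its source schema
insert into replication.sym_trigger
    (trigger_id, source_schema_name, source_table_name, channel_id,
     create_time, last_update_time)
values
    ('trigger_my_table', 'app_schema', 'my_table', 'default',
     current_timestamp, current_timestamp);

-- tie each trigger to its schema's router
insert into replication.sym_trigger_router
    (trigger_id, router_id, initial_load_order, create_time, last_update_time)
values
    ('trigger_my_table', 'router_app_schema', 1,
     current_timestamp, current_timestamp);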
Also, after a failed attempt I needed to remove everything from the tmp directory under the SymmetricDS installation, so that the updates to the sym_x tables would be recognized.

Related

Create a default column filter at schema/database level in Oracle and SQL Server?

We have enabled versioning of database records in order to maintain multiple versions of product configurations for our customers. To achieve this, we have created a 'Version' column in all our tables, with the default entry 'core_version'. Customers can create a new copy of the same records by changing one or two column values and labelling the copy, say, 'customer_version1'. So the PK of all our tables is (ID, Version).
Something like this:
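For illustration, with hypothetical data, the employee table might look like this:

ID  Name   Version
----------------------------
1   John   core_version
2   James  core_version
3   Jane   core_version
1   John   customer_version1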
Now, the Version column will act as an identifier, both when performing CRUD operations via the application and when executing SQL queries directly in the DB, to determine which version of the records the operation should apply to.
Is there any way to achieve this in Oracle and SQL Server: a default filter for the 'Version' column at the schema level that gets added as a mandatory WHERE clause to any query that is executed?
Say I want only 'core_version' records. Then SELECT * FROM employee; should return only the 3 records belonging to core_version, without the Version column filter appearing explicitly in the query.
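One way to sketch this in SQL Server (not something covered in the thread) is Row-Level Security driven by a session-scoped version value; in Oracle, Virtual Private Database (DBMS_RLS) plays the analogous role. The employee table and Version column come from the question; every other name here is assumed:

-- predicate function: a row is visible only when its Version matches
-- the value stored in the session context (assumes SQL Server 2016+)
CREATE FUNCTION dbo.fn_version_filter (@Version varchar(50))
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN
    SELECT 1 AS allowed
    WHERE @Version = CAST(SESSION_CONTEXT(N'version') AS varchar(50));
GO

-- attach the predicate to the employee table
CREATE SECURITY POLICY dbo.version_policy
    ADD FILTER PREDICATE dbo.fn_version_filter(Version) ON dbo.employee
    WITH (STATE = ON);
GO

-- each connection picks the version it wants to work against
EXEC sp_set_session_context @key = N'version', @value = 'core_version';
SELECT * FROM employee;  -- returns only the core_version rows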

Rename table or column in SQL server without breaking existing apps

I have an existing database in MS SQL Server and want to rename some tables and columns because the names currently used aren't accurate to what they represent.
I have multiple web and desktop applications that access the database, using Entity Framework (code first). There are too many to update in one go, and I can't afford for all the apps to stop working.
I was thinking it would be nice if SQL Server allowed a 'permanent' alias for tables and columns, but I don't think this feature exists.
Or I was wondering if there was a way in EF to have two names for the same property?
For the tables, you could rename them and then create a synonym with the old name pointing to the new name.
For the columns, changing their name will break your application. You could create computed columns with the old name as well, which simply display the value of the newly named column (but this seems a little silly).
Note, however, that a computed column cannot reference another computed column, so you would have to duplicate the column in its entirety. That could lead to problems down the line if you don't update the definition of both columns.
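A minimal sketch of that approach, using hypothetical names (the same ones the next answer uses):

EXEC sp_rename 'dbo.MyTable', 'TheTable';                       -- rename the table
CREATE SYNONYM dbo.MyTable FOR dbo.TheTable;                    -- the old table name keeps resolving
EXEC sp_rename 'dbo.TheTable.MyColumn', 'TheColumn', 'COLUMN';  -- rename the column
ALTER TABLE dbo.TheTable ADD MyColumn AS (TheColumn);           -- computed column with the old name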
A view containing a simple SELECT statement acts exactly like a table. You really need to fix this properly across the database and applications; however, if you want to go the view route, I suggest you do this:
Say you have a table called MyTable that you want to rename to TheTable, with a column called MyColumn that you want to rename to TheColumn.
1. Create a schema, say, new.
2. Move the original table into it with ALTER SCHEMA new TRANSFER dbo.MyTable;
3. Rename the table and column. Now you have a table called new.TheTable with a column called TheColumn. Everything is broken.
4. Lastly, create a view that looks just like the old table:
CREATE VIEW dbo.MyTable
AS
SELECT Column1, Column2, Column3, TheColumn As MyColumn
FROM new.TheTable;
Now everything works again. All your fixed 'new' tables are in the new schema, but now everything is extra complicated.
This is basically an illustration that you should just fix it properly across the whole app, one piece at a time, with careful change management. Definitely don't complicate it with triggers.
Since you are using code first with multiple web and desktop applications, you are likely managing database changes from one place through migrations and ignoring changes everywhere else.
You can create an empty migration and add code that will change the table name and column names to what you want. The migration should then create a view that will select from that table with the original table and column names. When you apply this migration, everything should still be working as normal from all applications. There are no model changes since you didn’t touch the model classes. Inserts, updates, and deletes will still happen through the view. There is no need for potentially buggy triggers or synonyms on the table in this option.
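A sketch of the SQL such an empty migration's Up() method might execute, reusing the hypothetical names from the previous answer:

EXEC sp_rename 'dbo.MyTable', 'TheTable';
EXEC sp_rename 'dbo.TheTable.MyColumn', 'TheColumn', 'COLUMN';
GO
-- a single-table view like this is updatable, so inserts, updates,
-- and deletes through the old name keep working
CREATE VIEW dbo.MyTable AS
    SELECT Column1, Column2, Column3, TheColumn AS MyColumn
    FROM dbo.TheTable;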
Now that you have the table changed, you can focus on the application code. If it helps, you can add annotations over the column and table names and start refactoring the code. You need to make sure you don’t make model changes that will break the other apps. If apps ignore model changes, you can get away with adding annotations over the columns and classes on all the apps before refactoring. You can get rid of the view sooner this way.

How to use the pre-copy script from the copy activity to remove records in the sink based on the change tracking table from the source?

I am trying to use change tracking to copy data incrementally from a SQL Server to an Azure SQL Database. I followed the tutorial on Microsoft Azure documentation but I ran into some problems when implementing this for a large number of tables.
In the source part of the copy activity I can use a query that gives me a change table of all the records that are updated, inserted or deleted since the last change tracking version. This table will look something like
PersonID  Age   Name   SYS_CHANGE_OPERATION
--------------------------------------------
1         12    John   U
2         15    James  U
3         NULL  NULL   D
4         25    Jane   I
with PersonID being the primary key for this table.
The problem is that the copy activity can only append the data to the Azure SQL Database, so when a record gets updated it gives an error because of a duplicate primary key. I can deal with this problem by letting the copy activity use a stored procedure that merges the data into the table on the Azure SQL Database, but the problem is that I have a large number of tables.
I would like the pre-copy script to delete the deleted and updated records on the Azure SQL Database, but I can't figure out how to do this. Do I need to create separate stored procedures and corresponding table types for each table that I want to copy or is there a way for the pre-copy script to delete records based on the change tracking table?
You have to use a LookUp activity before the Copy Activity. With that LookUp activity you can query the database so that you get the deleted and updated PersonIDs, preferably all in one field, separated by commas (so it's easier to use in the pre-copy script). More information here: https://learn.microsoft.com/en-us/azure/data-factory/control-flow-lookup-activity
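For instance, the LookUp query might look like this (a sketch, assuming change tracking is enabled on dbo.Person, the last-synced version is persisted somewhere, and SQL Server 2017+ for STRING_AGG):

DECLARE @last_sync_version bigint = 0;  -- in practice, read your stored last-sync version

SELECT STRING_AGG(CAST(CT.PersonID AS varchar(20)), ',') AS PersonIDs
FROM CHANGETABLE(CHANGES dbo.Person, @last_sync_version) AS CT
WHERE CT.SYS_CHANGE_OPERATION IN ('U', 'D');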
Then you can do the following in your pre-copy script:
delete from TableName where PersonID in (@{activity('MyLookUp').output.firstRow.PersonIDs})
This way you will be deleting all the deleted or updated rows before inserting the new ones.
Hope this helped!
In the meantime, Azure Data Factory has introduced the metadata-driven copy task. After going through the dialogue-driven setup, a metadata table is created, which has one row for each dataset to be synchronized. I solved this UPSERT problem by adding a stored procedure as well as a table type for each dataset to be synchronized. Then I added the relevant information to the metadata table for each row, like this:
{
    "preCopyScript": null,
    "tableOption": "autoCreate",
    "storedProcedure": "schemaname.UPSERT_SHOP_SP",
    "tableType": "schemaname.TABLE_TYPE_SHOP",
    "tableTypeParameterName": "shops"
}
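The table type and stored procedure behind one such row might look like this (a sketch; only the object names come from the metadata above, and the shop columns are assumed):

-- table type matching the columns of the source dataset
CREATE TYPE schemaname.TABLE_TYPE_SHOP AS TABLE
(
    ShopID int NOT NULL PRIMARY KEY,  -- assumed key column
    Name   nvarchar(200) NULL         -- assumed payload column
);
GO

-- upsert procedure the copy activity sink calls for each batch
CREATE PROCEDURE schemaname.UPSERT_SHOP_SP
    @shops schemaname.TABLE_TYPE_SHOP READONLY
AS
BEGIN
    MERGE schemaname.SHOP AS target
    USING @shops AS source
        ON target.ShopID = source.ShopID
    WHEN MATCHED THEN
        UPDATE SET target.Name = source.Name
    WHEN NOT MATCHED THEN
        INSERT (ShopID, Name) VALUES (source.ShopID, source.Name);
END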
After that you need to adapt the sink properties of the copy task like this (stored procedure, table type, table type parameter name):
@json(item().CopySinkSettings).storedProcedure
@json(item().CopySinkSettings).tableType
@json(item().CopySinkSettings).tableTypeParameterName
If the destination table does not exist, you need to run the whole task once before adding the above variables, because auto-create of tables only works as long as no stored procedure is given in the sink properties.

Storing/creating local table from linked SQL table

I have a linked table in my Access database (dbo_Billing_denied (DSN=WTSTSQL05_BB;DATABASE=DEPTFINANCE), etc.) and I want to create a table that will store the data from this linked table in a local table, so I can use it to run other queries. Currently I can't do this because it tells me that it cannot make a connection (ODBC connection to 'WTSTSQL05_BB' failed).
Do I have to create a table first and assign all the fields before I can do this (create a table with fields that are the same as what's in the linked table, and then create an append query to do it...)?
It sounds like you might have two problems; I will address the second one. You will need to re-establish the connection to the linked table before any of this will work.
You can use a "make table query" in Access to make a local copy of the linked table. You can use the GUI for this, or you can structure the SQL something like this:
SELECT <list of various fields, or * for all fields>
INTO <name of new local table>
FROM <name of linked table(s) on the server>
WHERE <any other conditions you want to put on which records are included>;
I mentioned that there might be more than one table. You can also do this with joined tables or unions etc. The "where" clause is optional. Removing it will copy the entire data set.
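For the linked table in the question, the make-table query might be as simple as this (the local table name is made up):

SELECT *
INTO Billing_denied_local
FROM dbo_Billing_denied;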
You will get a warning when you try to execute this query in Access. It will tell you that you are about to write (or overwrite) a table. If you are trying to write a cleaner application with fewer nuisance messages for the end user, call this query from a macro. The macro would need to turn the warnings off, execute the query, then turn the warnings back on.
Microsoft Access does not require you to create this table before you write it; if the table does not exist, Access will create it for you, based on the field definitions in the source data. If a table of the same name does exist, Access will drop that table from your database and then create a new table of that name.
This also implies that the local table you are generating will need a unique name. If your query tries to overwrite the linked table by using the same name, the first thing Access will do is drop the linked table. It will then look for field definitions and input data in the linked table that it just dropped.
Since the new local table will have a new name, queries developed for the linked table will not work with the new local table. One possible work-around would be to rename the linked table in your local Access database. The table name in Access does not need to equal the name in the database it's linking to. The query could then write to a table with the correct name, and previous queries should work. Still, keep in mind that these queries would no longer be working on live data.

Composite Projects - handling additional columns

From this post....
http://blogs.msdn.com/b/ssdt/archive/2012/06/26/composite-projects-and-schema-compare.aspx
...it seems that (Same) Database References are a way to share common parts of a database.
If a specific database needs additional columns on a table from a (Same) Database Reference is there any way of handling that?
I was hoping you might be able to override the definition of a table from a Database Reference simply by re-declaring the table in the referencing Database Project.
e.g. if you had an Employee table in a Common database project, a definition for the Employee table in a Client database project referencing the Common database would override the definition in the Common project. Instead, when you go to deploy the project you get the error...
SQL71508: The model already has an element that has the same name dbo.Employee.
EDIT:
Anticipating the feedback below, the resolution I've made is to not use database references for the existing client databases. Instead I've created a structure as follows....
+OurCompanyDatabases
    +Common
        Common.sqlproj
        +dbo...
    +ClientA
        +dbo...
    +ClientB
        +dbo...
    ClientA.sqlproj
    ClientB.sqlproj
So I've got multiple sqlproj files within the same folder and I include and exclude files from the projects as required.
So, for example, when ClientA's Sales table has a ClientARewardsID column added, I exclude the Sales table from the /OurCompanyDatabases/Common/dbo folder and add a new Sales table within the /OurCompanyDatabases/ClientA/dbo folder.
This way Client A and Client B retain the full use of SSDT update and deployment, whilst minimizing the duplication of SQL scripts. I'm hoping this will reduce the cost of maintenance on the sites.
Going forward I will use database references, and additional columns will be added in new tables with a 1:1 foreign key relationship to the Common table.
No, it doesn't support an inheritance-type model, and you can only really share complete objects. So in your case you would have it structured like:
proj a - TableA
references - proj shared
proj b - TableA
references - proj shared
proj shared - TableXYZ
Then you can have two different definitions of TableA but still share all of the objects that are the same.
There is another option: you could leave the table definition out of SSDT (or include one version or the other), handle any changes and the deployment yourself in post-deploy scripts, and use my filter (http://agilesqlclub.codeplex.com/) to stop SSDT deploying any changes to your table. But this sort of invalidates one of the main reasons for using SSDT (merge-type deployments for free).
ed
It's much safer and better practice to add a new table for the extra columns, and make its primary key a foreign key to the table it extends.
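For the Sales example above, that extension table might look like this sketch (the key and table names are assumed):

-- one optional extension row per Sales row; sharing the key enforces 1:1
CREATE TABLE dbo.Sales_ClientA
(
    SalesID int NOT NULL
        CONSTRAINT PK_Sales_ClientA PRIMARY KEY
        CONSTRAINT FK_Sales_ClientA_Sales REFERENCES dbo.Sales (SalesID),
    ClientARewardsID int NULL
);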
