SQL Server Replication Action if name is in use questions - sql-server

When configuring a new merge replication and setting the properties of the articles, I'm running into a problem. Under Destination Object -> Action if name is in use, I can select four different options, and I'm trying to figure out what each one does. I haven't found anything about them. They are:
Keep existing object unchanged
Drop existing object and create a new one
Delete data. If article has a row filter, delete only data that matches the filter.
Truncate all data in the existing object

The article property Action if name is in use corresponds to the @pre_creation_cmd argument of sp_addmergearticle:
Specifies what the system is to do if the table exists at the
subscriber when applying the snapshot. pre_creation_cmd is
nvarchar(10), and can be one of the following values.
none - If the table already exists at the Subscriber, no action is taken. (This is "Keep existing object unchanged".)
delete - Issues a delete based on the WHERE clause in the subset filter. (This is "Delete data. If article has a row filter, delete only data that matches the filter.")
drop (default) - Drops the table before re-creating it. Required to support Microsoft SQL Server Compact Subscribers. (This is "Drop existing object and create a new one".)
truncate - Truncates the destination table. (This is "Truncate all data in the existing object".)
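If you script the publication instead of using the wizard, the same choice is made through that parameter. A minimal sketch, with placeholder publication and article names that are not from the original question:

EXEC sp_addmergearticle
    @publication      = N'MyMergePublication',  -- placeholder name
    @article          = N'Customer',            -- placeholder name
    @source_owner     = N'dbo',
    @source_object    = N'Customer',
    @pre_creation_cmd = N'none';  -- i.e. "Keep existing object unchanged"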

How to determine a table's dependencies?

I need to migrate some data between environments. The data structures are going to be exactly the same; however, some of the data is ID-dependent on existing data and will have to be adapted, more precisely user IDs (such as ownership, last modification, etc.).
I have already established as a requirement that usernames on both environments will always refer to the same user, so what I need to do is to determine which columns reference the user table, and transform my data from one environment to the other.
I have tried checking SQL Server Management Studio's Object Dependencies on the user table and got a detailed list of which objects reference the ID column. However, there is at least one table that I know references the ID column but does not appear in the list.
(Screenshots in the original question: the constraint options on the table, and the dependants of the user table.)
Attempting to update a row of the proposal table with a non-existent ID displays the expected exception:
Msg 547, Level 16, State 0, Line 1
The UPDATE statement conflicted with the FOREIGN KEY constraint "OSFRK_OSUSR_tpt_PROPOSAL_OSUSR_A7L_USER_MASTER_CREATEDBY". The conflict occurred in database "databasename", table "dbo.OSUSR_A7L_USER_MASTER", column 'ID'.
Can there be any reason for this table to not appear as a dependent on the user's table? Is there a way to determine if there are more tables that are not being displayed?
Try this; maybe it will be helpful for you:
EXEC sp_fkeys 'TableName'
You can also follow this procedure: in Object Explorer, expand Databases, expand a database, and then expand Tables.
Right-click a table, and then click View Dependencies.
In the Object Dependencies dialog box, select either Objects that depend on [table name], or Objects on which [table name] depends.
Select an object in the Dependencies grid. The type of the object (such as "Trigger" or "Stored Procedure") appears in the Type box.
Also, this link is very helpful: https://learn.microsoft.com/en-us/sql/relational-databases/tables/view-the-dependencies-of-a-table?view=sql-server-2017
You can also use this command:
USE [databasename]
GO
EXEC sp_depends @objname = 'objectname';
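Note that sp_depends is deprecated in newer versions of SQL Server. Since the question is specifically about foreign keys referencing the user table, here is a minimal sketch that queries the catalog views directly (the table name is taken from the error message above and may need adjusting):

-- List every foreign key column that references the user table,
-- whether or not SSMS shows it in the View Dependencies dialog.
SELECT
    fk.name                                                       AS foreign_key_name,
    OBJECT_NAME(fkc.parent_object_id)                             AS referencing_table,
    COL_NAME(fkc.parent_object_id, fkc.parent_column_id)          AS referencing_column,
    COL_NAME(fkc.referenced_object_id, fkc.referenced_column_id)  AS referenced_column
FROM sys.foreign_keys AS fk
JOIN sys.foreign_key_columns AS fkc
    ON fk.object_id = fkc.constraint_object_id
WHERE fkc.referenced_object_id = OBJECT_ID(N'dbo.OSUSR_A7L_USER_MASTER');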

Concept of "version control" for database table rows (Not referring to storing scripts in GIT/SVN)

I require a data store that will maintain not only a history of changes made to data (easy to do) but also store any number of proposed changes to data, including chained proposals (ie. proposal-on-proposal).
Think of these "changes" as really long-running transactions which are saved to the database and have a lifespan of anywhere between minutes and years.
They are created (proposed) and then either rolled back (essentially deleted) or committed; when committed, they become the effective data visible to third parties.
Of course this all requires some form of conflict resolution, as proposed changes can be in contradictory states (e.g. change A proposes to delete a record but change B proposes to update it; if change A is committed first, then change B will have to revert).
I have found no off-the-shelf product that can do this. The closest was Oracle Workspace Manager but it did not provide for change-on-change or the ability to see proposed deletes. The only way I have been able to achieve this is to have a set of common columns on my versioned tables:
Root ID: Required - set once to the same value as the primary key when the first version of a record is created. This represents the primary key across all of time and is copied into each version of the record. You should consider the Root ID when naming relation columns (e.g. PARENT_ROOT_ID instead of PARENT_ID). As the Root ID is also the primary key of the initial version, foreign keys can be created against the actual primary key - the actual desired row will be determined by the version filters defined below.
Change ID: Required - every record is created, updated, deleted via a change
Copied From ID: Nullable - null indicates newly created record, not-null indicates which record ID this row was cloned/branched from when updated/deleted
Effective From Date/Time: Nullable - null indicates proposed record, not-null indicates when the record became current. Unfortunately a unique index cannot be placed on Root ID/Effective From as there can be multiple null values for any Root ID. (Unless you want to restrict yourself to a single proposed change per record)
Effective To Date/Time: Nullable - null indicates current or proposed, not-null indicates when it became historical. Not technically required but helps speed up queries finding the current data. This field could be corrupted by hand-edits but can be rebuilt from the Effective From Date/Time if this occurs.
Delete Flag: Boolean - set to true when it is proposed that the record be deleted upon becoming current. When deletes are committed, their Effective To Date/Time is set to the same value as the Effective From Date/Time, filtering them out of the current data set.
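A minimal DDL sketch of this column set (table names and types are illustrative, not from the original schema):

-- The change table; the nullable PARENT_ID is what enables change-on-change.
CREATE TABLE CHANGE (
    ID        bigint NOT NULL PRIMARY KEY,
    PARENT_ID bigint NULL REFERENCES CHANGE (ID)
);

-- An example versioned table carrying the common columns described above.
CREATE TABLE WIDGET (
    ID             bigint    NOT NULL PRIMARY KEY,
    ROOT_ID        bigint    NOT NULL,  -- identity of the record across all versions
    CHANGE_ID      bigint    NOT NULL REFERENCES CHANGE (ID),
    COPIED_FROM_ID bigint    NULL,      -- null = newly created
    EFFECTIVE_FROM datetime2 NULL,      -- null = still proposed
    EFFECTIVE_TO   datetime2 NULL,      -- null = current or proposed
    DELETE_FLAG    bit       NOT NULL DEFAULT 0
);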
The query to get the current state of data at a point in time would be:
SELECT * FROM table WHERE EFFECTIVE_FROM <= :Now AND (EFFECTIVE_TO IS NULL OR EFFECTIVE_TO > :Now)
The query to get the current state of data according to a change would be:
SELECT * FROM table
WHERE CHANGE_ID IN :ChangeIds
   OR (EFFECTIVE_FROM <= :Now
       AND (EFFECTIVE_TO IS NULL OR EFFECTIVE_TO > :Now)
       AND ROOT_ID NOT IN (SELECT ROOT_ID FROM table WHERE CHANGE_ID IN :ChangeIds))
Note that this 2nd query contains the 1st time-based query to overlay the current data with the proposed changed data.
The change ID column refers to the primary key of a change table, which also contains a parent ID column (nullable) providing the change-on-change functionality. Hence the 2nd query refers to change IDs, not a single change ID. I am filtering multiple versions in a change-on-change scenario in the client rather than in SQL, so it's not seen in those queries (the client has a linked list of change IDs in memory, and if more than one version of a row is retrieved it uses the linked list to determine which version to use).
Does anybody know of an off-the-shelf product that I could use? It is a large amount of work handling this versioning myself and introduces all manner of issues.
There does not appear to be any off-the-shelf database or database plugin that does what I need. So I ended up utilising Oracle features to implement a solution.
The final table structure is slightly different - "Delete Flag" turned into "Change Action" which is either Add, Remove or Modify.
A global temporary table was used to store the current connection change identifier/date-time settings and a stored procedure created to populate it after connecting. This is referred to as 'context'.
Views joining versioned tables to this temporary, connection-specific context table are created programmatically for every versioned table, including instead-of insert/update/delete triggers which perform the required data versioning.
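A rough sketch of that view pattern (the original solution was Oracle; this uses generic SQL with the illustrative names from the earlier sketch, and omits the INSTEAD OF triggers):

-- One row per connection in the real solution (an Oracle global temporary
-- table there); populated by the stored procedure after connecting.
CREATE TABLE CONTEXT (
    CHANGE_ID bigint    NULL,
    AS_OF     datetime2 NOT NULL
);

-- The view overlays the connection's proposed rows onto the current data,
-- mirroring the second query above. CHANGE_ACTION replaces the earlier
-- sketch's DELETE_FLAG, per the final table structure.
CREATE VIEW WIDGET_V AS
SELECT w.ROOT_ID, w.CHANGE_ACTION  -- plus the table's business columns
FROM WIDGET w
CROSS JOIN CONTEXT ctx
WHERE w.CHANGE_ID = ctx.CHANGE_ID
   OR (w.EFFECTIVE_FROM <= ctx.AS_OF
       AND (w.EFFECTIVE_TO IS NULL OR w.EFFECTIVE_TO > ctx.AS_OF)
       AND w.ROOT_ID NOT IN (SELECT w2.ROOT_ID FROM WIDGET w2
                             WHERE w2.CHANGE_ID = ctx.CHANGE_ID));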
The result is that you treat the versioned tables like normal tables (and don't use the suffix _ROOT_ID for foreign keys) for select, insert, update and delete.
Only the Change Action is returned in the views and this is the only field that distinguishes a versioned table from a normal one.
Revert (which doesn't have a SQL keyword) is achieved by a double-delete. That is, if we update a record and then want to undo that update, we issue a delete command which deletes the proposed row and the record reverts to the current version. It's the most fitting SQL keyword - the alternative is to make a specific revert stored procedure.
A virtual Change Action of None exists in the views which indicates the record is not affected by the current context.
This all works quite effectively making the concept of versioning largely transparent, the only custom action required is setting the connection after connecting to the database.

How to use the pre-copy script from the copy activity to remove records in the sink based on the change tracking table from the source?

I am trying to use change tracking to copy data incrementally from a SQL Server to an Azure SQL Database. I followed the tutorial on Microsoft Azure documentation but I ran into some problems when implementing this for a large number of tables.
In the source part of the copy activity I can use a query that gives me a change table of all the records that are updated, inserted or deleted since the last change tracking version. This table will look something like
PersonID  Age   Name   SYS_CHANGE_OPERATION
--------  ----  -----  --------------------
1         12    John   U
2         15    James  U
3         NULL  NULL   D
4         25    Jane   I
with PersonID being the primary key for this table.
The problem is that the copy activity can only append the data to the Azure SQL Database so when a record gets updated it gives an error because of a duplicate primary key. I can deal with this problem by letting the copy activity use a stored procedure that merges the data into the table on the Azure SQL Database, but the problem is that I have a large number of tables.
I would like the pre-copy script to delete the deleted and updated records on the Azure SQL Database, but I can't figure out how to do this. Do I need to create separate stored procedures and corresponding table types for each table that I want to copy or is there a way for the pre-copy script to delete records based on the change tracking table?
You have to use a Lookup activity before the copy activity. With that Lookup activity you can query the database so that you get the deleted and updated PersonIDs, preferably all in one field, separated by commas (so it's easier to use in the pre-copy script). More information here: https://learn.microsoft.com/en-us/azure/data-factory/control-flow-lookup-activity
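A sketch of what that Lookup query could look like, assuming change tracking is enabled on the table, @lastVersion holds the previously synced version, and SQL Server 2017+ for STRING_AGG:

DECLARE @lastVersion bigint = 0;  -- placeholder; use your stored sync version
SELECT STRING_AGG(CAST(CT.PersonID AS varchar(12)), ',') AS PersonIDs
FROM CHANGETABLE(CHANGES dbo.Person, @lastVersion) AS CT
WHERE CT.SYS_CHANGE_OPERATION IN ('U', 'D');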
Then you can do the following in your pre-copy script:
delete from TableName where PersonID in (@{activity('MyLookUp').output.firstRow.PersonIDs})
This way you will be deleting all the deleted or updated rows before inserting the new ones.
Hope this helped!
In the meantime, Azure Data Factory provides a metadata-driven copy task. After going through the dialogue-driven setup, a metadata table is created which has one row for each dataset to be synchronized. I solved this upsert problem by adding a stored procedure as well as a table type for each dataset to be synchronized. Then I added the relevant information to the metadata table for each row, like this:
{
    "preCopyScript": null,
    "tableOption": "autoCreate",
    "storedProcedure": "schemaname.UPSERT_SHOP_SP",
    "tableType": "schemaname.TABLE_TYPE_SHOP",
    "tableTypeParameterName": "shops"
}
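The referenced table type and stored procedure might look like this; a sketch assuming a simple shop table keyed on ShopID (all column names here are made up for illustration):

CREATE TYPE schemaname.TABLE_TYPE_SHOP AS TABLE (
    ShopID int           NOT NULL PRIMARY KEY,
    Name   nvarchar(100) NULL
);
GO
CREATE PROCEDURE schemaname.UPSERT_SHOP_SP
    @shops schemaname.TABLE_TYPE_SHOP READONLY
AS
BEGIN
    -- Merge the staged rows from the copy activity into the target table.
    MERGE schemaname.SHOP AS target
    USING @shops AS source
        ON target.ShopID = source.ShopID
    WHEN MATCHED THEN
        UPDATE SET target.Name = source.Name
    WHEN NOT MATCHED THEN
        INSERT (ShopID, Name) VALUES (source.ShopID, source.Name);
END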
After that you need to adapt the sink properties of the copy task like this (stored procedure, table type, table type parameter name):
@json(item().CopySinkSettings).storedProcedure
@json(item().CopySinkSettings).tableType
@json(item().CopySinkSettings).tableTypeParameterName
If the destination table does not exist, you need to run the whole task once before adding the above variables, because auto-create of tables works only as long as no stored procedure is given in the sink properties.

DACPAC package with complex changes

I'm looking to switch to DACPACs for our database changes, but I'm a bit at a loss about what to do when it comes to more complex database updates. To illustrate what I mean, let me use a simple example that has the same problem.
Say I have a Customer table that is currently live and I want to add a new CustomerType table with a foreign key from Customer to CustomerType. The new column in Customer should be required (not nullable), but should not have a default value.
I want to use some arbitrary formula to setup the initial type for the existing customers upon upgrading. How would I accomplish this using a DACPAC?
The DACPAC will only know there's a new column and will try to add it to the Customer table, which will of course fail because it is required. Setting a default value is undesirable, as is allowing null values.
Since the DACPAC should be usable to upgrade from every state to the latest, I don't see what kind of configuration or pre/post scripts I should setup to make this work.
Various searches have produced a disappointing lack of useful results :(
I hope there's someone here that can help out. Thanks in advance.
The answer will vary a bit depending on how you're planning to deploy the dacpac(s). One common case is having the dacpac replace some collection of T-SQL update scripts that are executed in sequence to update a database schema from one version to the next. In this case you might choose to have one dacpac file for each schema-version of your database and to update a database you would plan to publish the dacpacs in sequence to update a database to the latest version.
In that case, it's possible to use a post-deploy script to fix up the schema as appropriate. For your example scenario, you can model the database in the database project with the new column specified as NULL and without the FK relationship with the new table. Then, in a post-deploy script, you can author the T-SQL necessary to execute an UPDATE statement to fill the new table and the new column, an ALTER statement to change the column from NULL to NOT NULL, and finally to add the foreign key relationship.
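A sketch of such a post-deploy script for the Customer/CustomerType example, with made-up column names and a trivial stand-in for the "arbitrary formula":

-- Seed the new lookup table.
INSERT INTO dbo.CustomerType (CustomerTypeId, Name)
VALUES (1, N'Default');

-- Backfill the new column; replace the constant with the real formula.
UPDATE dbo.Customer
SET CustomerTypeId = 1
WHERE CustomerTypeId IS NULL;

-- Tighten the column and add the foreign key.
ALTER TABLE dbo.Customer ALTER COLUMN CustomerTypeId int NOT NULL;

ALTER TABLE dbo.Customer
    ADD CONSTRAINT FK_Customer_CustomerType
    FOREIGN KEY (CustomerTypeId) REFERENCES dbo.CustomerType (CustomerTypeId);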
Then moving forward you can remove the post-deploy script and model the new column and table with the proper column type and FK relationship.

Why can't I add new columns to my Users table?

I am doing some homework. The users in my database need some other attributes, not just the ones that ASP.NET 2.0 automatically created for me when I implemented the login and registration mechanism. But when I try to save the modification, an error is displayed. Can someone give me a hand?
The error says:
'aspnet_Users' table
- Unable to modify table. ALTER TABLE only allows columns to be added that can contain nulls, or have a DEFAULT definition specified, or the column being added is an identity or timestamp column, or alternatively if none of the previous conditions are satisfied the table must be empty to allow addition of this column. Column 'kjoptekvoten' cannot be added to non-empty table 'aspnet_Users' because it does not satisfy these conditions.
That database was automatically created when I implemented forms-based authentication and registration. The problem now is that the users need some more attributes. How can I give them more attributes? What is the easiest way to do it? It does not matter if it is not theoretically correct (it is just homework).
I would really appreciate your help.
Apart from the technicalities on the database side, there is a deeper issue here.
You should not alter the aspnet_Users table because you are bypassing the way the membership 'system' in asp.net is working. Instead, have a look into the Profile mechanism: https://web.archive.org/web/20211020111657/https://www.4guysfromrolla.com/articles/101106-1.aspx
You need to make the new attributes nullable or provide a default value. But you also need to consider how to obtain the values from the db. The SQL membership provider utilizes an auto-generated stored procedure to put data into the membership user instance it returns, so just adding the attributes to the table will not be sufficient to get the attribute values into your application. I would use a separate user attribute table instead.
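A sketch of such a separate attribute table, assuming the standard aspnet_Users schema where UserId is a uniqueidentifier (the column name kjoptekvoten is taken from the error message):

-- One row of extra attributes per membership user; leaves the
-- provider-owned aspnet_Users table untouched.
CREATE TABLE dbo.UserAttributes (
    UserId       uniqueidentifier NOT NULL PRIMARY KEY
        REFERENCES dbo.aspnet_Users (UserId),
    kjoptekvoten int              NOT NULL DEFAULT 0
);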
The error message says it all:
You are adding a new column that can't be Null (checkbox "Allow Nulls" not checked), but as you didn't provide a default value, it will be Null.
So SQL Server can't create the new column.
You can do two things:
a) Create the new column with Nulls allowed.
THEN put a default value in all existing rows:
update aspnet_Users set kjoptekvoten = 0
...and THEN uncheck "Allow Nulls"
b) Create the new column directly with default values.
I don't know if you can do this in Management Studio, but it's easy in T-SQL:
alter table aspnet_Users
add kjoptekvoten int not null
constraint Name_For_Constraint default(0) with values
This will add the new not nullable column, AND create a constraint with a default value, AND fill the default value in all existing rows (SQL Server will not do this without the "with values" clause).
Normally I just set the column to allow nulls,
then do a SQL UPDATE table SET value = whateva,
then update the table definition to not allow nulls.
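In T-SQL, that three-step approach is roughly the following (column name from the question, default value assumed):

ALTER TABLE aspnet_Users ADD kjoptekvoten int NULL;
GO
UPDATE aspnet_Users SET kjoptekvoten = 0;
GO
ALTER TABLE aspnet_Users ALTER COLUMN kjoptekvoten int NOT NULL;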
