SQL Merge replication: How to tell what has changed - sql-server

I have a question regarding merge replication. Is there any stored procedure that gives me exactly the column values that have changed at the server, because of which the row will be replicated to the subscriber in the next replication session?
I have looked at this link http://www.replicationanswers.com/Script9.asp which lets me get all the rows that need to be replicated. But I want to know the specific columns that have changed for these rows.

You can reference sys.sp_showlineage and possibly sys.sp_showcolv, but you are heading down a path of system internals which requires a great deal of learning and understanding.
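A rough sketch of how those internals fit together, if you do want to go down that path: pending merge changes live in MSmerge_contents, whose colv1 column encodes which columns changed, and sp_showcolv can decode it. The join below and the @colv parameter name are my assumptions, and MyTable is a placeholder for your published article, so verify against your server's version before relying on it.

DECLARE @colv varbinary(2953);

-- Grab the column vector for one pending change on the article in question.
SELECT TOP (1) @colv = c.colv1
FROM dbo.MSmerge_contents AS c
JOIN dbo.sysmergearticles AS a
    ON a.nickname = c.tablenick        -- link the change row to its article
WHERE a.name = N'MyTable';             -- hypothetical published table name

EXEC sys.sp_showcolv @colv;            -- decodes the changed column ordinals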

Related

Data warehouse: figuring out what rows changed in a SQL Server table to facilitate a data warehouse

BUSINESS SCENARIO, SEEKING A WAY TO PROGRAM THIS:
Every night, I have to update table ABC in the data warehouse database from the production database. The table is millions of rows, so I want to do this efficiently.
The table doesn't have any sort of timestamp marker (LastUpdated Date\Time).
The database was created by our vendor whose software we run, and they are giving us visibility into our data. We may not have much leverage in terms of asking for new columns to house information such as LastUpdate DateTime stamp.
Is there a way, absent such information, to identify the rows that have changed or been added?
For example, is there such a thing as a queryable physical row number associated with the table record that might help us work towards a solution? If that could be queried, and perhaps increases sequentially, then maybe there is a way to get the inserted rows.
Updated rows I am not so sure about.
Just entertaining ideas at this point in time to see if there is an efficient solution for this scenario.
Ideally, the solution will be geared towards a stored procedure we can have run every night by a job.
Thank you.
I saw this comment but I am not so sure that the solution is efficient:
Find changed rows (composite key with nulls)
Please check the MERGE statement. You can create a SQL Server job that executes a MERGE script to check for and apply the changes, if any.
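A minimal sketch of that idea, assuming the warehouse table is dbo.ABC with a key column ID and data columns Col1 and Col2 (the column names and the ProductionDb database name are hypothetical):

-- Nightly job step: reconcile the warehouse copy against the production copy.
-- Without a timestamp column, MERGE still has to compare both full tables,
-- so measure this on your real row counts before relying on it.
MERGE dbo.ABC AS tgt
USING ProductionDb.dbo.ABC AS src
    ON tgt.ID = src.ID
WHEN MATCHED AND (tgt.Col1 <> src.Col1 OR tgt.Col2 <> src.Col2) THEN
    UPDATE SET tgt.Col1 = src.Col1,
               tgt.Col2 = src.Col2
WHEN NOT MATCHED BY TARGET THEN
    INSERT (ID, Col1, Col2)
    VALUES (src.ID, src.Col1, src.Col2);

Note that plain <> comparisons treat a change to or from NULL as "not different", so nullable columns need explicit IS NULL handling in the WHEN MATCHED predicate.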

How to flip tables in SQL Server 2014

I have a requirement wherein there are 2 tables (Staging & Target) in the same database.
Every time, data is first loaded into the Staging table. On the second run, data will again be loaded into the Staging table first. Now I want to flip the tables using a SQL query, so that after data is loaded into the Staging table these changes take effect:
Staging becomes (flips to) Target
Target becomes (flips to) Staging
So ideally we will still see both tables, but at any given time only one table has the latest data.
Before opting for the flip-tables approach I tried sp_rename, but that results in a deadlock if someone tries to query the Target table while it is being dropped and renamed.
Example,
IF OBJECT_ID('[dbo].[Target]','U') IS NOT NULL DROP TABLE [dbo].[Target] ;
EXEC sp_rename '[dbo].[Staging]','Target';
If we use the flip approach then there will be a minimal chance of locking. I tried to understand this flip-tables concept, and one approach I see is that it could be done using some kind of flag setting in SQL, but I am not sure how. Any help on this would be really appreciated.
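One way to implement the flip (a suggestion, not something from the original post) is to keep two physical tables and point a synonym at whichever one currently holds the latest data; the synonym effectively acts as the flag. The names dbo.Target_A and dbo.Target_B are hypothetical:

-- Readers always query dbo.Target, which is a synonym.
-- After each load into the idle table, repoint the synonym.
BEGIN TRANSACTION;

IF OBJECT_ID('dbo.Target', 'SN') IS NOT NULL
    DROP SYNONYM dbo.Target;

CREATE SYNONYM dbo.Target FOR dbo.Target_B;  -- next run: point back to dbo.Target_A

COMMIT TRANSACTION;

The swap is metadata-only, so the window in which readers can be blocked is much smaller than dropping and renaming the table itself.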

Update a table when the same table in another database changes

I have two databases in one instance of SQL Server and they have the same structure.
Now I want to write triggers on some of the tables so the databases stay in sync with each other whenever records are inserted, updated or deleted.
Something like the trigger below would be one of them:
CREATE TRIGGER AdminMessage_Insert
ON AdminMessage
AFTER INSERT
AS
INSERT INTO SecondDb.dbo.AdminMessage
    (ID, DeptKey, AdminKey, ReceiverKey, MessageText, IsActive)
SELECT i.ID, i.DeptKey, i.AdminKey, i.ReceiverKey, i.MessageText, i.IsActive
FROM INSERTED i
So my problem is that there are many tables, and writing about three triggers for each of them doesn't seem to be the best solution.
Can you give me a better and smaller approach?
UPDATE
I found some options like CDC, Change Tracking, SQL Audit and, of course, replication (snapshot replication), and read about them.
As I understand it, the best solution for me is using CDC or Audit.
With both of them I must work with each table one by one, which takes a long time.
Can I get all table changes with less work and with one SQL instance? (Replication is good, but it needs more than one instance.)
What's your idea?
While Change Data Capture (CDC) wasn't designed to be used as a sort of replication, we use it in this way at my company because it works for us. You enable CDC for the specific tables that you need to only get the net changes. The records are then stored in a database created by CDC. From there you can push the changes to the other database. You can find more information about CDC here.
Because it seems like you are looking for a solution that is only replicating the data one way, can I assume that the second source is read-only? If so, and because you said both databases are on the same instance, you can use synonyms in your secondary database.
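A minimal sketch of both suggestions, assuming the source database is called FirstDb (a hypothetical name, since the question only names SecondDb) and using the dbo.AdminMessage table from the trigger above:

-- Option 1: enable CDC on the source table so the net changes can be read
-- later and pushed to SecondDb by a job of your own.
USE FirstDb;
EXEC sys.sp_cdc_enable_db;
EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'AdminMessage',
    @role_name     = NULL;            -- no gating role on the change data

-- Option 2: if SecondDb only reads this data and both databases share the
-- instance, skip copying altogether and expose the table through a synonym.
USE SecondDb;
CREATE SYNONYM dbo.AdminMessage FOR FirstDb.dbo.AdminMessage;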

How to get a list of updated/inserted rows in a SQL Server database after multiple stored procedures have executed?

Consider a Java application reading and modifying data in a SQL Server database using only stored procedures.
I am interested in knowing exactly what rows were inserted/updated after execution of some code.
The code being executed can trigger multiple stored procedures, and in the general case these procedures work with different tables.
My current solution is to debug the low-level Java code executed before any of the stored procedures is called and inspect the parameters passed, to manually reconstruct the impact.
This seems ineffective and unreliable.
Is there a better approach?
To know exactly which rows were inserted or updated after execution of some code, you can implement triggers for the UPDATE, DELETE and INSERT operations on the tables involved. These triggers should be almost the same for every table, changing just the name and the association with the table.
For this suggestion, the tables should have audit columns, such as one for the datetime when the rows were inserted and one for the datetime when they were updated, at least. You can search for more audit ideas if you want (and need) them, like a column recording which user triggered the insert/update, or how many times the row was altered, and so on.
You should tailor the approach depending on how much data you intend to generate with these triggers.
I'm assuming you know how to do this with best practices (for example, you can, and should IMHO, create these triggers dynamically to facilitate maintenance).
Finally, you will be able to write a query against the system tables that hold information about tables and rows and return only the rows involved, ordered by these new columns (just an idea that I hope fits your particular case).
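A minimal sketch of the audit-column idea, with a hypothetical table dbo.Orders keyed by OrderID (adjust the names to your schema; it also assumes RECURSIVE_TRIGGERS is OFF, which is the default):

-- Audit columns: when the row was inserted and when it was last updated.
ALTER TABLE dbo.Orders
    ADD InsertedAt datetime2 NOT NULL
            CONSTRAINT DF_Orders_InsertedAt DEFAULT SYSDATETIME(),
        UpdatedAt datetime2 NULL;
GO

-- Stamp UpdatedAt whenever a row changes.
CREATE TRIGGER trg_Orders_AuditUpdate
ON dbo.Orders
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    UPDATE o
    SET UpdatedAt = SYSDATETIME()
    FROM dbo.Orders AS o
    JOIN inserted AS i ON i.OrderID = o.OrderID;
END;
GO

-- After the stored procedures have run, list what they touched:
-- SELECT * FROM dbo.Orders WHERE InsertedAt >= @start OR UpdatedAt >= @start;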

How do I find what is populating a table?

I constantly run into this problem. I am working in a data warehouse and I cannot find out what is populating a table. Typically the table is being populated on a daily basis, either from other tables in the warehouse or from an Oracle database. I have tried the query below and can confirm the updates, but I cannot see what is doing them. I have searched the known SSIS packages and stored procedures with similar names, and the SQL jobs, but I can find nothing.
select object_name(object_id) as TableName, last_user_update, *
from sys.dm_db_index_usage_stats
where database_id = DB_ID('Warehouse')
and object_id = object_id('PAYMENTS_DAILY')
I only have the most basic SQL Server tools available so no fancy search tools :(
There is no way to tell, after data has been inserted into a table, where the data came from without having some sort of logging.
SSIS has logging, and triggers on the tables, change data capture, audit columns, etc. are among the many ways to do this.
Frequently, if you know when the row was added, that can help you figure out what process is adding it. Add a new "InsertedDatetime" column to your warehouse table and give it a default value of getdate(). If you know that the rows always come in at 11:15 AM, you can use that to narrow your search.
That will probably be enough information, but if that doesn't help you track down the process, then you can add additional columns that contain everything from a source IP address to a calling object name.
As a last resort, you could rename your table, create a view with the same name, and then use an INSTEAD OF INSERT trigger on it that holds the connection open so you can examine the currently executing processes and figure out where the inserts are coming from.
I bet you can figure it out from the time alone though.
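A minimal sketch of the InsertedDatetime suggestion, using the PAYMENTS_DAILY table from the question (the constraint name is made up):

-- Add a load timestamp; existing rows get the current date/time via the default.
ALTER TABLE dbo.PAYMENTS_DAILY
    ADD InsertedDatetime datetime NOT NULL
        CONSTRAINT DF_PAYMENTS_DAILY_InsertedDatetime DEFAULT GETDATE();

-- Once new loads have arrived, see when the rows show up each day:
SELECT MIN(InsertedDatetime) AS FirstInsert,
       MAX(InsertedDatetime) AS LastInsert,
       COUNT(*)              AS RowsLoadedToday
FROM dbo.PAYMENTS_DAILY
WHERE InsertedDatetime >= CAST(GETDATE() AS date);

That load window can then be matched against SQL Agent job history or SSIS log timestamps to narrow down the process.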
