SQL Server CDC: Track additional column after the fact

If CDC has been set up on a table tracking only columns A, D, and E instead of the entire table, is it possible to add a column Z to the source table and then add column Z to the list of tracked columns for CDC? Is it possible to do this without losing CDC data?
I've looked around, and the only examples I find are for tracking the entire table rather than cherry-picking columns. I'm hoping for a way to update the table schema without losing CDC history, and without doing the whole copy-CDC-to-a-temp-table-and-back process.
Do you always have to create a new capture instance for schema changes?
SQL Server 2012

This is exactly why CDC allows two capture instances on a given table. The idea is this (a T-SQL sketch follows the steps):
You have a running instance tracking columns A, B, and C
You now want to start tracking column D
So you set up a second capture instance to track A, B, C, and D.
You then process everything from the original capture instance and note the last LSN that you process
Using the fn_cdc_increment_lsn() function, you set the start point to start processing the new capture instance
Once you're up and running on the new instance, you can drop the old one
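A minimal T-SQL sketch of that switch, assuming a CDC-enabled source table dbo.MyTable whose original capture instance dbo_MyTable tracks A, B, and C (all object names here are illustrative):

-- 1. Create a second capture instance that also tracks the new column D
EXEC sys.sp_cdc_enable_table
    @source_schema        = N'dbo',
    @source_name          = N'MyTable',
    @role_name            = NULL,
    @capture_instance     = N'dbo_MyTable_v2',
    @captured_column_list = N'A, B, C, D';

-- 2. Drain the original instance and note the last LSN you processed
DECLARE @last_processed_lsn binary(10) = sys.fn_cdc_get_max_lsn(); -- in practice: the last LSN your ETL actually recorded

-- 3. Start reading the new instance just past that point
DECLARE @from_lsn binary(10) = sys.fn_cdc_increment_lsn(@last_processed_lsn);
DECLARE @to_lsn   binary(10) = sys.fn_cdc_get_max_lsn();

SELECT *
FROM cdc.fn_cdc_get_all_changes_dbo_MyTable_v2(@from_lsn, @to_lsn, N'all');

-- 4. Once everything consumes the new instance, drop the old one
EXEC sys.sp_cdc_disable_table
    @source_schema    = N'dbo',
    @source_name      = N'MyTable',
    @capture_instance = N'dbo_MyTable';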
Of course, if you're using CDC as a history of all changes ever made to the table... that's not what it was designed for. CDC was meant to be an ETL facilitator: capturing changes to data so that they can be consumed and then ultimately discarded from the CDC system. If you're using it as a historical system of record, I'd suggest setting up a "dumb" ETL, meaning a straight copy out of the CDC tables into a user table. Once you do that, you can implement the above.

Related

Temporal Tables Manually Update Data

Using SQL Server 2019, can I push data (snapshot data) from the current (temporal) table to the history table only when I want to, rather than it happening automatically after every row commit? I understand that temporal tables are designed to record all data changes to a row, which is great for auditing. But what if I don't want to save all changes? What if I only want to 'baseline' data on a set of tables every week (or whenever the user wants to), and I don't care what changes are made during the week? I know you can disable and enable the temporal tables, but that is more of a high-level control, and the architecture is multi-tenanted, and different tenants will snapshot at different times.
Or perhaps temporal tables are the wrong tool for me? My use case is as follows: a user creates a mathematical model altering many parameters; they do this many times over many days, persisting to the database with every change. When they get it right, they press 'Baseline' and everything is stored. They then continue with the next changes towards the next baseline. At any point they can compare the difference between any two baselines. I only retain the data as of the date of each 'Baseline'. This would require that I move the data to the temporal history table manually, or let it go automatically and purge everything in between two baselines, which seems a waste of DB resources.
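For reference, the enable/disable mentioned above is the SYSTEM_VERSIONING toggle; a minimal sketch, with illustrative table names dbo.Model and dbo.ModelHistory:

-- Stop writing history; changes made while this is off go only to the current table
ALTER TABLE dbo.Model SET (SYSTEM_VERSIONING = OFF);

-- Resume system-versioning against the same history table
ALTER TABLE dbo.Model SET (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.ModelHistory, DATA_CONSISTENCY_CHECK = ON));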

SSIS ETL Pattern based on Rowversion occasionally missing rows, how to correct?

We have been using an SSIS pattern based around rowversion to synchronize records between two databases by looking at only rows in the source that have been inserted or updated since the last package run. Note that data is never deleted from the source table, which is a prerequisite for this SSIS pattern.
However, lately we discovered that, despite running daily, our import has actually missed rows from last month, leaving them out of our data warehouse entirely!
This is what I'm seeking a solution to: how can we change our ETL pattern to avoid this problem, without going back to reading every row from the source every day?
From internet searching we found an explanation for why this might be happening, but not a solution. The flaw seems to be that a SQL rowversion column gets its value when an insert/update starts, not when it commits. Rows can therefore be unavailable at package execution time but get committed later with rowversion values less than your stored ETLRowversion value, so the next time your job runs they get skipped.
In brief, our pattern is currently like this (I've left out steps involving index maintenance, etc., for simplicity; a T-SQL sketch of the extract follows the steps):
Get the minimum active rowversion from the source DB using MIN_ACTIVE_ROWVERSION(); call that @MaxRv.
Get the rowversion value as of the last successful execution of our SSIS task (stored in our data warehouse in a table called ETLRowversions); call that @LSERV.
Read rows from the source table WHERE rowversion >= @LSERV and rowversion <= @MaxRv.
For each row read, check whether the row exists in the target DB; if it does, add the row to an update staging table, and if not, insert it directly into the target table.
Update the target table using the update staging table.
Update the ETLRowversions table in our data warehouse with the @MaxRv value.
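A T-SQL sketch of steps 1-3 as described, assuming an illustrative source table dbo.SourceTable with a rowversion column RowVer and a warehouse table dw.ETLRowversions holding the last stored value:

DECLARE @MaxRv binary(10) = MIN_ACTIVE_ROWVERSION();    -- step 1
DECLARE @LSERV binary(8) =                               -- step 2
    (SELECT LastRowversion FROM dw.ETLRowversions WHERE TableName = N'SourceTable');

SELECT s.*                                               -- step 3
FROM dbo.SourceTable AS s
WHERE s.RowVer >= @LSERV
  AND s.RowVer <= @MaxRv;

-- steps 4-6: stage/apply to the target, then save @MaxRv back to dw.ETLRowversions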
Edit: Comments have suggested implementing Change Tracking and Snapshot Isolation as the best solution to this problem. Unfortunately, both change tracking and ALLOW_SNAPSHOT_ISOLATION are OFF for the source database, and I am pessimistic about my chances of getting these features turned on. For better or worse, our BI concerns carry far less weight than the performance concerns of the production application/DB that is our source.

Schema for tracking SQL Server table updates

I have a set of 16 SQL Server tables, which get updated constantly from a Web UI. I need to track every change happening to these tables and call a separate system every 10 minutes sending each inserted or updated row through a Windows service.
I can duplicate the schema and create another set of 16 similar tables to track changes in the original set. There will be triggers that insert a new row into the tracking tables (plus an insert/update flag, timestamp, and similar fields) every time a corresponding source table is modified.
I am wondering: is there a better way to do this using one (or a few) common tables that can hold data from multiple tables? Something that doesn't force me to maintain a duplicate set of tracking tables?
If you have Enterprise Edition, Change Data Capture is an option which is available to you.
Other editions have Change Tracking, which doesn't track history but can get you the net changes.
Comparison: https://technet.microsoft.com/en-us/library/Cc280519(v=SQL.105).aspx
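A minimal Change Tracking sketch (database, table, and key names here are illustrative), showing how the net changes since a stored version come back:

-- One-time setup
ALTER DATABASE MyDb SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 7 DAYS, AUTO_CLEANUP = ON);
ALTER TABLE dbo.MyTable ENABLE CHANGE_TRACKING WITH (TRACK_COLUMNS_UPDATED = ON);

-- Per sync: read net changes since the version recorded after the last run
DECLARE @last_sync_version bigint = 0;  -- persist CHANGE_TRACKING_CURRENT_VERSION() after each run
SELECT ct.Id, ct.SYS_CHANGE_OPERATION, t.*
FROM CHANGETABLE(CHANGES dbo.MyTable, @last_sync_version) AS ct
LEFT JOIN dbo.MyTable AS t ON t.Id = ct.Id;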

How do you reload incremental data using SQL Server CDC?

I haven't been able to find documentation/an explanation on how you would reload incremental data using Change Data Capture (CDC) in SQL Server 2014 with SSIS.
Basically, on a given day, if your SSIS incremental processing fails and you need to start again, how do you stage the recently changed records again?
I suppose it depends on what you're doing with the data, eh? :) In the general case, though, you can break it down into three cases (sketched below):
Insert - check if the row is there. If it is, skip it. If not, insert it.
Delete - assuming that you don't reuse primary keys, just run the delete again. It will either find a row to delete or it won't, but the net result is that the row with that PK won't exist after the delete.
Update - kind of like the delete scenario. If you reprocess an update, it's not really a big deal (assuming that your CDC process is the only thing keeping things up to date at the destination and there's no danger of overwriting someone/something else's changes).
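A T-SQL sketch of the three cases, against an illustrative target table dbo.Target keyed on Id:

DECLARE @Id int = 1, @Col1 nvarchar(50) = N'value';  -- values from the CDC row being reprocessed

-- Insert: skip it if the key is already there
IF NOT EXISTS (SELECT 1 FROM dbo.Target WHERE Id = @Id)
    INSERT INTO dbo.Target (Id, Col1) VALUES (@Id, @Col1);

-- Delete: rerunning is harmless; the second time there is simply nothing to delete
DELETE FROM dbo.Target WHERE Id = @Id;

-- Update: reapplying the same values leaves the row in the same final state
UPDATE dbo.Target SET Col1 = @Col1 WHERE Id = @Id;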
Assuming you are using the new SSIS 2012 CDC components, specifically the CDC Control Task at the beginning and end of the package: if the package fails for any reason before it runs the CDC Control Task at the end, those LSNs (log sequence numbers) will NOT be marked as processed, so you can just restart the SSIS package from the top after fixing the issue and it will reprocess those records. You MUST use the CDC Control Task to make this work, though, or keep track of the LSNs yourself (before SSIS 2012 that was the only way to do it).
Matt Masson (Sr. Program Manager on MSFT SQL Server team) has a great post on this with a step-by-step walkthrough: CDC in SSIS for SQL Server 2012
Also, see Bradley Schacht's post: Understanding the CDC state Value
So I did figure out how to do this in SSIS.
Every time my SSIS package runs, I record the min and max LSN numbers in a table in my data warehouse.
If I want to reload a set of data from the CDC source to staging, then in the SSIS package I use the CDC Control Task, set it to "Mark CDC Start", and in the text box labelled "SQL Server LSN to start...." I put the LSN value I want to use as a starting point.
I haven't figured out how to set the end point, but I can go into my staging table and delete any data with an LSN value greater than my endpoint.
You can only do this for CDC changes that have not been 'cleaned up' - so only for data that has been changed within the last 3 days.
As a side point, I also bring across the lsn_time_mapping table to my data warehouse since I find this information historically useful and it gets 'cleaned up' every 4 days in the source database.
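One hypothetical way (not part of the answer above) to derive that starting LSN from a point in time, using the time-to-LSN mapping:

DECLARE @restart_from datetime = '20140601';  -- illustrative restart point
SELECT sys.fn_cdc_map_time_to_lsn('smallest greater than or equal', @restart_from) AS start_lsn;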
To reload the same changes you can use the following methods.
Method #1: Store the TFEND marker from the [cdc_states] table in another table or variable. Load the marker back into [cdc_states] from the "saved" value to process the same range again. This method lets you start processing from the same LSN, but if in the meantime your change table has received more changes, those will be captured as well. So you can potentially pick up changes that happened after you did the first data capture.
Method #2: To capture a specific range, record the TFEND markers before and after the range is processed. You can then use an OLE DB Source (SSIS) with the following CDC functions, and use the CDC Splitter as usual to direct inserts, updates, and deletes.
DECLARE @start_lsn binary(10);
DECLARE @end_lsn binary(10);
SET @start_lsn = 0x004EE38E921A01000001; -- TFEND (1) -- if NULL, use sys.fn_cdc_get_min_lsn('YourCapture') to start from the beginning of the _CT table
SET @end_lsn = 0x004EE38EE3BB01000001;   -- TFEND (2)
SELECT * FROM [cdc].[fn_cdc_get_net_changes_YOURTABLECAPTURE](
    @start_lsn
    ,@end_lsn
    ,N'all' -- { all | all with mask | all with merge }
    --,N'all with mask'  -- shows values in the "__$update_mask" column
    --,N'all with merge' -- merges inserts and updates together; meant for processing the results with a T-SQL MERGE statement
    )
ORDER BY __$start_lsn;

Order by creation time in OpenEdge

Is there an automatic way of knowing which rows are the latest to have been added to an OpenEdge table? I am working with a client and have access to their database, but they are not saving ids nor timestamps for the data.
I was wondering if, hopefully, OpenEdge is somehow doing this out of the box. (I doubt it is but it won't hurt to check)
Edit: My Goal
My goal from this is to be able to import only the new data, i.e. the delta, of a specific table. Without knowing which rows are new, I am forced to import everything because I have no clue what was added.
1) Short answer is No - there's no "in the box" way for you to tell which records were added, or the order they were added.
The only way to tell the order of creation is by applying a sequence or by time-stamping the record. Since your application does neither, you're out of luck.
2) If you're looking for changes without applying schema changes, you can use session or db triggers to capture updates to the db and save that activity log somewhere.
3) If you're just looking for a "delta" - you can take a periodic backup of the database, and then use queries to compare the current db with the backup db and get the differences that way.
4) Maintain a db on the customer site with the contents of the last table dump. The next time you want to get deltas from the customer, compare that table's contents with the current table, dump the differences, then update the db table to match the current db's table.
5) Personally, I'd talk to the customer and (a) see if they actually require this functionality, and (b) find out what they think about adding some fields and a bit of code to the system to get an activity log. Adding a few fields and some code to update them shouldn't be that big of a deal.
You could use database triggers to meet this need. In order to do so you will need to be able to write and deploy trigger procedures. And you need to keep in mind that the 4GL and SQL-92 engines do not recognize each other's triggers. So if updates are possible via SQL, 4GL triggers will be blind to those updates. And vice-versa. (If you do not use SQL none of this matters.)
You would probably want to use WRITE triggers to catch both insertions and updates to data. Do you care about deletes?
Simple-minded 4gl WRITE trigger:
TRIGGER PROCEDURE FOR WRITE OF Customer. /* OLD BUFFER oldCustomer. */ /* OLD BUFFER is optional and not needed in this use case ... */
output to "customer.dat" append.
export customer.
output close.
return.
