How do I handle rows that were deleted from the source using SSIS Slowly Changing Dimension?

I am trying to implement an ETL process for our Type 1 slowly changing dimension tables in a SQL 2014 database. The load needs to happen across servers, and I would prefer not to use linked servers.
I have been looking for ways to do this in SSIS and found the Slowly Changing Dimension wizard, which works fine except that it only seems to allow inserting new rows or updating rows where there is a match on the business key. I haven't found anywhere that lets me handle a record that exists in the dimension table but has been deleted from the source. I would like to make sure these rows are deleted. Am I missing something? Has anyone found a better way to handle this in SSIS?
I know that I could just dump everything into another table on the destination server and write a T-SQL MERGE, but it seems like there should be a simple way to do this in SSIS.

First, I would avoid the SCD functionality in SSIS, as its performance tends to be terrible - I've actually been told to avoid it by MS certified trainers, as well as plenty of people with a lot of experience. It's OK-ish on very small dimensions, but quickly tends to become unmanageable. There's a blog post here from someone who thinks it's usable in some situations, but even they suggest using a staging table for updates.
If you want to do this in SSIS you could use a Lookup to find the rows that need to be deleted (find the rows in your destination which aren't in the source using the no match output), then an OLE DB Command to delete them. But I'd give some serious thought to simply moving the data over to a staging area and doing this in TSQL, because SSIS will do it row by agonising row. Similarly to the SCD tool - it might be OK on small amounts of data, but if you're dealing with larger amounts (or might be in future), it may well become unmanageable.
If you don't want to move all of the data over to a staging area, you could use SSIS to build up a table only holding the unique IDs of the rows that need deleting, then fire off an Execute SQL Task from the Control Flow to delete them all at once.
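For the set-based delete itself, here is a minimal hedged sketch of the kind of statement the Execute SQL Task (or your staging-table approach) could run; dbo.DimCustomer, stg.CustomerSource, and BusinessKey are hypothetical names, not anything from your environment:

-- Hedged sketch: remove dimension rows whose business key no longer exists in the staged source.
-- dbo.DimCustomer, stg.CustomerSource, and BusinessKey are hypothetical names.
DELETE d
FROM dbo.DimCustomer AS d
WHERE NOT EXISTS (
    SELECT 1
    FROM stg.CustomerSource AS s
    WHERE s.BusinessKey = d.BusinessKey
);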

Related

Is there a way I can fast load data with SSIS?

I'm moving data from an ODBC source to an OLE DB destination; records get inserted every day into different tables on the ODBC side. The packages get slower and slower - it takes about a day for a million records, sometimes more. The tables can have new rows inserted or existing rows updated, and the loading and lookup of new data slows the process. Is there any way I can speed up the ETL process, or is there any open source platform I can use to load the data faster?
I tried to count the number of rows in the OLE DB destination so that I could insert only the new records beyond those already loaded from the ODBC source, but to my surprise the ROW_NUMBER() function isn't supported by the OpenEdge ODBC driver.
Based on the limited information in your question, I'd design your packages like the following
SEQC PG to SQL
The point of these operations is to transfer data from our source system verbatim to the target. The target table should be brand new and the SQL Server equivalent of the PG table from a data type perspective. Use a clustered key if one exists; otherwise, see how a heap performs. I am going to refer to this as the staging table.
The Data Flow itself is going to be bang simple
By default, the destination will perform a fast load and lock the table.
Run the package and observe times.
Edit the OLE DB Destination and change the Maximum Commit Size to something less than 2147483647. Try 100000 - is it better or worse? Move up or down an order of magnitude until you have an idea of the fastest rate at which the package can move data.
There are a ton of variables at this stage of the game - how busy the source PG database is, what data types are involved, how far the data needs to travel from the source, to your computer, to the destination - but this can at least help you answer "can I pull (insert large number here) rows from the source system within the expected tolerance?" If you can get the data moved from PG to SQL within the expected SLA and you still have processing time left, then move on to the next section.
Otherwise, you have to rethink your strategy for what data gets brought over. Maybe there are reliable (system-generated) insert/update times associated with the rows. Maybe it's a financial-style system where rows aren't updated; new versions of the row are inserted and the net values are all that matter. There are too many possibilities here, but you'll likely need to find a Subject Matter Expert on the system - someone who knows both the logical business process the database models and how the data is stored in the database. Buy that person some tasty snacks, because they are worth their weight in gold.
Now what?
At this point, we have transferred the data from PG to SQL Server and we need to figure out what to do with it. 4 possibilities exist
The data is brand new. We need to add the row into the target table
The data is unchanged. Do nothing
The data exists but is different. We need to change the existing row in the target table
There is data in the target table that isn't in the staging table. We're not going to do anything about this case either.
Adding data, inserts, are easy and can be fast - it depends on table design.
Changing data, updates, are less easy in SSIS and are slower than adding new rows. Slower because behind the scenes, the database will delete and add the row back in.
Non-Clustered indexes are also potential bottlenecks here, but they can also be beneficial. Welcome to the world of "it depends"
Option 1 is to just write the SQL statements to handle the insert and update. Yes, you have a lovely GUI tool for creating data flows but you need speed and this is how you get it (especially since we've already moved all the data from the external system to a central repository)
Option 2 is to use a Data Flow and potentially an Execute SQL Task to move the data. The idea is that the Data Flow will split your data out, with the new rows going to an OLE DB Destination to write the inserts. For the updates, what makes the most sense from an efficiency perspective depends on volume. If it's tens, hundreds, or thousands of rows to update, eh, take the performance penalty and use an OLE DB Command to update each row. If it's hundreds of thousands and the package still runs well enough, then keep it.
Otherwise, route your changed rows to yet another staging table and then do a mass update from the staged updates to the target table, as in the sketch below. But at this point, you have just written half the query you needed for the first option, so just write the insert as well and be done (and speed up performance, because now everything is just SQL Engine "stuff").
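As a rough, hedged illustration of that set-based route (option 1, or the staged-updates variant of option 2), the sketch below uses hypothetical names - dbo.TargetTable, dbo.StagingTable, BusinessKey, and SomeColumn - standing in for your own schema:

-- Hedged sketch: set-based update and insert from a staging table into the target.
-- All object and column names are hypothetical.

-- Change existing rows that differ.
UPDATE t
SET t.SomeColumn = s.SomeColumn
FROM dbo.TargetTable AS t
JOIN dbo.StagingTable AS s
    ON s.BusinessKey = t.BusinessKey
WHERE t.SomeColumn <> s.SomeColumn;

-- Add brand new rows.
INSERT INTO dbo.TargetTable (BusinessKey, SomeColumn)
SELECT s.BusinessKey, s.SomeColumn
FROM dbo.StagingTable AS s
WHERE NOT EXISTS (
    SELECT 1 FROM dbo.TargetTable AS t WHERE t.BusinessKey = s.BusinessKey
);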
You might want to investigate Progress' Change Data Capture feature. If you have a modern release of OpenEdge (11.7 or better) and the proper licenses you can enable CDC policies to track changes. Your ETL process could then use that information to target its efforts.
Warning: it's complicated. There is a lot more to actually doing it than marketing would have you believe. But if your use case is straightforward it might not be too terrible.
Or you could implement Progress "Pro2" product to do all the dirty work for you. (That's an extra cost option.)

KISS way to copy a list of tables between SQL instances?

My main application has nightly SSIS jobs that move some (not all) of the data from the production servers to various test and development servers.
These jobs do nothing fancy. They delete the destination table and repopulate it from the source. There are many dozens of tables and 4 or 5 servers in the mix, with plenty of foreign keys, but everything is SQL to SQL and there is no merging or lookups.
Using SSIS to do this has proven painfully brittle. When a new application release changes the schema, jobs begin failing more than half the time. Why? The schema updates are done by the developers, the packages have been tweaked and changed dozens of times by different developers, and the SSIS changes often happen during the crunch time of the development cycle.
It has occurred to me that SSIS may or may not be the right tool for this. SSIS is pretty bloated for simple table copies. (Column Mapping, etc.) Is there a better way? Ideally, it would simply take as input (preferably from a central source):
an unordered list of tables (only exact-match column naming needs to be supported)
a source server
a destination server
It would then simply:
- Begin execution on a schedule at the designated server.
- Sort the list referentially (to not violate FK constraints).
- Delete/truncate all the destination tables in referential order.
- Copy the tables from source to destination in referential order.
- Report back success or failure.
The only challenging things on the list (I think) would be finding a good, low-maintenance way to schedule the jobs and having the jobs report back success or failure.
[Note: I'm not looking for technical details. I am simply looking for a lower maintenance, less brittle way to make these data moves happen. I'll post my initial idea below as a possible solution, but I'll be quite happy if there is a simpler solution out there.]
Instead of truncating tables and copying data, you could DROP the tables, and use the Transfer SQL Server Objects Task to copy the table, which would include DDL changes.
Caveat being that you would have to handle foreign keys accordingly.
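If you stay with the truncate-and-reload approach instead, one common way to deal with the foreign keys is to disable all constraints before the load and re-validate them afterwards. A hedged sketch using the undocumented (but widely used) sp_MSforeachtable procedure is below; note that TRUNCATE still fails on tables referenced by a foreign key even when the constraint is disabled, so use DELETE or drop and recreate the constraints in that case, and test this outside production first:

-- Hedged sketch: disable all constraints, reload the tables, then re-enable and re-check them.
-- sp_MSforeachtable is undocumented; validate this in a non-production environment.
EXEC sp_MSforeachtable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL';

-- ... delete and repopulate the destination tables here ...

EXEC sp_MSforeachtable 'ALTER TABLE ? WITH CHECK CHECK CONSTRAINT ALL';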

How to copy entire SQL Server 2008 database, applying WHERE clause to restrict copied data

To allow more realistic conditions during development and testing, we want to automate a process to copy our SQL Server 2008 databases down from production to developer workstations. Because these databases range in size from several GB up to 1-2 TB, it will take forever and not fit onto some machines (I'm talking to you, SSDs). I want to be able to press a button or run a script that can clone a database - structure and data - except be able to specify WHERE clauses during the data copy to reduce the size of the database.
I've found several partial solutions, but nothing that is able to copy schema objects and a custom-restricted set of data without requiring lots of manual labor to ensure objects/data are copied in the correct order to satisfy dependencies, FK constraints, etc. I fully expect to write the WHERE clause for each table manually, but am hoping the rest can be automated so we can use this easily, quickly, and frequently. Bonus points if it automatically picks up new database objects as they are added.
Any help is greatly appreciated.
Use snapshot replication with filters on the tables. That way you will get your schema and data replicated whenever needed.
This article describes how to create a merge replication, but when you choose snapshot replication the steps are the same. The most interesting part is Step 8: Filter Table Rows, because with this you can filter out all the data that doesn't need to be replicated. But this step needs to be done for every table, and if you've got hundreds of them, you'd be better off working out how to do it programmatically instead of going through the wizard windows.
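If you do go the programmatic route, the replication stored procedures accept a row filter when an article is added to the publication. A hedged sketch follows; the publication name, table, and filter are hypothetical, and sp_addarticle takes many more optional parameters than shown here, so check the documentation for your version:

-- Hedged sketch: add a filtered article to an existing publication.
-- 'DevCopyPub', dbo.Orders, and the filter clause are hypothetical examples.
EXEC sp_addarticle
    @publication   = N'DevCopyPub',
    @article       = N'Orders',
    @source_owner  = N'dbo',
    @source_object = N'Orders',
    @filter_clause = N'OrderDate >= DATEADD(MONTH, -3, GETDATE())';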

Copy Multiple Tables into ONE Table (From Multiple Databases)

I've got multiple identical databases (distributed on several servers) and need to gather them to one single point to do data mining, etc.
The idea is to take Table1, Table2, ..., TableN from each database and merge them and put the result into one single big database.
To be able to write queries and to know which database each row came from, we will add a single DatabaseID column to the target table, describing where the row came from.
Editing the source tables is not an option, it belongs to some proprietary software.
We've got ~40 servers, ~170 databases and need to copy ~40 tables.
Now, how should we implement this given that it should be:
Easy to setup
Easy to maintain
Preferably easy to adjust if database schema changes
Reliable, logging/alarm if something fails
Not too hard to add more tables to copy
We've looked into SSIS, but it seemed that we would have to add each table as a source/transformation/destination. I'm guessing it would also be quite tied to the database schema. Right?
Another option would be to use SQL Server Replication, but I don't see how to add the DatabaseID column to each table. It seems it's only possible to copy data, not modify it.
Maybe we could copy all the data into separate databases, and then run a local job on the target server to merge the tables?
It also seems like a lot of work if we need to add more tables to copy, as we'd have to redistribute new publications for each database (manual work?).
Last option (?) is to write a custom application to our needs. Bigger time investment, but it'd at least do precisely what we'd like.
To make it worse... we're using Microsoft SQL Server 2000.
We will upgrade to SQL Server 2008 R2 within 6 months, but we'd like the project to be usable sooner.
Let me know what you guys think!
UPDATE 20110721
We ended up with an F# program that opens a connection to the SQL Server where we want the aggregated data. From there we query the 40 linked SQL Servers to fetch all rows (but not all columns) from some tables, and add an extra DatabaseID column to each row to say which database it came from.
Configuration of which servers to fetch from, and which tables and columns, is a combination of text-file configuration and hard-coded values (heh :D).
It's not super fast (sequential fetching so far) but it's absolutely manageable, and the data processing we do afterwards takes far longer.
Future improvements could be to:
improve error handling if it turns out to be a problem (if a server isn't online, etc).
implement parallel fetching, to reduce the total amount of time to finish fetching.
figure out if it's enough to fetch only some of the rows, like only what's been added/updated.
All in all it turned out to be quite simple, with no dependencies on other products, and it works well in practice.
Nothing fancy, but couldn't you do something like:
TRUNCATE TABLE dbo.Merged;  -- clear the target instead of dropping it, so the INSERT below still has a table to load
INSERT INTO dbo.Merged
-- linked-server references need four-part names: Server.Database.Schema.Table
SELECT [DatabaseID] = 'Database1', * FROM ServerA.Database1.dbo.[Table]
UNION ALL SELECT [DatabaseID] = 'Database2', * FROM ServerB.Database2.dbo.[Table]
...
UNION ALL SELECT [DatabaseID] = 'DatabaseX', * FROM ServerX.DatabaseX.dbo.[Table];
Advantages
Easy to setup
Easy to maintain
Easy to adjust
Easy to add more tables
Disadvantages
Performance
No reliable logging or alerting
We had a similar requirement, and we took a different approach. First, we created a central database to collect the data. Then we created an inventory table to store the list of target servers/databases. Then we wrote a small VB.NET-based CLR procedure which takes the path of the SQL query, the target SQL Server instance name, and the target table which will store the data (this eliminates the need to set up a linked server when new targets are added). It also adds two columns to the result set: the target server name and the timestamp when the data was captured.
Then we set up a Service Broker queue/service and pushed in the list of target servers to interrogate.
The CLR procedure above is wrapped in another procedure which dequeues each message and executes the SQL against the target server it names. The wrapper procedure is then configured as the activation procedure for the queue.
With this we are able to achieve a degree of parallelism when capturing the data.
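A hedged T-SQL sketch of what the queue/service and activation wiring might look like is below; the object names (dbo.CollectorQueue, dbo.usp_CollectFromTarget, CollectorService) are hypothetical, and the message types, contract, and conversation handling are omitted:

-- Hedged sketch: a Service Broker queue whose activation procedure processes collection requests.
-- All object names are hypothetical placeholders.
CREATE QUEUE dbo.CollectorQueue
    WITH STATUS = ON,
    ACTIVATION (
        PROCEDURE_NAME = dbo.usp_CollectFromTarget,  -- wrapper proc that dequeues a target and runs the CLR proc
        MAX_QUEUE_READERS = 4,                       -- degree of parallelism
        EXECUTE AS OWNER
    );

CREATE SERVICE CollectorService
    ON QUEUE dbo.CollectorQueue ([DEFAULT]);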
Advantages:
Easy to set up
Easy to manage (add/remove targets)
Same framework works for multiple queries
Logging tables to check for failed queries
Works independently for each target, so if one target fails to respond, the others still continue
The workflow can be paused gracefully by disabling the queue (for maintenance on the central server), and collection can be resumed by re-enabling it
Disadvantages:
Requires a good understanding of Service Broker
Poison messages must be handled properly
Please let me know if this helps.

Fastest method to fill a database table with 10 Million rows

What is the fastest method to fill a database table with 10 million rows? I'm asking about the technique, but also about any specific database engine that would allow doing this as fast as possible. I'm not requiring this data to be indexed during the initial population.
Using SQL to load a lot of data into a database will usually result in poor performance. In order to do things quickly, you need to bypass the SQL engine. Most databases (including Firebird, I think) have the ability to back up all the data into a text (or maybe XML) file and to restore the entire database from such a dump file. Since the restoration process doesn't need to be transaction-aware and the data isn't represented as SQL, it is usually very quick.
I would write a script that generates a dump file by hand, and then use the database's restore utility to load the data.
After a bit of searching I found FBExport, which seems to be able to do exactly that - you'll just need to generate a CSV file and then use the FBExport tool to import that data into your database.
The fastest method is probably running an INSERT SQL statement with a SELECT FROM. I've generated test data to populate tables from other databases, and even from the same database, a number of times. But it all depends on the nature and availability of your own data. In my case I had enough rows of collected data that a few select/insert routines with random row selection, applied half-cleverly against real data, yielded decent test data quickly. In some cases where table data was uniquely identifying, I used intermediate tables and frequency-distribution sorting to eliminate things like uncommon names (eliminating instances where a count with GROUP BY was less than or equal to 2).
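As a hedged illustration of the INSERT ... SELECT approach on SQL Server, the sketch below generates synthetic rows by cross-joining system views instead of copying real data; dbo.TestData and its columns are hypothetical:

-- Hedged sketch: generate 10 million synthetic rows in one set-based INSERT ... SELECT.
-- dbo.TestData (Id INT, Payload VARCHAR(50)) is a hypothetical target table.
;WITH n AS (
    SELECT TOP (10000000)
           ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS rn
    FROM sys.all_objects AS a
    CROSS JOIN sys.all_objects AS b
    CROSS JOIN sys.all_objects AS c
)
INSERT INTO dbo.TestData (Id, Payload)
SELECT rn, 'row-' + CAST(rn AS VARCHAR(20))
FROM n;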
Also, Red Gate actually provides a utility to do just what you're asking. It's not free, and I think it's SQL Server-specific, but their tools are top notch. Well worth the cost. There's also a free trial period.
If you don't want to pay for their utility, you could conceivably build your own pretty quickly. What they do is not magic by any means. A decent developer should be able to knock out a similarly featured, though alpha/hardcoded, version of the app in a day or two...
You might be interested in the answers to this question. It looks at uploading a massive CSV file to a SQL Server (2005) database. For SQL Server, it appears that an SSIS package is the fastest way to bulk import data into a database.
It entirely depends on your DB. For instance, Oracle has something called direct path load (http://download.oracle.com/docs/cd/B10501_01/server.920/a96652/ch09.htm), which effectively disables indexing, and if I understand correctly, builds the binary structures that will be written to disk on the -client- side rather than sending SQL over.
Combined with partitioning and rebuilding indexes per partition, we were able to load a 1 billion row (I kid you not) database in a relatively short order. 10 million rows is nothing.
Use MySQL or MS SQL and built-in functions to generate records inside the database engine. Or generate a text file (in a CSV-like format) and then use bulk copy functionality.
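For the bulk copy route on SQL Server, a hedged BULK INSERT sketch is below; the file path and dbo.TestData table are hypothetical, and the file's column layout must match the table:

-- Hedged sketch: bulk load a pre-generated CSV file.
-- 'C:\load\rows.csv' and dbo.TestData are hypothetical placeholders.
BULK INSERT dbo.TestData
FROM 'C:\load\rows.csv'
WITH (
    FIELDTERMINATOR = ',',
    ROWTERMINATOR   = '\n',
    TABLOCK                 -- a table lock helps enable minimal logging on a heap
);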
