SSIS Select from Excel Worksheet with Where condition - sql-server

We are seeing an SSIS process taking too long overnight, and running late into the morning.
The process queries a database table to return a list of files that have changed since the last time the process ran (it runs daily); this can be anything from 3 (over a weekend) to 40 during the working week. There are potentially 258 Excel files (all based on the same template) that could be imported.
As I stated, the process seems to be taking too long on some days (it's not on a dedicated server), so we decided to look at the usual performance improvement suggestions, e.g. increasing DefaultBufferMaxRows and DefaultBufferSize to 50,000 and 50 MB respectively for each Data Flow Task in the project. The other main suggestion was to always prefer a SQL command over a Table or View. Each of my nine Data Flow Tasks relies on a range name from the spreadsheet, so if it might help with performance, what I want to know is: is it possible to select from an Excel worksheet with a WHERE condition?
The nine import ranges vary from a single cell to 10,000 rows by 230 columns. Each is imported into a staging table and then merged into the appropriate main table, but we have had issues with the import not correctly inferring data types (even with IMEX=1). It seems I might get a better import if I could select the data differently and restrict it to only the rows I'm interested in (rather than importing all 10,000 and then filtering them as part of the task), i.e. all rows where a specific column is not blank.
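For what it's worth, the Jet/ACE OLE DB provider behind the Excel Source does accept a WHERE clause once the source's access mode is switched to SQL command; a minimal sketch, assuming a named range called DataRange and a column called KeyColumn (both names are illustrative, not from the workbook described above):

```sql
-- Query a named range (or an explicit sheet range) instead of the whole worksheet;
-- only rows where the key column is populated reach the SSIS pipeline.
SELECT *
FROM [DataRange]                 -- or [Sheet1$A1:HV10000]
WHERE [KeyColumn] IS NOT NULL
```

Note this filters at the source, so the buffers only ever see matching rows; the type-guessing behaviour (TypeGuessRows, IMEX) is unchanged, though, so it may not fix the data-type issues on its own.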
This is just initially an exercise to look into performance improvement, but also it's going to help me going forward with maintaining and improving the package as it's an important process for the business.

Whilst testing the merge stored procedure to look at other ways to improve the process, it came to light that one of the temporary tables populated via SSIS was corrupted. It was taking 27 seconds to query and return 500 records, and the table statistics showed the table was much bigger than it should have been. After conversations with our DBA, we dropped and recreated the table, and the process is now running at its previous speed, i.e. 1 minute 13 seconds for 5 spreadsheets, and around 7 minutes for 43!
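A quick way to spot that kind of bloat is to compare a table's reserved space against its row count; a sketch (the table name is illustrative):

```sql
-- Reports rows, reserved, data, index_size and unused space for the table;
-- a large reserved/unused figure relative to ~500 rows is a red flag.
EXEC sp_spaceused N'dbo.StagingTable';
```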
Whilst I will still be revisiting the process at some point, everyone agrees to leave it unchanged for the moment.

Related

Developing an optimal solution/design that sums many database rows for a reporting engine

Problem: I am developing a reporting engine that displays data about how many bees a farm detected (bees are just an example here).
I have 100 devices that each minute count how many bees were detected on the farm. Here is what the DB looks like:
So there can be hundreds of thousands of rows in a given week.
The farmer wants a report that shows for a given day how many bees came each hour. I developed two ways to do this:
The server takes all 100,000 rows for that day from the DB and filters them down. The server uses a large amount of memory to do this, and I feel this is a brute-force solution.
I have a Stored Procedure that returns a temporarily created table, with the number of bees collected per device totalled for each hour. The server takes this table and doesn't need to process 100,000 rows.
This returns 24 × 100 = 2,400 rows. However, it takes much longer than I expected.
What are some good candidate solutions for consolidating and summing this data without taking 30 seconds just to sum a day of data (when I may need a month's worth divided between days)?
If performance is your primary concern here, there's probably quite a bit you can do directly on the database. I would try indexing the table on time_collected_bees so it can filter down to 100K lines faster. I would guess that that's where your slowdown is happening, if the database is scanning the whole table to find the relevant entries.
If you're using SQL Server, you can try looking at your execution plan to see what's actually slowing things down.
Give database optimization more of a look before you architect something really complex and hard to maintain.
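As a concrete sketch of both suggestions (the table and column names other than time_collected_bees are assumptions, not from the question):

```sql
-- Covering index so one day's rows are found by a seek, not a full table scan
CREATE INDEX IX_detections_time
    ON dbo.bee_detections (time_collected_bees)
    INCLUDE (device_id, bee_count);

-- Hourly totals for a single day, computed entirely in the database
DECLARE @day date = '2024-06-01';
SELECT DATEPART(hour, time_collected_bees) AS hour_of_day,
       SUM(bee_count)                      AS total_bees
FROM dbo.bee_detections
WHERE time_collected_bees >= @day
  AND time_collected_bees <  DATEADD(day, 1, @day)
GROUP BY DATEPART(hour, time_collected_bees)
ORDER BY hour_of_day;
```

With the index in place, the WHERE clause is a range seek, so the query touches only that day's rows rather than the whole table.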

Recommended ETL approaches for large to big data

I've been reading up on this (via Google search) for a while now and still not getting a clear answer, so I finally decided to post.
I'm trying to get a clear idea of what a good process for setting up an automated ETL pipeline looks like. Let's take the following problem:
Data:
Product Code, Month, Year, Sales, Flag
15 million rows of data, covering 5,000 products. Given this data, calculate whether the cumulative sales for a particular product exceed X, and set flag = 1 at the point in time the threshold was exceeded.
How would people approach this task? My approach was to attempt it using SQL Server, but that was painfully slow at times. In particular, there was a step in the transformation that would have required me to write a stored procedure that created an index on a temp table on the fly in order to speed things up, all of which seemed like a lot of bother.
Should I have coded it in Java or Python? Should I have used Alteryx or Lavastorm? Is this something I should ideally be doing using Hadoop?
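For what it's worth, on SQL Server 2012 or later the cumulative check can be done in a single pass with a windowed SUM, with no temp tables or on-the-fly indexes; a sketch, assuming illustrative table/column names and a threshold variable @X:

```sql
DECLARE @X money = 100000;   -- threshold (illustrative value)

-- Running total per product in Year/Month order; Flag = 1 once it exceeds @X
SELECT ProductCode, [Year], [Month], Sales,
       CASE WHEN SUM(Sales) OVER (PARTITION BY ProductCode
                                  ORDER BY [Year], [Month]
                                  ROWS UNBOUNDED PRECEDING) > @X
            THEN 1 ELSE 0 END AS Flag
FROM dbo.ProductSales;
```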

Given a 10^8-row, 10 GB import, is it better to import data on separate rows or consolidate and separate the rows in the DB?

I'm doing a rather large import to a SQL Database, 10^8+ items and I am doing this with a bulk insert. I'm curious to know if the speed at which the bulk insert runs can be improved by importing multiple rows of data as a single row and splitting them once imported?
If the time to import data is defined by the sheer volume of data itself (i.e. 10 GB), then I'd expect that importing 10^6 rows vs 10^2 rows with the data consolidated would take about the same amount of time.
If, however, the import time is limited more by per-row operations and per-line logging than by the data itself, then I'd expect consolidating rows to have a performance benefit. I'm not sure, however, how this would carry over if one then had to break the data up in the DB later on.
Does anyone have experience with this and can shed some light on what specifically can be done to reduce bulk insert time without simply adding that time later to split the data in DB?
Given a 10GB import, is it better to import data on separate rows or consolidate and separate the rows in the DB?
[EDIT] I'm testing this on a quad-core 2.5 GHz machine with 8 GB of RAM and 300 MB/sec of read/write to disk (striped array). The files are hosted on the same array, and the average row size varies, with some rows containing large amounts of data (> 100 KB) and many under 100 B.
I've chunked my data into 100 MB files and it takes about 40 seconds to import the file. Each file has 10^6 rows in it.
Where is the data that you are importing? If it is on another server, then the network might be the bottleneck; that then depends on the number of NICs and on frame sizes.
If it is on the same server, things to play with are the batch size and the recovery model, which affects the log file. In the full recovery model, everything is written to the log file; the bulk-logged recovery model has a little less logging overhead.
Since this is staging data, taking a full backup before the process, changing the model to simple, then importing might reduce the time. Of course, change the model back to full and take another backup afterwards.
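A sketch of that sequence, together with a BULK INSERT that sets an explicit batch size (database, table, and file names are illustrative):

```sql
BACKUP DATABASE StagingDB TO DISK = N'D:\backup\StagingDB_pre.bak';
ALTER DATABASE StagingDB SET RECOVERY SIMPLE;   -- reduce logging for the load

BULK INSERT dbo.Staging
FROM 'D:\import\chunk_001.csv'
WITH (FIELDTERMINATOR = ';',
      ROWTERMINATOR   = '\n',
      BATCHSIZE       = 100000,   -- commit every 100k rows to keep the log small
      TABLOCK);                   -- table lock enables minimally logged inserts

ALTER DATABASE StagingDB SET RECOVERY FULL;
BACKUP DATABASE StagingDB TO DISK = N'D:\backup\StagingDB_post.bak';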
As for importing non-normalized data, multiple rows at a time, I usually stay away from the extra coding.
Most of the time, I use SSIS packages. More packages and threads mean a fuller NIC pipe. I usually have at least a 4 Gb backbone that is seldom full.
Other things that come into play are your disks. Do you have multiple file paths to the RAID 5 array? If not, you might want to think about it.
In short, it really depends on your environment.
Use a DMAIC process.
1 - Define what you want to do
2 - Measure the current implementation
3 - Analyze ways to improve.
4 - Implement the change.
5 - Control the environment by remeasuring.
Did the change go in the positive direction?
If not, rollback the change and try another one.
Repeat the process until the desired result (timing) is achieved.
Good luck, J
If this is a one-time thing done in an offline change window, you may want to consider putting the database in the simple recovery model prior to inserting the data.
Keep in mind, though, that this will break the log chain...

Daily data generation and insertion

I'm facing a problem that perhaps someone around here can help me with.
I work in a business intelligence company and I'd like to simulate the whole usage cycle of our product the way our clients use it.
The short version is that our customers are inserting some 20 million records to their database on a daily basis, and our product crunches the new data at the end of the day.
I would like to automatically create around 20 million records and insert them into some database, everyday (MSSQL probably).
I should point out that the number of records changes from day to day, between 15 and 25 million. Other than that, the data is supposed to be inserted into 6 tables linked with foreign keys.
I usually use Redgate's SQL Data Generator to create data, but as far as I can tell it's good for one-time data generation as opposed to the ongoing generation I'm looking for.
If anyone knows of methods/tools adequate to this situation, please let me know.
Thanks!
You could also write a small Java (or similar) program to get the starting ID from the database, pick a random number of rows to insert, and then execute the data-generation tool as a child process.
For example, see Runtime.exec():
http://docs.oracle.com/javase/7/docs/api/java/lang/Runtime.html
You can then run your program as a scheduled task or cron job.
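If MSSQL is the target anyway, the random daily volume can also be generated without an external tool, using a cross-join tally in T-SQL; a sketch with a single illustrative target table (a real version would populate all 6 FK-linked tables in dependency order):

```sql
-- Pick a random daily row count between 15 and 25 million
DECLARE @n int = 15000000 + ABS(CHECKSUM(NEWID())) % 10000001;

-- Cross-joined catalog views act as a cheap numbers source
INSERT INTO dbo.DailyFacts (CustomerId, EventTime, Amount)
SELECT TOP (@n)
       ABS(CHECKSUM(NEWID())) % 100000,                       -- random customer
       DATEADD(second, ABS(CHECKSUM(NEWID())) % 86400,
               CAST(CAST(GETDATE() AS date) AS datetime)),    -- random time today
       RAND(CHECKSUM(NEWID())) * 1000                         -- random amount
FROM sys.all_objects a
CROSS JOIN sys.all_objects b
CROSS JOIN sys.all_objects c;
```

Scheduled as a SQL Server Agent job, this runs unattended each day.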

How to efficiently utilize 10+ computers to import data

We have flat files (CSV) with >200,000,000 rows, which we import into a star schema with 23 dimension tables. The biggest dimension table has 3 million rows. At the moment we run the importing process on a single computer and it takes around 15 hours. As this is too long, we want to utilize something like 40 computers to do the importing.
My question
How can we efficiently utilize the 40 computers to do the importing? The main worry is that a lot of time will be spent replicating the dimension tables across all the nodes, as they need to be identical everywhere. This could mean that if we utilized 1000 servers to do the importing in the future, it might actually be slower than using a single one, due to the extensive network communication and coordination between the servers.
Does anyone have suggestions?
EDIT:
The following is a simplification of the CSV files:
"avalue";"anothervalue"
"bvalue";"evenanothervalue"
"avalue";"evenanothervalue"
"avalue";"evenanothervalue"
"bvalue";"evenanothervalue"
"avalue";"anothervalue"
After importing, the tables look like this:
dimension_table1
id name
1 "avalue"
2 "bvalue"
dimension_table2
id name
1 "anothervalue"
2 "evenanothervalue"
Fact table
dimension_table1_ID dimension_table2_ID
1 1
2 2
1 2
1 2
2 2
1 1
You could consider using a 64bit hash function to produce a bigint ID for each string, instead of using sequential IDs.
With 64-bit hash codes, you can store 2^(32 - 7) or over 30 million items in your hash table before there is a 0.0031% chance of a collision.
This would allow you to have identical IDs on all nodes, with no communication whatsoever between servers between the 'dispatch' and the 'merge' phases.
You could even increase the number of bits to lower the chance of collision further; only then the resulting hash would no longer fit in a 64-bit integer database field.
See:
http://en.wikipedia.org/wiki/Fowler_Noll_Vo_hash
http://code.google.com/p/smhasher/wiki/MurmurHash
http://www.partow.net/programming/hashfunctions/index.html
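FNV and Murmur aren't built into SQL Server, but HASHBYTES can stand in to illustrate the idea of deriving a deterministic 64-bit ID from the value itself (table and column names are illustrative):

```sql
-- Every node computes the same ID for the same string, with no coordination;
-- the first 8 bytes of the hash become the bigint surrogate key.
SELECT DISTINCT
       CONVERT(bigint, SUBSTRING(HASHBYTES('MD5', name), 1, 8)) AS id,
       name
FROM dbo.staging_dimension1;
```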
Loading CSV data into a database is slow because it needs to read, split and validate the data.
So what you should try is this:
Setup a local database on each computer. This will get rid of the network latency.
Load a different part of the data on each computer. Try to give each computer an equal-sized chunk. If that isn't easy for some reason, give each computer, say, 10,000 rows; when they are done, give them the next chunk.
Dump the data with the DB tools
Load all dumps into a single DB
Make sure that your loader tool can import data into a table which already contains data. If it can't, check your DB documentation for "remote tables". A lot of databases allow you to make a table from another DB server visible locally.
That allows you to run commands like insert into TABLE (....) select .... from REMOTE_SERVER.TABLE
If you need primary keys (and you should), you will also have the problem of assigning PKs during the import into the local DBs. I suggest adding the PKs to the CSV file.
[EDIT] After checking with your edits, here is what you should try:
Write a small program which extracts the unique values in the first and second columns of the CSV file. That could be a simple script like:
cut -d";" -f1 | sort -u | nawk '{ print NR ";" $0 }'
(run once per column, switching -f1 to -f2). This is a pretty cheap process (a couple of minutes even for huge files), and it gives you ID-value files.
Write a program which reads the new ID-value files, caches them in memory and then reads the huge CSV files and replaces the values with the IDs.
If the ID-value files are too big, just do this step for the small files and load the huge ones into all 40 per-machine DBs.
Split the huge file into 40 chunks and load each of them on each machine.
If you had huge ID-value files, you can use the tables created on each machine to replace all the values that remained.
Use backup/restore or remote tables to merge the results.
Or, even better, keep the data on the 40 machines and use algorithms from parallel computing to split the work and merge the results. That's how Google can create search results from billions of web pages in a few milliseconds.
See here for an introduction.
This is a very generic question and does not take the database backend into account. Firing 40 or 1000 machines at a database backend that cannot handle the load will gain you nothing. Such a problem is truly too broad to answer in a specific way; you should get in touch with people inside your organization with enough skills at the DB level first, and then come back with a more specific question.
Assuming N computers, X files at about 50 GB each, and a goal of having one database containing everything at the end.
Question: It takes 15 hours now. Do you know which part of the process is taking the longest? (Reading data, cleansing data, saving read data in tables, indexing… you are inserting data into unindexed tables and indexing after, right?)
To split this job up amongst the N computers, I’d do something like (and this is a back-of-the-envelope design):
Have a “central” or master database. Use this to manage the overall process and to hold the final, complete warehouse.
It contains lists of all X files and all N-1 (not counting itself) “worker” databases
Each worker database is somehow linked to the master database (just how depends on RDBMS, which you have not specified)
When up and running, a “ready” worker database polls the master database for a file to process. The master database doles out files to worker systems, ensuring that no file gets processed by more than one at a time. (You have to track the success/failure of loading a given file, watch for timeouts (worker failed), and manage retries.)
Worker database has local instance of star schema. When assigned a file, it empties the schema and loads the data from that one file. (For scalability, might be worth loading a few files at a time?) “First stage” data cleansing is done here for the data contained within that file(s).
When loaded, the master database is updated with a “ready” flag for that worker, and the worker goes into waiting mode.
The master database has its own to-do list of worker databases that have finished loading data. It processes each waiting worker set in turn; when a worker set has been processed, the worker is set back to “check if there's another file to process” mode.
At start of process, the star schema in the master database is cleared. The first set loaded can probably just be copied over verbatim.
For second set and up, have to read and “merge” data – toss out redundant entries, merge data via conformed dimensions, etc. Business rules that apply to all the data, not just one set at a time, must be done now as well. This would be “second stage” data cleansing.
Again, repeat the above step for each worker database, until all files have been uploaded.
Advantages:
Reading/converting data from files into databases and doing “first stage” cleansing gets scaled out across N computers.
Ideally, little work (“second stage”, merging datasets) is left for the master database
Limitations:
Lots of data is first read into worker database, and then read again (albeit in DBMS-native format) across the network
Master database is a possible chokepoint. Everything has to go through here.
Shortcuts:
It seems likely that when a workstation “checks in” for a new file, it can refresh a local store of data already loaded in the master and add data cleansing considerations based on this to its “first stage” work (i.e. it knows code 5484J has already been loaded, so it can filter it out and not pass it back to the master database).
SQL Server table partitioning or similar physical implementation tricks of other RDBMSs could probably be used to good effect.
Other shortcuts are likely, but it totally depends upon the business rules being implemented.
Unfortunately, without further information or understanding of the system and data involved, one can't tell whether this process would end up being faster or slower than the “do it all on one box” solution. At the end of the day it depends a lot on your data: does it submit to “divide and conquer” techniques, or must it all be run through a single processing instance?
The simplest thing is to make one computer responsible for handing out new dimension item IDs. You can have one for each dimension. If the dimension-handling computers are on the same network, you can have them broadcast the IDs. That should be fast enough.
What database did you plan on using with a 23-dimension star schema? Importing might not be the only performance bottleneck. You might want to do this in a distributed main-memory system, which avoids a lot of the materialization issues.
You should investigate whether there are highly correlated dimensions.
In general, with a 23-dimension star schema with large dimensions, a standard relational database (SQL Server, PostgreSQL, MySQL) is going to perform extremely badly on data-warehouse queries. To avoid full table scans, relational databases use materialized views, and with 23 dimensions you cannot afford enough of them. A distributed main-memory database might be able to do full table scans fast enough (in 2004 I did about 8 million rows/sec/thread on a Pentium 4 3 GHz in Delphi). Vertica might be another option.
Another question: how large is the file when you zip it? That provides a good first order estimate of the amount of normalization you can do.
[edit] I've taken a look at your other questions. This does not look like a good match for PostgreSQL (or MySQL or SQL server). How long are you willing to wait for query results?
Rohita,
I'd suggest you eliminate a lot of the work from the load by summarising the data FIRST, outside of the database. I work in a Solaris unix environment. I'd be leaning towards a korn-shell script which cuts the file up into more manageable chunks, then farms those chunks out equally to my two OTHER servers. I'd process the chunks with a nawk script (nawk has an efficient hashtable, which it calls "associative arrays") to calculate the distinct values (the dimension tables) and the fact table: just associate each new name seen with an incrementor for that dimension, then write the fact row.
If you do this through named pipes you can push, process remotely, and read back the data 'on the fly', while the "host" computer sits there loading it straight into tables.
Remember, No matter WHAT you do with 200,000,000 rows of data (How many Gig is it?), it's going to take some time. Sounds like you're in for some fun. It's interesting to read how other people propose to tackle this problem... The old adage "there's more than one way to do it!" has never been so true. Good luck!
Cheers. Keith.
On another note, you could utilize the Windows Hyper-V cloud-computing add-on for Windows Server: http://www.microsoft.com/virtualization/en/us/private-cloud.aspx
It seems that your implementation is very inefficient, as it's loading at less than 1 MB/sec (50 GB / 15 hrs).
A proper implementation on a modern single server (2x Xeon 5690 CPUs + enough RAM to hold ALL the dimensions in hash tables + 8 GB) should give you at least 10 times that speed, i.e. at least 10 MB/sec.
