Any tips for speeding up the import processes? There are a lot of joins in the database.
Also, when an SSIS task completes, is it better to handle the next steps in code or by using the email notification that SSIS provides?
Here is a sample that I have used to illustrate loading 1 million rows from a text file into a SQL Server database in under 3 minutes. The package in the sample was created using SSIS 2008 R2 and was executed on a single-core 2.5 GHz Xeon CPU with 3.00 GB of RAM.
Import records on SSIS after lookup
One of the main bottlenecks when importing a large number of rows is the destination component. The faster the destination component can insert rows, the faster the preceding source and transformation components can push rows through. The exception is a blocking component such as the Sort transformation, which holds all of the data until it has finished sorting before sending anything down the pipeline.
How you send the email depends on what you would like to do.
If you only need a simple success or failure notification, you can use the Send Mail task. Another option is to enable alert notifications on the SQL Agent job from which you might schedule the package to run on a regular basis.
If you need more information in the email, you might need a Script Task to build the message body. After creating the message body, you can send the mail from within the Script Task or hand it off to a Send Mail task.
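If Database Mail is configured on the instance, another hedged option is to send the richer message from an Execute SQL Task (or an Agent job step) via msdb.dbo.sp_send_dbmail; the profile name, recipients, and row-count variable below are placeholders:
-- Placeholder names throughout; adjust the profile, recipients, and message to your package
DECLARE @rows INT = 1000000;   -- e.g. passed in from the package as a parameter
DECLARE @body NVARCHAR(MAX) =
      N'Package finished at ' + CONVERT(NVARCHAR(30), SYSDATETIME())
    + N'. Rows loaded: ' + CAST(@rows AS NVARCHAR(20)) + N'.';
EXEC msdb.dbo.sp_send_dbmail
    @profile_name = N'ETL Mail Profile',
    @recipients   = N'etl-team@example.com',
    @subject      = N'SSIS package completed',
    @body         = @body;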
Hope that example, along with the article #Nikhil S provided, helps you fine-tune your package.
This SimpleTalk article discusses ways to optimize your data flow task:
Horizontally partition the data to be transferred into N data flows, where N is the number of CPU cores available on the server where SSIS is installed (a sketch follows below).
Experiment with the SSIS buffer size properties to figure out the optimal settings for your kind of data.
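As a rough illustration of the partitioning idea (the table, key column, and number of slices are assumptions), each of the N parallel data flows selects a disjoint slice of the source:
-- One of N = 4 parallel data flows; the other three filter on = 1, = 2, = 3
SELECT *
FROM dbo.SourceTable
WHERE ABS(CHECKSUM(BusinessKey)) % 4 = 0;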
Related
We have an SSIS package that has been termed 'slow' by the development team. Since they do not have anyone with SSIS/ETL experience, as a DBA I tried digging into it. Below is the information I found:
SQL Server 2014 was upgraded in-place to 2017, so the server has SSIS for both versions.
They load a 200 GB SQL Server table into SSIS and then zip the data into a flat file using command-line zip functionality.
The data flow task simply runs a select * from a view; the view contains nothing but the table, with no other joins.
While troubleshooting I found that SQL Server is under hardly any load, possibly because the select command runs in a single thread and does not utilize the SQL Server cores.
When I run the same select * command myself (only for 5 seconds, since it is a 200 GB table), it is also single-threaded.
The package has a configuration file, referenced by the SQL Agent job (this is how the package runs), with some connection settings.
Opening the package in BIDS shows DefaultBufferMaxRows as only 10000 (possibly the default value); since neither the configuration file nor any variable has a custom value, I assume this is what the package is using as well.
Both SQL Server and SSIS are on the same server. SQL Server has been allocated max memory, leaving around 100 GB for SSIS and the OS.
Kindly share any ideas on how I can force SQL Server to run this select command using multiple threads so that the entire table gets into the SSIS buffers faster.
Edit: I am aware that bcp can read data faster than any other process and save it to a flat file, but at this point changes to the SSIS package have to be kept to a minimum, so I am exploring options that can be incorporated within the SSIS package.
Edit 2: Parallelism works perfectly on my SQL Server, as I have verified with a lot of other queries. The table in question is 200 GB. It is something with SSIS alone that is not hammering my DB as hard as it should.
Edit 3: I have made some progress: I adjusted the buffer size to 100 MB and max rows to 100000, and now the package seems to be doing better. When I run this package on the server directly using the dtexec utility, it generates a good load of 40-50 MB per second, but through the SQL Agent job it never generates a load of more than 10 MB, so I am trying to figure out this behavior.
Edit 4: I found that when I run the package directly by logging on to the server and invoking the dtexec utility, it runs well: it generates a good load on the DB, and data I/O remains steady at 30-50 MB/sec.
The same package run from the SQL Agent job never pushes I/O above 10 MB/sec.
I even tried running the package from the Agent using the command-line (cmdline) option, but nothing changed. The Agent really struggles here; any pointers on what could be wrong?
Final Try:
I am stumped by what I finally observed:
1) The same package runs 3x faster when run from a command prompt on the Windows node by invoking the dtexec utility.
2) The exact same package runs 3x slower than the above when invoked by the SQL Agent, which has sysadmin permissions on Windows as well as SQL Server.
In both cases I checked which version of dtexec is invoked, and they both invoke the same version. So why one would be so slow is beyond my understanding.
I don't think there is a general solution to this issue, since it is a particular case about which you haven't provided much information. Since there are two components in your data flow task (OLE DB Source and Flat File Destination), I will try to give some suggestions related to each component.
Before giving suggestions for each component, it is good to mention the following:
If no transformations are applied within the data flow task, it is not recommended to use this task; it is preferable to use the bcp utility.
Check the TempDb and the database log size.
If a clustered index exists, try to rebuild it. If not, try to create a clustered index (a T-SQL sketch follows this list).
To check which component is slowing the package execution, open the package in Visual Studio, remove the Flat File Destination, and replace it with a dummy Script Component (write any useless code, for example: string s = "";). Then run the package: if it is fast enough, the problem is caused by the Flat File Destination; otherwise you need to troubleshoot the OLE DB Source.
Try executing the query in SQL Server Management Studio and look at the execution plan.
Check the package TargetServerVersion property within the package configuration and make sure it is correct.
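For the clustered-index suggestion above, a minimal T-SQL sketch (the table, index, and key column names are assumptions):
-- Rebuild an existing clustered index
ALTER INDEX PK_BigTable ON dbo.BigTable REBUILD;
-- Or, if the table is a heap, create one on a suitable key
CREATE CLUSTERED INDEX CIX_BigTable_Id ON dbo.BigTable (Id);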
OLE DB Source
As you mentioned, you are using a Select * from view query where data is stored in a table that contains a considerable amount of data. The SQL Server query optimizer may find that reading data using Table Scan is more efficient than reading from indexes, especially if your table does not have a clustered index (row store or column store).
There are many things you may try to improve data load:
Try replacing the Select * from view with the original query used to create the view.
Try changing the data provider used in the OLE DB Connection Manager: SQL Server Native Client, Microsoft OLE DB provider for SQL Server (not the old one).
Try increasing the DefaultBufferMaxRows and DefaultBufferSize properties. more info
Try using the SQL Command data access mode with specific column names instead of selecting the view name (Table or View data access mode). more info
Try to load data in chunks, as sketched below.
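A hedged sketch of a chunked OLE DB Source query (the view name, key column, and range boundaries are assumptions; in practice the boundaries would come from package variables):
-- Read one key range per execution instead of the whole table at once
DECLARE @RangeStart BIGINT = 0;        -- e.g. supplied by a package variable
DECLARE @RangeEnd   BIGINT = 5000000;
SELECT ColumnA, ColumnB, ColumnC       -- only the columns actually exported
FROM dbo.YourView
WHERE Id > @RangeStart
  AND Id <= @RangeEnd;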
Flat File Destination
Check that the flat file directory is not located on the same drive where SQL Server instance is installed
Check that the flat file is not located on a busy drive
Try to export the data into multiple flat files instead of one huge file (split the data into smaller files), since as the amount of data in a single file grows, writing to that file becomes slower and the package slows down. (Check the fifth suggestion above.)
Any indexes on the table could slow loading. If there are any indexes, try dropping them before the load and then recreating them after. This would also update the index statistics, which would be skewed by the bulk insert.
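A minimal sketch of that approach (the index, table, and column names are assumptions):
-- Drop a nonclustered index before the bulk load ...
DROP INDEX IX_BigTable_Customer ON dbo.BigTable;
-- ... run the load, then recreate the index (which also refreshes its statistics)
CREATE NONCLUSTERED INDEX IX_BigTable_Customer ON dbo.BigTable (CustomerId);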
Are you seeing SQL Server utilize multiple cores for other queries? If not, maybe someone has changed the following settings.
Check these under the server configuration settings:
Maximum Degree of Parallelism
Cost Threshold for Parallelism (server configuration setting).
Whether processors are affinitized to specific CPUs (processor affinity).
Also, a MAXDOP query hint can cause this too, but you said there is nothing fancy in the view.
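A quick way to inspect those server-level settings (a hedged sketch; adjust as needed):
-- Show the current parallelism-related configuration values
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism';      -- a value of 1 disables parallel plans
EXEC sp_configure 'cost threshold for parallelism';
-- Schedulers taken offline by processor affinity show up as VISIBLE OFFLINE
SELECT scheduler_id, [status] FROM sys.dm_os_schedulers WHERE [status] LIKE 'VISIBLE%';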
Also, it seems you have enough memory on the server, so why not increase DefaultBufferMaxRows to a very large number so that SQL Server doesn't get slowed down waiting for the buffer to empty? Remember, SQL Server and SSIS are using the same disk and have to wait for each other to use it, which causes extra wait time for both. It's better for SQL Server to read the data into the buffer first, and then let SSIS start processing it and writing it to disk.
DefaultBufferSize : default is 10MB, max possible 2^31-1 bytes
DefaultBufferMaxRows : default is 10000
You can set AutoAdjustBufferSize so that DefaultBufferSize is automatically calculated based on DefaultBufferMaxRows.
See other performance troubleshooting ideas here
https://learn.microsoft.com/en-us/sql/integration-services/data-flow/data-flow-performance-features?view=sql-server-ver15
Edit 1: Some other properties you can check out. These are explained in the above link as well
MaxConcurrentExecutables (package property): This defines how many threads a package can use.
EngineThreads (Data Flow property): how many threads the data flow engine can use
Also try running dtexec under the same proxy user used by the SQL Agent to see if you get a different result with that account versus your own. You can use runas /user:... cmd to open a command window under that user and then execute dtexec.
Try changing the proxy user used in the SQL Agent to a new one and see if it helps. Or try giving it elevated permissions on the directories it needs access to.
Try keeping the package on the file system and executing it through dtexec from the SQL Agent directly instead of using catalog.start_execution.
Not your case, but for other readers: if you have an "Execute Package Task", make sure the child packages are set to run in-process via the ExecuteOutOfProcess property. This reduces the overhead of spawning extra processes.
Not your case, but for other readers: if you're testing in BIDS, it will run in debug mode by default and thus run slowly. Use CTRL+F5 (start without debugging). Best is to use dtexec directly to test performance.
A data flow task may not be the best choice to move this data. SSIS Data Flow tasks are an ETL tool in which you can do transformations, lookups, redirect invalid rows, add derived columns, and a lot more. If the data flow task is simple and only moves data with no manipulation or redirection of rows, then ditch the Data Flow task and use a simple Execute SQL Task with OPENROWSET to import the flat file that was generated from the command line and zipped up. Assuming the flat file is a .csv file, here is a working example that queries a .csv and inserts the data into a table.
You need [Ad Hoc Distributed Queries] run_value set to 1
SELECT *
INTO dbo.Destination
FROM OPENROWSET('MSDASQL', 'Driver={Microsoft Text Driver (*.txt; *.csv)};
DefaultDir=D:\;Extensions=csv;', 'SELECT * FROM YourCsv.csv') AS [File];
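If the Ad Hoc Distributed Queries option is not yet enabled on the instance, a hedged sketch for turning it on (requires permission to change server configuration):
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'Ad Hoc Distributed Queries', 1;
RECONFIGURE;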
Here are some additional examples: https://sqlpowershell.blog/2015/02/09/t-sql-read-csv-files-using-openrowset/
There are suggestions in this MSDN article: MSDN DataFlow performance features
Key ones appear to be:
Check the EngineThreads property of the Data Flow task, which tells SSIS how many source and worker threads it should use.
If you are using an OLE DB Source to select data from a view, use "SQL Command" and write a SELECT * FROM View statement rather than using the Table or View access mode.
Let us know how you get on
You may be facing an I/O bottleneck while writing the 200 GB to the flat file. I don't see any problem with the SQL query.
If possible, create multiple files and split the data (either by modifying SSIS or by changing the select query).
I know 2008 is outdated, but a need is a need regardless of the source.
I am using SSIS 2008 and SQL Server 2008 R2.
My requirements are:
Load data from multiple source files using bulk load in the control flow (each file is a 20 GB .txt file, roughly 150 GB in total for the 5 files).
The SQL table is a single table (Tabledata, with 243 columns).
I used SSIS to bulk load the 5 files, but the load was blocked for a long time. Any help is appreciated.
To get the best performance when loading a table, we want to shovel the data in by the truckload (a table lock). The problem is, there's only room for one truck in the bay. Otherwise, if you want multiple feeds into a table at one time, you're likely looking at throwing data in by the shovelful: that way, 5 workers can be there and their loading won't block each other, but the throughput is lessened.
If you're on Enterprise Edition, or you want to go old-school with a partitioned view on Standard Edition, then you could have each partition/individual table loaded in parallel, with N worker processes pouring the data in as fast as the disk subsystem allows and none of the contention you're currently experiencing.
As #David Browne points out, SQL Server supports parallel bulk loads into unindexed heap tables.
To get a BU (bulk update) lock, you need to specify the TABLOCK option on each bulk import stream; the streams then do not block one another.
In your OLE DB Destination, this means using Fast Load (the default) and checking the Table lock checkbox. As the MSDN article calls out, the destination table needs to be empty, or the lock will be IX-Tab and not BU-Tab.
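If any of the five feeds were driven from T-SQL instead of an OLE DB Destination, the equivalent hedged sketch is a BULK INSERT with TABLOCK per stream (the file path, terminators, and batch size are placeholders):
-- One statement per file, run concurrently; TABLOCK requests the BU lock
BULK INSERT dbo.Tabledata
FROM 'D:\feeds\file1.txt'
WITH (TABLOCK,
      FIELDTERMINATOR = '\t',
      ROWTERMINATOR   = '\n',
      BATCHSIZE       = 100000);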
Yes, you can load from multiple sources. I read from multiple sources at the same time, and write using the fast load option.
We are working on a requirement where we want to fetch incremental data from one Redshift cluster "row-wise", process it based on our requirements, and insert it into another Redshift cluster. We want to do it row-wise, not as a batch operation. For that we are writing a generic service that will do row-by-row processing from Redshift to Redshift, so the flow is Redshift -> Service -> Redshift.
For inserting data, we will use insert queries. For performance, we will commit after each batch rather than per row.
But I am a bit worried about the performance of many insert queries. Is there any other tool available that does this? There are many ETL tools available, but they all do batch processing. We want to process row-wise. Can someone please advise?
Based on experience, I can guarantee that your approach will not be efficient. You can refer to this link for detailed best practices:
https://docs.aws.amazon.com/redshift/latest/dg/c_loading-data-best-practices.html
Instead, I would suggest that you do the following:
Write a Python script to unload the data from your source Redshift cluster to S3 based on a query condition that filters data as per your requirement, i.e. based on some threshold like time, date, etc. This operation should be fast, and you can schedule the script to execute every minute or every couple of minutes, generating multiple files.
Now you basically have a continuous stream of files in S3, where the size of each file or batch can be controlled by the frequency of the previous script.
Now all you have to do is set up a service that keeps polling S3 for objects/files as they are created, processes them as needed, and puts the processed files in another bucket; let's call it B2.
Set up another Python script/ETL step that remotely executes a COPY command loading from bucket B2 into the target cluster.
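As a rough sketch of the unload and load ends (the bucket names, IAM roles, table names, and filter condition are all assumptions):
-- On the source cluster: unload the incremental rows to S3
UNLOAD ('SELECT * FROM source_schema.events WHERE updated_at > ''2020-01-01 00:00:00''')
TO 's3://my-etl-bucket/incoming/events_'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftUnloadRole'
GZIP;
-- On the target cluster: load the processed files from bucket B2
COPY target_schema.events
FROM 's3://my-etl-bucket-processed/events_'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
GZIP;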
This is just an initial idea, though. You will have to evolve and optimize this approach. Best of luck!
Loading data from my OLTP database (as part of ETL) via OPENQUERY or an SSIS Data Flow into another SQL Server database (the warehouse, which runs this SSIS package / OPENQUERY statement) kills it. As I checked in Performance Monitor, the resources being used are on the source database, not the destination. Is it possible to reverse this resource utilization (using SQL Server 2016 or SSIS)?
The problem here is in your destination write operation. If you are using an OLE DB Destination with the fast load access mode, try setting Rows per batch to a non-zero value and reduce Maximum insert commit size to a value that is easy on your memory and CPU. SSIS will then not wait for the default of 2147483647 rows before committing to the destination table, which can otherwise have a large impact on your log file and slow your process down. Please refer to this article for more info on setting these values. All the best.
What does your export query look like? Is it just a simple data dump, or do you have some complex logic in it (e.g. denormalization/aggregation during the export)?
If it's just a simple export, check which server your SSIS package runs on and what resources it uses. In any case, you need to read the data from your source system, so expect some disk read operations there.
In general, it is better to get the data out of the OLTP system as quickly as possible and then apply further operations in later steps of your ETL process on your ETL/data warehouse server, in order to reduce the impact on your transactional system.
Hope it helps.
We are seeing enormous amounts of data traffic to and from our SSIS server. We cannot find the culprit. Is there any way to find out which package is causing all the traffic? Any advice on that? We think that maybe all the merges we do cause the traffic. Our SSIS machine gets data from several production SQL Servers and merges it with data in our warehouses. Does that mean that
a) new data is transferred to the SSIS machine,
b) existing data is transferred to the SSIS machine,
c) the merge is done and then all data is transferred to the warehouse?
And how would you go about limiting all the data being moved back and forth?
The answer to your questions a, b, and c (if you're using SSIS transformation components) is essentially "yes: all new data and existing data required for the transformation will flow into the SSIS instance, and the resulting merged data will flow out of the SSIS instance to the target server". A more detailed explanation is below.
Assuming that you are using SQL Server 2012 or above, you can enable verbose logging to capture the number of rows transferred. The details are captured in [catalog].[execution_data_statistics]. If you are looking for the size in bytes, you would need to calculate that from the columns being extracted and transformed multiplied by the number of rows. [catalog].[execution_data_statistics] captures the package name, task name, data flow path, source/destination component names, the time of execution, and the execution path, which is great for diagnosing.
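A hedged sketch of pulling row counts per package and data flow path out of SSISDB (assumes the packages run from the catalog with verbose logging enabled):
-- Which packages and data flow paths move the most rows
SELECT package_name,
       task_name,
       dataflow_path_name,
       source_component_name,
       destination_component_name,
       SUM(rows_sent) AS total_rows_sent
FROM SSISDB.catalog.execution_data_statistics
GROUP BY package_name, task_name, dataflow_path_name,
         source_component_name, destination_component_name
ORDER BY total_rows_sent DESC;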
SSIS is an in-memory pipeline. If you have 3 separate servers, source, SSIS, and target, the amount of data traffic will vary. For example, if the Data Flow Tasks require transformation and use components such as Merge, Merge Join, Lookup, etc., you can expect data flowing between the source server, the SSIS server, and the target server.
On the other hand, if you are running a simple Data Flow Task with a SQL Server Destination between two databases where source and target are the same instance, SSIS will issue a BULK INSERT statement on the target (= source = SSIS server) instance. In this case there will be very little data traffic across the network (at least none related to the BULK INSERT statement).
If your package contains an "Execute SQL Task" component that invokes MERGE T-SQL statements, this will not cause data traffic into or out of the SSIS server. The activity is done on the SQL Server instance on which the MERGE statement is executed. If you are using linked servers, then data will flow into/out of the linked server as required by the MERGE statement, just the same as if you were invoking the statement directly on the instance.
My recommendation for limiting the amount of data moved back and forth is to be selective at the source. For example, if you know that you are only going to use ColumnA, ColumnB, and ColumnC in dbo.Customer, then use
SELECT [ColumnA], [ColumnB], [ColumnC] FROM [dbo].[Customer] -- Better!
instead of the following statement which potentially can retrieve more than those 3 columns:
SELECT *
FROM [dbo].[Customer] -- Do Not Use
There are also a number of best practices for optimizing SSIS, including reducing bandwidth and limiting the amount of data transferred, that you can follow. Please have a read here: http://blogs.msdn.com/b/sqlcat/archive/2013/09/16/top-10-sql-server-integration-services-best-practices.aspx.
If you are working on Hybrid platform, you may also be interested in reading "SSIS for Azure and Hybrid Data Movement" white paper (https://msdn.microsoft.com/en-us/library/jj901708.aspx). This white paper has an additional link to "SSIS Operational and Tuning Guide" that would be useful as well.
In addition, you may also be interested in having a look at SSIS Reporting Pack available on CodePlex to get more visualization of SSIS executions on the server.
Hope this helps.
Julie