I'm looking for some tips on optimizing SSIS packages. I run a lot of them, but the execution times are high. Some packages run on thousands or even millions (2-15 million) of records. As the nightly window is not enough anymore, they overlap and sometimes 3-4 run at once, which makes things even more difficult.
I did some testing.
I found that views are really bad in SSIS... Packages run faster when I run a SQL query that selects the view's results into a table first, and then use that table with the OLE DB Source/Destination.
Looking at the executions, the source rows (over 5 million records) are selected in 5-10 minutes, but it takes 10 times longer to insert that data into the destination table. While it runs, I get this information:
[SSIS.Pipeline] Information: The buffer manager detected that the system was low on virtual memory, but was unable to swap out any buffers. 4 buffers were considered and 4 were locked. Either not enough memory is available to the pipeline because not enough is installed, other processes are using it, or too many buffers are locked.
Information: Buffer manager allocated 0 megabyte(s) in 0 physical buffer(s).
Are there any options to make things move faster? Do I need to increase DefaultBufferSize and DefaultBufferMaxRows at the same time? They are at the defaults of 10,000 rows and roughly 10 MB.
A few notes to get you going in the right direction:
About the message below: it indicates that the server running your SSIS packages ran out of memory. As suggested, this could be caused by other processes consuming that memory, i.e. other packages running at the same time.
[SSIS.Pipeline] Information: The buffer manager detected that the
system was low on virtual memory, but was unable to swap out any
buffers. 4 buffers were considered and 4 were locked. Either not
enough memory is available to the pipeline because not enough is
installed, other processes are using it, or too many buffers are
locked.
SSIS uses memory buffers to process data through a dataflow. There will be a max of 5 used at any given time. In this case, SSIS was unable to allocate the buffers needed to run the dataflow because there was not enough memory available. Increasing the size of the memory buffer will only make things worse if the server is low on memory.
Here's more detail about how SSIS runs things so you can consider how memory is being used. If you have multiple dataflows running in parallel in the same package, by default SSIS will run as many as the number of processors on your server plus 2. So if you have 4 cores, you'll have 6 dataflows running at the same time. Within a dataflow, parallel execution is managed by the EngineThreads property. The important thing to note here is that the number of things a dataflow can do between sources and transformations is limited, so the less complex a dataflow, the more efficiently it will run.
What this means for you in a memory constrained system:
Limit the number of packages running at the same time
Limit the number of dataflows running concurrently in a package
As a best practice, only have one source and destination in a dataflow
Keep the dataflows simple. Use them to move data from one server to another. Use SQL to do transformation work. Use staging tables to land your data and then act on it before it is loaded into the final table. Even if you are not doing any transformations, it will still be more efficient to load the table using SQL and a WHERE NOT EXISTS clause.
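A minimal sketch of that staging pattern, assuming hypothetical table names (`stg.Customer`, `dbo.Customer`) and a key column `CustomerID`:

```sql
-- The dataflow lands raw rows in stg.Customer; then set-based SQL
-- moves only the rows that don't already exist into the final table.
INSERT INTO dbo.Customer (CustomerID, Name, City)
SELECT s.CustomerID, s.Name, s.City
FROM stg.Customer AS s
WHERE NOT EXISTS (
    SELECT 1
    FROM dbo.Customer AS c
    WHERE c.CustomerID = s.CustomerID
);
```

One set-based statement like this replaces a per-row lookup in the dataflow and lets the server use its indexes to find the new rows.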
Once you've simplified the dataflow but are still seeing poor performance, isolate the bottleneck. Run the dataflow with just the source and see how long it takes. If that's slow, tune the source query. Add subsequent tasks to the dataflow and measure the performance. If the only step that is slow is the destination, tune that.
For tuning a destination, try using a staging table with no indexes on it. Use the TABLOCK hint, which allows SQL Server to use minimal logging. Once you've implemented the staging pattern, tune the SQL that loads the data into the final table.
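In the OLE DB Destination this is the "Table lock" checkbox; the T-SQL equivalent, with placeholder table and column names, looks like:

```sql
-- Heap staging table with no indexes. TABLOCK allows SQL Server to
-- use minimal logging (given the right recovery model and an empty heap).
INSERT INTO stg.Customer WITH (TABLOCK) (CustomerID, Name, City)
SELECT CustomerID, Name, City
FROM src.CustomerExtract;
```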
For tuning the source, avoid views because they may contain unnecessary joins and columns that you are not using, or use a view that is dedicated to your process so it can be written for exactly what you need. A proc can be a better choice overall if there are a lot of tables involved - performance can be improved by using temp tables to stage data before joining it to the final largest table for the select.
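As a sketch of that proc-based source, assuming hypothetical tables `dbo.Customer` and `dbo.[Order]`: stage the smaller lookup data in a temp table first, then join it to the largest table once.

```sql
-- Hypothetical source proc for the dataflow's OLE DB Source.
CREATE PROCEDURE dbo.GetOrderExtract
AS
BEGIN
    SET NOCOUNT ON;

    -- Stage only the columns and rows needed from the smaller table.
    SELECT CustomerID, Region
    INTO #cust
    FROM dbo.Customer
    WHERE IsActive = 1;

    -- Single join against the largest table for the final select.
    SELECT o.OrderID, o.OrderDate, c.Region
    FROM dbo.[Order] AS o
    JOIN #cust AS c
        ON c.CustomerID = o.CustomerID;
END;
```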
Do not use the WITH (NOLOCK) hint in production. If the data is actually being updated while you are reading it, you'd want your data to have those updates before you load it, never mind phantom reads and the other terrible things that can happen.
First, I am new to SSIS so I am still getting the hang of things.
I am using Visual Studio 19 and SSMS 19
Regardless, I have set up an OLE DB package from a .TSV file to a table in SSMS. The issue is that it took 1 hour and 11 minutes to execute for 500,000 rows.
The data is extremely variable, so I have set up a staging table in SSMS that is essentially all varchar(max) columns. Once all the data is inserted, I was going to look at some aggregations like max(len(<column_name>)) in order to better optimize the table and the SSIS package.
Anyways, there are 10 of these files so I need to create a ForEach File loop. This would take at minimum (1.17 hours)*10=11.70 hours of total runtime.
I thought this was a bit long and created a BULK INSERT Task, but I am having some issues.
It seems very straightforward to set-up.
I added the Bulk Insert Task to the Control Flow tab and went into the Bulk Insert Task Editor Dialogue Box.
From here, I configured the Source and Destination connections. Both of which went very smoothly. I only have one local instance of SQL Server on my machine so I used localhost.<database_name> and the table name for the Destination Connection.
I run the package and it executes just fine without any errors or warnings. It takes less than a minute for a roughly 600 MB .TSV file to load into an SSMS table with about 300 columns of varchar(max).
I thought this was too quick and it was. Nothing loaded, but the package executed!!!
I have tried searching for this issue with no success. I checked my connections too.
Do I need Data Flow Tasks for Bulk Insert Tasks? Do I need any connection managers? I had to configure Data Flow Tasks and connection managers for the OLE DB package, but the articles I have referenced do not do this for Bulk Insert Tasks.
What am I doing wrong?
Any advice from someone more well-versed in SSIS would be much appreciated.
Regarding my comment about using a derived column in place of a real destination, it would look like 1 in the image below. You can do this in a couple of steps:
Run the read task only and see how long this takes. Limit the total read to a sample size so your test does not take an hour.
Run the read task with a derived column as a destination. This will test the total read time, plus the amount of time to load the data into memory.
If 1) takes a long time, it could indicate a bottleneck from slow read times on the disk where the file sits, or a network bottleneck if the file is on a shared drive on another server. If 2) adds a lot more time, it could indicate a memory bottleneck on the server where SSIS is running. Note that testing this on the server itself is the best way to measure performance, because it removes issues that probably won't exist there, such as network bottlenecks and memory constraints.
Lastly, please turn on the feature noted as 2) below, AutoAdjustBufferSize. This changes the behavior of DefaultBufferSize (max memory in the buffer) and DefaultBufferMaxRows (total rows allowed in each buffer; these are the numbers you see next to the arrows in the dataflow when you run the package interactively). Because your columns are so large, this hints the server to maximize the buffer size, which gives you a bigger and faster pipeline to push the data through.
One final note: if you add the real destination and it has a significant impact on time, look into issues with the target table. Make sure there are no indexes (including a clustered index), make sure TABLOCK is on, and make sure there are no constraints or triggers.
I am writing some processes to pre-format certain data for another downstream process to consume. The pre-formatting essentially involves gathering data from several permanent tables in one DB, applying some logic, and saving the results into another DB.
The problem I am running into is the volume of data. The resulting data set that I need to commit has about 132.5 million rows. The commit itself takes almost 2 hours. I can cut that by changing the logging to simple, but it's still quite substantial (whereas generating the 132.5 million rows into a temp table only takes 9 minutes).
I have been reading up on the best methods to migrate large data sets, but most of the solutions implicitly assume that the source data already resides in a single file or data table (which is not the case here). Some solutions, like using the SSMS task option, make it difficult to embed some of the logic that I need to apply.
I am wondering if anyone here has some solutions.
Assuming you're on SQL Server 2014 or later, the temp table is not flushed to disk immediately, so the difference is probably just disk speed.
Try making the target table a Clustered Columnstore to optimize for compression and minimize IO.
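As a sketch, assuming a hypothetical target table `dbo.ResultSet` in the destination DB:

```sql
-- Convert the target to a clustered columnstore to compress the
-- ~132.5M-row result set and cut the IO of the final commit.
CREATE CLUSTERED COLUMNSTORE INDEX cci_ResultSet
ON dbo.ResultSet;

-- TABLOCK on the insert also enables parallel bulk insert into
-- the columnstore on SQL Server 2016+.
INSERT INTO dbo.ResultSet WITH (TABLOCK)
SELECT *
FROM #StagedResults;
```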
I'm loading large amounts of data from a text file into SQL Server. Currently each record is inserted (or updated) in a separate transaction, but this leaves the DB in a bad state if a record fails.
I'd like to put it all in one big transaction. In my case, I'm looking at ~250,000 inserts or updates and maybe ~1,000,000 queries. The text file is roughly 60MB.
Is it unreasonable to put the entire operation into one transaction? What's the limiting factor?
It's not only not unreasonable to do so, it's a must if you want to preserve integrity in case any record fails, so you get an "all or nothing" import as you note. 250,000 inserts or updates will be no problem for SQL Server to handle, but I would take a look at what those million queries are. If they're not needed to perform the data modification, take them out of the transaction so they don't slow down the whole process.
You have to consider that while a transaction is open (regardless of size), locks will be held on the tables it touches, and lengthy transactions like yours might block other users who are trying to read them at the same time. If you expect the import to be big and time-consuming and the system will be under load, consider doing the whole process overnight (or during any non-peak hours) to mitigate the effect.
About the size: there is no specific size limit in SQL Server; a transaction can theoretically modify any amount of data without problems. The practical limit is really the size of the transaction log file of the target database. The engine stores all the modified data in this file while the transaction is in progress (so it can use it to roll back if needed), so this file will grow. It must have enough free space allowed in the DB properties, and there must be enough disk space for the file to grow. Also, the row and table locks that the engine takes on the affected tables consume memory, so the server must have enough free memory for all this plumbing too. Anyway, 60MB is generally too little to worry about. 250,000 rows is considerable, but not that much either, so any decent-sized server will be able to handle it.
SQL Server can handle those size transactions. We use a single transaction to bulk load several million records.
The most expensive part of a database operation is usually the client-server connection and traffic. For inserts/updates, indexing and logging are also expensive, but you can mitigate those costs by using the correct loading techniques (see below). You really want to limit the number of connections and the amount of data transferred between client and server.
To that end, you should consider bulk loading the data using SSIS or C# with SqlBulkCopy. Once you bulk load everything then you can use set based operations ON THE SERVER to update or verify your data.
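The set-based server-side step might look like the following sketch, with placeholder names (`stg.Import` for the bulk-loaded staging table, `dbo.Target` for the final table, keyed on `ID`):

```sql
-- After the bulk load into stg.Import, update existing rows and
-- insert new ones in two set-based passes instead of row-by-row
-- round trips from the client.
UPDATE t
SET    t.Amount = s.Amount
FROM   dbo.Target AS t
JOIN   stg.Import AS s
    ON s.ID = t.ID;

INSERT INTO dbo.Target (ID, Amount)
SELECT s.ID, s.Amount
FROM stg.Import AS s
WHERE NOT EXISTS (
    SELECT 1 FROM dbo.Target AS t WHERE t.ID = s.ID
);
```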
Take a look at this question for more suggestions about optimizing data loads. The question is related to C#, but a lot of the information is useful for SSIS or other loading methods: What's the fastest way to bulk insert a lot of data in SQL Server (C# client).
There is no issue with doing an all or nothing bulk operation, unless a complete rollback is problematic for your business. In fact, a single transaction is the default behavior for a lot of bulk insert utilities.
I would strongly advise against a single operation per row. If you want to weed out bad data, you can load the data into a staging table first and programmatically determine "bad data" and skip those rows.
Well personally, I don't load imported data directly to my prod tables ever and I weed out all the records which won't pass muster long before I ever get to the point of loading. Some kinds of errors kill the import completely and others might just send the record to an exception table to be sent back to the provider and fixed for the next load. Typically I have logic that determines if there are too many exceptions and kills the package as well.
For instance, suppose city is a required field in your database, and in a file of 1,000,000 records you have ten that have no city. It is probably best to send those to an exception table and load the rest. But suppose you have 357,894 records with no city. Then you might need to have a conversation with the data provider to get the data fixed before loading. It will certainly affect prod less if you can determine that the file is unusable before you ever try to touch production tables.
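A sketch of that routing logic, with placeholder names (`stg.Import`, `dbo.ImportException`, `dbo.Customer`) and an illustrative threshold:

```sql
-- Route rows missing the required City to an exception table.
INSERT INTO dbo.ImportException (SourceRowID, Reason)
SELECT s.SourceRowID, 'Missing city'
FROM stg.Import AS s
WHERE s.City IS NULL OR s.City = '';

-- Kill the load if there are too many exceptions (threshold is
-- illustrative; pick what makes sense for your data).
DECLARE @bad int = @@ROWCOUNT;
IF @bad > 1000
    THROW 50001, 'Too many exceptions; aborting load.', 1;

-- Load the clean rows.
INSERT INTO dbo.Customer (SourceRowID, Name, City)
SELECT s.SourceRowID, s.Name, s.City
FROM stg.Import AS s
WHERE s.City IS NOT NULL AND s.City <> '';
```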
Also, why are you doing this one record at a time? You can often go much faster with set-based processing, especially if you have already managed to clean the data beforehand. You may still need to work in batches, but one record at a time can be very slow.
If you really want to roll back the whole thing if any part errors, yes you need to use transactions. If you do this in SSIS, then you can put transactions on just the part of the package where you affect prod tables and not worry about them in the staging of the data and the clean up parts.
There is a project in flight at my organization to move customer data and all the associated records (billing transactions, etc) from one database to another, if the customer has not had account activity within a certain timeframe.
The total number of rows in all the tables is in the millions. Perhaps 100 million rows, with all the various tables combined. The schema is more-or-less normalized. The project's designers have decided on SSIS to execute this and initial analysis is showing 5 months of execution time.
Basically, the process:
Fills an "archive" database that has the same schema as the database of origin
Delete the original rows from the source database
I can provide more detail if necessary. What I'm wondering is, is SSIS the correct approach? Is there some sort of canonical way to move very large quantities of data around? Are there common performance pitfalls to avoid?
I just can't believe that this is going to take months to run and I'd like to know if there's something else that we should be looking into.
SSIS is just a tool. You can write a 100M rows transfer in SSIS to take 24h, you can write it to take 5 mo. The problem is what you write (ie. the workflow in SSIS case), not SSIS.
There isn't anything specific to SSIS that would dictate 'the transfer cannot be done faster than 5 months'.
The guiding principles for such a task (logically partition the data, process each logical partition in parallel, eliminate access and update contention between processing, batch commit changes, don't transfer more data that is necessary on the wire, use set based processing as much as possible, be able to suspend and resume etc etc) can be implemented on SSIS just as well as any other technology (if not better).
For the record, the ETL world speed record stands at about 2TB per hour, using SSIS. And as a matter of fact, I just finished a transfer of 130M rows, ~200GB of data, that took some 24h (I'm lazy and not shooting for the ETL record).
I would understand 5 months for development, testing, and deployment, but not 5 months of actual processing. That is something like 7 rows a second, which is really, really slow.
SSIS is probably not the right choice if you are simply deleting records.
This might be of interest: Performing fast SQL Server delete operations
UPDATE: as Remus correctly points out, SSIS can perform well or badly depending on how the flows are written, and there have been some huge benchmarks (on high-end systems). But for plain deletes there are simpler ways, such as a SQL Agent job running a T-SQL delete in batches.
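A batched T-SQL delete for such a job might look like this sketch (table names and batch size are placeholders):

```sql
-- Delete in batches so each transaction stays small, locks are held
-- briefly, and the log can be reused between batches.
DECLARE @rows int = 1;

WHILE @rows > 0
BEGIN
    DELETE TOP (50000) bt
    FROM dbo.BillingTransaction AS bt
    WHERE bt.CustomerID IN (
        SELECT CustomerID FROM dbo.InactiveCustomer
    );

    SET @rows = @@ROWCOUNT;
END;
```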
Are there any ways to determine what differences between databases affect an SSIS package's load performance?
I've got a package which loads and does various bits of processing on ~100k records on my laptop database in about 5 minutes
Try the same package and same data on the test server, which is a reasonable box in both CPU and memory, and it's still running ... about 1 hour so far :-(
Checked the package with a small set of data, and it ran through Ok
I've had similar problems over the past few weeks, and here are several things you could consider, listed in decreasing order of importance according to what made the biggest difference for us:
Don't assume anything about the server.
We found that our production server's RAID was misconfigured (HP sold us disks with firmware mismatches) and the disk write speed was literally a 50th of what it should have been. So check out the server metrics with Perfmon.
Check that enough RAM is allocated to SQL Server. Inserts of large datasets often require use of RAM and TempDB for building indices, etc. Ensure that SQL has enough RAM that it doesn't need to swap out to Pagefile.sys.
As per the holy grail of SSIS, avoid manipulating large datasets with T-SQL statements. All T-SQL statements cause changed data to be written to the transaction log, even if you use the Simple recovery model. The main difference between the Simple and Full recovery models is that Simple automatically truncates the log after each transaction. This means that large datasets, when manipulated with T-SQL, thrash the log file, killing performance.
For large datasets, do data sorts at the source if possible. The SSIS Sort component chokes on reasonably large datasets, and the only viable alternative (nSort by Ordinal, Inc.) costs $900 for a non-transferable per-CPU license. So if you absolutely have to sort a large dataset, consider loading it into a staging database as an intermediate step.
Use the SQL Server Destination if you know your package is going to run on the destination server, since it offers roughly 15% performance increase over OLE DB because it shares memory with SQL Server.
Increase the network packet size to 32767 on your database connection managers. This allows large volumes of data to move faster from the source server(s), and can noticeably improve reads on large datasets.
If using Lookup transforms, experiment with cache sizes - between using a Cache connection or Full Cache mode for smaller lookup datasets, and Partial / No Cache for larger datasets. This can free up much needed RAM.
If combining multiple large datasets, use either RAW files or a staging database to hold your transformed datasets, then combine and insert all of a table's data in a single data flow operation, and lock the destination table. Using staging tables or RAW files can also help relieve table locking contention.
Last but not least, experiment with the DefaultBufferSize and DefaultBufferMaxRows properties. You'll need to monitor your package's "Buffers Spooled" performance counter using Perfmon.exe, and adjust the buffer sizes upwards until you see buffers being spooled (paged to disk), then back off a little.
Point 8 is especially important on very large datasets, since you can only achieve a minimally logged bulk insert operation if:
The destination table is empty, and
The table is locked for the duration of the load operation.
The database is in the Simple or Bulk Logged recovery model.
This means that subsequent bulk loads into a table will always be fully logged, so you want to get as much data as possible into the table on the first data load.
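Those three conditions together look like this sketch (database and table names are placeholders):

```sql
-- 1) Recovery model must be SIMPLE or BULK_LOGGED.
ALTER DATABASE StagingDB SET RECOVERY BULK_LOGGED;

-- 2) Destination table must be empty for the minimally logged load.
TRUNCATE TABLE dbo.FinalTable;

-- 3) Table must be locked for the duration of the load, so combine
--    all staged data and load it in one locked pass.
INSERT INTO dbo.FinalTable WITH (TABLOCK)
SELECT *
FROM stg.CombinedData;
```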
Finally, if you can partition your destination table and then load the data into each partition in parallel, you can achieve up to 2.5 times faster load times, though this isn't usually a feasible option in the wild.
If you've ruled out network latency, your most likely culprit (with real quantities of data) is your pipeline organisation. Specifically, what transformations you're doing along the pipeline.
Data transformations come in four flavours:
streaming (entirely in-process/in-memory)
non-blocking (but still using I/O, e.g. lookup, oledb commands)
semi-blocking (blocks a pipeline partially, but not entirely, e.g. merge join)
blocking (blocks a pipeline until it's entirely received, e.g. sort, aggregate)
If you have a few blocking transforms, they will significantly hurt your performance on large datasets. Even semi-blocking transforms, on unbalanced inputs, will block for long periods of time.
In my experience the biggest performance factor in SSIS is Network Latency. A package running locally on the server itself runs much faster than anything else on the network. Beyond that I can't think of any reasons why the speed would be drastically different. Running SQL Profiler for a few minutes may yield some clues there.
CozyRoc over at MSDN forums pointed me in the right direction ...
- used the SSMS / Management / Activity Monitor and spotted lots of TRANSACTION entries
- got me thinking, read up on the OLE DB connector and unchecked the Table Lock option
- WHAM ... data loads fine :-)
Still don't understand why it works fine on my laptop d/b, and stalls on the test server ?
- I was the only person using the test d/b, so it's not as if there should have been any contention for the tables ??