Pulling instead of pushing data from database - sql-server

Loading data from my OLTP database (as part of ETL) via OPENQUERY or an SSIS Data Flow into another SQL Server database (the warehouse, which runs the SSIS package / OPENQUERY statement) kills it. As I checked in Performance Monitor, the resources are consumed on the source database, not on the destination. Is it possible to reverse this resource utilization (using SQL Server 2016 or SSIS)?

The problem here is in your destination write operation. If you are using an OLE DB Destination with the fast load access mode, try setting the Rows per batch value to a non-zero value and reduce the Maximum insert commit size to a value that is easy on your memory and CPU. SSIS will then not wait for the default of 2147483647 rows before committing to the destination table, which can otherwise have a large impact on your log file and slow your process down. Please refer to this article for more info on setting these values. All the best.
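For context, a rough T-SQL analogue of what a smaller commit size buys you: committing the load in batches instead of one giant transaction keeps the pressure off the log. The table and column names below are hypothetical.
DECLARE @BatchSize int = 100000;
WHILE 1 = 1
BEGIN
    -- each INSERT commits on its own, so the log never has to hold the whole load
    INSERT INTO dbo.WarehouseTable (BusinessKey, Payload)
    SELECT TOP (@BatchSize) s.BusinessKey, s.Payload
    FROM dbo.StagingTable AS s
    WHERE NOT EXISTS (SELECT 1 FROM dbo.WarehouseTable AS w
                      WHERE w.BusinessKey = s.BusinessKey);
    IF @@ROWCOUNT < @BatchSize BREAK;  -- last (possibly partial) batch copied
END;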

What does your export query look like? Is it just a simple data dump, or do you have some complex logic in it (e.g. denormalization/aggregation as part of the export)?
If it's just a simple export, check which server your SSIS package runs on and what resources it uses. In any case, you need to read the data from your source system, so expect some read disk operations.
In general it is better to get the data out of the OLTP system as quickly as possible and then apply further operations in later steps of your ETL process on your ETL/data warehouse server, in order to reduce the impact on your transactional system.
Hope it helps.

Related

SSIS Package Full Table Load Slow

We have an SSIS package that is apparently termed 'slow' by the development team. Since they do not have a person with SSIS/ETL experience, as a DBA I tried digging into it. Below is the information I found:
SQL Server was version 2014, upgraded in-place to 2017, so it has SSIS of both versions.
They load a 200 GB SQL Server table into SSIS and then zip the data into a flat file using command-line zip functionality.
The data flow task simply hits a select * from a view - the view is nothing but the table, with no other fancy joins.
While troubleshooting I found that there is hardly any load on SQL Server, possibly because the select command runs in a single thread and does not utilize the SQL Server cores.
When I run the same select * command myself (only for 5 seconds, since it is a 200 GB table), my command is single-threaded as well.
The package has a configuration file, which the SQL Agent job references (this is how the package runs), containing some connection settings.
Opening the package in BIDS shows DefaultBufferMaxRows as only 10000 (possibly the default value); since neither the configuration file nor any variables hold a custom value, I guess this is what the package is using too.
Both SQL Server and SSIS are on the same server. SQL Server has been allocated max memory, leaving around 100 GB for SSIS and the OS.
Kindly share any ideas on how I can force SQL Server to run this select command using multiple threads so that the entire table gets into the SSIS buffers faster.
Edit: I am aware that bcp can read data faster than any other process and save it to a flat file, but at this point changes to the SSIS package have to be kept to a minimum, so I am exploring options that can be incorporated within the SSIS package.
Edit 2: Parallelism works perfectly on my SQL Server, as I have verified with a lot of other queries. The table in question is 200 GB. It is something with SSIS only that is not hammering my DB as hard as it should.
Edit 3: I have made some progress: I adjusted the buffer size to 100 MB and max rows to 100000, and now the package seems to be doing better. When I run the package directly on the server using the dtexec utility, it generates a good load of 40-50 MB per second, but through the SQL Agent job it never generates a load of more than 10 MB. So I am trying to figure out this behavior.
Edit 4: I found that when I run the package directly, by logging on to the server and invoking the dtexec utility, it runs well: it generates a good load on the DB and data I/O stays steady at 30-50 MB/sec.
The same thing from the SQL Agent job never exceeds 10 MB/sec of I/O.
I even tried running the package from the agent opting for the cmdline operation, but nothing changed. The agent literally sucks here; any pointers on what could be wrong?
Final Try:
I am stumped at the observations I now have:
1) The same package runs 3x faster when run from a command prompt on the Windows node by invoking the dtexec utility.
2) The exact same package runs 3 times slower than the above when invoked by SQL Agent, which has sysadmin permissions on Windows as well as SQL Server.
In both cases I checked which version of dtexec they invoke, and they both invoke the same version. So why one would be so slow is beyond my understanding.
I don't think there is a general solution to this issue, since it is a particular case for which you haven't provided much information. Since there are two components in your data flow task (OLE DB Source and Flat File Destination), I will try to give some suggestions related to each component.
Before giving suggestions for each component, it is good to mention the following:
If no transformations are applied within the data flow task, it is not recommended to use this task; it is preferable to use the bcp utility.
Check the TempDb and the database log size.
If a clustered index exists, try to rebuild it. If not, try to create a clustered index.
To check which component is slowing the package execution, open the package in Visual Studio, remove the Flat File Destination and replace it with a dummy Script Component (write any useless code, for example: string s = "";). Then run the package: if it is fast enough, the problem is caused by the Flat File Destination; otherwise you need to troubleshoot the OLE DB Source.
Try executing the query in SQL Server Management Studio and review the execution plan (a quick sketch of how to do that follows this list).
Check the package TargetServerVersion property within the package configuration and make sure it is correct.
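As a quick sketch for the execution-plan suggestion above, running the package's source query in SSMS with statistics enabled gives a baseline for the raw query cost outside SSIS; the view name below is a placeholder.
SET STATISTICS IO ON;
SET STATISTICS TIME ON;
SELECT * FROM dbo.YourView;  -- enable "Include Actual Execution Plan" in SSMS before running
SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;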
OLE DB Source
As you mentioned, you are using a Select * from view query, where the data is stored in a table that contains a considerable amount of data. The SQL Server query optimizer may find that reading the data with a Table Scan is more efficient than reading from indexes, especially if your table does not have a clustered index (rowstore or columnstore).
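A quick, read-only way to check whether the underlying table has a clustered index; the table name here is a placeholder.
SELECT i.name, i.type_desc
FROM sys.indexes AS i
WHERE i.object_id = OBJECT_ID(N'dbo.BigTable')
  AND i.type_desc IN (N'CLUSTERED', N'CLUSTERED COLUMNSTORE');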
There are many things you may try to improve data load:
Try replacing the Select * from view with the original query used to create the view.
Try changing the data provider used in the OLE DB Connection Manager: SQL Server Native Client, or the newer Microsoft OLE DB Driver for SQL Server (not the old provider).
Try increasing the DefaultBufferMaxRows and DefaultBufferSize properties. more info
Try using the SQL Command data access mode with specific column names instead of selecting the view name (the Table or View data access mode). more info
Try to load the data in chunks (a sketch follows this list).
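A hedged sketch of the chunked-load idea: page through the table on its clustered key with a parameterized source query, where @LastId would normally be fed from an SSIS variable; the table and column names are hypothetical.
DECLARE @LastId bigint = 0;        -- highest key already exported (an SSIS variable in practice)
DECLARE @ChunkSize int = 1000000;  -- rows per chunk
SELECT TOP (@ChunkSize) *
FROM dbo.BigTable
WHERE Id > @LastId
ORDER BY Id;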
Flat File Destination
Check that the flat file directory is not located on the same drive where SQL Server instance is installed
Check that the flat file is not located on a busy drive
Try to export the data into multiple flat files instead of one huge file (split the data into smaller files), since as the amount of data in a single file grows, writing to that file becomes slower and so does the package. (See the 5th suggestion above.)
Any indexes on the table could slow loading. If there are any indexes, try dropping them before the load and then recreating them after. This would also update the index statistics, which would be skewed by the bulk insert.
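A hedged example of that approach using disable/rebuild on a nonclustered index, which keeps the index definition and refreshes its statistics on rebuild; the index and table names are placeholders (do not disable the clustered index, as that makes the table unreadable).
ALTER INDEX IX_BigTable_SomeColumn ON dbo.BigTable DISABLE;
-- ... run the bulk load here ...
ALTER INDEX IX_BigTable_SomeColumn ON dbo.BigTable REBUILD;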
Are you seeing SQL Server utilize other cores for other queries? If not, maybe someone changed the following settings:
Check these under the server configuration settings:
Maximum Degree of Parallelism
Cost Threshold for Parallelism
Processor affinity (whether processors have been affinitized to specific CPUs)
Also, a MAXDOP query hint can cause this too, but you said there is no fancy stuff in the view.
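To rule the instance settings in or out, the current values can be read directly with a harmless query:
SELECT name, value_in_use
FROM sys.configurations
WHERE name IN (N'max degree of parallelism', N'cost threshold for parallelism');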
Also, it seems you have memory to spare on the server, so why not increase DefaultBufferMaxRows to a much larger number, so that SQL Server doesn't get slowed down waiting for the buffer to empty. Remember, they are using the same disk and will have to wait for each other to use it, which adds wait time for both. It's better for SQL Server to read the data into the buffer first, and then let SSIS start processing it and writing it to disk.
DefaultBufferSize: the default is 10 MB; the maximum possible is 2^31-1 bytes.
DefaultBufferMaxRows: the default is 10000.
You can set AutoAdjustBufferSize so that DefaultBufferSize is automatically calculated based on DefaultBufferMaxRows.
See other performance troubleshooting ideas here
https://learn.microsoft.com/en-us/sql/integration-services/data-flow/data-flow-performance-features?view=sql-server-ver15
Edit 1: Some other properties you can check out. These are explained in the above link as well
MaxConcurrentExecutables (package property): This defines how many threads a package can use.
EngineThreads (Data Flow property): how many threads the data flow engine can use
Also try running dtexec under the same proxy user that SQL Agent uses, to see if you get a different result with that account versus your own. You can use runas /user:... cmd to open a command window under that user and then execute dtexec.
Try changing the proxy user used in SQL Agent to a new one and see if it helps, or try giving it elevated permissions on the directories it needs access to.
Try keeping the package on the file system and executing it through dtexec from the SQL Agent job directly, instead of using catalog.start_execution.
Not your case, but for other readers: if you have an Execute Package Task, make sure the child packages are set to run in-process via the ExecuteOutOfProcess property. This just reduces the overhead of using more processes.
Not your case, but for other readers: if you're testing in BIDS, it will run in debug mode by default and thus run slowly. Use Ctrl+F5 (start without debugging). Best of all is to use dtexec directly to test performance.
A Data Flow Task may not be the best choice to move this data. SSIS Data Flow Tasks are an ETL tool where you can do transformations, lookups, redirect invalid rows, add derived columns and a lot more. If the data flow is simple and only moves data, with no manipulation or redirection of rows, then ditch the Data Flow Task and use a simple Execute SQL Task with OPENROWSET to import the flat file that was generated from the command line and zipped up. Assuming the flat file is a .csv file, here is a working example that queries a .csv and inserts the data into a table.
You need the [Ad Hoc Distributed Queries] option's run_value set to 1.
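One way to enable that option (an instance-level change that requires appropriate permissions):
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'Ad Hoc Distributed Queries', 1;
RECONFIGURE;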
INSERT INTO dbo.Destination
SELECT *
FROM OPENROWSET('MSDASQL', 'Driver={Microsoft Text Driver (*.txt; *.csv)};
DefaultDir=D:\;Extensions=csv;', 'SELECT * FROM YourCsv.csv') AS f;
Here are some additional examples: https://sqlpowershell.blog/2015/02/09/t-sql-read-csv-files-using-openrowset/
There are suggestions in this MSDN article: MSDN DataFlow performance features
Key ones appear to be:
Check the EngineThreads property of the Data Flow Task, which tells SSIS how many source and worker threads it should use.
If using an OLE DB Source to select data from a view, use the "SQL Command" access mode and write a SELECT * FROM View query, rather than the Table or View access mode.
Let us know how you get on
You may be facing an I/O bottleneck while writing the 200 GB to the flat file. I don't see any problem with the SQL query.
If possible, create multiple files and split the data (either by modifying the SSIS package or by changing the select query).

SSIS ETL - Is it a good practice to have the destination DB pull data from sources directly

I have an ETL package that moves data from a number of source SQL Server DBs to a single destination SQL Server DB. All these DBs are on the same server. The destination DB contains a large number of views that reference the source DBs. E.g. SELECT * FROM SourceDB1.dbo.Transactions.
So the majority of the data goes directly source DB => destination DB, without passing through the SSIS server. I'm new to SSIS and wondering if this is a good thing to do, or should I look into changing the process.
Time passes, your company grows. You stand up Server2 and have SourceDBN on there. Now what? Your pattern of SELECT * FROM SourceDB.dbo.Transactions breaks.
SourceDB27's client pays us a lot of money, so they ask us to add column FooBitsWhatsIt to their Transactions table. Now your SELECT * breaks because you have inconsistent columns across your ecosystem.
Someone writes a big query that takes a while to process - the people in the destination database are negatively impacting the ability of the Source databases to do their regular activity. Had the data been copied over to the destination and not merely referenced, there would be isolation between source and destination activities.
Generally speaking, the above costs and risks outweigh the additional development, storage and processing costs.
When I started learning about ETL and data migration using SSIS, I was always told that it is best practice to first move the data into a staging database where you can validate, deduplicate and clean it, and then move it to the destination DB.

Data streams in case of Merge

We are seeing enormous amounts of data traffic to and from our SSIS server. We cannot find the culprit. Is there any way to find out which package is causing all the traffic? Any advice on that? We are thinking that maybe all the merges we do cause the traffic. Our SSIS machine gets data from several production SQL Servers and merges it with data in our warehouses. Does that mean that
a) new data is transferred to the SSIS machine,
b) existing data is transferred to the SSIS machine,
c) the merge is done and then all the data is transferred to the warehouse?
And how would you go about limiting all the data moved back and forth?
The answer to your questions a, b and c (if you're using SSIS transformation components) is essentially "yes": all new data, and any existing data required for the transformation, will flow into the SSIS instance, and the resulting merged data will flow out of the SSIS instance to the target server. A more detailed explanation is below.
Assuming that you are using SQL Server 2012 or above, you can enable Verbose logging to capture the number of rows transferred. The details are captured in [catalog].[execution_data_statistics]. If you are looking for the size in bytes, you would need to calculate that from the columns being extracted and transformed against the number of rows. [catalog].[execution_data_statistics] captures the package name, task name, data flow path, source/destination component names, the time of execution and the execution path, which is great for diagnosing.
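For example, a query along these lines (assuming the packages run from the SSIS catalog with Verbose logging; 12345 is a stand-in for your execution_id) shows how many rows moved down each data flow path:
SELECT package_name,
       task_name,
       dataflow_path_name,
       source_component_name,
       destination_component_name,
       SUM(rows_sent) AS total_rows_sent
FROM SSISDB.catalog.execution_data_statistics
WHERE execution_id = 12345  -- replace with your execution_id
GROUP BY package_name, task_name, dataflow_path_name,
         source_component_name, destination_component_name
ORDER BY total_rows_sent DESC;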
SSIS is an in-memory pipeline. If you have 3 separate servers - Source, SSIS and Target - the amount of data/traffic will vary. As an example, if the Data Flow Tasks require transformation and use components such as Merge, Merge Join, Lookup etc., you can expect data to flow from the Source server, through the SSIS server, to the Target server.
On the other hand, if you are running a simple Data Flow Task with a SQL Server Destination between 2 databases on the same instance (source = target = SSIS server), SSIS will issue a BULK INSERT statement on that instance. In this case, there will be very little data traffic across the network (at least none related to the BULK INSERT statement).
If your package contains an Execute SQL Task that invokes MERGE T-SQL statements, this will not cause data traffic into or out of the SSIS server. The work is done on the SQL Server instance on which the MERGE statement is executed. If you are using linked servers, then data will flow into/out of the linked server as required by the MERGE statement, just as if you were invoking the statement on that instance directly.
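As an illustrative sketch (the table and column names are hypothetical), a MERGE like the following, run from an Execute SQL Task against the warehouse instance, keeps the set operation entirely on that server:
MERGE dbo.DimCustomer AS target
USING staging.Customer AS source
    ON target.CustomerKey = source.CustomerKey
WHEN MATCHED THEN
    UPDATE SET target.CustomerName = source.CustomerName
WHEN NOT MATCHED BY TARGET THEN
    INSERT (CustomerKey, CustomerName)
    VALUES (source.CustomerKey, source.CustomerName);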
My recommendation for limiting the amount of data moved back and forth is to be selective at the source. For example, if you know that you are only going to use ColumnA, ColumnB and ColumnC in dbo.Customer, then use
SELECT [ColumnA], [ColumnB], [ColumnC] FROM [dbo].[Customer] -- Better!
instead of the following statement, which can potentially retrieve more than those 3 columns:
SELECT * FROM [dbo].[Customer] -- Do Not Use
There are also a number of best practices for optimizing SSIS, including reducing bandwidth and the amount of data transferred, that you can follow. Please have a read here: http://blogs.msdn.com/b/sqlcat/archive/2013/09/16/top-10-sql-server-integration-services-best-practices.aspx.
If you are working on a hybrid platform, you may also be interested in reading the "SSIS for Azure and Hybrid Data Movement" white paper (https://msdn.microsoft.com/en-us/library/jj901708.aspx). This white paper has an additional link to the "SSIS Operational and Tuning Guide", which would be useful as well.
In addition, you may also be interested in having a look at SSIS Reporting Pack available on CodePlex to get more visualization of SSIS executions on the server.
Hope this helps.
Julie

DataFlow task in SSIS is very slow as compared to writing the sql query in Execute SQL task

I am new to SSIS and have a pair of questions
I want to transfer 1,25,000 rows from one table to another in the same database, but when I use a Data Flow Task it takes too much time. I tried using an ADO NET Destination as well as an OLE DB Destination, but the performance was unacceptable. When I wrote the equivalent query inside an Execute SQL Task it gave acceptable performance. Why is there such a difference in performance?
INSERT INTO table1 SELECT * FROM table2
Based on the first observation, I changed my package so it is exclusively composed of Execute SQL Tasks, either with a direct query or with a stored procedure. If I can solve my problem using only Execute SQL Tasks, then why would one use SSIS at all, when so many documents and articles describe it as reliable, easy to maintain and comparatively fast?
Difference in performance
There are many things that could cause a difference in performance between a "straight" data flow task and the equivalent Execute SQL Task.
Network latency. You are performing an insert into table A from table B on the same server and instance. In an Execute SQL Task, that work is performed entirely on the same machine. By contrast, I could run a package on server B that queries 1.25M rows from server A, which are then streamed over the network to server B; that data is then streamed back to server A for the corresponding INSERT operation. If you have a poor network, wide data (especially binary types), or simply great distance between servers (server A is in the US, server B is in India), you will see poor performance.
Memory starvation. Assuming the package executes on the same server as the target/source database, it can still be slow as the Data Flow Task is an in-memory engine. Meaning, all of the data that is going to flow from the source to the destination will get into memory. The more memory SSIS can get, the faster it's going to go. However, it's going to have to fight the OS for memory allocations as well as SQL Server itself. Even though SSIS is SQL Server Integration Services, it does not run in the same memory space as the SQL Server database. If your server has 10GB of memory allocated to it and the OS uses 2GB and SQL Server has claimed 8GB, there is little room for SSIS to operate. It cannot ask SQL Server to give up some of its memory so the OS will have to page out while trickles of data move through a constricted data pipeline.
Shoddy destination. Depending on which version of SSIS you are using, the default access mode for an OLE DB Destination was "Table or view". This was a nice setting to try to prevent a low-level lock escalating to a table lock. However, it results in row-by-agonizing-row inserts (1.25M individual insert statements being sent). Contrast that with the set-based approach of the Execute SQL Task's INSERT INTO. More recent versions of SSIS default the access mode to the "fast load" version of the destination, which behaves much more like the set-based equivalent and yields better performance.
OLE DB Command transformation. There is an OLE DB Destination, and some folks confuse it with the OLE DB Command transformation. They are two very different components with different uses. The former is a destination and consumes all the data; it can go very fast. The latter is always RBAR: it performs singleton operations for each row that flows through it.
Debugging. There is overhead when running a package in BIDS/SSDT. The package execution gets wrapped in the DTS Debugging Host, which can cause a not-insignificant slowdown. There's not much the debugger can do about an Execute SQL Task - it runs or it doesn't. With a data flow, there's a lot of memory it can inspect and monitor, which reduces the memory available (see point 2) and also slows things down because of the assorted checks it performs. To get a more accurate comparison, always run packages from the command line (dtexec.exe /file MyPackage.dtsx) or schedule them with SQL Server Agent.
Package design
There is nothing inherently wrong with an SSIS package that is just Execute SQL Tasks. If the problem is easily solved by running queries, then I'd forgo SSIS entirely and write the appropriate stored procedure(s) and schedule it with SQL Agent and be done.
Maybe. What I still like about using SSIS even for "simple" cases like this is it can ensure a consistent deliverable. That may not sound like much, but from a maintenance perspective, it can be nice to know that everything that is mucking with the data is contained in these source controlled SSIS packages. I don't have to remember or train the new person that tasks A-C are "simple" so they are stored procs called from a SQL Agent job. Tasks D-J, or was it K, are even simpler than that so it's just "in line" queries in the Agent jobs to load data and then we have packages for the rest of stuff. Except for the Service Broker thing and some web services, those too update the database. The older I get and the more places I get exposed to, the more I can find value in a consistent, even if overkill, approach to solution delivery.
Performance isn't everything, but the SSIS team did set the ETL benchmarks using SSIS so it definitely has the capability to push some data in a hurry.
As this answer grows long, I'd simply leave it that the advantages of SSIS and the Data Flow over straight T-SQL are native, out of the box:
logging
error handling
configuration
parallelization
It's hard to beat those for my money.
If you are passing SSIS variables as parameters in the Parameter Mapping tab and assigning values to these variables by expression, then your Execute SQL Task can spend a lot of time evaluating that expression.
Use an Expression Task (separately) to assign the variables, instead of using an expression on the variable itself.

SQL Server 2008: A Data File IO counter for SQL Server

I am running a stress test against SQL Server 2008 and I want to know the data flow into tempdb caused by the usage of temporary tables and table variables.
These statistics are also shown in Activity Monitor. Is it possible to record the data somehow and analyse it afterwards?
I have 2 cases in mind:
Record a SQL Server performance counter (I don't know which one it would be)
Record the data from Activity Monitor somehow
Writing into a database does not equate 1-to-1 with disk IO. Database updates only dirty in-memory pages, which are later copied to disk by the lazy writer or at checkpoint. The only thing written to disk right away is the write-ahead log activity, for which there is a specific per-database counter: Log Bytes Flushed/sec. Note that tempdb has special logging requirements, as it is never recovered, so it only needs undo information. Whenever dirty pages are actually flushed, be it at checkpoint or by the lazy writer, there are specific counters for that too: Checkpoint pages/sec and Lazy writes/sec. These are not per database because the activities themselves are not 'per database'. Finally, there are the virtual file stats DMVs: sys.dm_io_virtual_file_stats offers the aggregate number of IO operations and bytes for each individual file of each individual database, including tempdb.
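For example, the tempdb file-level numbers can be pulled like this; the counters are cumulative since instance start, so sample them at intervals and diff the samples to get rates:
SELECT DB_NAME(vfs.database_id) AS database_name,
       vfs.file_id,
       vfs.num_of_reads,
       vfs.num_of_bytes_read,
       vfs.num_of_writes,
       vfs.num_of_bytes_written
FROM sys.dm_io_virtual_file_stats(DB_ID(N'tempdb'), NULL) AS vfs;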
You mention that you want to measure the specific impact of temp tables and table variables, but you won't be able to separate them from the rest of tempdb activities (sort spools, worktables etc). I recommend you go over Working with tempdb in SQL Server 2005, as it still applies to SQL 2008.
If you use Performance Monitor (perfmon.exe) to monitor the SQL Server counters, you can configure this to log to a .csv file to analyse in Excel (for example)
The performance counter in question is Data File(s) Size under SQLServer:Databases
I would take regular-interval "snapshots" (using the following DMVs), loaded into tables, to determine your internal usage of tempdb.
sys.all_objects
sys.dm_db_file_space_usage
sys.dm_db_task_space_usage
sys.dm_db_task_space_usage will break down usage by SPID, etc.
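A minimal snapshot query of that kind (page counts are 8 KB pages); insert the result into a logging table on a schedule to trend tempdb usage by session:
SELECT session_id,
       SUM(user_objects_alloc_page_count)     AS user_object_pages,
       SUM(internal_objects_alloc_page_count) AS internal_object_pages
FROM sys.dm_db_task_space_usage
GROUP BY session_id
ORDER BY SUM(user_objects_alloc_page_count)
       + SUM(internal_objects_alloc_page_count) DESC;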
