We are occasionally seeing the average CPU time for the query below exceed 100 in our production system. This results in downtime for our services for a few minutes.
Query:
INSERT INTO #mssqljdbc_temp_sp_columns_result
EXEC sp_columns_100 @P0,@P1,@P2,@P3,@P4,@P5
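For reference, a minimal sketch of one way such an average is typically measured, pulling the statement's aggregate CPU from the plan cache (total_worker_time is reported in microseconds; the LIKE filter is only an illustrative way to locate the statement):

SELECT TOP (10)
       qs.execution_count,
       qs.total_worker_time / qs.execution_count AS avg_cpu_us,   -- average CPU per execution
       qs.max_worker_time                        AS max_cpu_us,   -- worst single execution
       st.text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE st.text LIKE N'%sp_columns_100%'
ORDER BY avg_cpu_us DESC;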
Related
Have you had experience with Power BI dataflows where a single SQL query runs for longer than 10 minutes?
I have migrated an Excel Power Query script to PBI dataflows. One of the migrated queries fails, and it fails consistently at exactly 10:00 minutes. No "timeout" error is displayed, just a message that says, "Query has cancelled". My query does have the [Command Timeout] property set to 60 minutes, but the property appears to be ignored in the PBI service dataflow.
I have written an Excel Power Query script with multiple SQL connections; most connections are stored procedures that return a dataset. One SQL proc takes about 30 minutes to complete, and the entire set of queries takes about 50 minutes. We're trying to stage datasets in dataflows and improve reporting performance for our end users, who currently pull datasets from SSRS reports. This dataflow query failure is a major roadblock.
I'm curious if anyone has had a similar experience. To me it seems ridiculous that a dataflow SQL connection can't run past 10 minutes.
Thanks for your input!
I have a query that creates 400 transactions, each transaction running an update on a table. Running this query on the on-premises development server takes around 100 ms, versus over 500 ms on Azure. I gathered some wait statistics, and RESERVED_MEMORY_ALLOCATION_EXT came up higher than on dev, with a wait count of 3,523,254 and 0.9 seconds of wait time. Any suggestions on how to troubleshoot further?
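For reference, a minimal sketch of one way to compare waits between the two environments (assumes SQL Server 2016+ on the dev box; sys.dm_exec_session_wait_stats is also available in Azure SQL Database): run the 400-transaction batch in a session, then inspect that session's accumulated waits in the same session.

-- Waits accumulated by the current session since it connected;
-- run right after the batch finishes, in the same session.
SELECT wait_type,
       waiting_tasks_count,
       wait_time_ms,
       signal_wait_time_ms
FROM sys.dm_exec_session_wait_stats
WHERE session_id = @@SPID
ORDER BY wait_time_ms DESC;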
I know that in SQL Server I can force an execution plan to be recompiled when running a stored procedure.
However, I have an SSIS package that is running through the SQL Agent. After some big changes to the data which it operates on, the package went from taking 3 minutes to run to taking 3 hours. When I run the package manually in SSDT, or after the server has restarted, it runs fine. After some troubleshooting, I believe it's because the cached execution plan it had was no longer correct for the data it's operating on.
Is there a way to tell SQL Server that when it runs this particular SSIS package, it should recompile everything and get a fresh execution plan? The data it operates on can vary wildly in size from day to day.
Many thanks.
If you want a query to regenerate its plan each time it runs, use OPTION (RECOMPILE). You'll commonly see this used for "kitchen sink" or "catch-all" queries, to avoid caching a bad query plan, as in the example below:
SELECT *
FROM dbo.MyTable
WHERE (Column1 = @Variable1 OR @Variable1 IS NULL)
AND (Column2 = @Variable2 OR @Variable2 IS NULL)
AND (Column3 = @Variable3 OR @Variable3 IS NULL)
OPTION (RECOMPILE);
This will, of course, incur a cost to regenerate the query plan, which for complex queries can be costly as well (for "Catch-all" queries, it's then better to go down the dynamic SQL route).
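If the problem is a stale cached plan for a specific procedure rather than a catch-all query, a hedged alternative is to invalidate the plan at the procedure level instead of adding the hint to every statement (dbo.MyProc is a placeholder name):

-- Mark the procedure so its next execution compiles a fresh plan
EXEC sp_recompile N'dbo.MyProc';

-- Or compile a fresh plan on every call, declared once on the procedure:
-- CREATE/ALTER PROCEDURE dbo.MyProc ... WITH RECOMPILE AS ...

-- Or evict a single cached plan by its plan handle:
-- DBCC FREEPROCCACHE (plan_handle);   -- plan_handle taken from sys.dm_exec_cached_plans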
I have a stored procedure which is loading a huge amount of data into a table. Suddenly its run time has increased dramatically: earlier it took ~2 minutes, now it takes ~20 minutes. What could be the reason, and how can I debug the issue? I am using SQL Server 2014.
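A minimal sketch of a first check, assuming the loader is a stored procedure (dbo.LoadProc is a placeholder name): see when the current plan was cached and how the last run compares to the historical average, which often points at a stale plan or stale statistics.

SELECT OBJECT_NAME(ps.object_id)                         AS proc_name,
       ps.cached_time,                                   -- when the current plan was compiled
       ps.execution_count,
       ps.total_elapsed_time / ps.execution_count / 1000 AS avg_elapsed_ms,
       ps.last_elapsed_time / 1000                       AS last_elapsed_ms
FROM sys.dm_exec_procedure_stats AS ps
WHERE ps.object_id = OBJECT_ID(N'dbo.LoadProc');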
I have a SQL Server 2016 environment with high availability enabled. When I check the query plan cache, I see that SQL Server is constantly clearing it. The query below returns only 5 to 10 records, and sometimes 0.
SELECT *
FROM sys.dm_exec_cached_plans decp
I scripted out all database objects (stored procedures, triggers, ...) to see if there is a command anywhere that drops the cache, but I could not find any.
Any help in this regard is appreciated.
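For the object-scripting check, a minimal sketch that searches the module definitions directly (run it in each database) might look like this:

-- Find any module whose definition mentions a cache-flushing command
SELECT OBJECT_SCHEMA_NAME(m.object_id) AS schema_name,
       OBJECT_NAME(m.object_id)        AS object_name
FROM sys.sql_modules AS m
WHERE m.definition LIKE '%FREEPROCCACHE%'
   OR m.definition LIKE '%FREESYSTEMCACHE%';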
I experienced the same problem; reducing Max Memory from 60 GB to 55 GB made the server perform better.
The ratio (compilations per second) / (batch requests per second) is now about 4% (before the change it was 15-20%).
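For reference, a hedged sketch of how that ratio is commonly computed from sys.dm_os_performance_counters (both counters accumulate since startup, so dividing the raw values gives the long-run ratio):

SELECT 100.0
       * MAX(CASE WHEN counter_name = 'SQL Compilations/sec' THEN cntr_value END)
       / NULLIF(MAX(CASE WHEN counter_name = 'Batch Requests/sec' THEN cntr_value END), 0)
         AS compilations_pct_of_batches
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%SQL Statistics%'
  AND counter_name IN ('SQL Compilations/sec', 'Batch Requests/sec');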