I have one complex report which fetches records from multiple tables.
I have read in many places that SSRS does not allow a single stored procedure to return multiple data tables, so I created one stored procedure and six report datasets, each filtered from a shared dataset. However, when I ran the query below, it showed that my procedure was executed six times, which might be causing the performance issue.
SELECT TOP 100 ItemPath, Parameters,
    TimeDataRetrieval + TimeProcessing + TimeRendering AS [total time],
    TimeDataRetrieval, TimeProcessing, TimeRendering,
    ByteCount, [RowCount], Source, AdditionalInfo
FROM ExecutionLog3
WHERE ItemPath LIKE '%GetServiceCalls%'
ORDER BY TimeStart DESC
To get around this, I removed all the dataset filters and applied the filter on the tablix instead. After that I could see that the procedure was called only once, but it did not improve performance much.
So my question is: how exactly can I improve the performance of this SSRS report?
Note: my query executes in 13 seconds, but the report takes almost 20 minutes to run.
Please help me to resolve this issue.
Regards,
Dhaval
I have always found that SSRS filters on large tables take forever, and that text wildcards perform even more poorly.
My advice would be to do all the "grunt work" except sorting in SQL, and then do any sorting in SSRS.
Part of your problem may be that you have a large dataset and are performing wildcard searches; these don't play well with indexes when the wildcard is at the start of the LIKE pattern (e.g. LIKE '%...).
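For example, a leading wildcard forces a scan, whereas an anchored pattern (or a plain equality/range predicate) can use an index seek. A minimal sketch, assuming a hypothetical ServiceCalls table with a Description column (not the actual report schema):

-- the leading % defeats any index on the column and forces a scan
SELECT * FROM ServiceCalls WHERE Description LIKE '%repair%'

-- an anchored pattern can use an index seek on an indexed Description column
SELECT * FROM ServiceCalls WHERE Description LIKE 'repair%'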
I've recently hit a bottleneck where, if I keep the current version of a query inside a report (designed in Report Builder, SSRS 2008), it generates loading times of up to 15 minutes with specific parameters. The cause is a JOIN to a sub-query, joined to the main query on a non-indexed column. Let's call this sub-query "Units".
If I delete the "Units" JOIN from the SQL query and set it up as a separate dataset inside the report, linking it to the main dataset with the SSRS Lookup function (the same relationship as the JOIN in SQL), the report runs smoothly, in under a minute (the "Units" lookup itself takes approximately 3 to 5 milliseconds).
Keep in mind that the "Units" sub-query, when run separately, finishes in under 5 milliseconds for the same parameters that previously took 15 minutes, yet when it is attached to the main query it causes severe performance issues.
Is there a clear benefit to doing this type of separation, or should I just investigate further how to improve the query? What are the performance benefits/downsides of using Lookup versus improving the current query's performance?
My concern is that this is a situational improvement and this will not represent a long term solution. I've used this alternative in the past to avoid tweaking the query and it did not backfire, but I do not fully understand the performance implications of using this workaround.
Thanks,
Radu.
There are a lot of things that could be causing the performance issues, but here are a few simple things that might get the dataset back up to speed again with very little effort.
1. Parameter sniffing
You mention "with specific parameters": if you mean that the query only performs badly with some parameters and performs well with others, and assuming that the size of the data does not vary significantly based on those parameters, then it's likely a parameter sniffing issue. This is caused by a query plan that was generated based on one set of parameters and is not suitable for other parameters. The easiest way to prove this is to simply add OPTION (RECOMPILE) to the end of the query. This is not a permanent fix, but it forces a new query plan to be generated on every execution. If you see an instant improvement, then parameter sniffing was the cause.
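As an illustration (the table, column and parameter names below are placeholders, not from the original report), the dataset query would simply get the hint appended:

-- hypothetical dataset query; @SomeParameter is the report parameter
SELECT col1, col2
FROM dbo.SomeReportTable
WHERE SomeColumn = @SomeParameter
OPTION (RECOMPILE)  -- forces a fresh plan for each execution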
2. Refactor dataset query
The other option is to redesign your query. I don't know what your query looks like, but let's take a simple example based on the information you posted.
If your query looks something like...
SELECT * FROM tableA a
JOIN (SELECT * FROM tableB WHERE someValue=someOtherValue) b
ON a.FieldA = b.FieldB
then you could refactor it by putting the subquery into a temp table and joining to that, something like
SELECT *
INTO #t
FROM tableB WHERE someValue=someOtherValue
SELECT * FROM tableA a
JOIN #t b
ON a.FieldA = b.FieldB
This is an approach I often take and it can get round exactly these types of performance issues.
We are using EF Core 1.1 in an ASP.NET Core app, where the following LINQ query takes about 45 seconds to a minute on its first execution. After the first execution, subsequent executions seem to work fine.
Question: how can we improve the performance of this query? A user waiting 45 seconds or more gets the impression that the ASP.NET page displaying the results is broken, and moves on to another page:
var lstProjects = _context.Projects.Where(p => p.ProjectYear == FY && p.User == User_Name).OrderBy(p => p.ProjectNumber).ToList();
Execution plan in SQL Server Query Editor: the table has 24 columns, one of type varchar(255) and four of type varchar(75); the others are int, smalldatetime, bit, etc. All of the columns are needed in the query, but the WHERE clause filters the data down to about 35 rows out of about 26,000.
More details on Execution Plan
Updated comment to answer.
When using Code First there still needs to be some consideration of indexing based on the common queries run in high-traffic areas of the application. An index scan across the PK amounts to little more than a table scan, so an index across ProjectYear + User would give a boost in performance and should be considered if this query is expected to be used frequently or is performance sensitive. Regardless of DB First or Code First, developers need to review profiler results against the database in order to optimize indexing based on realistic usage. Normally the execution plan will come back with suggestions for indexing, which should appear just below the SQL statement in the execution plan. From the screenshot it isn't clear whether one was suggested or not, as there might have been a scrollbar to the right of the SQL under the pop-up stats.
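A minimal sketch of such an index, assuming the entity maps to a dbo.Projects table with columns named like the LINQ properties (adjust to the actual mapped names):

-- hypothetical table/column names taken from the LINQ query;
-- with only ~35 matching rows, key lookups for the remaining columns stay cheap
CREATE NONCLUSTERED INDEX IX_Projects_ProjectYear_User
ON dbo.Projects (ProjectYear, [User])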
In cases where a query returns slow results but no suggestions, try re-running the query with altered parameters while capturing the execution plan, to rule out SQL Server reusing a pre-compiled plan.
I'm trying to optimize a report that uses multiple stored procedures on the same table. Unfortunately, each procedure is reading millions of records and aggregating the results. It's a very intense read for a report, but each stored procedure is optimized to run pretty fast within SSMS.
I can run each stored procedure and get a result set within 10 to 20 seconds. When I put them all into one report within SSRS, the report times out.
There are a total of 4 parameters per stored procedure, all targeting the same table and just aggregating the data in different ways. The indexes on that table are in line with the query. The aggregation is based on time, user, and the one dimension I'm using to COUNT(), both DISTINCT and non-DISTINCT.
I'm thinking the issue is the fact that SSRS is running the 4 procedures at the same time on the same table, as opposed to one after the other. Is this true? If so, is there any way to ensure SSRS does not run them in parallel?
My only other option is to create a summary table that is already pre-aggregated and just run the report off that table. Otherwise, I guess parameter sniffing is possible too.
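For reference, a pre-aggregated summary table along those lines might look like this (table and column names are purely illustrative, not taken from the actual report):

-- build a daily, per-user summary once, then point the report datasets at it
SELECT CAST(EventTime AS date) AS EventDate,
    UserName,
    COUNT(SomeDimension) AS TotalCount,
    COUNT(DISTINCT SomeDimension) AS DistinctCount
INTO dbo.ReportSummary
FROM dbo.BigFactTable
GROUP BY CAST(EventTime AS date), UserName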
By default, datasets in SSRS are executed in parallel.
If all of your datasets use the same data source, then you can configure serialized execution of the datasets on a single connection this way:
1. Open the data source dialog in Report Designer.
2. Ensure that the "Use Single Transaction" checkbox is checked.
Once that checkbox is selected, datasets that use the same data source are no longer executed in parallel.
I hope that solves your problem.
I am trying to execute one SSRS 2005 report. This report takes one parameter.
If I don't use the parameter and write the value directly, it runs in 10 seconds, e.g.
Select * from table1 where id = 122
If I use the parameter, it takes a long time, around 10 to 15 minutes:
Select * from table1 where id = @id
I don't know why this is happening.
Thanks in advance.
It's impossible to answer the question as asked: only you have the info to determine why things aren't performing well.
What we can do, however, is answer the question "How do I investigate SSRS performance issues?". One of the best tools I've found so far is the ExecutionLog2 view in the ReportServer catalog database. In your case the important columns to look at are:
TimeDataRetrieval, the time spent connecting to the data source and retrieving data rows
TimeProcessing, the time spent turning the data rows into the report
TimeRendering, the time spent creating the final output (PDF, HTML, Excel, etc.)
This will give you a starting point for investigating further. Most likely (from your description) I'd guess the problem lies in the first bit. A suitable follow up step would be to analyze the query that is executed by SSRS, possibly using the execution plan.
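For example, a quick look at the most recent executions (note that in ExecutionLog2 the path column is named ReportPath; it was renamed to ItemPath in ExecutionLog3):

-- run against the ReportServer catalog database
SELECT TOP 10 ReportPath, TimeStart,
    TimeDataRetrieval, TimeProcessing, TimeRendering,
    ByteCount, [RowCount]
FROM ExecutionLog2
ORDER BY TimeStart DESC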
1) Try replacing your sub-queries with join logic wherever possible. I know sub-queries often feel more logical, because the problem flows naturally when we think in a macro view: [this result set] gets [that result set's output].
2) You can also add an index on the column; since it's an int, the index will be compact and fast.
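As a rough illustration of point 2 (using the table from the question; the index name is made up):

-- an index on the filtered column lets the parameterized query seek instead of scan
CREATE NONCLUSTERED INDEX IX_table1_id ON table1 (id)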
I have a query with about 6-7 joined tables and a FREETEXT() predicate on 6 columns of the base table in the where.
Now, this query worked fine (in under 2 seconds) for the last year and has remained practically unchanged (I tried old versions and the problem persists).
So today, all of a sudden, the same query takes around 1-1.5 minutes.
After checking the execution plan in SQL Server 2005, rebuilding the FULLTEXT index of that table, reorganising the FULLTEXT index, creating the index from scratch, restarting the SQL Server service and restarting the whole server, I don't know what else to try.
I temporarily switched the query to use LIKE instead until I figure this out (it takes about 6 seconds now).
When I compare the FREETEXT query with the LIKE query in the query performance analyser, the former has 350 times as many reads (4,921,261 vs. 13,943) and 20 times the CPU usage (38,937 vs. 1,938) of the latter.
So it really is the FREETEXT predicate that causes it to be so slow.
Has anyone got any ideas on what the reason might be? Or further tests I could do?
[Edit]
Well, I just ran the query again to get the execution plan and now it takes 2-5 seconds again, without any changes made to it, though the problem still existed yesterday. And it wasn't due to any external factors: I'd stopped all applications accessing the database when I first tested the issue last Thursday, so it wasn't due to any other load.
Well, I'll still include the execution plan, though it might not help a lot now that everything is working again... And beware, it's a huge query against a legacy database that I can't change (i.e. normalize data or get rid of some unnecessary intermediate tables).
Query plan
OK, here's the full query.
I might have to explain exactly what it does. Basically, it gets search results for job ads, where there are two types of ads: premium ones and normal ones. The results are paginated to 25 results per page, 10 premium ones up top and 15 normal ones after that, if there are enough.
So there are two inner queries that select as many premium/normal ads as needed (e.g. on page 10 it fetches the top 100 premium ones and the top 150 normal ones); those two queries are then interleaved using ROW_NUMBER() and some math. The combination is ordered by row number and returned, and another part of the code picks out just the 25 ads needed for the current page.
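In rough terms the pattern looks like this (a sketch only; table and column names such as JobAds, IsPremium and PostedDate are invented, not the actual legacy query):

DECLARE @Page int
SET @Page = 10;

WITH Premium AS (
    -- premium ads, numbered in display order
    SELECT *, ROW_NUMBER() OVER (ORDER BY PostedDate DESC) AS rn
    FROM JobAds
    WHERE IsPremium = 1 -- plus the central search WHERE clause
),
Normal AS (
    -- normal ads, numbered in display order
    SELECT *, ROW_NUMBER() OVER (ORDER BY PostedDate DESC) AS rn
    FROM JobAds
    WHERE IsPremium = 0 -- plus the central search WHERE clause
)
SELECT *
FROM (
    -- premium ads fill positions 1-10 of each 25-row page
    SELECT *, ((rn - 1) / 10) * 25 + ((rn - 1) % 10) + 1 AS PagePos
    FROM Premium WHERE rn <= @Page * 10
    UNION ALL
    -- normal ads fill positions 11-25 of each 25-row page
    SELECT *, ((rn - 1) / 15) * 25 + ((rn - 1) % 15) + 11 AS PagePos
    FROM Normal WHERE rn <= @Page * 15
) AS Interleaved
ORDER BY PagePos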
Oh, and this whole query is constructed in a HUGE legacy ColdFusion file, and as it's been working fine, I haven't dared touching/changing large portions of it so far... never touch a running system and so on ;) Just small stuff like changing bits of the central WHERE clause.
The file also generates other queries which do basically the same, but without the premium/non premium distinction and a lot of other variations of this query, so I'm never quite sure how a change to one of them might change the others...
Ok as the problem hasn't surfaced again, I gave Martin the bounty as he's been the most helpful so far and I didn't want the bounty to expire needlessly. Thanks to everyone else for their efforts, I'll try your suggestions if it happens again :)
This issue might arise due to a poor cardinality estimate of the number of results that will be returned by the full text query, leading to a poor strategy for the JOIN operations.
How does it perform if you break it into 2 steps?
One new step populates a temporary table or table variable with the results of the full text query, and the second changes your existing query to refer to that temp table instead.
(NB: you might want to try this JOIN with and without OPTION (RECOMPILE) whilst looking at query plans for (A) a free text search term that returns many results and (B) one that returns only a handful of results.)
Edit: it's difficult to be exact in the absence of the offending query, but what I mean is, instead of doing
SELECT <col-list>
FROM --Some 6 table Join
WHERE FREETEXT(...);
How does this perform?
DECLARE @Table TABLE
(
<pk-col-list>
)
INSERT INTO @Table
SELECT PK
FROM YourTable
WHERE FREETEXT(...)
SELECT <col-list>
FROM --Some 6 table Join including onto @Table
OPTION(RECOMPILE)
Usually when we have this issue, it is because of table fragmentation and stale statistics on the indexes in question.
Next time, try running EXEC sp_updatestats after a rebuild/reindex.
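For example (the table name in the second statement is a placeholder for whichever table the full text index is on):

-- refresh statistics on all tables in the current database
EXEC sp_updatestats

-- or target a single table explicitly
UPDATE STATISTICS dbo.YourTable WITH FULLSCAN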
See Using Statistics to Improve Query Performance for more info.