Access 2007 Report Performance - sql-server

So I am developing reports in Access 2007 that use pass-through queries to retrieve data from SQL Server. Some of the queries can take a second or two to run on the server side. When opening a report in preview mode that is based on one of these queries, the time required to render the report is much longer than the time required to simply run the query. I used SQL Profiler to watch what was happening and found that the underlying query is executed multiple times (at least five) when the report runs. How can I get Access to cache the query results to improve the performance of these reports?
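A common workaround (not from the original question) is to materialize the pass-through results into a local table once per run and bind the report to that table, so the server query only executes a single time. A minimal sketch in Access SQL, where qryPassThrough and tmpReportData are placeholder names:

DROP TABLE tmpReportData;
SELECT qryPassThrough.* INTO tmpReportData FROM qryPassThrough;

With the report's RecordSource pointed at tmpReportData, Access renders every page from the local copy instead of re-executing the server query. (The DROP TABLE step clears the previous run's copy; it fails if the table doesn't exist yet, so guard it accordingly.)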

Related

Is there a way to see where a SQL call is coming from without using SQL Profiler?

We're using SQL Server 2012. When I have Activity Monitor running, I keep seeing a query pop up in the Recent Expensive Queries list, but I can't figure out where this query is coming from.
I've run a search of all the jobs, functions, and stored procedures, and it isn't anywhere to be found.
This query only shows up during the day, and I can't turn on SQL Server Profiler because our system will grind to a halt.
Is there any system table I should search, or query I should try in order to determine where the call is coming from?
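One lightweight alternative (not from the original question) is to catch the statement while it is running and read the session metadata straight from the DMVs; host_name and program_name usually reveal the caller. A sketch, where the LIKE filter is a placeholder for a distinctive fragment of your query:

SELECT s.session_id, s.host_name, s.program_name, s.login_name, t.text
FROM sys.dm_exec_requests AS r
JOIN sys.dm_exec_sessions AS s ON s.session_id = r.session_id
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE t.text LIKE N'%distinctive fragment%';

Unlike Profiler, this is a cheap point-in-time query, so it is safe to run repeatedly during the day until you catch the statement in flight.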

MS SQL Server: prevent execution plans going out-of-date

My application has some queries (complex ones with full-text search). The queries usually run fast and often (say 30x/hour), but periodically (say every two or three weeks) it looks like SQL Server drops the execution plan and the queries become extremely slow.
After running "EXEC sp_updatestats" the queries are fast again.
Does anyone have an idea of what I can do to find the reason for this problem?
Installed SQL Server version is 13.0.4224.16, running on Windows Server 2016. The application doesn't make use of stored procedures.
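Since EXEC sp_updatestats fixes it, the usual suspect is stale statistics leading the optimizer to a bad plan once the data has drifted. Not part of the original question, but a quick check is to see when statistics were last updated on the relevant tables; a sketch:

SELECT o.name AS table_name, s.name AS stats_name,
    STATS_DATE(s.object_id, s.stats_id) AS last_updated
FROM sys.stats AS s
JOIN sys.objects AS o ON o.object_id = s.object_id
WHERE o.is_ms_shipped = 0
ORDER BY last_updated;

If the dates lag well behind your data churn, scheduling UPDATE STATISTICS (or sp_updatestats) as an Agent job more frequently than the two-to-three-week failure interval is a common mitigation.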

Generate SQL Server CPU usage for last 7 days

I have a requirement to generate CPU usage reports for my SQL Server for the previous 7 days, and I will use a graph to represent it.
I also have to keep track of the top 10 queries that consumed the most CPU each day.
I found the post below, but I have a few doubts.
CPU utilization by database?
Doubt:
How will I know what the overall CPU usage was yesterday? Do I have to add up the AvgCPU time for the distinct queries that ran yesterday?
There is no reliable way of getting CPU usage per day for the last 5 days. SQL Server does expose the columns below:
select
    creation_time,
    last_worker_time,
    total_worker_time,
    execution_count,
    last_execution_time
from sys.dm_exec_query_stats
Looking at what those columns report on my test instance, we can't reliably get a count of how many times a particular query was executed on a particular day. Moreover, this entire data set is reset if you restart SQL Server.
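The cumulative counters are still good enough for a "top 10 CPU consumers since the counters were last cleared" view, which you can snapshot once a day. A sketch (not from the original answer; the offset arithmetic just trims the batch text down to the individual statement):

SELECT TOP (10)
    qs.total_worker_time / 1000 AS total_cpu_ms,
    qs.execution_count,
    qs.creation_time,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
            WHEN -1 THEN DATALENGTH(st.text)
            ELSE qs.statement_end_offset END
          - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;

total_worker_time is reported in microseconds, hence the division to get milliseconds. Storing this result into a dated table each day gets you close to a per-day picture, subject to the reset caveat above.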
If you really want to show the data on a daily basis, you could use PerfMon. Here are some tutorials which may help you:
1. Collecting Performance Data into a SQL Server Table
2. Using PerfMon for SQL Server Reporting Services Performance Management
You may also take a look at MS SQL data collection sets. They are easy to deploy, make it easy to retain the necessary data (as long as it is stored in a dedicated DB), and they really fit your requirement for the top 10 CPU-expensive queries.
You can also slightly modify the T-SQL for the collector agent, and the target tables on the collector server, to obtain some extra CPU info if you need it.

What's changed on Azure to slow my SQL sproc down to a crawl?

In December 2015 I deployed a small Azure web app (WebAPI, 1 controller, 2 REST endpoints) along with an Azure SQL DB (1 table, 1.7M rows, 3 stored procedures).
I could call my REST endpoints and get data back within a few seconds. Happy days.
Now I make the same call and my app throws a 500 error. Closer examination shows that the SQL access timed out.
I can open the DB (using Visual Studio data tools), run the queries, and call the stored procedures. For my main sproc, execution time is about 50 seconds - way too long for the app to wait.
The data in the table has not changed since deployment, and the app and db have been untouched for the last few months, so how come it ran OK back in December but fails miserably now?
All help greatly appreciated.
The Query Store is available in SQL Server 2016 and Azure SQL Database. It is a sort of "flight recorder" which records a history of query executions.
Its purpose is to identify what has gone wrong, when a query execution plan suddenly becomes slow. Unlike DMVs, the Query Store data is persisted in tables, so it isn't lost when SQL Server is restarted, and can be retained for months.
It has four reports in SSMS; one of them is Top Resource Consuming Queries. Its top left pane shows a bar graph where each bar represents a query, ordered by descending resource usage.
You can select a particular query of interest, and the top right pane then shows a timeline with a point for each execution. In the example I ran, the query had got much worse: the second dot showed much higher resource usage. (I forced this to happen by deliberately dropping a covering index.)
You can then click on a particular dot and the graphical execution plan is displayed in the lower pane, so the two plans can be compared to see what has changed. In my example, the plan for the slow execution reported a missing index (this feature in itself is not new), while the plan for the earlier dot showed no such message. That's a pretty good clue as to what's gone wrong!
The Regressed Queries report has the same format, but it shows only queries that have "regressed", i.e. got worse, so it is ideal for this kind of troubleshooting.
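Once the Regressed Queries report has pointed you at a bad plan, Query Store can also pin the previously good plan from T-SQL. A sketch (not in the original answer), where the query and plan IDs are placeholders you would read off the report or the Query Store catalog views:

-- Inspect the candidate plans and their average duration
SELECT q.query_id, p.plan_id, rs.avg_duration
FROM sys.query_store_query AS q
JOIN sys.query_store_plan AS p ON p.query_id = q.query_id
JOIN sys.query_store_runtime_stats AS rs ON rs.plan_id = p.plan_id;

-- Force the known-good plan (IDs are placeholders)
EXEC sp_query_store_force_plan @query_id = 42, @plan_id = 7;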
I know this doesn't resolve your present situation, unless you happened to have Query Store enabled. However, it could be very useful in the future, and for other people reading this.
See MSDN > Monitoring Performance By Using the Query Store: https://msdn.microsoft.com/en-GB/library/dn817826.aspx
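For reference (not part of the original answer), enabling Query Store is a one-line ALTER DATABASE; a sketch with a placeholder database name and an optional retention setting:

-- On SQL Server 2016 (use ALTER DATABASE CURRENT on Azure SQL Database)
ALTER DATABASE [YourDb] SET QUERY_STORE = ON;

-- Optionally keep roughly 90 days of history
ALTER DATABASE [YourDb] SET QUERY_STORE (CLEANUP_POLICY = (STALE_QUERY_THRESHOLD_DAYS = 90));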

SQL 2005 Reporting Services seems to process reports sequentially

I'm trying to profile the experience of multiple users of a web application who are all trying to generate reports at the same time. The reports are displayed on a web page using the ReportViewer control. The execution log on the report server seems to indicate that the reports are executed sequentially (one at a time).
Is this the expected behavior?
Is there a way to tweak this behavior? Maybe some configuration file on the report server, or something in the way the requests for the reports are issued?
I know I can use report caching, and optimize the report execution itself. But I need to address the case where multiple users ask for a "fresh" copy of their report (different for each user), and the report execution takes 30-60 seconds.
Is there any other technique to speed things up?
Can you check whether you have accidentally ticked the "Use Single Transaction" option on the data source? With that option set, the report's datasets are executed sequentially in a single transaction rather than in parallel.
