I want to find the queries in MS Access that use the most CPU and put them in a table in descending order.
I have checked the system tables of the MS Access database, but can't find any clue.
I am new to MS Access, please help.
For 10 or even 15 years, Access has rarely been CPU bound. In other words, network speed, disk drive speed, etc. are the main factors.
In the vast majority of cases, throwing more CPU at a problem will not improve performance. If 99% of the elapsed time goes to the network or other factors, then doubling CPU speed only halves the remaining 1%, an overall improvement of about 0.5% (Amdahl's law in action).
However, I will accept that if a query is using lots of CPU, then it stands somewhat to reason that such a query is pulling a lot of data. There is no CPU logger for the Access database engine, but you can look at the rows examined and the query plan, and that can be done with ShowPlan (enabled via the JETSHOWPLAN registry switch described in the links below). How this works and how it can be used is outlined here:
How to get query plans (showplan.out) from Access 2010?
And here is an older article on ShowPlan and how to use it:
https://www.techrepublic.com/article/use-microsoft-jets-showplan-to-write-more-efficient-queries/#
So, ShowPlan is somewhat similar to looking at the query plan in SQL Server. It will tell you things like whether a full table scan is being used to get one row, or whether an index can be (or was) used. Looking at the query plan, be it in SQL Server or, in this case, Access, is certainly possible. The statistics on CPU usage are slim, however; what you do get is how much data is read and whether the plan does full table scans, in a similar fashion to the query plans you would see on server-based systems such as SQL Server.
I have a requirement to generate CPU usage reports for my SQL Server for the previous 7 days, which I will present as a graph.
I also have to keep track of the top 10 queries that consumed the most CPU each day.
I found the post below, but I have a few doubts.
CPU utilization by database?
Doubt:
How will I know what the overall CPU usage was yesterday? Do I have to add up the AvgCPU times for the distinct queries that ran yesterday?
There is no reliable way of getting CPU usage per day / for the last 5 days. SQL Server does expose the columns below:
SELECT
    creation_time,        -- when the plan was compiled and cached
    last_worker_time,     -- CPU time (microseconds) of the most recent execution
    total_worker_time,    -- cumulative CPU time (microseconds) since caching
    execution_count,
    last_execution_time
FROM sys.dm_exec_query_stats
On my test instance, those numbers are cumulative totals per cached plan.
We can't reliably get a count of how many times a particular query was executed on a particular day, and moreover this entire data set gets reset if you restart SQL Server.
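That said, since the totals are cumulative since the last restart, you can still rank the heaviest CPU consumers over that window. A rough sketch (the TOP 10 cut-off, the millisecond conversion, and the statement-text extraction are my additions, not from the linked post):

SELECT TOP 10
    qs.total_worker_time / 1000 AS total_cpu_ms,   -- worker time is in microseconds
    qs.execution_count,
    qs.total_worker_time / qs.execution_count / 1000 AS avg_cpu_ms,
    SUBSTRING(st.text, qs.statement_start_offset / 2 + 1,
        (CASE qs.statement_end_offset
             WHEN -1 THEN DATALENGTH(st.text)
             ELSE qs.statement_end_offset
         END - qs.statement_start_offset) / 2 + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;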
If you really want to show data on a daily basis, you could use PerfMon. Here are some tutorials which may help you:
1. Collecting Performance Data into a SQL Server Table
2. Using PerfMon for SQL Server Reporting Services Performance Management
You may also take a look at MS SQL data collection sets. They are easy to deploy, make it easy to keep the necessary data (as long as it is stored in a dedicated DB), and they fit your requirement for the top 10 CPU-expensive queries well.
You can also slightly modify the T-SQL for the collector agent and the target tables on the collector server to capture some extra CPU info if you need it.
Possibly some of you don't even know about these features, so you will learn a lot from this post, which will in fact help me optimize better; and some of you probably use them on a daily basis, so you can help me and other less DBA-proficient users.
I'm using SQL Server 2005 Standard.
I run SQL Server Profiler a lot, looking for ad hoc queries or stored procedures whose execution time exceeds my limits of 100ms for complex queries and 30ms for short ones (the numbers themselves don't mean much, they just give a sense of scale). After I find possibly problematic queries, I write them down so I can feed them to the Database Engine Tuning Advisor, which replays the heavy queries against the tables and, as a result, gives me the indexes I need to build to improve performance. Each night I run the index rebuild task from Maintenance Plans.
Now, question time!!!
1. If the Database Engine Tuning Advisor gives me 10 indexes to create for an improvement of about 40%, should I follow its advice or not? A better question: what ratio of index count to improvement percentage should I follow? Indexes take space and time to rebuild.
2. If I create about 5-7 indexes for each problematic query, I can end up with 500 indexes per DB. How many indexes can I build before the DB stops performing normally? Are there any limitations?
3. Is there any other way to optimize (not re-design) your DB, other than using my method or going through it sp by sp by hand and eye?
There's no right answer to this question as it depends heavily on your workload.
For workloads with a heavy ratio of reads (e.g. a data warehouse) it might make sense to create an index that would be positively counterproductive in an environment with a greater proportion of writes.
The DTA can help in this regard by assessing the impact on the overall workload, but you would need to try to capture a representative sample (not just the poorly performing queries). SQL Profiler is quite resource intensive, so to do this with the least possible impact on your server you would use a server-side SQL trace with appropriate filters that only log events relating to the database of interest.
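For illustration, a minimal server-side trace sketch; the file path, database name and rollover size are placeholders, and the magic numbers are standard trace IDs (event 12 = SQL:BatchCompleted; columns 1, 13, 18 = TextData, Duration, CPU; column 35 = DatabaseName):

DECLARE @traceid INT;
DECLARE @maxsize BIGINT;
SET @maxsize = 100;  -- MB per file before rolling over

-- 2 = TRACE_FILE_ROLLOVER; the .trc extension is appended automatically
EXEC sp_trace_create @traceid OUTPUT, 2, N'C:\traces\workload', @maxsize;

-- capture TextData, Duration and CPU for SQL:BatchCompleted
EXEC sp_trace_setevent @traceid, 12, 1, 1;
EXEC sp_trace_setevent @traceid, 12, 13, 1;
EXEC sp_trace_setevent @traceid, 12, 18, 1;

-- keep only events for the database of interest (0 = AND, 6 = LIKE)
EXEC sp_trace_setfilter @traceid, 35, 0, 6, N'MyDatabase';

-- start the trace; later use status 0 to stop and 2 to close it
EXEC sp_trace_setstatus @traceid, 1;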
To identify the poorest-performing queries in isolation: if you have at least the SQL 2005 SP1 client tools installed, you can right-click the database node in Management Studio and use the Reports -> Standard Reports menu to see the plans in the cache with the highest CPU/IO.
If you are interested in this area, I recommend the book SQL Server 2008 Query Performance Tuning Distilled (most of it applies to SQL 2005 as well).
You can get SQL Profiler to log to a table, so it will write the queries to a table you specify. If you can, leave it running for a few hours, or however long it takes to cover as many queries/events as possible.
Next, run the Database Engine Tuning Advisor and point it at that table of queries as its source input. You will find it looks at the whole pattern and will recommend that you create some indexes and remove others.
This is better than looking at queries one by one in isolation, although that still has its place.
I use an SQL server regularly and have recently been getting frustrated by its performance. It would be difficult for me to get direct access to find out the hardware, so:
Is there a direct way in Management Studio to assess performance or find out the exact hardware?
Alternatively, does someone have a set of test SQL procedures I could try and, ideally, compare to other results to get an idea of its performance?
So far I have set up a few quick queries on my local machine's SQL Express instance as a test. These seem to run quicker than the SQL server on the network, which is meant to be high performance, although nobody knows when it was last upgraded; I have a feeling it hasn't been for 6 or 7 years. Obviously these tests don't account for the possibility of others querying at the same time or for network transfers of results... Hopefully someone has a better solution.
You can't just ask your server guys? Seems like there's a fair bit of mistrust if you can't get hardware metrics. Count of CPUs, total memory, etc.
If there's that amount of mistrust, even if you found the answer from the database server, rectifying it would be impossible. If you can't get the current parameters, how could you get a change of hardware past the server guys?
Start building rapport. The best line in the world to get someone on your side is, "I'm in trouble and I need your help..." You've elevated them and subjugated yourself, you've put them in a position to save you. You'd be amazed at how much you can get out of people that way.
As far as standard queries go, you could look at the TPC benchmark queries.
If you are on 2005:
SELECT * FROM sys.dm_os_performance_counters
That will give you some SQL-only stats. You will not find much info about the machine without at least terminal access. In the SQL startup log you can see some info on the processors as well.
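For example, to pull a couple of headline counters rather than the whole view (note that ratio counters such as 'Buffer cache hit ratio' need their matching base-counter row to interpret correctly):

SELECT object_name, counter_name, instance_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('Batch Requests/sec', 'Buffer cache hit ratio');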
You might also try updating the statistics in your database. I had an issue a while back where one query returned in 100ms and an identical query took 5+ minutes, and the only difference between the two was a capital letter in the table name in my query (which obviously shouldn't matter).
After some searching and SO-questioning, I found that I needed to update my statistics. Could something like this be needed for your database / SQL Server too?
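If you want to try that, a minimal sketch (dbo.Orders is a hypothetical table; sp_updatestats covers every table in the current database):

-- refresh statistics for the whole database
EXEC sp_updatestats;

-- or target one table, with a full scan for maximum accuracy
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;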
This sort of thing can be very political, especially in a firm with an endemic CYA culture (which describes most financial services companies). If there's no reasonable expectation of a good working relationship with the production staff, a few approaches are:
1. Look at the query plans of the queries. Check that they are sensible (using indexes when they should, etc.).
2. Make it formal. Ask their manager to get the specifications of the machine, the disk layout and server configuration, and the last time statistics were updated on all tables and indexes. Make it clear that the machine appears to be under-performing.
3. If the statistics are out of date, get them updated.
and one more
SELECT * FROM sys.dm_os_sys_info
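The columns of interest there (names as of SQL 2005/2008; physical_memory_in_bytes was renamed physical_memory_kb in SQL 2012):

SELECT cpu_count,                -- logical CPUs visible to SQL Server
       hyperthread_ratio,        -- ratio of logical to physical cores
       physical_memory_in_bytes  -- total RAM on the box
FROM sys.dm_os_sys_info;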
What techniques do you use? How do you find out which jobs take the longest to run? Is there a way to find out the offending applications?
Step 1:
Install the SQL Server Performance Dashboard.
Step 2:
Profit.
Seriously, you do want to start with a look at that dashboard. More about installing and using it can be found here and/or here
To identify problematic queries, start the Profiler and select the following events:
TSQL:BatchCompleted
TSQL:StmtCompleted
SP:Completed
SP:StmtCompleted
Filter the output, for example by:
Duration > x ms (for example 100ms; depends mainly on your needs and the type of system)
CPU > y ms
Reads > r
Writes > w
Depending on what you want to optimize.
Be sure to filter the output enough that you don't have thousands of data rows scrolling through your window, because that will impact your server's performance!
It's helpful to log the output to a database table so you can analyse it afterwards.
It's also helpful to run the Windows System Monitor in parallel to view CPU load, disk I/O and some SQL Server performance counters. Configure SysMon to save its data to a file.
Then you have to put a production-typical query load and data volume on your database to see meaningful values in Profiler.
After getting some output from Profiler, you can stop profiling.
Then load the stored data from the profiling table back into Profiler, and use the import menu to pull in the output from System Monitor; Profiler will correlate the SysMon output with your SQL Profiler data. That's a very nice feature.
In that view you can immediately identify bottlenecks in your memory, disk or CPU subsystems.
When you have identified some queries you want to optimize, go to Query Analyzer, look at the execution plan, and try to optimize index usage and query design.
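A quick way to see the plan as text in Query Analyzer (the SELECT below is a stand-in for your own problem query, against a hypothetical dbo.Orders table):

SET SHOWPLAN_TEXT ON;
GO
-- the statement is compiled but not executed; its plan is returned instead
SELECT OrderID FROM dbo.Orders WHERE CustomerID = 42;
GO
SET SHOWPLAN_TEXT OFF;
GO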
I have had good success with the database tuning tools provided inside SSMS or SQL Profiler when working on SQL Server 2000.
The key is to work with a GOOD sample set: track a portion of TRUE production workload for analysis. That will get the best overall bang for the buck.
I use the SQL Profiler that comes with SQL Server. Most of the poorly performing queries I've found are not using a lot of CPU but are generating a ton of disk IO.
I tend to put in filters on disk reads and look for queries that tend to do more than 20,000 or so reads. Then I look at the execution plan for those queries which usually gives you the information you need to optimize either the query or the indexes on the tables involved.
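On SQL 2005 or later you can pull the same signal from the plan cache without running a trace. A sketch (TOP 10 is an arbitrary cut-off, and the counters reset on restart):

SELECT TOP 10
    qs.total_logical_reads,
    qs.execution_count,
    qs.total_logical_reads / qs.execution_count AS avg_reads_per_exec,
    st.text AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_logical_reads DESC;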
I use a few different techniques.
If you're trying to optimize a specific query, use Query Analyzer. Use the tools in there like displaying the execution plan, etc.
For your situation where you're not sure WHICH query is running slowly, one of the most powerful tools you can use is SQL Profiler.
Just pick the database you want to profile, and let it do its thing.
You need to let it run for a decent amount of time (this varies with the traffic to your application) and then you can dump the results into a table and start analyzing them.
You are going to want to look at queries that have a lot of reads, or take up a lot of CPU time, etc.
Optimization is a bear, but keep going at it, and most importantly, don't assume you know where the bottleneck is, find proof of where it is and fix it.
Jeff mentioned in one of the podcasts that one of the things he always does is put in instrumentation for database calls, so that he can tell what queries are causing slowness etc. This is something I've measured in the past using SQL Profiler, but I'm interested in what strategies other people have used to include this as part of the application.
Is it simply a case of including a timer across each database call and logging the result, or is there a 'neater' way of doing it? Maybe there's a framework that already does this for you, or a flag I could enable in e.g. Linq-to-SQL that would provide similar functionality?
I mainly use C#, but would also be interested in seeing methods from other languages, and I'd be more interested in a 'code' way of doing this than a DB platform method like SQL Profiler.
If a query is more than just a simple SELECT on a single table, I always run it through EXPLAIN if I am on MySQL or PostgreSQL. If you are using SQL Server, then Management Studio has Display Estimated Execution Plan, which is essentially the same. It is useful to see how the engine will access each table and which indexes it will use. Sometimes it will surprise you.
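For example, on MySQL or PostgreSQL (orders and customers are hypothetical tables; the two engines format the plan output differently, but both show the access method and index chosen per table):

EXPLAIN
SELECT o.id, c.name
FROM orders AS o
JOIN customers AS c ON c.id = o.customer_id
WHERE o.created_at > '2009-01-01';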
Recording the database calls, the gross timing and the number of records (bytes) returned in the application is useful, but it's not going to give you all the information you need.
It might show you usage patterns you were not expecting. It might show where you're using "row-by-row" access instead of "set-based" operations.
The best tool to use is SQL Profiler, analysing the number of "Reads" vs the CPU and duration. You want to avoid high-CPU queries, high reads and long durations (duh!).
The "group by reads" is a useful feature to bring to the top the nastiest queries.
If you're writing queries in SQL Management Studio you can enter SET STATISTICS TIME ON and SQL Server will tell you how long the individual parts of a query took to parse, compile and execute.
You might be able to log this information by handling the InfoMessage event of the SqlConnection class (but I think using the SQL Profiler is much easier.)
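A minimal sketch; the timing figures arrive on the Messages tab, i.e. over the same InfoMessage channel mentioned above:

SET STATISTICS TIME ON;

-- any query of interest: parse/compile and execution times are
-- printed for it as informational messages
SELECT COUNT(*) FROM sys.objects;

SET STATISTICS TIME OFF;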
I would have thought that the important thing to ask here is "what database platform are you using?"
For example, in Sybase, installing the MDA tables might solve your problem: they provide a whole bunch of statistics, from procedure call usage to average logical I/O, CPU time and index coverage. It can be as clever as you want it to be.
I definitely see the value in using SQL Profiler while your app is running, and EXPLAIN or SET STATISTICS will give you information about individual queries, but does anyone routinely put measurement points into their code to gather information about database queries on an ongoing basis? That would pick up, for example, a query on a table that performs fine initially but becomes slower and slower as the number of rows grows.
If you're using MySQL or Postgres there are various tools for seeing query activity in real time, but I haven't found a tool as good as SQL Profiler for measuring query performance over time.
I'm wondering if there is (or should be?) something similar to ELMAH in the way it just plugs in and gives you information without much additional effort?
If you're into Firebird you may want to watch sinatica.com.
We'll soon launch a real-time monitoring tool for Firebird DBAs.
< /shameless plug>
If you use Hibernate (I use the Java version, I'd imagine NHibernate has something similar), you can have Hibernate collect statistics about lots of different things. See, for example:
http://www.javalobby.org/java/forums/t19807.html