How can I reduce my high page reads/sec in SQL Server? - sql-server

I have an extremely high page reads/sec on my SQL Server instance, while memory looks good (64 GB overall).
Most blogs/articles online point to increasing RAM, but what else can I do to reduce these high page reads/sec?

Per DanGuzman, the query above returns a cumulative count. Use Performance Monitor (PerfMon) to see the actual reads/sec.
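Alternatively, because the DMV counter is cumulative since instance startup, you can get a real rate by sampling it twice and dividing by the interval. A minimal sketch (the 10-second delay is just an illustrative choice):

```sql
-- Sample the cumulative 'Page reads/sec' counter twice and compute the actual rate.
DECLARE @reads1 BIGINT, @reads2 BIGINT;

SELECT @reads1 = cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Page reads/sec'
  AND object_name LIKE '%Buffer Manager%';

WAITFOR DELAY '00:00:10';  -- sample interval: 10 seconds

SELECT @reads2 = cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Page reads/sec'
  AND object_name LIKE '%Buffer Manager%';

SELECT (@reads2 - @reads1) / 10.0 AS page_reads_per_sec;  -- rate over the interval
```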

Related

Are all available DTU used to exec a query?

I have a fairly complex query.
When I had 10 DTUs for my database, the query took about 17 seconds to execute.
I increased the level to 50 DTUs, and now execution takes 3-4 seconds.
This ratio matches the documentation: more DTUs means faster execution.
But!
1. On my PC I can execute the query in 1 second.
2. In the portal statistics I see that I use only 12 DTUs (max DTU percentage = 25%).
In sys.dm_db_resource_stats I see that MAX(avg_cpu_percent) is about 25% and the other metrics are lower.
So the question is: why does my query take 3-4 seconds to execute?
It can be executed in 1 second, and the server does not use all my DTUs.
How can I make the server use all available resources to execute queries faster?
DTU is a combined measurement of CPU, memory, data I/O and transaction log I/O.
This means that reaching a DTU bottleneck can mean any of those.
This question may help you to measure the different aspects: Azure SQL Database "DTU percentage" metric
And here's more info on DTU: https://learn.microsoft.com/en-us/azure/sql-database/sql-database-what-is-a-dtu
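If you want to see which component of the DTU envelope is actually being consumed, a simple sketch against sys.dm_db_resource_stats (which keeps roughly the last hour of history at 15-second granularity) might look like this:

```sql
-- Recent resource usage for this Azure SQL database, newest first.
-- Whichever percentage sits closest to 100 is the component being throttled.
SELECT TOP (20)
       end_time,
       avg_cpu_percent,
       avg_data_io_percent,
       avg_log_write_percent,
       avg_memory_usage_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;
```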
On my PC I can execute the query in 1 sec
We should not compare our on-premises computing power with DTUs.
A DTU is a combination of CPU, I/O and memory that you get based on your performance tier, so the comparison is not valid.
How to make server use all available resources to exec queries faster?
This is simply not possible: when SQL Server runs a query, memory is the only constraint that can prevent the query from even starting. The rest of the resources, such as CPU and I/O speed, increase or decrease based on what the query does.
In summary, you have to ensure that queries are not constrained by a resource crunch, so they can use all the resources they need and release them when they are no longer needed.
You will also have to look at wait types and further fine-tune the query.
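As a starting point for that wait analysis, a hedged sketch against sys.dm_db_wait_stats (the database-scoped wait DMV available in Azure SQL Database) could be:

```sql
-- Top waits for this database since the wait statistics were last cleared.
SELECT TOP (10)
       wait_type,
       wait_time_ms,
       signal_wait_time_ms,
       waiting_tasks_count
FROM sys.dm_db_wait_stats
WHERE waiting_tasks_count > 0
ORDER BY wait_time_ms DESC;
```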
As Bernard Vander Beken mentioned:
DTU is a combined measurement of CPU, memory, data I/O and transaction log I/O.
I'll also add that Microsoft does not share the formula used to calculate DTUs. You mentioned that you are not seeing DTUs peg at 100% during query execution. But since we do not know the formula, you may very well be pegging components of DTU, but not pegging DTU itself.
Azure SQL is a shared environment, and each tenant will be throttled to ensure that the minimum SLA is met for all tenants.
What a DTU actually is remains quite fuzzy.
We have done an experiment where we run a set of benchmarks on machines with the same amount of DTU on different data centers.
http://dbwatch.com/azure-database-performance-measured
It turns out that the actual performance varies by a factor of 5.
We have also seen instances where the performance of a repeated query on the same database varies drastically.
We provide our database performance benchmarks for free if you would like to compare the instance you run on your PC with the instance in the Azure cloud.

How to fetch query execution statistics using Oracle DB?

I am new to databases. I am trying to run a simple query on SQL Server 2014 and Oracle 12c.
This is the execution plan I get using SQL Server. It contains information about I/O cost and CPU cost in seconds.
However, I can't find the same information using Oracle. The CPU cost shown in the execution plan is not based on execution time.
I want to do some comparison between the two databases. How can I obtain the same information in Oracle that SQL Server gives me? Also, how can I find out the cache hit ratio?
Thank you.
The cost estimate is in fact based on time.
It is a non-dimensionalised measurement that expresses the estimated time for the query to complete in terms of the equivalent number of logical reads, so if a logical read is expected to take 0.001 seconds then a cost of 12 is 0.012 seconds.
Although it is commonly stated that costs cannot be compared between different queries, this was only definitively true in earlier versions. The difficulty in comparing query costs lies in how long single-block and multi-block reads, writes and CPU operations take. These depend on such a multitude of factors (other activity on the system, and activity immediately prior that affects the likelihood of blocks being cached by the instance or the I/O subsystem) that you should not really expect to derive a reliable time from a cost.
Cache hit ratios have been discredited for quite some time as a measurement of system efficiency. It is possible to improve the cache hit ratio to an arbitrary number by simply running particular types of highly inefficient queries.
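If you still want the number despite that caveat, the buffer cache hit ratio can be derived from v$sysstat; this is only a sketch of the classic formula:

```sql
-- Oracle buffer cache hit ratio computed from cumulative instance statistics.
SELECT 1 - (phys.value / (db.value + cons.value)) AS buffer_cache_hit_ratio
FROM   v$sysstat phys,
       v$sysstat db,
       v$sysstat cons
WHERE  phys.name = 'physical reads cache'
AND    db.name   = 'db block gets from cache'
AND    cons.name = 'consistent gets from cache';
```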
Use the Oracle Database 12c: EM Express Performance Hub to get both estimates and actual values for queries and their operations. Regular explain plans are helpful, but they just show you what Oracle thinks will happen, not necessarily what will happen.
Specifically, use either the SQL Details (aggregate) or the SQL Monitor Details (last execution) information.
You're close, very close.
Run with AutoTrace.
I talk more about the feature here, or you can of course read up on the docs or the Help.
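Outside of SQL Developer's Autotrace, a common way to get actual (not estimated) row counts, buffer gets and timings per plan step is DBMS_XPLAN.DISPLAY_CURSOR with the gather_plan_statistics hint. The table and predicate below are just placeholders for your own query:

```sql
-- Run the statement with runtime statistics collection enabled.
SELECT /*+ gather_plan_statistics */ *
FROM   employees
WHERE  department_id = 50;

-- Then, in the same session, report the last execution with actual statistics per step.
SELECT *
FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));
```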

SQL Server long running query taking hours but using low CPU

I'm running some stored procedures in SQL Server 2012 under Windows Server 2012 on a dedicated server with 32 GB of RAM and 8 CPU cores. The CPU usage is always below 10% and the RAM usage is at 80%, because SQL Server has 20 GB (of the 32 GB) assigned.
There are some stored procedures that take 4 hours on some days and, with almost the same data, take 7 or 8 hours on other days.
I'm using the least restrictive isolation level so I think this should not be a locking problem. The database size is around 100 GB and the biggest table has around 5 million records.
The processes have bulk inserts, updates and deletes (in some cases I can use truncate to avoid generating logs and save some time). I'm making some full-text-search queries in one table.
I have full control of the server so I can change any configuration parameter.
I have a few questions:
1. Is it possible to improve the performance of the queries using parallelism?
2. Why is the CPU usage so low?
3. What are the best practices for configuring SQL Server?
4. What are the best free tools for auditing the server? I tried one from Microsoft called SQL Server 2012 BPA but the report is always empty with no warnings.
EDIT:
I checked the log and I found this:
03/18/2015 11:09:25,spid26s,Unknown,SQL Server has encountered 82 occurrence(s) of I/O requests taking longer than 15 seconds to complete on file [C:\Program Files\Microsoft SQL Server\MSSQL11.HLSQLSERVER\MSSQL\DATA\templog.ldf] in database [tempdb] (2). The OS file handle is 0x0000000000000BF8. The offset of the latest long I/O is: 0x00000001fe4000
Bump up max server memory to 24 GB.
Move tempdb off the C: drive and consider multiple tempdb files, with autogrowth set to at least 128 MB or 256 MB.
Install the Performance Dashboard and run its report to see what queries are running and to check waits.
If you are using 10% autogrowth on user data and log files, change that to fixed growth similar to the tempdb growth above.
Using the Performance Dashboard, check for obvious missing indexes that predict a 95% or higher improvement impact.
Disregard all the naysayers who say not to do what I'm suggesting. If you do these five things and you're still having trouble, post some of the results from the Performance Dashboard, which by the way is free.
One more thing that may be helpful: download and install the sp_whoisactive stored procedure, run it, and see what processes are running. Research the queries that you find after running sp_whoisactive.
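For reference, the basic call is just EXEC sp_WhoIsActive; a sketch with a couple of its optional parameters (these are real options of the procedure, but which ones you need is up to you):

```sql
-- Show currently running requests with their wait information.
EXEC sp_WhoIsActive;

-- Same, but also pull the query plans and extra session details.
EXEC sp_WhoIsActive @get_plans = 1, @get_additional_info = 1;
```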
query taking hours but using low CPU
You say that as if CPU mattered for most database operations. Hint: it does not.
Databases need IO. RAM in some cases helps mitigate this, but in the end it comes down to IO.
And you know what I see in your question? CPU and memory (somehow assuming 32 GB is impressive), but NO WORD ON DISC LAYOUT.
And that is what matters. Discs, distribution of files to spread the load.
If you look into the performance counters you will see that latency is super high on the discs, because whatever "pathetic" (in SQL Server terms) disc layout you have there, it simply is not up to the task.
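One way to check that claim is the file-level I/O statistics DMV; a sketch that computes average read/write latency per database file since the last restart:

```sql
-- Average latency per database file (cumulative since SQL Server startup).
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_latency_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id     = vfs.file_id
ORDER BY avg_read_latency_ms DESC;
```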
Time to start buying. SSDs are a LOT cheaper than discs. You may say, "Oh, how are they cheaper?" Well, you do not buy GB, you buy IO. And last time I checked, SSDs did not cost 100 times the price of discs, but they have 100 times or more the IO, and we are always talking about random IO.
Then isolate tempdb on a separate SSD; tempdb either does not do a lot or does a TON, and you want to be able to see which it is.
Then isolate the log file.
Make multiple data files for the database and tempdb (particularly tempdb: as many as you have cores).
And yes, this will cost money. But in the end you need IO, and like most developers you bought CPU. Bad for a database.
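A minimal sketch of the tempdb changes suggested above; the drive letters, sizes and file count are placeholders, and moved files only take effect after a service restart:

```sql
-- Move tempdb off C:, use fixed autogrowth, and add extra data files (one per core is the usual rule of thumb).
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, FILENAME = 'T:\tempdb\tempdb.mdf', SIZE = 4GB, FILEGROWTH = 256MB);

ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, FILENAME = 'L:\tempdb\templog.ldf', SIZE = 2GB, FILEGROWTH = 256MB);

ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = 'T:\tempdb\tempdb2.ndf', SIZE = 4GB, FILEGROWTH = 256MB);
-- Repeat ADD FILE for tempdev3, tempdev4, ... up to the number of cores.
```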

SQL Server CPU vs. Storage Bottlenecking

I've read quite a bit about SQL Servers using SSDs performing much better than traditional hard drives. In load tests with my app in a test environment, though, I'm able to keep my test DB server (SQL 2005) pegged between 75% and 100% CPU usage without much of a strain on disk access (as far as I can tell). My data set is still pretty small; database backups are under 100 MB. The test server I'm using is not new, but is also no slouch.
So, my questions:
Is the CPU the bottleneck (as opposed to the storage) because the dataset is small and therefore fits in memory?
Will this change once the dataset grows so paging is necessary?
Approximately how big (as a percentage of system memory) does the dataset have to get before SQL Server starts paging? Or does that depend on a lot of other factors?
As the app and its dataset grows, are there other bottlenecks that will tend to crop up besides CPU, storage, and lack of proper indexes?
Yes
Yes
If you have SQL Server configured to use as much memory as it can get, probably when it exceeds the max system memory. But what causes paging is very setup-dependent (the query that is being executed is the most prevalent cause).
I/O between the requesting machine and the server is the only one I can think of, and that only matters if you are retrieving large datasets. I also would not class a lack of indexes as a bottleneck; rather, indexes enable better performance with regard to searching.
As long as the CPU is the bottleneck on your dedicated SQL Server machine, you don't have to worry about disk speed (assuming nothing is wrong with the machine). SQL Server WILL use heavy memory caching. SQL Server has built-in strategies to perform best under a given load and the available resources. Just don't worry about it!
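If you want a concrete signal of whether the working set still fits in memory, page life expectancy is a common one; a sketch against the counters DMV:

```sql
-- Page life expectancy: how many seconds a data page stays in the buffer pool on average.
-- A low or steadily falling value suggests the dataset no longer fits in memory.
SELECT object_name, cntr_value AS page_life_expectancy_sec
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Page life expectancy'
  AND object_name LIKE '%Buffer Manager%';
```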

SQL DMV Queries & Cached Plans

My understanding is that some of the DMV's in SQL Server depend on query plans being cached. My questions are these. Are all query plans cached? If not, when is a query plan not cached? For ones that are cached, how long do they stay in the cache?
Thanks very much
Some of the SQL Server DMVs that capture information relating directly to the query plan cache are at the mercy of the memory pressure placed on the plan cache (due to ad hoc queries, other memory usage and high activity, or recompilation). The query plan cache is subject to plan aging (e.g. a plan with a cost of 10 that has been referenced 5 times has an "age" value of 50):
If the following criteria are met, the plan is removed from memory:
· More memory is required by the system
· The "age" of the plan has reached zero
· The plan isn't currently being referenced by an existing connection
Ref.
Those DMVs not directly relating to the query plan cache are flushed under 'general' memory pressure (cached data pages) or if the SQL Server service is restarted.
The factors affecting query plan caching have changed slightly since SQL Server 2000. The up-to-date reference for SQL Server 2008 is here: Plan Caching in SQL Server 2008
I just want to add some geek minutiae: the query plan cache leverages the general caching mechanism of SQL Server. These caches use the clock algorithm for eviction; see Q and A: Clock Hands - what are they for. For query plan caches, the cost of an entry takes into consideration the time, I/O and memory needed to create the cache entry.
For ones that are cached, how long do they stay in the cache?
A valid object stays in the cache until the clock hand decrements its cost to 0. See sys.dm_os_memory_cache_clock_hands. There is no absolute time answer to this question: the clock hand could decrement an entry to 0 in a second, in an hour, in a week or in a year. It all depends on the initial cost of the entry (query/schema complexity), on how frequently the plan is reused, and on the clock hand speed (memory pressure).
A cached object may be invalidated, though. The various reasons why a query plan gets invalidated are explained in great detail in the white paper linked by Mitch: Plan Caching in SQL Server 2008.
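To see this in practice, a sketch that lists what is currently in the plan cache, plus the clock hands that age those entries out:

```sql
-- Cached plans with their reuse counts, size, and the cached statement text.
SELECT TOP (20)
       cp.objtype,
       cp.usecounts,
       cp.size_in_bytes,
       st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
ORDER BY cp.usecounts DESC;

-- The clock hands that decrement entry costs for the SQL and object plan cache stores.
SELECT name, type, clock_hand, clock_status, rounds_count
FROM sys.dm_os_memory_cache_clock_hands
WHERE type IN ('CACHESTORE_SQLCP', 'CACHESTORE_OBJCP');
```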
