I have a SQL Server 2016 environment with high availability enabled. When I check the query plan cache, I see that SQL Server is constantly clearing it. The query below returns only 5 to 10 records, and sometimes 0 records.
SELECT *
FROM sys.dm_exec_cached_plans decp
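If it helps to quantify what is being thrown away, here is a small sketch (just a grouping over the same DMV you are already querying) that shows how many plans survive and how much memory they take:

-- Count and total size of cached plans per object type.
-- If these numbers drop to near zero periodically, something is flushing the cache
-- (memory pressure, RECONFIGURE, certain ALTER DATABASE SET options, DBCC FREEPROCCACHE, etc.).
SELECT cp.objtype,
       COUNT(*) AS plan_count,
       SUM(CAST(cp.size_in_bytes AS bigint)) / 1024 / 1024 AS size_mb
FROM sys.dm_exec_cached_plans AS cp
GROUP BY cp.objtype
ORDER BY plan_count DESC;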
I scripted all database objects (stored procedures, triggers..) to see if there's a command anywhere that drops the cache, but I could not find any.
Any help in this regard is appreciated.
I experienced the same problem; reducing Max Memory from 60 GB to 55 GB made the server perform better.
(SQL compilations per second) / (batch requests per second) is now about 4% (before the change it was 15-20%).
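If you want to check that ratio yourself, here is a rough sketch against the performance counter DMV (these counters are cumulative since startup, so for a live rate sample them twice and take the difference):

-- Rough compilation-to-batch ratio since the instance started.
SELECT
    MAX(CASE WHEN counter_name = 'SQL Compilations/sec' THEN cntr_value END) * 100.0
    / NULLIF(MAX(CASE WHEN counter_name = 'Batch Requests/sec' THEN cntr_value END), 0)
        AS compilations_per_100_batches
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%SQL Statistics%'
  AND counter_name IN ('SQL Compilations/sec', 'Batch Requests/sec');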
Related
In the past, this system was running on top of SQL Server 2014 and performance was ok. Later another instance were created with SQL Server 2017 and performance of some queries is very bad (like 1 minute compare to 0.2 seconds in the past). Query is a select from 8 tables, which are joined by foreign/primary key. It does not make much sense to put query here, because it is 140Kb in size and query execution plan is also huge. I guess nobody will have time to understand and analyze it. My database is not empty and has some data, around 100K records in some tables and up to 4M records in another table, which are joined in this query. The final query result is around 8000 rows. if I add ALL TOP 30 hint, with the value less than 30, then it slows down the query a lot. Any values more than 30 or removing ALL TOP hint does not affect the performance, except time to fetch 8000 rows, but I see the first records very fast.
I tried to switch the new server into the same compatibility mode as the old one, but it did not help. I found that if I add these hints to the query
OPTION (LOOP JOIN, FORCE ORDER)
then it becomes much faster. Adding only one of the two hints does not help.
I cannot modify the queries generated by the system, which means I cannot add these hints to every query it generates. I only extracted the query from a trace and added the hints for debugging purposes when running it in SQL Server Management Studio.
My question:
Does the fact that adding these two hints improves performance tell me something about what is wrong with my SQL Server, and does it give any ideas about what I could change in the SQL Server configuration to improve performance?
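One thing worth knowing in this situation (a sketch, not a full answer): SQL Server can attach hints to a statement you cannot edit by means of a plan guide, as long as the text you register matches the generated statement exactly. The statement below is a hypothetical stand-in for the real generated query:

-- Hypothetical example: attach the hints to an app-generated statement
-- without changing the application. @stmt must match the generated SQL
-- text exactly (whitespace included).
EXEC sp_create_plan_guide
    @name            = N'PG_ForceOrderLoopJoin',
    @stmt            = N'SELECT ... FROM dbo.Documents d JOIN ...',  -- the exact generated query text
    @type            = N'SQL',
    @module_or_batch = NULL,
    @params          = NULL,
    @hints           = N'OPTION (LOOP JOIN, FORCE ORDER)';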
We're using SQL Server 2008 R2 Full-Text Search over a table with 2.6 million records. The search performance is often poor and follows the commonly reported pattern: cold system/first run ~10+ sec, subsequent runs ~1-2 sec. This is in line with the results reported in the following article dated February 2013:
So You Think You Can Search – Comparing Microsoft SQL Server FTS and Apache Lucene
The article shows the following speed comparison results using Wikipedia dump data:
Indexing speed, size and single query execution time:

                           Lucene      MS SQL FTS
  Indexing Speed           3 MB/sec    1 MB/sec
  Index Size               10-25%      25-30%
  Simple query             < 20 ms     < 20 ms
  Query With Custom Score  < 4 sec     > 20 sec
Parallel query executions (10 threads, average execution time per query in ms):

                                 MS SQL FTS   Lucene (File System)   Lucene (RAM)
  Cold system:   Simple Query    56           643                    21
                 Boost Query     19669*       859                    27
  Second run:    Simple Query    14           8                      < 5
                 Boost Query     465          17                     9

  * average time; the very first query could take up to 2 minutes(!)
My questions are:
Since there have been several major SQL Server releases since the article was published on February 8, 2013, can someone report any FTS performance improvements over similar data (preferably 1+ million records) after migrating to a more recent SQL Server version (2012, 2014 or 2016)?
Do more recent SQL Server versions support FTS catalogs/indexes placed in RAM, just as Solr/Lucene do?
UPDATE: in our scenario we seldom insert new data into the table behind the FT catalog, but we run read-only searches very often. So I don't think SQL Server constantly rebuilding the FTS index is the issue.
Fulltext Search Improvements in SQL Server 2012:
We looked at the entire code base from how queries block while waiting an ongoing index update to release a shared schema lock, from how much memory is allocated during index fragment population, to how we could reorganize the query code base as a streaming Table Value Function to optimize for TOP N search queries, how we could maintain key distribution histograms to execute search on parallel threads, all the way to how we could take better advantage of the processor compute instructions (scoring ranks for example)… End result is that we are able to significantly boost performance (10X in many cases when it comes to concurrent index updates with large query workloads) and scale without having to change any storage structures or existing API surface. All our customers going from SQL 2008 / R2 to Denali will benefit with this improvement.
I'd recommend digging a bit into SQL Server FTS internals. This will give you an idea of how your query is executed and whether this works for you or not. I suggest starting here: https://technet.microsoft.com/en-us/library/ms142505(v=sql.105).aspx and here: https://msdn.microsoft.com/ru-ru/library/cc721269.aspx. Internally, FTS uses tables and indexes, with all their benefits and drawbacks. So, like any other table, if the data of that internal table is not in the buffer pool, SQL Server will read it from disk into RAM. Once the data is in RAM, it will be read from RAM.
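As a small illustration of the TOP N optimization mentioned in the SQL Server 2012 quote above, here is a sketch with made-up table and column names that asks the full-text engine for only the best-ranked matches instead of the full result set:

-- Hypothetical table/column names; the top_n_by_rank argument of CONTAINSTABLE
-- lets the full-text engine stop after the best 50 matches.
SELECT TOP (50) d.Title, ft.[RANK]
FROM CONTAINSTABLE(dbo.Documents, Body, N'"search term"', 50) AS ft
JOIN dbo.Documents AS d
    ON d.DocumentId = ft.[KEY]
ORDER BY ft.[RANK] DESC;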
I'm running some stored procedures in SQL Server 2012 under Windows Server 2012 in a dedicated server with 32 GB of RAM and 8 CPU cores. The CPU usage is always below 10% and the RAM usage is at 80% because SQL Server has 20 GB (of 32 GB) assigned.
There are some stored procedures that take 4 hours on some days and, on other days with almost the same data, take 7 or 8 hours.
I'm using the least restrictive isolation level, so I think this should not be a locking problem. The database size is around 100 GB and the biggest table has around 5 million records.
The processes do bulk inserts, updates and deletes (in some cases I can use TRUNCATE to minimize logging and save some time). I'm also running some full-text search queries on one table.
I have full control of the server so I can change any configuration parameter.
I have a few questions:

- Is it possible to improve the performance of the queries using parallelism?
- Why is the CPU usage so low?
- What are the best practices for configuring SQL Server?
- What are the best free tools for auditing the server? I tried one from Microsoft called SQL Server 2012 BPA, but the report is always empty with no warnings.
EDIT:
I checked the log and I found this:
03/18/2015 11:09:25,spid26s,Unknown,SQL Server has encountered 82 occurrence(s) of I/O requests taking longer than 15 seconds to complete on file [C:\Program Files\Microsoft SQL Server\MSSQL11.HLSQLSERVER\MSSQL\DATA\templog.ldf] in database [tempdb] (2). The OS file handle is 0x0000000000000BF8. The offset of the latest long I/O is: 0x00000001fe4000
Bump up max memory to 24 GB.
Move tempdb off the C drive and consider multiple tempdb files, with autogrowth set to at least 128 MB or 256 MB (a sketch of the commands follows).
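Something along these lines; the paths, sizes and number of files are placeholders, and the file moves take effect after an instance restart:

-- Hypothetical example: move tempdb off C:, fix autogrowth, add a second data file.
ALTER DATABASE tempdb MODIFY FILE
    (NAME = tempdev, FILENAME = 'T:\TempDB\tempdb.mdf', SIZE = 4GB, FILEGROWTH = 256MB);
ALTER DATABASE tempdb MODIFY FILE
    (NAME = templog, FILENAME = 'T:\TempDB\templog.ldf', SIZE = 2GB, FILEGROWTH = 256MB);
ALTER DATABASE tempdb ADD FILE
    (NAME = tempdev2, FILENAME = 'T:\TempDB\tempdb2.ndf', SIZE = 4GB, FILEGROWTH = 256MB);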
Install performance dashboard and run performance dashboard report to see what queries are running and check waits.
If you are using 10% autogrowth on user data and log files, change that to something similar to the tempdb growth above.
Using the performance dashboard, check for obvious missing indexes that predict a 95% or higher improvement impact (or query the missing-index DMVs directly, as sketched below).
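A rough sketch of that DMV query; treat the output as suggestions to review, not as indexes to create blindly:

-- Missing-index suggestions ordered by estimated impact.
SELECT TOP (20)
    mid.statement                      AS table_name,
    migs.avg_user_impact,
    migs.user_seeks + migs.user_scans  AS times_needed,
    mid.equality_columns,
    mid.inequality_columns,
    mid.included_columns
FROM sys.dm_db_missing_index_group_stats AS migs
JOIN sys.dm_db_missing_index_groups      AS mig ON mig.index_group_handle = migs.group_handle
JOIN sys.dm_db_missing_index_details     AS mid ON mid.index_handle = mig.index_handle
ORDER BY migs.avg_user_impact * (migs.user_seeks + migs.user_scans) DESC;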
Disregard all the naysayers who say not to do what I'm suggesting. If you do these 5 things and you're still having trouble, post some of the results from the performance dashboard, which by the way is free.
One more thing that may be helpful: download and install the sp_WhoIsActive stored procedure, run it, and see what processes are running. Research the queries that you find after running sp_WhoIsActive.
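For example (assuming a recent version of the procedure, where the optional @get_plans parameter attaches the running query plan to each row):

-- Show currently executing requests, their wait info and (optionally) their plans.
EXEC dbo.sp_WhoIsActive @get_plans = 1;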
query taking hours but using low CPU
You say that as if CPU mattered for most DB operations. Hint: it does not.
Databases need IO. RAM in some cases helps mitigate this, but in the end it comes down to IO.
And you know what I see in your question? CPU, memory (somehow assuming 32 GB is impressive), but NO WORD ON DISK LAYOUT.
And that is what matters. Disks, and the distribution of files across them to spread the load.
If you look into the performance counters, you will see latency being super high on the disks, because whatever "pathetic" (in SQL Server terms) disk layout you have there, it simply is not up to the task.
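If you want to see that latency without opening Perfmon, here is a quick sketch against the virtual file stats DMV (the numbers are averages since the instance started):

-- Average read/write latency per database file since the instance started.
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_latency_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
    ON mf.database_id = vfs.database_id AND mf.file_id = vfs.file_id
ORDER BY avg_write_latency_ms DESC;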
Time to start buying. SSDs are a LOT cheaper than spinning disks. You may ask "how are they cheaper?" Well, you do not buy GB, you buy IO. And last time I checked, SSDs did not cost 100 times the price of spinning disks, but they deliver 100 times or more the IO, and we are always talking about random IO here.
Then isolate tempdb on a separate SSD; tempdb either does almost nothing or does a TON of work, and you want to see which.
Then isolate the log file.
Make multiple data files, for the database and for tempdb (particularly tempdb: as many files as you have cores).
And yes, this will cost money. But in the end you need IO, and like most developers you bought CPU instead. Bad for a database.
We use SQL Server 2008 Web Edition on a Windows 2012 R2 server (32 GB RAM) to store data for an ASP.NET-based web application. There are several databases with news tables and different views which we query regularly (SqlDataReader, LINQ to SQL) with different joins and filter conditions. The queries themselves are long and domain-specific, so I'll skip an example.
So far everything worked fine.
Now we had to change such a query and extend it with a simple OR condition.
The result was that the number of reads and writes in the TempDB increased dramatically. Dramatically means 1000 writes of more than 100 MB per minute which results in a total tempdb file size of currently 1.5 GB.
If we remove the OR filter from the original query, the tempdb file I/O normalizes instantly.
However, we do not have a clue what's going on within tempdb. We ran the Query Analyzer several times and compared the results, but its index optimization recommendations were only related to other databases' statistics and did not have any effect.
How would you narrow down this issue? Has anyone else experienced such behavior in the past? Is it likely to be a problem with the news query itself, or is it possible that we simply have to change some tempdb database properties to improve its I/O performance, e.g. autogrowth?
Start by analyzing your execution plans and run your queries with statistics on (use the profiler). The problem is not in tempdb but in your queries. Then you will see where you select too many rows that are temporarily stored in tempdb. Then you can change the queries or add the indexes you are missing.
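A minimal sketch of that kind of measurement: run the news query with and without the OR condition and compare the output; the second query (one of several possible ways) shows which session is allocating tempdb pages:

-- Compare logical reads / worktable activity with and without the OR condition.
SET STATISTICS IO ON;
SET STATISTICS TIME ON;
-- ... run the news query here ...
SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;

-- tempdb pages allocated by each session (8 KB pages).
SELECT session_id,
       user_objects_alloc_page_count,
       internal_objects_alloc_page_count
FROM sys.dm_db_session_space_usage
ORDER BY internal_objects_alloc_page_count DESC;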
I am using SQL Server 2008, which has its databases mirrored in synchronous mode.
I am trying to run some update stored procedures with some nested joins, and they run fine (obviously with reduced performance compared to a server that is not mirrored).
The problem I am facing is that if I select the "show detailed plan" option, the query starts running, virtually goes into a hung state, and doesn't recover. I finally have to end-task it.
I only have the public role on the databases, so I can't access any stats.
Can you tell me what exactly (or in general) should I ask the DBA to look at?
The details of the SQL server is mentioned below.
Product - SQL Server Enterprise Edition - 64 bit.
OS - Windows NT 6.0
Memory -6143 MB
Processor -2
Maximum Server memory - 3072 MB
Minimum server memory - 16 MB
Any help on guiding me to a right direction will be appreciated.
Regards,
Dasso
Because
1) you have activated the [Include Actual Execution Plan] option, and because
2) there is a WHILE statement,
SQL Server will send to the client (SQL Server Management Studio) the actual execution plan of every SQL statement executed by every iteration of the WHILE statement. So, if the WHILE loop contains a simple UPDATE and executes 100 iterations, then SQL Server will send the execution plan of that UPDATE 100 times!
You should decrease the number of iterations of the WHILE loop, or use estimated plans instead.
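A sketch of the estimated-plan route (the procedure name is a placeholder; SET SHOWPLAN_XML must be the only statement in its batch, and while it is ON the statements are not actually executed, SQL Server only returns their estimated plans):

SET SHOWPLAN_XML ON;
GO
-- Hypothetical procedure name: nothing is executed here, only estimated plans come back.
EXEC dbo.usp_MyUpdateProcedure;
GO
SET SHOWPLAN_XML OFF;
GO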