SQL Server: TempDB high number of writes

We use SQL Server 2008 Web Edition on a Windows 2012 R2 server (32 GB RAM) to store data for an ASP.NET based web application. There are several databases with news tables and different views which we query regularly (SqlDataReader, Linq-to-SQL) with different joins and filter conditions. The queries themselves are long and domain-specific, so I will skip an example.
So far everything worked fine.
Now we had to change such a query and extend it with a simple OR condition.
The result was that the number of reads and writes in TempDB increased dramatically. By dramatically I mean roughly 1,000 writes of more than 100 MB per minute, which has grown the tempdb file to its current size of 1.5 GB.
If we remove the OR filter statement from the original query the TempDB file I/O normalizes instantly.
However, we do not have a clue what is going on within TempDB. We ran the Query Analyzer several times and compared the results, but its index optimization recommendations related only to other databases' statistics and had no effect.
How would you narrow down this issue? Has anyone else experienced such behavior in the past? Is it likely to be a problem with the news query itself, or is it possible that we simply have to change some TempDB database properties to improve its I/O performance, e.g. autogrowth?

Start by analyzing your execution plans and running your queries with statistics (use the profiler). The problem is not in tempdb but in your queries. You will then see where you select too many rows, which are temporarily stored in tempdb; then you can change the queries or add the indexes you are missing.
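A minimal sketch of that, using only standard SQL Server 2008 features (the placeholder comment stands in for the news query with the OR condition):

    -- Show logical reads/writes and elapsed time for each statement in this session
    SET STATISTICS IO ON;
    SET STATISTICS TIME ON;

    -- ... run the suspect news query here ...

    -- Meanwhile, check how much tempdb space active tasks are allocating
    SELECT  session_id,
            SUM(internal_objects_alloc_page_count) AS internal_pages_allocated,
            SUM(user_objects_alloc_page_count)     AS user_pages_allocated
    FROM    sys.dm_db_task_space_usage
    GROUP BY session_id
    ORDER BY internal_pages_allocated DESC;

Internal objects are the sorts, hash tables, and spools the engine spills to tempdb; a big jump there after adding the OR condition would point at the plan rather than at tempdb's configuration.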

Related

SQL table Indexes and Spotfire performance

I have a Spotfire project that references several large SQL Server based tables (one has 700,000 rows with 200 columns, another has 80,000,000 rows with 10 columns, and a few others are much smaller by comparison). Currently I use information links with prompts to narrow down the data before loading it into Spotfire. I still sometimes have issues with RAM usage creeping up and random CPU spikes after the data has been loaded.
My questions are if I add indexes to the SQL tables:
Will the amount of RAM/CPU usage by spotfire get better (lower)?
Will it help speed up the initial data load time?
Should I even bother?
I'm using SQL Server 2016 and Tibco Spotfire Analyst 7.7.0 (build version 7.7.0.39)
Thanks
If you add indexes without a logical reason, they can actually make your system slower, because every index must be updated after each INSERT, UPDATE, and DELETE. You can ignore this if your DB holds static data and you rarely change its content.
You need to understand which parts of your queries consume the most resources, then create indexes accordingly.
The following URLs will help you:
https://www.liquidweb.com/kb/mysql-performance-identifying-long-queries/
https://www.eversql.com/choosing-the-best-indexes-for-mysql-query-optimization/
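For SQL Server itself (the links above are MySQL-oriented), one hedged starting point is the missing-index DMVs; treat their output as suggestions to validate, not commands, and note the index, table, and column names in the commented example are made up:

    -- Candidate missing indexes, ranked by the optimizer's own benefit estimate
    SELECT TOP (10)
           d.statement AS table_name,
           d.equality_columns,
           d.inequality_columns,
           d.included_columns,
           s.user_seeks,
           s.avg_total_user_cost * s.avg_user_impact * s.user_seeks AS estimated_benefit
    FROM   sys.dm_db_missing_index_details AS d
    JOIN   sys.dm_db_missing_index_groups AS g ON g.index_handle = d.index_handle
    JOIN   sys.dm_db_missing_index_group_stats AS s ON s.group_handle = g.index_group_handle
    ORDER BY estimated_benefit DESC;

    -- Then create only the indexes that match real query patterns, e.g.:
    -- CREATE NONCLUSTERED INDEX IX_BigTable_FilterCol
    --     ON dbo.BigTable (FilterCol) INCLUDE (DisplayCol);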

VarBinary(max) updates very slow on SQL Azure

We store some documents in a SQL Server database in a VarBinary(Max) column. Most documents are a few KB, but occasionally one may be a couple of MB.
We run into an issue when the file becomes bigger than about 4 MB.
When updating the VarBinary column on an on-premises SQL Server, it is very fast (0.6 seconds for an 8 MB file).
When running the same statement on an identical database on SQL Azure, it takes more than 15 seconds!
It is also very slow when the code runs from an Azure App Service, so our Internet connection is not the problem.
I know storing files in SQL Server is not the preferred approach and Blob storage would normally be the best solution, but we have special reasons we need to do this, so I want to leave that out of the discussion ;-)
When investigating the execution plans, I see a "Table Spool" taking all the time and I'm not sure why. Below are the execution plans for on prem and Azure.
Identical databases and data. If someone can help, that would be great.
Thanks Chris
The Table Spool operator is caching the row to be updated (in tempdb) and then feeding it to the Table Update operator. Spool operators are a sign that the database engine is performing a large number of writes (8 KB pages) to TempDB.
For I/O-intensive workloads you need to scale up to the Premium tiers; on the Basic and Standard tiers those updates won't perform well.
Hope this helps.
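For context, the statement in question is an ordinary single-row UPDATE of the varbinary(max) column; the schema and variable names below are placeholders, not the poster's actual tables:

    -- Placeholder schema: dbo.Documents(Id int PRIMARY KEY, Content varbinary(max))
    DECLARE @Id int = 1;
    DECLARE @NewContent varbinary(max) = 0x00;   -- stands in for the ~8 MB payload

    UPDATE dbo.Documents
    SET    Content = @NewContent
    WHERE  Id = @Id;

On Azure the same statement also has to fit within the tier's capped log and I/O throughput, which is one plausible reason the identical plan runs so much slower there.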

One database vs Multiple database in SQL SERVER 2014

I have a SQL Server instance running on my machine. It contains 10 databases,
say
a
b
....
z
So my question is: are 10 or more databases, or one single database, best for SQL Server? Do more databases cause more performance issues on a single server machine? What is recommended?
You might reason like this:
"Using multiple databases helps because they act like an outer index, which can shorten search times.
Think of it like this: when a search begins, your database server takes your query, goes first to your table, and executes the query on that table. That helps query time because data in other tables is not examined; only your table's entry is looked up in the catalog of tables. :)
In the same manner, when you group your tables into different databases, a query only has to look up your table's entry among fewer tables in that catalog, so finding your table's ID completes in less time. :)"
But that is not correct! Unless you have millions of tables it will have no impact, because the data structures databases use mostly access data in O(log n). That means that if (big if) a lookup among 1,000,000 entries takes 6 steps to complete, then 100,000 will take 5 steps and 1,000 will take 3. As you can see, it makes hardly any difference.
On the other hand, using 2 databases guarantees at least 2 connections, and connections are expensive things; that is why connection pools exist.
The most common issue is poor database design.
The causes of performance problems can vary, but the most common are a poorly designed database, an incorrectly configured system, insufficient disk space or other system resources, excessive query compilation and recompilation, bad execution plans due to missing or outdated statistics, and queries or stored procedures with long execution times due to improper design.
Memory bottlenecks are caused by limitations in available memory and by memory pressure from SQL Server, system, or other application activity. Poor indexing requires table scans, which in the case of large tables means that a large number of rows is read from disk and handled in memory.
Network bottlenecks are caused by overload on a server or network, so the data cannot flow as expected.
I/O issues can be caused by slow hardware, bad storage solution design, and configuration. Besides hardware components, such as disk types, disk array type, and RAID configuration, that affect I/O performance, unnecessary requests made by a database also affect I/O traffic. Frequent index scans, inefficient queries, and out-of-date statistics can also cause I/O workload and bottlenecks.
See more at: http://www.sqlshack.com/dba-guide-sql-server-performance-troubleshooting-part-1-problems-performance-metrics/
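To see which of those bottleneck categories a given server is actually hitting, a rough sketch against the wait-statistics DMV (the excluded wait types are only a small sample of the benign ones):

    -- Top waits since the last restart; high PAGEIOLATCH_* suggests disk I/O,
    -- RESOURCE_SEMAPHORE suggests memory pressure, ASYNC_NETWORK_IO the client/network
    SELECT TOP (10)
           wait_type,
           wait_time_ms,
           waiting_tasks_count
    FROM   sys.dm_os_wait_stats
    WHERE  wait_type NOT IN ('SLEEP_TASK', 'LAZYWRITER_SLEEP',
                             'SQLTRACE_BUFFER_FLUSH', 'WAITFOR')
    ORDER BY wait_time_ms DESC;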
Multiple databases are not a problem for performance.
You can look at the link above; I think it will help you understand performance tuning :D

Are query plans stored per database?

We have a multi-tenant environment with several hundred databases that are mostly identical schema-wise, but I'm worried that the query plan that is fastest for one database may not be fastest for another. For example, suppose one database doesn't have much data, and you run a query for which a scan across all rows is deemed fast enough, and that plan is saved. If you then run the same query against a much larger database, will it generate and save its own plan, or use the one created against the much smaller database?
Yes!
When you have multiple instances of a database, they have the same schema, BUT each database has a different number of records, different statistics, and so on.
So it makes sense that SQL Server keeps execution plans per database.
Note: "storing" doesn't mean SQL Server writes the query plans into the database. They are stored in cache while the SQL Server service is running, as long as it has enough memory and the plan still needs to be kept in cache for later use.

SQL Server 2005 64bit query blocking

We are experiencing some difficulties with SQL Server performance and wanted some help.
Our environment is: -
Windows 2003 Enterprise x64 Edition R2
Intel E5450 Quad Core 3ghz Processor
16GB RAM
SQL Server 2005 64bit Enterprise Edition (9.00.3282.00)
Database compatibility level is 80 (but tested on 90 as well)
Hyperthreading is switched off
We have one database with a 1.2 million row table which is being queried (inefficiently), and the result is that all 4 processors are saturated to the point where all other queries are blocked until the query finishes. This includes queries against separate databases and totally unrelated tables.
If the query is run with OPTION (MAXDOP 1), all 4 cores run at 25% and the query takes 4 times as long, but there is no blocking in this instance. In addition, we have run the same query on SQL 2000: the response time is the same, but there is no CPU saturation.
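For reference, the hint we used and its instance-wide equivalent look like this (the table and column names are placeholders, not our real schema):

    -- Per-query: restrict one statement to a single scheduler
    SELECT Col1, Col2
    FROM   dbo.BigTable            -- placeholder for the 1.2 million row table
    WHERE  Col3 = 42
    OPTION (MAXDOP 1);

    -- Instance-wide: cap parallelism for every query
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max degree of parallelism', 1;
    RECONFIGURE;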
We have a suspicion that the problem may be contention over tempdb. In this particular instance we have a stored proc using a temp table, and I assume the parallel query is accessing tempdb as well.
Obviously the standard response will be to re-write the queries. Firstly, this is not really a viable option, and secondly it would only treat a symptom of the problem. Essentially the server is unable to process multiple requests, which is of great concern.
Does anyone know of any patches, config changes or known problems that might be causing this? Has anyone seen this before? Is it a 64bit nuance?
Regards
Lee
Sounds like the table isn't properly indexed. A table with 1.2 million rows shouldn't take anything to query; I've got tables with 60+ million rows and I query them in milliseconds.
What does the query look like, and what does the table look like?
Sounds like locking on tempdb, which effectively stops anything else that may use tempdb from running until it is finished. In particular, it may be that the sysobjects table is locked.
The ideal step is to re-write the query so that it stops locking tempdb for its entire duration; however, I'm guessing this is not easily possible.
You can try setting up a second instance of SQL Server to run this database. That way any temporary locking will only affect itself and not any other databases on the server.
To fix the problem for other queries running on the same instance, you can look into using multiple files for the tempdb database, as sketched below.
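A hedged sketch of adding a second tempdb data file (the logical name, path, and sizes here are placeholders; the usual guidance is several equally sized files, sized for your workload):

    ALTER DATABASE tempdb
    ADD FILE (NAME = tempdev2,                       -- placeholder logical name
              FILENAME = 'T:\TempDB\tempdev2.ndf',   -- placeholder path
              SIZE = 1024MB,
              FILEGROWTH = 256MB);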
However, once you're going this far for a solution, you really need to go back to the problem query and try to make its footprint smaller. As Kristen commented, simply changing the way you create a temporary table can have a drastic effect on how things are locked.
