SQL Server Performance Problem - sql-server

Our primary database server is an 8-core box with 8 GB of RAM. The CPU is a Xeon E7330 @ 2.4 GHz. It runs Windows Server 2003 R2 (x64 edition) and SQL Server 2005.
I wanted to do some testing, so I set up SQL Server 2005 on another brand-new server, which is an 8-core box with 4 GB of RAM. It has a Xeon X5460 @ 3.16 GHz and runs Windows Server 2003 R2 Standard. I installed SQL Server 2005 out of the box, restored a backup of the primary database onto it, and ran UPDATE STATISTICS on all the tables.
The process I was testing executes the same stored proc many times. I was astounded to find from the Profiler that this proc, which executes with duration = 0 or 1 on the primary server, was consistently executing with durations in excess of 130 on the test server. This essentially makes the secondary server useless for testing, because it's just too slow.
No other apps run on either of these two boxes, just SQL Server. And unlike the primary database server, the test server only had me accessing it.
I can't believe the difference in spec between these two machines explains this colossal difference in performance. Can anybody suggest any settings I may need to change?
Updates in answers to questions:
The second server is 32-bit Windows
I'm inquiring now about the disk arrays and how comparable they are
On the primary server, the data and logs are on the same drive (!) and it works fine
Looking in Task Manager on the test server, the CPU is running at about 10%, with only one core even showing activity
Task Manager on the test server (4 GB RAM) shows "PF Usage 2.01GB" with SQL Server running. On the primary server (8 GB RAM) it shows "PF Usage 6.67GB". How would I make SQL Server on the test box use more of the RAM? Maybe that would make a difference
Another update:
The primary server has a RAID-5 with 15,000 RPM drives. The test box has a RAID-5 with 10,000 RPM drives.

A 32-bit OS means a 2 GB virtual address space for your processes. A Standard edition OS means no AWE extensions either. So your test machine will be severely RAM-deprived compared with the production one. Your buffer pool will suffer from premature eviction of pages, your execution plans will not have the option to choose hash joins for a lot of queries, and so on and so forth. I doubt this explains the entire difference; I'm sure there must be something more at play. You say only 10% CPU usage during the query; is your MAXDOP setting 1 by any chance on the test server? Have you compared the output of sp_configure on the two machines? (Make sure you enable 'advanced options' too.)
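If it helps, here is a minimal sketch of the sp_configure comparison I mean; run it on both servers and diff the output (the MAXDOP row is the one I would look at first):

-- Show all instance-level settings, including the advanced ones
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure;   -- compare this output between the two servers

-- Check the parallelism setting specifically
EXEC sp_configure 'max degree of parallelism';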
Can you run the same problem query on the two machines, from an SSMS query window, with SET STATISTICS IO ON and SET STATISTICS TIME ON? Run it 2-3 times on each and write down the results. Does it show the same number of logical reads but a vastly different number of physical reads? That would point to the RAM being insufficient to cache the needed pages. Is the number of logical reads very different? That probably means you get a bad execution plan on test.
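Something along these lines, assuming dbo.YourProc and its parameter are stand-ins for the actual procedure you are testing:

SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- Run the suspect procedure; check the Messages tab for logical reads,
-- physical reads, and CPU/elapsed time, then compare between the two servers
EXEC dbo.YourProc @SomeParam = 42;

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;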
Is the query write-intensive by any chance? If so, did you pre-grow the test database, or is your execution blocked by log growth and database growth events?
There are plenty of places to look to narrow down the issue, like the SQL performance counters and sys.dm_os_wait_stats, and checking wait_type and wait_resource in sys.dm_exec_requests.
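For example, a rough sketch of the DMV checks mentioned above; run it while the slow procedure is executing (the session_id filter just skips most system sessions):

-- What has the server been waiting on overall?
SELECT TOP (10) wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;

-- What is each active request waiting on right now?
SELECT session_id, status, command, wait_type, wait_resource, cpu_time, total_elapsed_time
FROM sys.dm_exec_requests
WHERE session_id > 50;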

Was the data in the memory cache yet, or was it all read from disk?

You either have a different plan being generated or some hardware differences. For hardware you can check the disk seconds/[read,write] counters (edit to clarify - you do this in Perfmon) and see if you have some massive differences from caching (e.g. a high-performance RAID controller).
For the plan difference just check out the execution plans.
Also do SET STATISTICS IO ON and see if you are getting physical reads instead of logical reads. Maybe the memory difference is keeping your dataset from fitting in memory on the secondary machine but not the primary.

Although you may not be able to use AWE on your 32-bit server, you can give SQL Server a little more memory by adding the /3GB switch to the boot.ini file. Check out Books Online; it should give you more information.
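For reference, a boot.ini entry with the switch appended looks roughly like this; the ARC path and description below are placeholders, so keep whatever your existing line already says and just add /3GB at the end:

[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003, Standard" /fastdetect /3GB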

Related

Queries slow when run by specific Windows account

Running SQL Server 2014 Express on our domain. We use Windows Authentication to log on. All queries are performed in stored procedures.
Now, the system runs fine for all our users - except one. When he logs on (using our software), all queries take around 10 times longer (e.g. 30 ms instead of 2 ms). The queries are identical, the database is the same, the network speed is the same, the operating system is the same, the SQL Server drivers are the same, connection pooling is the same, DNS is the same. Changing computer does not help. The problem seems to be linked to the account being used.
What on Earth may be the cause for this huge performance hit?
Please advise!
I would try rebuilding the SP (by running an ALTER statement that duplicates its existing structure) to force SQL Server to recompile it. I don't know every way SQL Server caches things, but it can definitely create distinct execution plans for different types of connections, so I wouldn't be surprised if your slow user is running a version with an inefficient execution plan.
http://www.sommarskog.se/query-plan-mysteries.html
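If it helps, a lighter-weight alternative to re-running ALTER is sp_recompile, which just marks the procedure so a fresh plan is compiled on its next execution; dbo.YourSlowProc below is a placeholder for the actual procedure name:

-- Invalidate the cached plan(s) for the procedure; the next call compiles a new one
EXEC sp_recompile N'dbo.YourSlowProc';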

SQL Server long running query taking hours but using low CPU

I'm running some stored procedures in SQL Server 2012 under Windows Server 2012 in a dedicated server with 32 GB of RAM and 8 CPU cores. The CPU usage is always below 10% and the RAM usage is at 80% because SQL Server has 20 GB (of 32 GB) assigned.
There are some stored procedures that take 4 hours on some days and, with almost the same data, 7 or 8 hours on other days.
I'm using the least restrictive isolation level so I think this should not be a locking problem. The database size is around 100 GB and the biggest table has around 5 million records.
The processes have bulk inserts, updates and deletes (in some cases I can use TRUNCATE to avoid generating logs and save some time). I'm also running some full-text search queries against one table.
I have full control of the server so I can change any configuration parameter.
I have a few questions:
Is it possible to improve the performance of the queries using parallelism?
Why is the CPU usage so low?
What are the best practices for configuring SQL Server?
What are the best free tools for auditing the server? I tried one from Microsoft called SQL Server 2012 BPA, but the report is always empty with no warnings.
EDIT:
I checked the log and I found this:
03/18/2015 11:09:25,spid26s,Unknown,SQL Server has encountered 82 occurrence(s) of I/O requests taking longer than 15 seconds to complete on file [C:\Program Files\Microsoft SQL Server\MSSQL11.HLSQLSERVER\MSSQL\DATA\templog.ldf] in database [tempdb] (2). The OS file handle is 0x0000000000000BF8. The offset of the latest long I/O is: 0x00000001fe4000
Bump up max memory to 24 GB.
Move tempdb off the C drive and consider multiple tempdb files, with autogrowth of at least 128 MB or 256 MB (a T-SQL sketch of these first two items follows below).
Install the performance dashboard and run the performance dashboard report to see what queries are running and check waits.
If you are using 10% autogrowth on user data and log files, change that to something similar to the tempdb growth above.
Using the performance dashboard, check for obvious missing indexes that predict a 95% or higher improvement impact.
Disregard all the naysayers who say not to do what I'm suggesting. If you do these 5 things and you're still having trouble, post some of the results from the performance dashboard, which by the way is free.
One more thing that may be helpful: download and install the sp_whoisactive stored proc, run it, and see what processes are running. Research the queries that you find after running sp_whoisactive.
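A rough sketch of the first two suggestions (max memory and tempdb), assuming the drive letter and path below are adjusted to your own layout; the 24 GB figure is expressed in MB for sp_configure:

-- Cap SQL Server at 24 GB so the OS keeps some headroom (value is in MB)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 24576;
RECONFIGURE;

-- Move the tempdb data file off C: (takes effect after a restart); D:\TempDB is a placeholder path
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'D:\TempDB\tempdb.mdf');

-- Switch tempdb growth from a percentage to a fixed 256 MB increment
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILEGROWTH = 256MB);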
query taking hours but using low CPU
You say that as if CPU would matter for most db operations. HINT: It does not.
Databases need IO. RAM in some cases helps mitigate this, but at the end it comes down to IO.
And you know what I see in your question? CPU, memory (somehow assuming 32 GB is impressive), but NO WORD ON DISC LAYOUT.
And that is what matters. Discs, distribution of files to spread the load.
If you look into performance counters then you will see latency being super high on the discs - because whatever "pathetic" (in SQL Server terms) disc layout you have there, it simply is not up to the task.
Time to start buying. SSDs are a LOT cheaper than discs. You may say "Oh, how are they cheaper?" Well, you do not buy GB - you buy IO. And the last time I checked, SSDs did not cost 100 times the price of discs - but they have 100 times or more the IO. And we always talk about random IO here.
Then isolate tempdb on a separate SSD - tempdb either does not do a lot or does a TON, and you want to see this.
Then isolate the log file.
Make multiple data files, for the database and tempdb (particularly tempdb - as many as you have cores).
And yes, this will cost money. But at the end - you need IO and like most developers you got CPU. Bad for a database.
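If you want to confirm the latency claim from inside SQL Server rather than Perfmon, a query along these lines over sys.dm_io_virtual_file_stats shows the per-file read and write stalls (the values are cumulative since the last restart):

-- Average I/O stall per read and per write for every database file
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.num_of_reads, vfs.num_of_writes,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_stall_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_stall_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id AND mf.file_id = vfs.file_id
ORDER BY avg_write_stall_ms DESC;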

SQL Server 2014 standard edition slows the machine when Database size grows

I have a scenario where an application server saves 15k rows per second in a SQL Server database. For the first few hours the machine is still usable, but once the database size grows to around 20 GB, the machine becomes unusable.
I saw some topics/forums/answers/blogs suggesting to limit the max memory usage of SQL Server. Any thoughts on this?
Btw, I'm using SqlBulkCopy to insert rows into the database.
I have two suggestions for you:
1 - Database settings:
When you create the database, try to use a large initial size, and consider a bigger autogrowth size or percentage.
You will want to minimize the number of times your filegroups need to grow.
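As an illustration only (the database name, file names, paths and sizes below are placeholders, not values from the question), pre-sizing the files and using a fixed growth increment could look like this:

-- Create the database with a generous initial size and fixed-size autogrowth
CREATE DATABASE IngestDB
ON PRIMARY
    (NAME = IngestDB_data, FILENAME = 'D:\Data\IngestDB.mdf', SIZE = 50GB, FILEGROWTH = 1GB)
LOG ON
    (NAME = IngestDB_log,  FILENAME = 'E:\Logs\IngestDB.ldf', SIZE = 10GB, FILEGROWTH = 1GB);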
2 - Server settings:
In your SQL Server settings I would recommend that you remove one logical processor from SQL Server's use. The OS will use this processor when SQL Server is busy with heavy loads on the other processors. In my experience, this usually gives a nice boost to the OS.
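If you go that route, the supported way on SQL Server 2008 R2 and later (so it applies to 2014) is the affinity setting below; this sketch leaves the last core of an 8-core box to the OS, and the CPU numbering is of course specific to your machine:

-- Bind SQL Server to CPUs 0-6, leaving the last core for the operating system
ALTER SERVER CONFIGURATION
SET PROCESS AFFINITY CPU = 0 TO 6;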

SQL Server 2005 Memory Pressure and tempdb writes problem

We are having some issues with our production SQL Server.
Server: Dual Quad Core Xeon
8 GB RAM
Single RAID 10 Array
Windows 2003 Server 64-bit
SQL Server 2005 Standard 64-Bit
There is about 250MB of free RAM on the machine right now. SQL Server has around 6GB of RAM, and our monitoring software says that only half of the SQL Server allocated RAM is actually being used.
Our main database is approximately 20GB, with about 12GB being used with any frequency. Our tempdb is at 700MB. Both are located on the same physical disk array.
Additionally, using Filemon, I was able to see that the tempdb file had hundreds or thousands of writes of length 65536. The disk queue length was over 100 nearly 80% of the time.
So, here are my questions-
What would cause all those writes on the tempdb? I'm not sure if we have always had that much activity, but it seems excessive and these problems are recent.
Should I just add more memory to the server?
On high load servers, should tempdb and db files be located on separate arrays?
A high disk queue length does not mean you have an I/O bottleneck if you have a SAN or NAS; you may want to look at other additional counters. Check out the SQL Server Urban Legends discussion for more details.
1: The following operations heavily utilize tempdb:
Repeated creation and dropping of temporary tables (local or global)
Table variables that use tempdb for storage purposes
Work tables associated with CURSORs
Work tables associated with an ORDER BY clause
Work tables associated with a GROUP BY clause
Work files associated with HASH plans
These SQL Server 2005 features also use tempdb heavily:
Row-level versioning (snapshot isolation)
Online index rebuilding
As mentioned in other SO answers, read this article on best practices for increasing tempdb performance.
2: Looking at the amount of free RAM on the server (i.e. the WMI counter Memory -> Available MBytes) doesn't help, as SQL Server will cache data pages in RAM, so any db server that's running long enough will have little free RAM.
The counters you should look at that are more meaningful in telling you if adding RAM to the server will help are:
SQL Server Instance:Buffer Manager->Page Life Expectancy (in seconds)
A value below 300-400 seconds means that pages are not staying in memory very long and data is continually being read in from disk. Servers that have a low page life expectancy will benefit from additional RAM.
and
SQL Server Instance:Buffer Manager->Buffer Cache hit Ratio
This tells you the percentage of pages that were read from RAM without having to incur a read from disk; a cache hit ratio lower than 85% means that the server will benefit from additional RAM.
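Both counters can also be read straight from the DMVs; a sketch, keeping in mind that on a named instance the object_name is 'MSSQL$InstanceName:Buffer Manager' rather than the default matched loosely here:

-- Page Life Expectancy (seconds)
SELECT cntr_value AS page_life_expectancy_sec
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Buffer Manager%'
  AND counter_name = 'Page life expectancy';

-- Buffer cache hit ratio (needs the base counter to compute a percentage)
SELECT 100.0 * a.cntr_value / NULLIF(b.cntr_value, 0) AS buffer_cache_hit_ratio_pct
FROM sys.dm_os_performance_counters AS a
JOIN sys.dm_os_performance_counters AS b
  ON b.object_name = a.object_name
WHERE a.counter_name = 'Buffer cache hit ratio'
  AND b.counter_name = 'Buffer cache hit ratio base'
  AND a.object_name LIKE '%Buffer Manager%';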
3: Yes, you can't go wrong here. Having tempdb on a separate set of disks is recommended. Look at this KB article under the heading "Moving the tempdb database" for how to do this.
Yes, the recommendation on high load servers is to put TempDB on a separate set of drives from the user databases:
SQL Server 2005 Books Online: Optimizing tempdb Performance
Not directly an answer to your question, but this might be a good tip: restarting your SQL Server instance will clear tempdb, which might be a good start when investigating the actions which are done on tempdb.
Excellent question, +1
tempdb is used far more heavily in SQL 2005+.
At least: snapshot isolation levels, online index rebuild, and reading INSERTED/DELETED in triggers (which used to read the log file!).
This is in addition to the usual ORDER BY clauses, temp tables, etc.
You'd probably be better off splitting your log and data files (also for recoverability).
More memory is always good, but see the 64-bit-specific stuff from Grumpy Old DBA below.
Finally, and probably most importantly, you can have contention on space allocation in tempdb:
Explanations from Linchi Shea and SQL Server storage team
Late edit:
Paul Randall added an entry "Comprehensive tempdb blog post series" which offers good links
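One hedged way to see whether that allocation contention is actually happening is to look for PAGELATCH waits on the tempdb allocation pages (2:1:1 is the PFS page and 2:1:3 the SGAM page of tempdb's first file); something like this while the load is running:

-- Sessions waiting on tempdb pages; resource_description like 2:1:1 or 2:1:3 suggests PFS/SGAM contention
SELECT session_id, wait_type, wait_duration_ms, resource_description
FROM sys.dm_os_waiting_tasks
WHERE wait_type LIKE 'PAGELATCH%'
  AND resource_description LIKE '2:%';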
Writes to the tempdb can be anything. Internal hash tables, temp tables, table variable, stored procedure calls, etc.
If you only have 250 Megs of free RAM, then yes more RAM would be good.
It is always recommended that you split tempdb and user databases to different disks.
All writes to the tempdb will be 64k in size as that's the size of each database extent.

How to limit SQL queries CPU utilization?

After a large SQL Query is run that is built through my ASPX Pages I see the following two items listed in sql profiler.
Event Class TextData ApplicationName CPU Reads Writes
SQL:BatchCompleted Select N'Testing Connection...' SQLAgent - Alert Engine 1609 0 0
SQL:BatchCompleted EXECUTE msdb.dbo.sp_sqlagent_get_perf_counters SQLAgent - Alert Engine 1609 96 0
The CPU for these is the same as for my query, so does that query actually take 1609*3 = 4827?
The same thing happens with:
Audit Logout
Can I limit this? I am using SQL Server 2005.
First of all, some of what you see in SQL Profiler is cumulative, so you can't always just add the numbers up. For example, an SP:Completed event will show the total time of all the SP:StmtCompleted events that make it up. Not sure if that's your issue here.
The only way to improve the CPU is to actually improve your query. Make sure it's using indexes, minimize the number of rows read, etc. Work with an experienced DBA on some of these techniques, or read a book.
The only other mitigation I can think of is to limit the number of CPUs the query runs on (this is called the degree of parallelism, or DOP). You can set this at the server level, or specify it at the query level. If you have a multi-processor server, this can ensure that a single long-running query doesn't take over all the processors on the box; it will leave one or more processors free for other queries to run.
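At the query level that is just a hint; for instance, capping a heavy report query at two CPUs (the tables here are made up for illustration):

-- Limit this statement to at most 2 CPUs, regardless of the server-wide setting
SELECT c.CustomerID, SUM(o.TotalDue) AS total_due
FROM dbo.Orders AS o
JOIN dbo.Customers AS c ON c.CustomerID = o.CustomerID
GROUP BY c.CustomerID
OPTION (MAXDOP 2);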
No, it takes 1609 milliseconds of CPU in total. What is the duration?
I bet the same or slightly more, because I doubt the SQL Agent queries use parallelism.
Are you trying to reduce background processes using CPU? If so, then you reduce functionality by disabling SQL Agent (no backups then, for example) and restarting SQL Server with the -x switch.
You also cannot stop "Audit Logout" events... this is what happens when you disconnect or close a connection.
However, are you maxing the processors? If so, you'll need to differentiate between "user" CPU spent on queries and "system" CPU used for paging or (god forbid) generating your parity on RAID 5 disks.
High CPU can often be solved by more RAM and a better disk config.
SQL Server 2008 has a new "Resource Governor" that may help. I don't know if you're using SQL Server 2008 or not, but you may want to take a look here
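If you do end up on 2008 or later, a minimal Resource Governor sketch looks roughly like this; the pool and group names and the classifier rule (routing everything from a hypothetical 'ReportApp' application) are made up for illustration, and the function has to live in master:

-- Pool that caps CPU for report/ad-hoc work, and a workload group that uses it
CREATE RESOURCE POOL ReportPool WITH (MAX_CPU_PERCENT = 30);
CREATE WORKLOAD GROUP ReportGroup USING ReportPool;
GO

-- Classifier: send sessions from a particular application into the group
CREATE FUNCTION dbo.rg_classifier() RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    IF APP_NAME() = N'ReportApp'
        RETURN N'ReportGroup';
    RETURN N'default';
END;
GO

ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.rg_classifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;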
This can be a connection string issue. If Audit Logout is taking too much of your CPU, try playing with a different connection string.
