I'm managing a mid-sized company database. We recently went through a major system upgrade and moved to a virtual machine environment. Since the deployment we have been having performance issues: SQL Server on the upgraded system apparently runs slower than on the old configuration.
Here are the configurations:
Old server: SQL Server 2008, 10 GB RAM, 2x Intel Xeon E5420 (physical machine), running Windows Server 2008
New server: SQL Server 2014, 64 GB RAM, 4x Intel Xeon E5-2660 (virtual machine), running Windows Server 2012
A very basic I/O performance comparison follows:
new server:
old server:
Even the most basic operation, such as:
select * from table
against our most-used tables takes longer to return results than it used to. Stored procedures also run slower.
Example:
New server: 01:39 min, 3,285,365 rows
Old server: 01:00 min, 3,339,738 rows
I have no idea what could cause this problem. Any help will be appreciated.
Edit:
Both servers have the same SQL Server configuration.
tempdb and the data database are separated.
This probably isn't what you want to hear, but VMs are always somewhat slower than physical servers because of the virtualization overhead. Also, the E5-2660 has much less L2 cache than the old processor.
I'm not sure what to tell you except to make sure that the VM for SQL Server has as much RAM and as many cores allocated to it as you can spare, and that SQL Server is configured to actually use them.
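As a rough sanity check from inside SQL Server (a sketch; the memory value below is a placeholder, not a recommendation for your VM):

-- What the instance actually sees on the VM
SELECT cpu_count, physical_memory_kb / 1024 AS physical_memory_mb
FROM sys.dm_os_sys_info;

-- What SQL Server is currently allowed to use
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)';
EXEC sp_configure 'max degree of parallelism';

-- Example value only: cap memory below the VM allocation to leave headroom for the OS
EXEC sp_configure 'max server memory (MB)', 57344;   -- roughly 56 GB on a 64 GB VM
RECONFIGURE;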
Also, disk I/O is a big deal. Are the drives and controllers for both systems similar?
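One way to compare the storage on the two boxes from inside SQL Server itself (a sketch, not a full benchmark) is to look at the per-file I/O latency it has recorded since startup:

-- Average read/write latency per database file since the instance last started
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.num_of_reads,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_ms,
       vfs.num_of_writes,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
INNER JOIN sys.master_files AS mf
    ON mf.database_id = vfs.database_id
   AND mf.file_id = vfs.file_id
ORDER BY avg_read_ms DESC;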
VMWare has a whitepaper on the subject, so at least you're not alone. 8-)
http://www.vmware.com/files/pdf/solutions/SQL_Server_on_VMware-Best_Practices_Guide.pdf
Related
We have an RDP server that was running the 2008 versions of the OS and SSMS. We recently migrated this server to the 2016 versions of both the OS and SSMS.
The newly configured machine (with SSMS 2016) has the same system configuration as the old one (with SSMS 2008): a 64-bit OS on an x64 processor, 64.0 GB of RAM, and 2.39 GHz (32 processors).
We are facing a severe performance issue when running stored procedures on the SSMS 2016 server; the same code base was migrated from SQL Server 2008. We load data to these servers using SSIS ETL packages.
For example, a stored procedure that takes 1 hour to complete on the old server (SSMS 2008) takes 10 hours or more on the new server (SSMS 2016).
To identify the root cause we have tried the approaches below, but so far nothing has worked.
• After the migration, we changed the database compatibility level from 2008 to 2016.
• Restored the database again from the old server (SSMS 2008) to the new server (SSMS 2016) without changing the compatibility level.
• Recompiled the stored procedures on the new server (SSMS 2016).
• Updated statistics on the new server (SSMS 2016).
• Reconfigured the new server's Windows drives (thick provisioning).
• While running the time-consuming stored procedures on the new server (SSMS 2016), ran SQL Server Profiler in parallel to identify the issue, but couldn't find anything.
• Ran the same query on the SSMS 2008 and SSMS 2016 servers at the same time; on the old server (SSMS 2008) execution completed much faster than on the new server (SSMS 2016).
Is there any solution we can try to get the same execution time on both servers?
Thanks in Advance
Bala Muraleedharan
I'm going to assume the SQL Server version got updated too, as the SSMS version would not make any difference.
It's impossible to tell for certain, but query execution times can be drastically affected by the new cardinality estimator used in SQL Server 2014 and above. 99% of the time things run faster, but once in a while the new CE gets crushed. Add this hint to the queries in the stored procedure to run them with the old 2008 CE and see if it makes any difference:
OPTION(QUERYTRACEON 9481);
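For clarity, a sketch of where the hint goes; the table and column names below are made up, and the hint has to be added to each statement whose plan regressed:

-- Hypothetical statement inside the slow stored procedure
SELECT c.CustomerID, SUM(o.Amount) AS Total
FROM dbo.Orders AS o
INNER JOIN dbo.Customers AS c ON c.CustomerID = o.CustomerID
GROUP BY c.CustomerID
OPTION (QUERYTRACEON 9481);   -- run just this statement under the legacy (pre-2014) CE

On SQL Server 2016 you can also switch the whole database back with ALTER DATABASE SCOPED CONFIGURATION SET LEGACY_CARDINALITY_ESTIMATION = ON;.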
This problem may have two causes:
1 - Check your SQL Server settings. Specifically, limit max server memory to about 60% of your RAM and increase the number of tempdb (system database) data files to match your CPU core count (see the sketch after this list).
2 - Check your stored procedure code. If you are using table variables (@Table), change them to temp tables (#Table); a before/after example is included in the sketch below.
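A minimal sketch of both suggestions, assuming made-up file paths, sizes, and table names:

-- 1) Add tempdb data files until the count matches the CPU cores
--    (repeat with tempdev3, tempdev4, ...; keep all files the same size)
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = 'T:\TempDB\tempdev2.ndf',
          SIZE = 4GB, FILEGROWTH = 512MB);

-- 2) Replace a table variable with a temp table
-- Before:
DECLARE @Staging TABLE (ID int PRIMARY KEY, Amount money);
-- After:
CREATE TABLE #Staging (ID int PRIMARY KEY, Amount money);
-- ...populate and query #Staging exactly as the table variable was used...
DROP TABLE #Staging;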
I've got SQL Server Express 2017 (RTM) 14.0.1000.169 installed on a low-powered Windows Server 2019 machine on AWS. It has 2 GB of RAM and a 2.40 GHz processor (t2.small).
I've had the same setup on other AWS machines with less power than this and they don't seem to have this problem, but those machines were running SQL Server 2014 and Windows 2008 R2.
The problem is that specific queries are taking huge amounts of time to run. In my applications, I have a routine that sniffs the parameters of the stored procedure being called so that the .NET code can supply parameters as needed without me having to code every single procedure separately.
To do this, I have been running
[sys].[sp_procedure_params_100_managed] @procedure_name = @PRC
but on this new machine this routine is taking up to 30 seconds to run.
So I've manually created something simpler to see if I can view any issues with the execution plan, but it still runs extremely slowly:
select o.name, prm.*
from sys.parameters prm
inner join sys.objects o on prm.object_id=o.object_id
where o.type='P'
and o.name='prc_THEPROCNAME'
To be clear, there are only 30 stored procedures in the whole database. But the query above takes 2860ms to run in SSMS and is showing 1285 reads.
When I run that on my little win10 machine with SQL Server Express 2014, it takes 12ms with 568 reads on a similar (but larger) database.
So my question is this: what is the issue with this environment? Is a t2.small too underpowered for Win2019? Does SQL Server 2017 have that much more overhead than SQL Server 2014? Or is it possible that I have a bad configuration somewhere in my SQL Server setup?
I am using SQL Server on a very similar AWS platform (t2.small) and have no performance issues with your query (< 1s). You haven't mentioned whether any other queries are suffering performance problems, but that would be helpful. My answer then would be that the t2.small is not underpowered and it is highly unlikely that SQL '17 would have greater difficulty than SQL '14 when executing this query.
I am using SQL Server Express 2008 (64-bit) on Windows Server 2012 R2 with 56 GB RAM.
I have a web application written in C# ASP.NET MVC 5, hosted on IIS 8.5 (64-bit).
The application is hosted as a 32-bit application because it has other 32-bit dependencies.
Data retrieval is extremely slow: it takes approximately 1.2 minutes to run a simple query that returns 5 records. I have configured SQL Server min server memory to 8 GB and max to 28 GB (if that even matters; since it's the Express edition, I don't think it does).
The Resource Monitor shows the following statistics:
682,000 virtual memory
427,000 working virtual memory
1,467,000 shareable memory
82,000 private memory
The problem is that the exact same setup works perfectly fine on the same configuration with only 8 GB of RAM.
I have two questions:
Could this be a SQL Server bottleneck? If yes, how should I go about troubleshooting it?
Does a 32-bit application connecting to a 64-bit instance of SQL Server have performance issues? Should I try a 32-bit instance instead?
This is the situation every SQL DBA lives in: a user says "my application/query is running slow." Memory is most likely not your issue. Often it will be a parameter-sniffing issue in the query, poor indexing on the table, or some other issue. There are many ways of troubleshooting this and many scripts you can run to find out what is going on. I would recommend Brent Ozar's First Responder Kit, available from GitHub; use sp_BlitzFirst, sp_BlitzIndex, and sp_BlitzCache.
Brent Ozar's First Responder Kit
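For reference, a rough sketch of how those procedures are typically run once the kit is installed (the parameter values are just examples):

-- What the server is waiting on right now (samples for 30 seconds)
EXEC sp_BlitzFirst @Seconds = 30;

-- Most expensive queries in the plan cache, sorted by logical reads
EXEC sp_BlitzCache @SortOrder = 'reads', @Top = 10;

-- Index problems (missing, duplicate, unused) for one database
EXEC sp_BlitzIndex @DatabaseName = 'YourDatabaseName';   -- placeholder database name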
I have a scenario where an application server saves 15k rows per second into a SQL Server database. During the first few hours the machine is still usable, but once the database size reaches about 20 GB the machine seems to become unusable.
I saw some topics/forums/answers/blogs suggesting that I limit the max memory usage of SQL Server. Any thoughts on this?
By the way, I am using SqlBulkCopy to insert the rows into the database.
I have two suggestions for you:
1 - Database settings:
When you create the database, use a large initial size, and consider a larger autogrowth increment (percentage or fixed size).
You want to minimize how often your data and log files need to grow.
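Here is a rough sketch of what that could look like; the database name, file paths, and sizes are placeholders, not recommendations:

-- Pre-size data and log files and use a fixed growth increment
CREATE DATABASE IngestDb
ON PRIMARY (NAME = IngestDb_data, FILENAME = 'D:\Data\IngestDb.mdf',
            SIZE = 50GB, FILEGROWTH = 5GB)
LOG ON     (NAME = IngestDb_log,  FILENAME = 'L:\Log\IngestDb_log.ldf',
            SIZE = 10GB, FILEGROWTH = 1GB);

-- Or adjust an existing database (one property per MODIFY FILE)
ALTER DATABASE IngestDb MODIFY FILE (NAME = IngestDb_data, SIZE = 50GB);
ALTER DATABASE IngestDb MODIFY FILE (NAME = IngestDb_data, FILEGROWTH = 5GB);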
2 - Server settings:
In your SQL Server settings I would recommend that you remove one logical processor from SQL Server's affinity. The OS will use that processor when SQL Server is busy with heavy loads on the other processors. In my experience this usually gives a nice boost to the OS.
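If it helps, a sketch of how that affinity change can be made on SQL Server 2008 R2 or later (the CPU numbers are an example for an 8-core box; older versions would use sp_configure 'affinity mask' instead):

-- Reserve the last logical processor (CPU 7 on an 8-core box) for the OS
ALTER SERVER CONFIGURATION SET PROCESS AFFINITY CPU = 0 TO 6;

-- To hand all CPUs back to SQL Server later
ALTER SERVER CONFIGURATION SET PROCESS AFFINITY CPU = AUTO;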
Our primary database server is an 8-core box with 8 GB of RAM. The CPU is a Xeon E7330 @ 2.4 GHz. It runs Windows Server 2003 R2 (x64 edition) and SQL Server 2005.
I wanted to do some testing, so I set up SQL Server 2005 on another brand-new server: an 8-core box with 4 GB of RAM and a Xeon X5460 @ 3.16 GHz, running Windows Server 2003 R2 Standard. I installed SQL Server 2005 out of the box, restored a backup of the primary database onto it, and ran UPDATE STATISTICS on all the tables.
The process I was testing executes the same stored proc many times. I was astounded to find in Profiler that this proc, which executes with a duration of 0 or 1 on the primary server, was consistently executing with durations in excess of 130 on the test server. This essentially makes the secondary server useless for testing, because it's just too slow.
No other apps run on either of these two boxes, just SQL server. And unlike the primary database server, the test server only had me accessing it.
I can't believe the difference in spec between these two machines explains this colossal difference in performance. Can anybody suggest any settings I may need to change?
Updates in answers to questions:
The second server is 32-bit Windows
I'm inquiring now about the disk arrays and how comparable they are
On the primary server, the data and logs are on the same drive (!) and it works fine
Looking in Task Manager on the test server, the CPU is running at around 10%, with only one core even showing activity
Task manager on the test server (4GB RAM) shows "PF Usage 2.01GB" with SQL Server running. On the primary server (8GB RAM) it shows "PF Usage 6.67GB". How would I make SQL Server on the test box use more of the RAM? Maybe that would make a difference
Another update:
The primary server has a RAID-5 with 15,000 RPM drives. The test box has a RAID-5 with 10,000 RPM drives.
A 32-bit OS means a 2 GB virtual address space for your processes, and a Standard edition OS means no AWE extensions either. So your test machine will be severely RAM-deprived compared with the production one. Your buffer pool will suffer from premature eviction of pages, your execution plans will not have the option to choose hash joins for a lot of queries, and so on and so forth. I doubt this explains the entire difference; I'm sure there must be something more at play. You say only 10% CPU usage during the query: is your MAXDOP setting 1 by any chance on the test server? Have you compared the output of sp_configure on the two machines? (Make sure you enable 'show advanced options' too.)
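A quick sketch of that comparison (run this on both servers and diff the results):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- Lists every setting with its config_value and run_value;
-- compare 'max degree of parallelism', 'max server memory (MB)', 'awe enabled', etc.
EXEC sp_configure;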
Can you run the same problem query on the two machines, from an SSMS query window, with SET STATISTICS IO ON and SET STATISTICS TIME ON? Run it 2-3 times on each and write down the results. Does it show the same number of logical reads but a vastly different number of physical reads? That would point to the RAM being insufficient to cache the needed pages. Is the number of logical reads itself very different? That probably means you are getting a bad execution plan on test.
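Something along these lines (the procedure name and parameter are placeholders for your actual query):

SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- run the problem query or proc here, e.g.
EXEC dbo.YourSlowProcedure @SomeParam = 123;   -- placeholder name and parameter

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;
-- then compare logical reads, physical reads, CPU time and elapsed time in the Messages tab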
Is the query write-intensive by any chance? If so, did you pre-grow the test database, or is your execution being blocked by log-growth and database-growth events?
There are plenty of places to look to narrow down the issue, like the SQL performance counters, sys.dm_os_wait_stats, and the wait_type and wait_resource columns of sys.dm_exec_requests.
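For instance, a couple of starting-point queries against those DMVs (nothing specific to your workload assumed):

-- Top waits accumulated since the instance last started
SELECT TOP (10) wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;

-- What currently running requests are waiting on
SELECT session_id, status, command, wait_type, wait_time, wait_resource, blocking_session_id
FROM sys.dm_exec_requests
WHERE session_id > 50;   -- skip system sessions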
Was the data already in the memory cache, or was it all read from disk?
You either have a different plan being generated or some hardware differences. For hardware, you can check disk seconds/read and seconds/write (edit to clarify: you do this in PerfMon) and see if you have some massive differences due to caching (e.g. a high-performance RAID controller).
For the plan difference just check out the execution plans.
Also turn on SET STATISTICS IO and see if you are getting physical reads instead of logical reads. Maybe the memory difference is keeping your dataset from fitting in memory on the secondary machine but not on the primary one.
Although you may not be able to use AWE on your 32-bit server, you can give SQL Server a little more memory by adding the /3GB switch to the boot.ini file. Check out Books Online; it should give you more information.